Layerwise Change of Knowledge in Neural Networks

Xu Cheng, Lei Cheng, Zhaoran Peng, Yang Xu, Tian Han, Quanshi Zhang
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:8038-8059, 2024.

Abstract

This paper aims to explain how a deep neural network (DNN) gradually extracts new knowledge and forgets noisy features layer by layer during forward propagation. Although there is not yet a consensus on how to define the knowledge encoded by a DNN, previous studies have derived a body of mathematical evidence for treating interactions as the symbolic primitive inference patterns encoded by a DNN. We extend the definition of interactions and, for the first time, extract the interactions encoded by intermediate layers. We quantify and track the interactions that newly emerge and those that are forgotten in each layer during forward propagation, which sheds new light on the learning behavior of DNNs. The layer-wise change of interactions also reveals changes in the generalization capacity and in the instability of a DNN's feature representations.
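To make the notion of an "interaction" concrete, below is a minimal sketch (not the authors' released code) of the Harsanyi-style interaction effect used in this line of work, I(S) = sum over T ⊆ S of (-1)^(|S|-|T|) · v(x_T), where x_T keeps the input variables in T and masks the rest with a baseline. The model, masking scheme, and scalar readout v are illustrative assumptions, not the paper's exact setup.

# Hedged sketch: inclusion-exclusion computation of the interaction I(S).
# The readout v, the baseline masking, and the variable partition are
# illustrative choices, not the paper's exact definitions.
from itertools import chain, combinations

import torch

def subsets(indices):
    """Yield every subset of `indices` as a tuple, including the empty set."""
    return chain.from_iterable(
        combinations(indices, r) for r in range(len(indices) + 1)
    )

def interaction(v, x, baseline, S):
    """Inclusion-exclusion estimate of the interaction effect I(S).

    v        -- callable mapping a batched input to a scalar tensor, e.g. a
                class logit, or a probe on an intermediate layer's features.
    x        -- input tensor of shape (n_variables, ...).
    baseline -- tensor like `x` whose values stand in for masked variables.
    S        -- tuple of variable indices defining the candidate pattern.
    """
    total = 0.0
    for T in subsets(S):
        x_T = baseline.clone()
        if T:  # keep the variables in T, leave the rest masked
            idx = list(T)
            x_T[idx] = x[idx]
        total += (-1) ** (len(S) - len(T)) * v(x_T.unsqueeze(0)).item()
    return total

By letting v read out features from different layers (e.g. via forward hooks or linear probes), such a routine could be reused to compare the interaction sets of adjacent layers and identify which patterns newly emerge and which are forgotten.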

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-cheng24b,
  title     = {Layerwise Change of Knowledge in Neural Networks},
  author    = {Cheng, Xu and Cheng, Lei and Peng, Zhaoran and Xu, Yang and Han, Tian and Zhang, Quanshi},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {8038--8059},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/cheng24b/cheng24b.pdf},
  url       = {https://proceedings.mlr.press/v235/cheng24b.html},
  abstract  = {This paper aims to explain how a deep neural network (DNN) gradually extracts new knowledge and forgets noisy features layer by layer during forward propagation. Although there is not yet a consensus on how to define the knowledge encoded by a DNN, previous studies have derived a body of mathematical evidence for treating interactions as the symbolic primitive inference patterns encoded by a DNN. We extend the definition of interactions and, for the first time, extract the interactions encoded by intermediate layers. We quantify and track the interactions that newly emerge and those that are forgotten in each layer during forward propagation, which sheds new light on the learning behavior of DNNs. The layer-wise change of interactions also reveals changes in the generalization capacity and in the instability of a DNN's feature representations.}
}
Endnote
%0 Conference Paper
%T Layerwise Change of Knowledge in Neural Networks
%A Xu Cheng
%A Lei Cheng
%A Zhaoran Peng
%A Yang Xu
%A Tian Han
%A Quanshi Zhang
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-cheng24b
%I PMLR
%P 8038--8059
%U https://proceedings.mlr.press/v235/cheng24b.html
%V 235
%X This paper aims to explain how a deep neural network (DNN) gradually extracts new knowledge and forgets noisy features layer by layer during forward propagation. Although there is not yet a consensus on how to define the knowledge encoded by a DNN, previous studies have derived a body of mathematical evidence for treating interactions as the symbolic primitive inference patterns encoded by a DNN. We extend the definition of interactions and, for the first time, extract the interactions encoded by intermediate layers. We quantify and track the interactions that newly emerge and those that are forgotten in each layer during forward propagation, which sheds new light on the learning behavior of DNNs. The layer-wise change of interactions also reveals changes in the generalization capacity and in the instability of a DNN's feature representations.
APA
Cheng, X., Cheng, L., Peng, Z., Xu, Y., Han, T. & Zhang, Q. (2024). Layerwise Change of Knowledge in Neural Networks. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:8038-8059. Available from https://proceedings.mlr.press/v235/cheng24b.html.

Related Material