Interpreting and Disentangling Feature Components of Various Complexity from DNNs

Jie Ren, Mingjie Li, Zexu Liu, Quanshi Zhang
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:8971-8981, 2021.

Abstract

This paper aims to define, visualize, and analyze the feature complexity that is learned by a DNN. We propose a generic definition for the feature complexity. Given the feature of a certain layer in the DNN, our method decomposes and visualizes feature components of different complexity orders from the feature. The feature decomposition enables us to evaluate the reliability, the effectiveness, and the significance of over-fitting of these feature components. Furthermore, such analysis helps to improve the performance of DNNs. As a generic method, the feature complexity also provides new insights into existing deep-learning techniques, such as network compression and knowledge distillation.

Cite this Paper

BibTeX
@InProceedings{pmlr-v139-ren21b,
  title     = {Interpreting and Disentangling Feature Components of Various Complexity from DNNs},
  author    = {Ren, Jie and Li, Mingjie and Liu, Zexu and Zhang, Quanshi},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {8971--8981},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/ren21b/ren21b.pdf},
  url       = {https://proceedings.mlr.press/v139/ren21b.html},
  abstract  = {This paper aims to define, visualize, and analyze the feature complexity that is learned by a DNN. We propose a generic definition for the feature complexity. Given the feature of a certain layer in the DNN, our method decomposes and visualizes feature components of different complexity orders from the feature. The feature decomposition enables us to evaluate the reliability, the effectiveness, and the significance of over-fitting of these feature components. Furthermore, such analysis helps to improve the performance of DNNs. As a generic method, the feature complexity also provides new insights into existing deep-learning techniques, such as network compression and knowledge distillation.}
}
Endnote
%0 Conference Paper
%T Interpreting and Disentangling Feature Components of Various Complexity from DNNs
%A Jie Ren
%A Mingjie Li
%A Zexu Liu
%A Quanshi Zhang
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-ren21b
%I PMLR
%P 8971--8981
%U https://proceedings.mlr.press/v139/ren21b.html
%V 139
%X This paper aims to define, visualize, and analyze the feature complexity that is learned by a DNN. We propose a generic definition for the feature complexity. Given the feature of a certain layer in the DNN, our method decomposes and visualizes feature components of different complexity orders from the feature. The feature decomposition enables us to evaluate the reliability, the effectiveness, and the significance of over-fitting of these feature components. Furthermore, such analysis helps to improve the performance of DNNs. As a generic method, the feature complexity also provides new insights into existing deep-learning techniques, such as network compression and knowledge distillation.
APA
Ren, J., Li, M., Liu, Z., & Zhang, Q. (2021). Interpreting and disentangling feature components of various complexity from DNNs. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:8971-8981. Available from https://proceedings.mlr.press/v139/ren21b.html.