Internal representations of vision models through the lens of frames on data manifolds

Henry Kvinge, Grayson Jorgenson, Davis Brown, Charles Godfrey, Tegan Emerson
Proceedings of the 2nd NeurIPS Workshop on Symmetry and Geometry in Neural Representations, PMLR 228:75-115, 2024.

Abstract

While the last five years have seen considerable progress in understanding the internal representations of deep learning models, many questions remain. This is especially true when trying to understand the impact of model design choices, such as model architecture or training algorithm, on hidden representation geometry and dynamics. In this work we present a new approach to studying such representations, inspired by the idea of a frame on the tangent bundle of a manifold. Our construction, which we call a neural frame, is formed by assembling a set of vectors representing specific types of perturbations of a data point (for example, infinitesimal augmentations, noise perturbations, or perturbations produced by a generative model) and studying how these change as they pass through a network. Using neural frames, we make observations about the way that models process, layer by layer, specific modes of variation within a small neighborhood of a data point. Our results provide new perspectives on a number of phenomena, such as the manner in which training with augmentation produces model invariance or the proposed trade-off between adversarial training and model generalization.
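To make the construction concrete, below is a minimal sketch of the neural-frame idea as the abstract describes it; it is not the authors' implementation. Finite-difference approximations of two perturbation directions at a data point (an infinitesimal rotation augmentation and a random noise direction) are pushed through a network layer by layer, recording how much each layer stretches or shrinks them. The toy model, the rotation angle, the step size eps, and the per-layer stretch statistic are all illustrative assumptions.

# A minimal sketch of a "neural frame": perturbation directions at a data
# point, tracked layer by layer through a network. Illustrative only.
import torch
import torch.nn as nn
import torchvision.transforms.functional as TF
from torchvision.transforms import InterpolationMode

torch.manual_seed(0)

# Toy stand-in for a vision model; any sequence of layers works here.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(16 * 32 * 32, 10),
)

x = torch.rand(1, 3, 32, 32)  # a single data point

# Frame vectors: finite-difference approximations of two perturbation
# directions at x -- an infinitesimal rotation augmentation and a random
# noise direction -- each normalized to unit length.
delta_deg = 0.5  # small rotation angle in degrees (illustrative choice)
rotated = TF.rotate(x, delta_deg, interpolation=InterpolationMode.BILINEAR)
v_aug = (rotated - x) / delta_deg
v_aug = v_aug / v_aug.norm()
v_noise = torch.randn_like(x)
v_noise = v_noise / v_noise.norm()

# Push each frame vector through the network layer by layer with a forward
# finite difference, recording how much each layer stretches or shrinks it.
eps = 1e-3
with torch.no_grad():
    for name, v in [("augmentation", v_aug), ("noise", v_noise)]:
        h, h_pert = x, x + eps * v
        for i, layer in enumerate(model):
            prev_gap = (h_pert - h).norm()
            h, h_pert = layer(h), layer(h_pert)
            stretch = (h_pert - h).norm() / prev_gap
            print(f"{name} direction | layer {i}: stretch {stretch:.3f}")

In the paper such statistics are computed for trained models; comparing, say, how the augmentation direction is attenuated layer by layer in an augmentation-trained network versus a plainly trained one is the kind of observation the abstract alludes to.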

Cite this Paper


BibTeX
@InProceedings{pmlr-v228-kvinge24a,
  title     = {Internal representations of vision models through the lens of frames on data manifolds},
  author    = {Kvinge, Henry and Jorgenson, Grayson and Brown, Davis and Godfrey, Charles and Emerson, Tegan},
  booktitle = {Proceedings of the 2nd NeurIPS Workshop on Symmetry and Geometry in Neural Representations},
  pages     = {75--115},
  year      = {2024},
  editor    = {Sanborn, Sophia and Shewmake, Christian and Azeglio, Simone and Miolane, Nina},
  volume    = {228},
  series    = {Proceedings of Machine Learning Research},
  month     = {16 Dec},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v228/main/assets/kvinge24a/kvinge24a.pdf},
  url       = {https://proceedings.mlr.press/v228/kvinge24a.html}
}
Endnote
%0 Conference Paper
%T Internal representations of vision models through the lens of frames on data manifolds
%A Henry Kvinge
%A Grayson Jorgenson
%A Davis Brown
%A Charles Godfrey
%A Tegan Emerson
%B Proceedings of the 2nd NeurIPS Workshop on Symmetry and Geometry in Neural Representations
%C Proceedings of Machine Learning Research
%D 2024
%E Sophia Sanborn
%E Christian Shewmake
%E Simone Azeglio
%E Nina Miolane
%F pmlr-v228-kvinge24a
%I PMLR
%P 75--115
%U https://proceedings.mlr.press/v228/kvinge24a.html
%V 228
APA
Kvinge, H., Jorgenson, G., Brown, D., Godfrey, C. & Emerson, T. (2024). Internal representations of vision models through the lens of frames on data manifolds. Proceedings of the 2nd NeurIPS Workshop on Symmetry and Geometry in Neural Representations, in Proceedings of Machine Learning Research 228:75-115. Available from https://proceedings.mlr.press/v228/kvinge24a.html.