Reasoning-Modulated Representations

Petar Veličković, Matko Bošnjak, Thomas Kipf, Alexander Lerchner, Raia Hadsell, Razvan Pascanu, Charles Blundell
Proceedings of the First Learning on Graphs Conference, PMLR 198:50:1-50:17, 2022.

Abstract

Neural networks leverage robust internal representations in order to generalise. Learning them is difficult, and often requires a large training set that covers the data distribution densely. We study a common setting where our task is not purely opaque. Indeed, very often we may have access to information about the underlying system (e.g. that observations must obey certain laws of physics) that any "tabula rasa" neural network would need to re-learn from scratch, penalising performance. We incorporate this information into a pre-trained reasoning module, and investigate its role in shaping the discovered representations in diverse self-supervised learning settings from pixels. Our approach paves the way for a new class of representation learning, grounded in algorithmic priors.
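To make the high-level idea concrete, the following is a minimal, hypothetical PyTorch sketch of one way a frozen, pre-trained reasoning module could sit between a pixel encoder and a decoder in a self-supervised prediction pipeline. All module names, layer sizes, and the next-frame objective are illustrative assumptions for exposition, not the paper's exact architecture.

import torch
import torch.nn as nn

# Illustrative sketch: a frozen, pre-trained "reasoning" processor is inserted
# between a pixel encoder and a decoder, so the self-supervised objective shapes
# representations the processor can operate on. Shapes and sizes are assumptions.

class PixelEncoder(nn.Module):
    def __init__(self, latent_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )

    def forward(self, x):
        return self.net(x)


class ReasoningProcessor(nn.Module):
    """Stands in for a module pre-trained on known system structure
    (e.g. physics or an algorithmic task); kept frozen below."""

    def __init__(self, latent_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, latent_dim),
            nn.ReLU(),
            nn.Linear(latent_dim, latent_dim),
        )

    def forward(self, z):
        return self.net(z)


class PixelDecoder(nn.Module):
    def __init__(self, latent_dim: int = 64):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, kernel_size=4, stride=2, padding=1),   # 32 -> 64
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 64, 16, 16)
        return self.net(h)


encoder, processor, decoder = PixelEncoder(), ReasoningProcessor(), PixelDecoder()

# Freeze the pre-trained reasoning module; only encoder/decoder receive gradients.
for p in processor.parameters():
    p.requires_grad_(False)

optimiser = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-4
)

# One self-supervised step: predict the next frame by routing latents through
# the frozen processor (dummy batch of 8 RGB 64x64 frames for illustration).
frame_t, frame_t1 = torch.rand(8, 3, 64, 64), torch.rand(8, 3, 64, 64)
pred_t1 = decoder(processor(encoder(frame_t)))
loss = nn.functional.mse_loss(pred_t1, frame_t1)
loss.backward()
optimiser.step()

Because the processor is frozen, gradients from the reconstruction loss flow only into the encoder and decoder, nudging the learned latents toward a space the reasoning module already understands; this is the sense in which the representations are "modulated" by the prior.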

Cite this Paper


BibTeX
@InProceedings{pmlr-v198-velickovic22a,
  title     = {Reasoning-Modulated Representations},
  author    = {Veli{\v c}kovi{\' c}, Petar and Bo{\v s}njak, Matko and Kipf, Thomas and Lerchner, Alexander and Hadsell, Raia and Pascanu, Razvan and Blundell, Charles},
  booktitle = {Proceedings of the First Learning on Graphs Conference},
  pages     = {50:1--50:17},
  year      = {2022},
  editor    = {Rieck, Bastian and Pascanu, Razvan},
  volume    = {198},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--12 Dec},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v198/velickovic22a/velickovic22a.pdf},
  url       = {https://proceedings.mlr.press/v198/velickovic22a.html},
  abstract  = {Neural networks leverage robust internal representations in order to generalise. Learning them is difficult, and often requires a large training set that covers the data distribution densely. We study a common setting where our task is not purely opaque. Indeed, very often we may have access to information about the underlying system (e.g. that observations must obey certain laws of physics) that any "tabula rasa" neural network would need to re-learn from scratch, penalising performance. We incorporate this information into a pre-trained reasoning module, and investigate its role in shaping the discovered representations in diverse self-supervised learning settings from pixels. Our approach paves the way for a new class of representation learning, grounded in algorithmic priors.}
}
Endnote
%0 Conference Paper
%T Reasoning-Modulated Representations
%A Petar Veličković
%A Matko Bošnjak
%A Thomas Kipf
%A Alexander Lerchner
%A Raia Hadsell
%A Razvan Pascanu
%A Charles Blundell
%B Proceedings of the First Learning on Graphs Conference
%C Proceedings of Machine Learning Research
%D 2022
%E Bastian Rieck
%E Razvan Pascanu
%F pmlr-v198-velickovic22a
%I PMLR
%P 50:1--50:17
%U https://proceedings.mlr.press/v198/velickovic22a.html
%V 198
%X Neural networks leverage robust internal representations in order to generalise. Learning them is difficult, and often requires a large training set that covers the data distribution densely. We study a common setting where our task is not purely opaque. Indeed, very often we may have access to information about the underlying system (e.g. that observations must obey certain laws of physics) that any "tabula rasa" neural network would need to re-learn from scratch, penalising performance. We incorporate this information into a pre-trained reasoning module, and investigate its role in shaping the discovered representations in diverse self-supervised learning settings from pixels. Our approach paves the way for a new class of representation learning, grounded in algorithmic priors.
APA
Veličković, P., Bošnjak, M., Kipf, T., Lerchner, A., Hadsell, R., Pascanu, R. & Blundell, C. (2022). Reasoning-Modulated Representations. Proceedings of the First Learning on Graphs Conference, in Proceedings of Machine Learning Research 198:50:1-50:17. Available from https://proceedings.mlr.press/v198/velickovic22a.html.