Continual Learning for Unsupervised Concept Bottleneck Discovery

Luca Salvatore Lorello, Marco Lippi, Stefano Melacci
Proceedings of The 3rd Conference on Lifelong Learning Agents, PMLR 274:597-619, 2025.

Abstract

In the context of continual learning, little attention is dedicated to the problem of developing a layer of “concepts”, also known as a “concept bottleneck”, to support the discrimination of higher-level task information, especially when concepts are not supervised. Concept bottleneck discovery in an unsupervised setting is thus largely unexplored, and this paper aims to move a step forward in that direction. We consider a neural network that faces a stream of binary tasks, with no further information on the relationships among them, i.e., no supervision at the level of concepts. The learning of the concept bottleneck layer is driven by means of a triplet-based criterion, which is instantiated in conjunction with a specifically designed experience replay scheme (concept replay). This novel criterion exploits fuzzy Hamming distances to treat vectors of concept probabilities as fuzzy bitstrings, encouraging different concept activations across different tasks, while also adding a regularization effect which pushes probabilities towards crisp values. Despite the lack of concept supervision, we find that continually learning the streamed tasks in a progressive manner yields the development of inner concepts that are significantly better correlated with the higher-level tasks, compared to the case of joint-offline learning. This result is showcased in an extensive experimental evaluation involving different architectures and newly created (and shared) datasets that are also well suited to support further investigation of continual learning in concept-based models.
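
The abstract's core mechanism, a triplet criterion over fuzzy Hamming distances between concept-probability vectors, is concrete enough to sketch. Below is a minimal, hypothetical PyTorch sketch, not the authors' implementation: it assumes the probabilistic-XOR form of the fuzzy Hamming distance, d(p, q) = sum_i p_i (1 - q_i) + (1 - p_i) q_i, which coincides with the ordinary Hamming distance on crisp bitstrings and, because d(p, p) > 0 whenever p is not crisp, also supplies the regularization towards crisp values described above. The function names and the margin value are illustrative.

import torch

def fuzzy_hamming(p: torch.Tensor, q: torch.Tensor) -> torch.Tensor:
    # Probabilistic-XOR fuzzy Hamming distance over [0, 1]^k concept vectors:
    # d(p, q) = sum_i p_i * (1 - q_i) + (1 - p_i) * q_i.
    # On crisp 0/1 vectors this counts differing bits; on fuzzy vectors
    # d(p, p) > 0 unless p is crisp, so minimizing it also sharpens activations.
    return (p * (1 - q) + (1 - p) * q).sum(dim=-1)

def concept_triplet_loss(anchor, positive, negative, margin=1.0):
    # Anchor and positive share a task; the negative comes from a different
    # task (e.g. drawn from the replay buffer). The hinge pulls same-task
    # concept patterns together and pushes different-task patterns at least
    # `margin` (fuzzy) bits apart.
    d_pos = fuzzy_hamming(anchor, positive)
    d_neg = fuzzy_hamming(anchor, negative)
    return torch.clamp(d_pos - d_neg + margin, min=0.0).mean()

# Toy usage with an 8-concept bottleneck and a batch of 4 triplets.
a = torch.rand(4, 8)  # concept probabilities for current-task anchors
p = torch.rand(4, 8)  # positives: same task as the anchors
n = torch.rand(4, 8)  # negatives: samples from another task
print(concept_triplet_loss(a, p, n))

In this reading, one plausible role of the concept replay described in the abstract is to supply the negatives: stored concept activations from earlier tasks keep past concept patterns distinct from the ones currently being learned.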

Cite this Paper

BibTeX
@InProceedings{pmlr-v274-lorello25a,
  title = {Continual Learning for Unsupervised Concept Bottleneck Discovery},
  author = {Lorello, Luca Salvatore and Lippi, Marco and Melacci, Stefano},
  booktitle = {Proceedings of The 3rd Conference on Lifelong Learning Agents},
  pages = {597--619},
  year = {2025},
  editor = {Lomonaco, Vincenzo and Melacci, Stefano and Tuytelaars, Tinne and Chandar, Sarath and Pascanu, Razvan},
  volume = {274},
  series = {Proceedings of Machine Learning Research},
  month = {29 Jul--01 Aug},
  publisher = {PMLR},
  pdf = {https://raw.githubusercontent.com/mlresearch/v274/main/assets/lorello25a/lorello25a.pdf},
  url = {https://proceedings.mlr.press/v274/lorello25a.html},
  abstract = {In the context of continual learning, little attention is dedicated to the problem of developing a layer of “concepts”, also known as “concept bottleneck”, to support the discrimination of higher-level task information, especially when concepts are not supervised. Concept bottleneck discovery in an unsupervised setting is thus largely unexplored, and this paper aims to move a step forward in such direction. We consider a neural network that faces a stream of binary tasks, with no further information on the relationships among them, i.e., no supervisions at the level of concepts. The learning of the concept bottleneck layer is driven by means of a triplet-based criterion, which is instantiated in conjunction with a specifically designed experience replay (concept replay). Such a novel criterion exploits fuzzy Hamming distances to treat vectors of concept probabilities as fuzzy bitstrings, encouraging different concept activations across different tasks, while also adding a regularization effect which pushes probabilities towards crisp values. Despite the lack of concept supervisions, we found that continually learning the streamed tasks in a progressive manner yields the development of inner concepts that are significantly better correlated with the higher-level tasks, compared to the case of joint-offline learning. This result is showcased in an extended experimental activity involving different architectures and newly created (and shared) datasets that are also well-suited to support further investigation of continual learning in concept-based models.}
}
Endnote
%0 Conference Paper
%T Continual Learning for Unsupervised Concept Bottleneck Discovery
%A Luca Salvatore Lorello
%A Marco Lippi
%A Stefano Melacci
%B Proceedings of The 3rd Conference on Lifelong Learning Agents
%C Proceedings of Machine Learning Research
%D 2025
%E Vincenzo Lomonaco
%E Stefano Melacci
%E Tinne Tuytelaars
%E Sarath Chandar
%E Razvan Pascanu
%F pmlr-v274-lorello25a
%I PMLR
%P 597--619
%U https://proceedings.mlr.press/v274/lorello25a.html
%V 274
%X In the context of continual learning, little attention is dedicated to the problem of developing a layer of “concepts”, also known as “concept bottleneck”, to support the discrimination of higher-level task information, especially when concepts are not supervised. Concept bottleneck discovery in an unsupervised setting is thus largely unexplored, and this paper aims to move a step forward in such direction. We consider a neural network that faces a stream of binary tasks, with no further information on the relationships among them, i.e., no supervisions at the level of concepts. The learning of the concept bottleneck layer is driven by means of a triplet-based criterion, which is instantiated in conjunction with a specifically designed experience replay (concept replay). Such a novel criterion exploits fuzzy Hamming distances to treat vectors of concept probabilities as fuzzy bitstrings, encouraging different concept activations across different tasks, while also adding a regularization effect which pushes probabilities towards crisp values. Despite the lack of concept supervisions, we found that continually learning the streamed tasks in a progressive manner yields the development of inner concepts that are significantly better correlated with the higher-level tasks, compared to the case of joint-offline learning. This result is showcased in an extended experimental activity involving different architectures and newly created (and shared) datasets that are also well-suited to support further investigation of continual learning in concept-based models.
APA
Lorello, L.S., Lippi, M. & Melacci, S. (2025). Continual Learning for Unsupervised Concept Bottleneck Discovery. Proceedings of The 3rd Conference on Lifelong Learning Agents, in Proceedings of Machine Learning Research 274:597-619. Available from https://proceedings.mlr.press/v274/lorello25a.html.
