Editable Concept Bottleneck Models

Lijie Hu, Chenyang Ren, Zhengyu Hu, Hongbin Lin, Cheng-Long Wang, Zhen Tan, Weimin Lyu, Jingfeng Zhang, Hui Xiong, Di Wang
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:24678-24726, 2025.

Abstract

Concept Bottleneck Models (CBMs) have garnered much attention for their ability to elucidate the prediction process through a human-understandable concept layer. However, most previous studies have focused on cases where the data, including concepts, are clean. In many real-world scenarios, one often needs to remove training data or remove/insert concepts in a trained CBM for various reasons, such as privacy concerns, data mislabelling, spurious concepts, and concept annotation errors. Thus, the challenge of deriving efficient editable CBMs that do not require retraining from scratch persists, particularly in large-scale applications. To address these challenges, we propose Editable Concept Bottleneck Models (ECBMs). Specifically, ECBMs support three different levels of data removal: concept-label-level, concept-level, and data-level. ECBMs enjoy mathematically rigorous closed-form approximations derived from influence functions that obviate the need for retraining. Experimental results demonstrate the efficiency and effectiveness of our ECBMs, affirming their adaptability within the realm of CBMs.
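
The closed-form edits in ECBMs build on influence functions. As a rough, illustrative sketch only (this is the standard first-order influence-function update, not the paper's exact multi-level derivation), consider an empirical risk minimizer and the effect of removing one training point $z$:

\[
  \hat{\theta} \;=\; \arg\min_{\theta} \frac{1}{n} \sum_{i=1}^{n} \ell(z_i, \theta),
  \qquad
  \hat{\theta}_{-z} \;\approx\; \hat{\theta} \;+\; \frac{1}{n}\, H_{\hat{\theta}}^{-1}\, \nabla_{\theta}\, \ell(z, \hat{\theta}),
\]

where $H_{\hat{\theta}} = \frac{1}{n} \sum_{i=1}^{n} \nabla_{\theta}^{2}\, \ell(z_i, \hat{\theta})$ is the empirical Hessian at the trained parameters. ECBMs apply corrections of this flavor to the two-stage concept-then-label structure of a CBM, yielding separate closed-form updates for concept-label-level, concept-level, and data-level edits; the exact formulas are given in the paper.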

Cite this Paper

BibTeX
@InProceedings{pmlr-v267-hu25u,
  title     = {Editable Concept Bottleneck Models},
  author    = {Hu, Lijie and Ren, Chenyang and Hu, Zhengyu and Lin, Hongbin and Wang, Cheng-Long and Tan, Zhen and Lyu, Weimin and Zhang, Jingfeng and Xiong, Hui and Wang, Di},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {24678--24726},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/hu25u/hu25u.pdf},
  url       = {https://proceedings.mlr.press/v267/hu25u.html}
}
APA
Hu, L., Ren, C., Hu, Z., Lin, H., Wang, C., Tan, Z., Lyu, W., Zhang, J., Xiong, H. & Wang, D. (2025). Editable Concept Bottleneck Models. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:24678-24726. Available from https://proceedings.mlr.press/v267/hu25u.html.
