InfoCons: Identifying Interpretable Critical Concepts in Point Clouds via Information Theory

Feifei Li, Mi Zhang, Zhaoxiang Wang, Min Yang
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:36868-36885, 2025.

Abstract

Interpretability of point cloud (PC) models becomes imperative given their deployment in safety-critical scenarios such as autonomous vehicles. We focus on attributing PC model outputs to interpretable critical concepts, defined as meaningful subsets of the input point cloud. To enable human-understandable diagnostics of model failures, an ideal critical subset should be faithful (preserving points that causally influence predictions) and conceptually coherent (forming semantically meaningful structures that align with human perception). We propose InfoCons, an explanation framework that applies information-theoretic principles to decompose the point cloud into 3D concepts, enabling the examination of their causal effect on model predictions with learnable priors. We evaluate InfoCons on synthetic datasets for classification, comparing it qualitatively and quantitatively with four baselines. We further demonstrate its scalability and flexibility on two real-world datasets and in two applications that utilize critical scores of PC.

Cite this Paper
BibTeX
@InProceedings{pmlr-v267-li25dw,
  title     = {{I}nfo{C}ons: Identifying Interpretable Critical Concepts in Point Clouds via Information Theory},
  author    = {Li, Feifei and Zhang, Mi and Wang, Zhaoxiang and Yang, Min},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {36868--36885},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/li25dw/li25dw.pdf},
  url       = {https://proceedings.mlr.press/v267/li25dw.html},
  abstract  = {Interpretability of point cloud (PC) models becomes imperative given their deployment in safety-critical scenarios such as autonomous vehicles. We focus on attributing PC model outputs to interpretable critical concepts, defined as meaningful subsets of the input point cloud. To enable human-understandable diagnostics of model failures, an ideal critical subset should be faithful (preserving points that causally influence predictions) and conceptually coherent (forming semantically meaningful structures that align with human perception). We propose InfoCons, an explanation framework that applies information-theoretic principles to decompose the point cloud into 3D concepts, enabling the examination of their causal effect on model predictions with learnable priors. We evaluate InfoCons on synthetic datasets for classification, comparing it qualitatively and quantitatively with four baselines. We further demonstrate its scalability and flexibility on two real-world datasets and in two applications that utilize critical scores of PC.}
}
Endnote
%0 Conference Paper
%T InfoCons: Identifying Interpretable Critical Concepts in Point Clouds via Information Theory
%A Feifei Li
%A Mi Zhang
%A Zhaoxiang Wang
%A Min Yang
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-li25dw
%I PMLR
%P 36868--36885
%U https://proceedings.mlr.press/v267/li25dw.html
%V 267
%X Interpretability of point cloud (PC) models becomes imperative given their deployment in safety-critical scenarios such as autonomous vehicles. We focus on attributing PC model outputs to interpretable critical concepts, defined as meaningful subsets of the input point cloud. To enable human-understandable diagnostics of model failures, an ideal critical subset should be faithful (preserving points that causally influence predictions) and conceptually coherent (forming semantically meaningful structures that align with human perception). We propose InfoCons, an explanation framework that applies information-theoretic principles to decompose the point cloud into 3D concepts, enabling the examination of their causal effect on model predictions with learnable priors. We evaluate InfoCons on synthetic datasets for classification, comparing it qualitatively and quantitatively with four baselines. We further demonstrate its scalability and flexibility on two real-world datasets and in two applications that utilize critical scores of PC.
APA
Li, F., Zhang, M., Wang, Z., & Yang, M. (2025). InfoCons: Identifying Interpretable Critical Concepts in Point Clouds via Information Theory. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:36868-36885. Available from https://proceedings.mlr.press/v267/li25dw.html.