Learning Interpretable Models using Soft Integrity Constraints

Khaled Belahcène, Nataliya Sokolovska, Yann Chevaleyre, Jean-Daniel Zucker
Proceedings of The 12th Asian Conference on Machine Learning, PMLR 129:529-544, 2020.

Abstract

Integer models are of particular interest for applications where predictive models are supposed not only to be accurate but also interpretable to human experts. We introduce a novel penalty term called Facets whose primary goal is to favour integer weights. Our theoretical results illustrate the behaviour of the proposed penalty term: for small enough weights, the Facets matches the L1 penalty norm, and as the weights grow, it approaches the L2 regulariser. We provide the proximal operator associated with the proposed penalty term, so that the regularised empirical risk minimiser can be computed efficiently. We also introduce the Strongly Convex Facets, and discuss its theoretical properties. Our numerical results show that while achieving the state-of-the-art accuracy, optimisation of a loss function penalised by the proposed Facets penalty term leads to a model with a significant number of integer weights.
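The abstract does not give the closed form of the Facets penalty, but the limiting behaviour it describes (matching the L1 norm for small weights, approaching the L2 regulariser as weights grow) is shared by the well-known reverse Huber (Berhu) penalty. The sketch below illustrates that transition with Berhu as a stand-in; it is NOT the Facets penalty itself, and the threshold `c` is an assumed parameter.

```python
def berhu(w, c=1.0):
    """Reverse Huber (Berhu) penalty: equals the L1 norm for |w| <= c
    and grows quadratically (L2-like) beyond the threshold c.
    Illustrative stand-in only -- not the paper's Facets penalty."""
    a = abs(w)
    if a <= c:
        return a                          # L1 regime near zero
    return (a * a + c * c) / (2.0 * c)    # continuous quadratic continuation

# Small weights: the penalty equals |w| exactly.
print(berhu(0.3))   # 0.3
# Large weights: the quadratic (L2-like) term dominates.
print(berhu(3.0))   # (9 + 1) / 2 = 5.0
```

Note that Berhu is continuous at |w| = c (both branches equal c there), which is the same kind of smooth L1-to-L2 handover the abstract attributes to Facets.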

Cite this Paper


BibTeX
@InProceedings{pmlr-v129-belahcene20a,
  title     = {Learning Interpretable Models using Soft Integrity Constraints},
  author    = {Belahc{\`{e}}ne, Khaled and Sokolovska, Nataliya and Chevaleyre, Yann and Zucker, Jean-Daniel},
  booktitle = {Proceedings of The 12th Asian Conference on Machine Learning},
  pages     = {529--544},
  year      = {2020},
  editor    = {Pan, Sinno Jialin and Sugiyama, Masashi},
  volume    = {129},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--20 Nov},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v129/belahcene20a/belahcene20a.pdf},
  url       = {https://proceedings.mlr.press/v129/belahcene20a.html},
  abstract  = {Integer models are of particular interest for applications where predictive models are supposed not only to be accurate but also interpretable to human experts. We introduce a novel penalty term called Facets whose primary goal is to favour integer weights. Our theoretical results illustrate the behaviour of the proposed penalty term: for small enough weights, the Facets matches the L1 penalty norm, and as the weights grow, it approaches the L2 regulariser. We provide the proximal operator associated with the proposed penalty term, so that the regularised empirical risk minimiser can be computed efficiently. We also introduce the Strongly Convex Facets, and discuss its theoretical properties. Our numerical results show that while achieving the state-of-the-art accuracy, optimisation of a loss function penalised by the proposed Facets penalty term leads to a model with a significant number of integer weights.}
}
Endnote
%0 Conference Paper
%T Learning Interpretable Models using Soft Integrity Constraints
%A Khaled Belahcène
%A Nataliya Sokolovska
%A Yann Chevaleyre
%A Jean-Daniel Zucker
%B Proceedings of The 12th Asian Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Sinno Jialin Pan
%E Masashi Sugiyama
%F pmlr-v129-belahcene20a
%I PMLR
%P 529--544
%U https://proceedings.mlr.press/v129/belahcene20a.html
%V 129
%X Integer models are of particular interest for applications where predictive models are supposed not only to be accurate but also interpretable to human experts. We introduce a novel penalty term called Facets whose primary goal is to favour integer weights. Our theoretical results illustrate the behaviour of the proposed penalty term: for small enough weights, the Facets matches the L1 penalty norm, and as the weights grow, it approaches the L2 regulariser. We provide the proximal operator associated with the proposed penalty term, so that the regularised empirical risk minimiser can be computed efficiently. We also introduce the Strongly Convex Facets, and discuss its theoretical properties. Our numerical results show that while achieving the state-of-the-art accuracy, optimisation of a loss function penalised by the proposed Facets penalty term leads to a model with a significant number of integer weights.
APA
Belahcène, K., Sokolovska, N., Chevaleyre, Y. & Zucker, J.-D. (2020). Learning Interpretable Models using Soft Integrity Constraints. Proceedings of The 12th Asian Conference on Machine Learning, in Proceedings of Machine Learning Research 129:529-544. Available from https://proceedings.mlr.press/v129/belahcene20a.html.