Model-Augmented Conditional Mutual Information Estimation for Feature Selection

Alan Yang, AmirEmad Ghassami, Maxim Raginsky, Negar Kiyavash, Elyse Rosenbaum
Proceedings of the 36th Conference on Uncertainty in Artificial Intelligence (UAI), PMLR 124:1139-1148, 2020.

Abstract

Markov blanket feature selection, while theoretically optimal, is generally challenging to implement. This is due to the shortcomings of existing approaches to conditional independence (CI) testing, which tend to struggle with either the curse of dimensionality or high computational complexity. We propose a novel two-step approach that facilitates Markov blanket feature selection in high dimensions. First, neural networks are used to map features to low-dimensional representations. In the second step, CI testing is performed by applying the $k$-NN conditional mutual information estimator to the learned feature maps. The mappings are designed to ensure that mapped samples both preserve information and share similar information about the target variable if and only if they are close in Euclidean distance. We show that these properties boost the performance of the $k$-NN estimator in the second step. The performance of the proposed method is evaluated on both synthetic and real data.
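
To make the second step concrete, below is a minimal sketch of a $k$-NN conditional mutual information estimator of the kind the abstract refers to (the Frenzel-Pompe / KSG-style estimator), applied as a CI test on a toy example. It assumes the learned feature maps from step one have already produced low-dimensional arrays; the function name knn_cmi, the synthetic data, and the thresholding-by-inspection are illustrative and not the authors' code. In practice the estimate would be calibrated with, e.g., a permutation test on Y given Z.

import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma

def knn_cmi(x, y, z, k=5):
    """k-NN estimate of I(X; Y | Z) with max-norm distances.

    x, y, z: (n, d) float arrays of (low-dimensional, mapped) samples;
    z is assumed non-empty. Frenzel-Pompe / KSG-style estimator.
    """
    joint = np.hstack([x, y, z])
    # eps_i: distance from sample i to its k-th nearest neighbour in the
    # joint (x, y, z) space; k + 1 because each point is its own 0-NN.
    eps = cKDTree(joint).query(joint, k=k + 1, p=np.inf)[0][:, -1]

    def count_within(data):
        # number of *other* samples strictly inside the eps_i ball, per point
        tree = cKDTree(data)
        return np.array([len(tree.query_ball_point(pt, rad, p=np.inf)) - 1
                         for pt, rad in zip(data, eps - 1e-12)])

    n_xz = count_within(np.hstack([x, z]))
    n_yz = count_within(np.hstack([y, z]))
    n_z = count_within(z)
    return (digamma(k)
            + np.mean(digamma(n_z + 1) - digamma(n_xz + 1) - digamma(n_yz + 1)))

# Toy check: X1 drives Y, X2 is noise, so I(X1; Y | X2) should be clearly
# positive while I(X2; Y | X1) should be near zero.
rng = np.random.default_rng(0)
n = 2000
x1 = rng.normal(size=(n, 1))
x2 = rng.normal(size=(n, 1))
y = x1 + 0.1 * rng.normal(size=(n, 1))
print(knn_cmi(x1, y, x2))  # large: X1 informs Y given X2
print(knn_cmi(x2, y, x1))  # approximately 0: X2 independent of Y given X1

Running the CI test directly on raw high-dimensional features is exactly where this estimator degrades; the paper's contribution is to first learn maps whose outputs preserve information and are close in Euclidean distance if and only if they carry similar information about the target, which is what makes the $k$-NN step reliable.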

Cite this Paper


BibTeX
@InProceedings{pmlr-v124-yang20b,
  title     = {Model-Augmented Conditional Mutual Information Estimation for Feature Selection},
  author    = {Yang, Alan and Ghassami, AmirEmad and Raginsky, Maxim and Kiyavash, Negar and Rosenbaum, Elyse},
  booktitle = {Proceedings of the 36th Conference on Uncertainty in Artificial Intelligence (UAI)},
  pages     = {1139--1148},
  year      = {2020},
  editor    = {Peters, Jonas and Sontag, David},
  volume    = {124},
  series    = {Proceedings of Machine Learning Research},
  month     = {03--06 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v124/yang20b/yang20b.pdf},
  url       = {https://proceedings.mlr.press/v124/yang20b.html}
}
Endnote
%0 Conference Paper
%T Model-Augmented Conditional Mutual Information Estimation for Feature Selection
%A Alan Yang
%A AmirEmad Ghassami
%A Maxim Raginsky
%A Negar Kiyavash
%A Elyse Rosenbaum
%B Proceedings of the 36th Conference on Uncertainty in Artificial Intelligence (UAI)
%C Proceedings of Machine Learning Research
%D 2020
%E Jonas Peters
%E David Sontag
%F pmlr-v124-yang20b
%I PMLR
%P 1139--1148
%U https://proceedings.mlr.press/v124/yang20b.html
%V 124
APA
Yang, A., Ghassami, A., Raginsky, M., Kiyavash, N. & Rosenbaum, E. (2020). Model-Augmented Conditional Mutual Information Estimation for Feature Selection. Proceedings of the 36th Conference on Uncertainty in Artificial Intelligence (UAI), in Proceedings of Machine Learning Research 124:1139-1148. Available from https://proceedings.mlr.press/v124/yang20b.html.