C-MI-GAN : Estimation of Conditional Mutual Information using MinMax formulation

Arnab Mondal, Arnab Bhattacharjee, Sudipto Mukherjee, Himanshu Asnani, Sreeram Kannan, Prathosh A P
Proceedings of the 36th Conference on Uncertainty in Artificial Intelligence (UAI), PMLR 124:849-858, 2020.

Abstract

Estimation of information theoretic quantities such as mutual information and its conditional variant has drawn interest in recent times owing to their multifaceted applications. Newly proposed neural estimators for these quantities have overcome severe drawbacks of classical $k$NN-based estimators in high dimensions. In this work, we focus on conditional mutual information (CMI) estimation by utilizing its formulation as a \textit{minmax} optimization problem. Such a formulation leads to a joint training procedure similar to that of generative adversarial networks. We find that our proposed estimator provides better estimates than the existing approaches on a variety of simulated datasets comprising linear and non-linear relations between variables. As an application of CMI estimation, we deploy our estimator for conditional independence (CI) testing on real data and obtain better results than state-of-the-art CI testers.
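The minmax formulation mentioned in the abstract builds on variational lower bounds on mutual information, such as the Donsker-Varadhan representation, in which a critic network is trained to maximize the bound. As a minimal illustrative sketch (this is not the paper's C-MI-GAN procedure, which additionally trains a GAN-style generator to model the conditional marginal for CMI), the snippet below evaluates the Donsker-Varadhan bound for a correlated Gaussian pair, where the optimal critic happens to be known in closed form:

```python
import numpy as np

rng = np.random.default_rng(0)

# Jointly Gaussian (X, Y) with correlation rho; true MI = -0.5 * log(1 - rho^2).
rho = 0.8
n = 20_000
x = rng.standard_normal(n)
y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)
true_mi = -0.5 * np.log(1 - rho**2)

# Donsker-Varadhan bound: I(X;Y) >= E_joint[T] - log E_marginal[exp(T)].
# For Gaussians the optimal critic T is the quadratic log-density ratio below;
# a neural estimator would instead learn T by gradient ascent on this bound.
a = rho / (1 - rho**2)
b = -rho**2 / (2 * (1 - rho**2))

def critic(u, v):
    return a * u * v + b * (u**2 + v**2)

y_shuffled = rng.permutation(y)  # shuffle to break dependence, mimicking p(x)p(y)
est = critic(x, y).mean() - np.log(np.exp(critic(x, y_shuffled)).mean())

print(f"true MI : {true_mi:.3f}")
print(f"DV bound: {est:.3f}")
```

In the conditional setting treated by the paper, simple shuffling no longer yields samples from the required reference distribution, which is (as the abstract indicates) where the adversarially trained generator and the joint, GAN-like training procedure come in.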

Cite this Paper


BibTeX
@InProceedings{pmlr-v124-mondal20b,
  title     = {C-MI-GAN : Estimation of Conditional Mutual Information using MinMax formulation},
  author    = {Mondal, Arnab and Bhattacharjee, Arnab and Mukherjee, Sudipto and Asnani, Himanshu and Kannan, Sreeram and A P, Prathosh},
  booktitle = {Proceedings of the 36th Conference on Uncertainty in Artificial Intelligence (UAI)},
  pages     = {849--858},
  year      = {2020},
  editor    = {Jonas Peters and David Sontag},
  volume    = {124},
  series    = {Proceedings of Machine Learning Research},
  month     = {03--06 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v124/mondal20b/mondal20b.pdf},
  url       = {http://proceedings.mlr.press/v124/mondal20b.html},
  abstract  = {Estimation of information theoretic quantities such as mutual information and its conditional variant has drawn interest in recent times owing to their multifaceted applications. Newly proposed neural estimators for these quantities have overcome severe drawbacks of classical $k$NN-based estimators in high dimensions. In this work, we focus on conditional mutual information (CMI) estimation by utilizing its formulation as a \textit{minmax} optimization problem. Such a formulation leads to a joint training procedure similar to that of generative adversarial networks. We find that our proposed estimator provides better estimates than the existing approaches on a variety of simulated datasets comprising linear and non-linear relations between variables. As an application of CMI estimation, we deploy our estimator for conditional independence (CI) testing on real data and obtain better results than state-of-the-art CI testers.}
}
Endnote
%0 Conference Paper
%T C-MI-GAN : Estimation of Conditional Mutual Information using MinMax formulation
%A Arnab Mondal
%A Arnab Bhattacharjee
%A Sudipto Mukherjee
%A Himanshu Asnani
%A Sreeram Kannan
%A Prathosh A P
%B Proceedings of the 36th Conference on Uncertainty in Artificial Intelligence (UAI)
%C Proceedings of Machine Learning Research
%D 2020
%E Jonas Peters
%E David Sontag
%F pmlr-v124-mondal20b
%I PMLR
%P 849--858
%U http://proceedings.mlr.press/v124/mondal20b.html
%V 124
%X Estimation of information theoretic quantities such as mutual information and its conditional variant has drawn interest in recent times owing to their multifaceted applications. Newly proposed neural estimators for these quantities have overcome severe drawbacks of classical $k$NN-based estimators in high dimensions. In this work, we focus on conditional mutual information (CMI) estimation by utilizing its formulation as a \textit{minmax} optimization problem. Such a formulation leads to a joint training procedure similar to that of generative adversarial networks. We find that our proposed estimator provides better estimates than the existing approaches on a variety of simulated datasets comprising linear and non-linear relations between variables. As an application of CMI estimation, we deploy our estimator for conditional independence (CI) testing on real data and obtain better results than state-of-the-art CI testers.
APA
Mondal, A., Bhattacharjee, A., Mukherjee, S., Asnani, H., Kannan, S. & A P, P. (2020). C-MI-GAN : Estimation of Conditional Mutual Information using MinMax formulation. Proceedings of the 36th Conference on Uncertainty in Artificial Intelligence (UAI), in Proceedings of Machine Learning Research 124:849-858. Available from http://proceedings.mlr.press/v124/mondal20b.html.

Related Material