On Greedy Maximization of Entropy

Dravyansh Sharma, Ashish Kapoor, Amit Deshpande
Proceedings of the 32nd International Conference on Machine Learning, PMLR 37:1330-1338, 2015.

Abstract

Submodular function maximization is one of the key problems that arise in many machine learning tasks. Greedy selection algorithms are the proven choice for such problems, where prior theoretical work guarantees a (1 - 1/e) approximation ratio. However, it has been empirically observed that greedy selection provides nearly optimal solutions in practice. The main goal of this paper is to explore and explain why greedy selection does significantly better than the theoretical guarantee of (1 - 1/e). Applications include, but are not limited to, sensor selection tasks which use both entropy and mutual information as maximization criteria. We give a theoretical justification for the nearly optimal approximation ratio via a detailed analysis of the curvature of these objective functions for Gaussian RBF kernels.
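The setting the abstract describes can be sketched in a few lines: given a Gaussian RBF kernel over a set of candidate sensor locations, greedily pick the subset whose induced Gaussian has maximum differential entropy (a monotone submodular objective). This is a minimal illustration of that setup, not code from the paper; the function names and parameters are our own.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    """Pairwise Gaussian RBF kernel: K[i, j] = exp(-gamma * ||x_i - x_j||^2)."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

def gaussian_entropy(K, S):
    """Differential entropy of the Gaussian marginal indexed by S:
    H(S) = 0.5 * log((2*pi*e)^|S| * det K_S)."""
    sub = K[np.ix_(S, S)]
    _, logdet = np.linalg.slogdet(sub)  # numerically stable log-determinant
    return 0.5 * (len(S) * np.log(2.0 * np.pi * np.e) + logdet)

def greedy_select(K, k):
    """Pick k indices greedily, each step adding the element with the
    largest marginal entropy gain."""
    S = []
    n = K.shape[0]
    for _ in range(k):
        # Evaluate the objective with each remaining candidate appended.
        gains = [(gaussian_entropy(K, S + [i]), i) for i in range(n) if i not in S]
        _, best = max(gains)
        S.append(best)
    return S
```

For monotone submodular objectives such as this one, the classical analysis guarantees the greedy value is at least (1 - 1/e) of the optimum; the paper's curvature analysis is what explains why greedy typically lands much closer to optimal than that bound suggests.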

Cite this Paper


BibTeX
@InProceedings{pmlr-v37-sharma15,
  title     = {On Greedy Maximization of Entropy},
  author    = {Sharma, Dravyansh and Kapoor, Ashish and Deshpande, Amit},
  booktitle = {Proceedings of the 32nd International Conference on Machine Learning},
  pages     = {1330--1338},
  year      = {2015},
  editor    = {Bach, Francis and Blei, David},
  volume    = {37},
  series    = {Proceedings of Machine Learning Research},
  address   = {Lille, France},
  month     = {07--09 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v37/sharma15.pdf},
  url       = {https://proceedings.mlr.press/v37/sharma15.html},
  abstract  = {Submodular function maximization is one of the key problems that arise in many machine learning tasks. Greedy selection algorithms are the proven choice for such problems, where prior theoretical work guarantees a (1 - 1/e) approximation ratio. However, it has been empirically observed that greedy selection provides nearly optimal solutions in practice. The main goal of this paper is to explore and explain why greedy selection does significantly better than the theoretical guarantee of (1 - 1/e). Applications include, but are not limited to, sensor selection tasks which use both entropy and mutual information as maximization criteria. We give a theoretical justification for the nearly optimal approximation ratio via a detailed analysis of the curvature of these objective functions for Gaussian RBF kernels.}
}
Endnote
%0 Conference Paper
%T On Greedy Maximization of Entropy
%A Dravyansh Sharma
%A Ashish Kapoor
%A Amit Deshpande
%B Proceedings of the 32nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2015
%E Francis Bach
%E David Blei
%F pmlr-v37-sharma15
%I PMLR
%P 1330--1338
%U https://proceedings.mlr.press/v37/sharma15.html
%V 37
%X Submodular function maximization is one of the key problems that arise in many machine learning tasks. Greedy selection algorithms are the proven choice for such problems, where prior theoretical work guarantees a (1 - 1/e) approximation ratio. However, it has been empirically observed that greedy selection provides nearly optimal solutions in practice. The main goal of this paper is to explore and explain why greedy selection does significantly better than the theoretical guarantee of (1 - 1/e). Applications include, but are not limited to, sensor selection tasks which use both entropy and mutual information as maximization criteria. We give a theoretical justification for the nearly optimal approximation ratio via a detailed analysis of the curvature of these objective functions for Gaussian RBF kernels.
RIS
TY - CPAPER
TI - On Greedy Maximization of Entropy
AU - Dravyansh Sharma
AU - Ashish Kapoor
AU - Amit Deshpande
BT - Proceedings of the 32nd International Conference on Machine Learning
DA - 2015/06/01
ED - Francis Bach
ED - David Blei
ID - pmlr-v37-sharma15
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 37
SP - 1330
EP - 1338
L1 - http://proceedings.mlr.press/v37/sharma15.pdf
UR - https://proceedings.mlr.press/v37/sharma15.html
AB - Submodular function maximization is one of the key problems that arise in many machine learning tasks. Greedy selection algorithms are the proven choice for such problems, where prior theoretical work guarantees a (1 - 1/e) approximation ratio. However, it has been empirically observed that greedy selection provides nearly optimal solutions in practice. The main goal of this paper is to explore and explain why greedy selection does significantly better than the theoretical guarantee of (1 - 1/e). Applications include, but are not limited to, sensor selection tasks which use both entropy and mutual information as maximization criteria. We give a theoretical justification for the nearly optimal approximation ratio via a detailed analysis of the curvature of these objective functions for Gaussian RBF kernels.
ER -
APA
Sharma, D., Kapoor, A., & Deshpande, A. (2015). On Greedy Maximization of Entropy. Proceedings of the 32nd International Conference on Machine Learning, in Proceedings of Machine Learning Research, 37:1330-1338. Available from https://proceedings.mlr.press/v37/sharma15.html.