Barron and Cover’s Theory in Supervised Learning and its Application to Lasso

Masanori Kawakita, Jun’ichi Takeuchi
Proceedings of The 33rd International Conference on Machine Learning, PMLR 48:1958-1966, 2016.

Abstract

We study Barron and Cover’s theory (BC theory) in supervised learning. The original BC theory applies to supervised learning only approximately and under restrictive conditions. Although Barron (2008) and Chatterjee and Barron (2014) succeeded in removing the approximation, their idea cannot, in essence, be applied to supervised learning in general. By resolving this issue, we propose an extension of BC theory to supervised learning. The extended theory inherits several advantages from the original BC theory. First, it holds for any finite sample size n. Second, it requires remarkably few assumptions. Third, it gives a justification of the MDL principle in supervised learning. As an application, we also derive new risk and regret bounds for lasso with random design. In contrast to past work, the derived risk bound holds for any finite n without requiring boundedness of the features. The behavior of the regret bound is investigated through numerical simulations. We believe that this is the first extension of BC theory to general supervised learning without approximation.
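For context, the original BC theory in density estimation bounds the risk of a two-part MDL estimator by its index of resolvability. Stated loosely and up to constants (this is the classical background result the paper extends, not the paper’s new supervised-learning bound): if \hat{p}_n minimizes the two-part codelength -\log q(x^n) + L(q) over a countable model list with codelengths L satisfying Kraft’s inequality, then

    E[ d_H^2(p^*, \hat{p}_n) ] \le C \, \min_q \{ D(p^* \| q) + L(q)/n \},

where d_H is the Hellinger distance and D(\cdot \| \cdot) is the Kullback–Leibler divergence. The paper’s contribution is a guarantee of this kind that holds exactly for conditional models p(y|x), to which the original argument applies only approximately.

As a purely illustrative sketch of the lasso-with-random-design setting the abstract describes (not the paper’s experiment; the data-generating process, penalty choice, and regret proxy below are our assumptions), one can fit lasso on an unbounded Gaussian design and compare its in-sample fit against the best linear fit in hindsight:

    # Illustrative sketch only: Gaussian random design, sparse truth,
    # and a crude empirical regret proxy (lasso RSS minus OLS RSS).
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    n, d = 100, 20                     # sample size and feature dimension
    X = rng.standard_normal((n, d))    # unbounded (Gaussian) features
    theta_true = np.zeros(d)
    theta_true[:3] = [2.0, -1.5, 0.5]  # sparse ground truth (assumed)
    y = X @ theta_true + 0.5 * rng.standard_normal(n)

    lam = np.sqrt(np.log(d) / n)       # a common lasso-rate penalty (our choice)
    model = Lasso(alpha=lam).fit(X, y)

    rss_lasso = np.sum((y - model.predict(X)) ** 2)
    theta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)  # best-in-hindsight comparator
    rss_ols = np.sum((y - X @ theta_ols) ** 2)
    print(f"empirical regret proxy: {rss_lasso - rss_ols:.3f}")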

Cite this Paper


BibTeX
@InProceedings{pmlr-v48-kawakita16,
  title     = {Barron and Cover's Theory in Supervised Learning and its Application to Lasso},
  author    = {Kawakita, Masanori and Takeuchi, Jun’ichi},
  booktitle = {Proceedings of The 33rd International Conference on Machine Learning},
  pages     = {1958--1966},
  year      = {2016},
  editor    = {Balcan, Maria Florina and Weinberger, Kilian Q.},
  volume    = {48},
  series    = {Proceedings of Machine Learning Research},
  address   = {New York, New York, USA},
  month     = {20--22 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v48/kawakita16.pdf},
  url       = {https://proceedings.mlr.press/v48/kawakita16.html}
}
APA
Kawakita, M. & Takeuchi, J. (2016). Barron and Cover’s Theory in Supervised Learning and its Application to Lasso. Proceedings of The 33rd International Conference on Machine Learning, in Proceedings of Machine Learning Research 48:1958-1966. Available from https://proceedings.mlr.press/v48/kawakita16.html.

Related Material

Download PDF: http://proceedings.mlr.press/v48/kawakita16.pdf