Learning with Non-Convex Truncated Losses by SGD

Yi Xu, Shenghuo Zhu, Sen Yang, Chi Zhang, Rong Jin, Tianbao Yang
Proceedings of The 35th Uncertainty in Artificial Intelligence Conference, PMLR 115:701-711, 2020.

Abstract

Learning with a convex loss function has been the dominant paradigm for many years. It remains an interesting question how non-convex loss functions can improve the generalization of learning while remaining broadly applicable. In this paper, we study a family of objective functions formed by truncating traditional loss functions, which is applicable to both shallow learning and deep learning. Truncating the loss function can make learning less vulnerable and more robust to large, potentially adversarial noise in the observations. More importantly, it is a generic technique that does not assume knowledge of the noise distribution. To justify non-convex learning with truncated losses, we establish excess risk bounds for empirical risk minimization based on truncated losses with heavy-tailed output, as well as the statistical error of an approximate stationary point found by the stochastic gradient descent (SGD) method. Our experiments on shallow and deep learning for regression with outliers, corrupted data, and heavy-tailed noise further justify the proposed method.
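To make the setup concrete, below is a minimal sketch (not the paper's exact construction) of regression with a truncated squared loss trained by plain mini-batch SGD. The log-type truncation alpha * log(1 + r^2 / alpha) is one illustrative choice of truncation; it grows only logarithmically in the residual, so large (possibly corrupted) outputs contribute bounded gradients. All function and parameter names (truncated_sq_loss, alpha, lr, etc.) are assumptions made for this sketch, not identifiers from the paper.

import numpy as np

def truncated_sq_loss(w, X, y, alpha=1.0):
    # Truncated squared loss: alpha * log(1 + r^2 / alpha), averaged over the batch.
    # Outliers contribute only logarithmically, unlike the plain squared loss.
    r = X @ w - y
    return np.mean(alpha * np.log1p(r ** 2 / alpha))

def truncated_sq_grad(w, X, y, alpha=1.0):
    # Gradient w.r.t. w: d/dr [alpha * log(1 + r^2/alpha)] = 2r / (1 + r^2/alpha),
    # chained with dr/dw = X and averaged over the batch.
    r = X @ w - y
    weights = 2.0 * r / (1.0 + r ** 2 / alpha)
    return X.T @ weights / len(y)

def sgd(X, y, alpha=1.0, lr=0.05, epochs=50, batch=32, seed=0):
    # Plain mini-batch SGD on the (non-convex) truncated objective.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        idx = rng.permutation(n)
        for start in range(0, n, batch):
            b = idx[start:start + batch]
            w -= lr * truncated_sq_grad(w, X[b], y[b], alpha)
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d = 1000, 5
    X = rng.normal(size=(n, d))
    w_true = rng.normal(size=d)
    y = X @ w_true + 0.1 * rng.normal(size=n)
    # Corrupt 10% of the outputs with large outliers.
    out = rng.choice(n, size=n // 10, replace=False)
    y[out] += rng.normal(scale=50.0, size=len(out))
    w_hat = sgd(X, y)
    print("parameter error:", np.linalg.norm(w_hat - w_true))

Because the truncated objective is non-convex, SGD is only guaranteed to reach an approximate stationary point; the paper's analysis bounds the statistical error of such a point rather than of a global minimizer.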

Cite this Paper


BibTeX
@InProceedings{pmlr-v115-xu20b,
  title     = {Learning with Non-Convex Truncated Losses by SGD},
  author    = {Xu, Yi and Zhu, Shenghuo and Yang, Sen and Zhang, Chi and Jin, Rong and Yang, Tianbao},
  booktitle = {Proceedings of The 35th Uncertainty in Artificial Intelligence Conference},
  pages     = {701--711},
  year      = {2020},
  editor    = {Adams, Ryan P. and Gogate, Vibhav},
  volume    = {115},
  series    = {Proceedings of Machine Learning Research},
  month     = {22--25 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v115/xu20b/xu20b.pdf},
  url       = {https://proceedings.mlr.press/v115/xu20b.html},
  abstract  = {Learning with a convex loss function has been a dominating paradigm for many years. It remains an interesting question how non-convex loss functions help improve the generalization of learning with broad applicability. In this paper, we study a family of objective functions formed by truncating traditional loss functions, which is applicable to both shallow learning and deep learning. Truncating loss functions has potential to be less vulnerable and more robust to large noise in observations that could be adversarial. More importantly, it is a generic technique without assuming the knowledge of noise distribution. To justify non-convex learning with truncated losses, we establish excess risk bounds of empirical risk minimization based on truncated losses for heavy-tailed output, and statistical error of an approximate stationary point found by stochastic gradient descent (SGD) method. Our experiments for shallow and deep learning for regression with outliers, corrupted data and heavy-tailed noise further justify the proposed method.}
}
Endnote
%0 Conference Paper
%T Learning with Non-Convex Truncated Losses by SGD
%A Yi Xu
%A Shenghuo Zhu
%A Sen Yang
%A Chi Zhang
%A Rong Jin
%A Tianbao Yang
%B Proceedings of The 35th Uncertainty in Artificial Intelligence Conference
%C Proceedings of Machine Learning Research
%D 2020
%E Ryan P. Adams
%E Vibhav Gogate
%F pmlr-v115-xu20b
%I PMLR
%P 701--711
%U https://proceedings.mlr.press/v115/xu20b.html
%V 115
%X Learning with a convex loss function has been a dominating paradigm for many years. It remains an interesting question how non-convex loss functions help improve the generalization of learning with broad applicability. In this paper, we study a family of objective functions formed by truncating traditional loss functions, which is applicable to both shallow learning and deep learning. Truncating loss functions has potential to be less vulnerable and more robust to large noise in observations that could be adversarial. More importantly, it is a generic technique without assuming the knowledge of noise distribution. To justify non-convex learning with truncated losses, we establish excess risk bounds of empirical risk minimization based on truncated losses for heavy-tailed output, and statistical error of an approximate stationary point found by stochastic gradient descent (SGD) method. Our experiments for shallow and deep learning for regression with outliers, corrupted data and heavy-tailed noise further justify the proposed method.
APA
Xu, Y., Zhu, S., Yang, S., Zhang, C., Jin, R. & Yang, T. (2020). Learning with Non-Convex Truncated Losses by SGD. Proceedings of The 35th Uncertainty in Artificial Intelligence Conference, in Proceedings of Machine Learning Research 115:701-711. Available from https://proceedings.mlr.press/v115/xu20b.html.
