Estimating the Error of Randomized Newton Methods: A Bootstrap Approach

Jessie X.T. Chen, Miles Lopes
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:1649-1659, 2020.

Abstract

Randomized Newton methods have recently become the focus of intense research activity in large-scale and distributed optimization. In general, these methods are based on a “computation-accuracy trade-off”, which allows the user to gain scalability in exchange for error in the solution. However, the user does not know how much error is created by the randomized approximation, which can be detrimental in two ways: On one hand, the user may try to assess the unknown error with theoretical worst-case error bounds, but this approach is impractical when the bounds involve unknown constants, and it often leads to excessive computation. On the other hand, the user may select the “sketch size” and stopping criteria in a heuristic manner, but this can lead to unreliable results. Motivated by these difficulties, we show how bootstrapping can be used to directly estimate the unknown error, which prevents excessive computation, and offers more confidence about the quality of a randomized solution.
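The bootstrap idea described above can be illustrated with a minimal NumPy sketch. This is not the paper's algorithm: the row-subsampling sketch, the least-squares objective, the max-norm error metric, and all sizes below are illustrative assumptions. The key point it demonstrates is that resampling the sketched rows yields fluctuations around the randomized solution that serve as a computable proxy for the unknown error relative to the exact solution.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 5000, 20, 200  # data size, dimension, sketch size (all illustrative)

A = rng.standard_normal((n, d))
b = rng.standard_normal(n)

# Sketch-and-solve: subsample m rows uniformly with replacement
# (a simple row-sampling sketch, rescaled to keep expectations correct).
idx = rng.integers(0, n, size=m)
As, bs = A[idx] * np.sqrt(n / m), b[idx] * np.sqrt(n / m)
x_sketch, *_ = np.linalg.lstsq(As, bs, rcond=None)

# Bootstrap: resample the *sketched* rows with replacement, re-solve,
# and use the fluctuations x_boot - x_sketch as a proxy for the
# unknown error x_sketch - x_exact.
B = 100
errs = np.empty(B)
for t in range(B):
    boot = rng.integers(0, m, size=m)
    x_boot, *_ = np.linalg.lstsq(As[boot], bs[boot], rcond=None)
    errs[t] = np.max(np.abs(x_boot - x_sketch))

# 95% bootstrap estimate of the max-norm error
q95 = np.quantile(errs, 0.95)

# Compare with the actual error, which is unknown in practice
x_exact, *_ = np.linalg.lstsq(A, b, rcond=None)
true_err = np.max(np.abs(x_sketch - x_exact))
print(f"bootstrap 95% error estimate: {q95:.4f}, actual error: {true_err:.4f}")
```

Because the bootstrap reuses only the m sketched rows, its cost is independent of n, which is what makes this kind of error estimate cheap enough to guide the choice of sketch size or a stopping criterion.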

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-chen20o,
  title     = {Estimating the Error of Randomized {N}ewton Methods: A Bootstrap Approach},
  author    = {Chen, Jessie X.T. and Lopes, Miles},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {1649--1659},
  year      = {2020},
  editor    = {Daumé III, Hal and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/chen20o/chen20o.pdf},
  url       = {http://proceedings.mlr.press/v119/chen20o.html},
  abstract  = {Randomized Newton methods have recently become the focus of intense research activity in large-scale and distributed optimization. In general, these methods are based on a “computation-accuracy trade-off”, which allows the user to gain scalability in exchange for error in the solution. However, the user does not know how much error is created by the randomized approximation, which can be detrimental in two ways: On one hand, the user may try to assess the unknown error with theoretical worst-case error bounds, but this approach is impractical when the bounds involve unknown constants, and it often leads to excessive computation. On the other hand, the user may select the “sketch size” and stopping criteria in a heuristic manner, but this can lead to unreliable results. Motivated by these difficulties, we show how bootstrapping can be used to directly estimate the unknown error, which prevents excessive computation, and offers more confidence about the quality of a randomized solution.}
}
Endnote
%0 Conference Paper
%T Estimating the Error of Randomized Newton Methods: A Bootstrap Approach
%A Jessie X.T. Chen
%A Miles Lopes
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-chen20o
%I PMLR
%P 1649--1659
%U http://proceedings.mlr.press/v119/chen20o.html
%V 119
%X Randomized Newton methods have recently become the focus of intense research activity in large-scale and distributed optimization. In general, these methods are based on a “computation-accuracy trade-off”, which allows the user to gain scalability in exchange for error in the solution. However, the user does not know how much error is created by the randomized approximation, which can be detrimental in two ways: On one hand, the user may try to assess the unknown error with theoretical worst-case error bounds, but this approach is impractical when the bounds involve unknown constants, and it often leads to excessive computation. On the other hand, the user may select the “sketch size” and stopping criteria in a heuristic manner, but this can lead to unreliable results. Motivated by these difficulties, we show how bootstrapping can be used to directly estimate the unknown error, which prevents excessive computation, and offers more confidence about the quality of a randomized solution.
APA
Chen, J.X.T. & Lopes, M. (2020). Estimating the Error of Randomized Newton Methods: A Bootstrap Approach. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:1649-1659. Available from http://proceedings.mlr.press/v119/chen20o.html.