Orthogonalized SGD and Nested Architectures for Anytime Neural Networks

Chengcheng Wan, Henry Hoffmann, Shan Lu, Michael Maire
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:9807-9817, 2020.

Abstract

We propose a novel variant of SGD customized for training network architectures that support anytime behavior: such networks produce a series of increasingly accurate outputs over time. Efficient architectural designs for these networks focus on re-using internal state; subnetworks must produce representations relevant for both immediate prediction as well as refinement by subsequent network stages. We consider traditional branched networks as well as a new class of recursively nested networks. Our new optimizer, Orthogonalized SGD, dynamically re-balances task-specific gradients when training a multitask network. In the context of anytime architectures, this optimizer projects gradients from later outputs onto a parameter subspace that does not interfere with those from earlier outputs. Experiments demonstrate that training with Orthogonalized SGD significantly improves generalization accuracy of anytime networks.
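
The abstract's central idea, projecting the gradients of later outputs so they do not interfere with those of earlier outputs, can be illustrated with a short PyTorch-style sketch. This is a hypothetical illustration only, assuming per-exit losses ordered from earliest to latest output and a Gram-Schmidt style projection; it is not the authors' reference implementation, whose exact projection and re-balancing details are given in the paper.

import torch

def orthogonalized_gradient_step(params, losses, lr=0.1):
    """Illustrative single update: project the gradient of each later loss
    onto the subspace orthogonal to the gradients of earlier losses, then
    take a plain SGD step with the combined gradient. Hypothetical sketch,
    not the paper's reference implementation."""
    # Per-output gradients, flattened into single vectors.
    flat_grads = []
    for loss in losses:  # losses ordered from earliest to latest output
        grads = torch.autograd.grad(loss, params, retain_graph=True,
                                    allow_unused=True)
        flat = torch.cat([(g if g is not None else torch.zeros_like(p)).reshape(-1)
                          for g, p in zip(grads, params)])
        flat_grads.append(flat)

    # Gram-Schmidt style projection: strip from each later gradient its
    # components along the (orthogonalized) gradients of earlier outputs.
    basis = []
    combined = torch.zeros_like(flat_grads[0])
    for g in flat_grads:
        g_proj = g.clone()
        for b in basis:
            g_proj -= (g_proj @ b) / (b @ b).clamp_min(1e-12) * b
        basis.append(g_proj)
        combined += g_proj

    # SGD step with the combined, orthogonalized gradient.
    offset = 0
    with torch.no_grad():
        for p in params:
            n = p.numel()
            p -= lr * combined[offset:offset + n].view_as(p)
            offset += n

Here, losses would be the per-exit losses of an anytime network evaluated on the same mini-batch, so earlier exits keep priority while later exits only update along directions that do not disturb them.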

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-wan20a,
  title     = {Orthogonalized {SGD} and Nested Architectures for Anytime Neural Networks},
  author    = {Wan, Chengcheng and Hoffmann, Henry and Lu, Shan and Maire, Michael},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {9807--9817},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/wan20a/wan20a.pdf},
  url       = {https://proceedings.mlr.press/v119/wan20a.html},
  abstract  = {We propose a novel variant of SGD customized for training network architectures that support anytime behavior: such networks produce a series of increasingly accurate outputs over time. Efficient architectural designs for these networks focus on re-using internal state; subnetworks must produce representations relevant for both immediate prediction as well as refinement by subsequent network stages. We consider traditional branched networks as well as a new class of recursively nested networks. Our new optimizer, Orthogonalized SGD, dynamically re-balances task-specific gradients when training a multitask network. In the context of anytime architectures, this optimizer projects gradients from later outputs onto a parameter subspace that does not interfere with those from earlier outputs. Experiments demonstrate that training with Orthogonalized SGD significantly improves generalization accuracy of anytime networks.}
}
Endnote
%0 Conference Paper
%T Orthogonalized SGD and Nested Architectures for Anytime Neural Networks
%A Chengcheng Wan
%A Henry Hoffmann
%A Shan Lu
%A Michael Maire
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-wan20a
%I PMLR
%P 9807--9817
%U https://proceedings.mlr.press/v119/wan20a.html
%V 119
%X We propose a novel variant of SGD customized for training network architectures that support anytime behavior: such networks produce a series of increasingly accurate outputs over time. Efficient architectural designs for these networks focus on re-using internal state; subnetworks must produce representations relevant for both immediate prediction as well as refinement by subsequent network stages. We consider traditional branched networks as well as a new class of recursively nested networks. Our new optimizer, Orthogonalized SGD, dynamically re-balances task-specific gradients when training a multitask network. In the context of anytime architectures, this optimizer projects gradients from later outputs onto a parameter subspace that does not interfere with those from earlier outputs. Experiments demonstrate that training with Orthogonalized SGD significantly improves generalization accuracy of anytime networks.
APA
Wan, C., Hoffmann, H., Lu, S. & Maire, M. (2020). Orthogonalized SGD and Nested Architectures for Anytime Neural Networks. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:9807-9817. Available from https://proceedings.mlr.press/v119/wan20a.html.
