Robust error bounds for quantised and pruned neural networks

Jiaqi Li, Ross Drummond, Stephen R. Duncan
Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:361-372, 2021.

Abstract

A new focus in machine learning is concerned with understanding the issues faced when implementing neural networks on low-cost and memory-limited hardware, for example smartphones. This approach falls under the umbrella of “decentralised” learning and, compared to the “centralised” case where data is collected and acted upon by a large server held offline, offers greater privacy protection and a faster reaction speed to incoming data. However, when neural networks are implemented on limited hardware there are no guarantees that their outputs will not be significantly corrupted. This problem is addressed in this paper, where a semi-definite program is introduced to robustly bound the error induced by implementing neural networks on limited hardware. The method can be applied to generic neural networks and is able to account for the many nonlinearities of the problem. It is hoped that the computed bounds will give certainty to software/control/ML engineers implementing these algorithms efficiently on limited hardware.
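For a concrete feel of the implementation error being bounded, below is a minimal Python sketch (not the paper's method): it compares a small ReLU network's outputs before and after uniform weight quantisation by random sampling. The network sizes, quantisation step and input set are illustrative assumptions; the paper's contribution is a certified worst-case bound computed via a semi-definite program, rather than this kind of empirical estimate.

import numpy as np

rng = np.random.default_rng(0)

# A small two-layer ReLU network with random full-precision weights.
W1, b1 = rng.standard_normal((16, 4)), rng.standard_normal(16)
W2, b2 = rng.standard_normal((2, 16)), rng.standard_normal(2)

def forward(x, W1, b1, W2, b2):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

def quantise(W, step=0.1):
    # Uniform (fixed-point style) quantisation of the weights.
    return np.round(W / step) * step

W1q, W2q = quantise(W1), quantise(W2)

# Empirically sample the output deviation over a bounded input set.
# The paper instead certifies a worst-case bound over all such inputs
# via a semi-definite program.
errs = []
for _ in range(1000):
    x = rng.uniform(-1.0, 1.0, size=4)
    errs.append(np.linalg.norm(forward(x, W1, b1, W2, b2)
                               - forward(x, W1q, b1, W2q, b2)))
print(f"max sampled output error: {max(errs):.4f}")

Sampling like this only gives a lower estimate of the worst-case error; a certified upper bound of the kind the paper computes requires reasoning over the whole input set at once.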

Cite this Paper


BibTeX
@InProceedings{pmlr-v144-li21a,
  title = {Robust error bounds for quantised and pruned neural networks},
  author = {Li, Jiaqi and Drummond, Ross and Duncan, Stephen R.},
  booktitle = {Proceedings of the 3rd Conference on Learning for Dynamics and Control},
  pages = {361--372},
  year = {2021},
  editor = {Jadbabaie, Ali and Lygeros, John and Pappas, George J. and Parrilo, Pablo A. and Recht, Benjamin and Tomlin, Claire J. and Zeilinger, Melanie N.},
  volume = {144},
  series = {Proceedings of Machine Learning Research},
  month = {07 -- 08 June},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v144/li21a/li21a.pdf},
  url = {https://proceedings.mlr.press/v144/li21a.html},
  abstract = {A new focus in machine learning is concerned with understanding the issues faced when implementing neural networks on low-cost and memory-limited hardware, for example smartphones. This approach falls under the umbrella of “decentralised” learning and, compared to the “centralised” case where data is collected and acted upon by a large server held offline, offers greater privacy protection and a faster reaction speed to incoming data. However, when neural networks are implemented on limited hardware there are no guarantees that their outputs will not be significantly corrupted. This problem is addressed in this paper, where a semi-definite program is introduced to robustly bound the error induced by implementing neural networks on limited hardware. The method can be applied to generic neural networks and is able to account for the many nonlinearities of the problem. It is hoped that the computed bounds will give certainty to software/control/ML engineers implementing these algorithms efficiently on limited hardware.}
}
Endnote
%0 Conference Paper
%T Robust error bounds for quantised and pruned neural networks
%A Jiaqi Li
%A Ross Drummond
%A Stephen R. Duncan
%B Proceedings of the 3rd Conference on Learning for Dynamics and Control
%C Proceedings of Machine Learning Research
%D 2021
%E Ali Jadbabaie
%E John Lygeros
%E George J. Pappas
%E Pablo A. Parrilo
%E Benjamin Recht
%E Claire J. Tomlin
%E Melanie N. Zeilinger
%F pmlr-v144-li21a
%I PMLR
%P 361--372
%U https://proceedings.mlr.press/v144/li21a.html
%V 144
%X A new focus in machine learning is concerned with understanding the issues faced when implementing neural networks on low-cost and memory-limited hardware, for example smartphones. This approach falls under the umbrella of “decentralised” learning and, compared to the “centralised” case where data is collected and acted upon by a large server held offline, offers greater privacy protection and a faster reaction speed to incoming data. However, when neural networks are implemented on limited hardware there are no guarantees that their outputs will not be significantly corrupted. This problem is addressed in this paper, where a semi-definite program is introduced to robustly bound the error induced by implementing neural networks on limited hardware. The method can be applied to generic neural networks and is able to account for the many nonlinearities of the problem. It is hoped that the computed bounds will give certainty to software/control/ML engineers implementing these algorithms efficiently on limited hardware.
APA
Li, J., Drummond, R. & Duncan, S.R. (2021). Robust error bounds for quantised and pruned neural networks. Proceedings of the 3rd Conference on Learning for Dynamics and Control, in Proceedings of Machine Learning Research 144:361-372. Available from https://proceedings.mlr.press/v144/li21a.html.