Robust error bounds for quantised and pruned neural networks
Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:361-372, 2021.
A new focus in machine learning is concerned with understanding the issues faced when implementing neural networks on low-cost and memory-limited hardware, for example smartphones. This approach falls under the umbrella of “decentralised” learning and, compared to the “centralised” case where data is collected and acted upon by a large server held offline, offers greater privacy protection and a faster reaction to incoming data. However, when neural networks are implemented on limited hardware, there are no guarantees that their outputs will not be significantly corrupted. This talk addresses that problem by introducing a semi-definite program that robustly bounds the error induced by implementing neural networks on limited hardware. The method can be applied to generic neural networks and accounts for the many nonlinearities of the problem. It is hoped that the computed bounds will give certainty to software, control, and ML engineers implementing these algorithms efficiently on limited hardware.
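To illustrate the kind of error being bounded, the sketch below quantises the weights of a toy ReLU network and samples the resulting output deviation. This is not the paper's semi-definite program: sampling only gives an empirical lower estimate of the worst-case error, whereas the SDP certifies an upper bound. The network sizes, bit-width, and uniform quantisation scheme are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer ReLU network with random weights (illustrative only).
W1 = rng.standard_normal((8, 4))
W2 = rng.standard_normal((2, 8))

def forward(x, w1, w2):
    """Full-precision forward pass: w2 @ relu(w1 @ x)."""
    return w2 @ np.maximum(w1 @ x, 0.0)

def quantise(w, n_bits=4):
    """Uniform symmetric quantisation of a weight matrix to n_bits."""
    scale = np.max(np.abs(w)) / (2 ** (n_bits - 1) - 1)
    return np.round(w / scale) * scale

W1q, W2q = quantise(W1), quantise(W2)

# Sample the output error over random inputs. The maximum observed
# value is only an empirical lower estimate of the true worst case,
# which the semi-definite program in the talk upper-bounds instead.
errs = [
    np.linalg.norm(forward(x, W1, W2) - forward(x, W1q, W2q))
    for x in rng.uniform(-1.0, 1.0, size=(1000, 4))
]

print(f"max sampled output error: {max(errs):.4f}")
```

Such sampling shows that the corruption is nonzero but says nothing about inputs not tried, which is precisely why a certified bound over all inputs in a region is valuable.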