TensorFuzz: Debugging Neural Networks with Coverage-Guided Fuzzing

Augustus Odena, Catherine Olsson, David Andersen, Ian Goodfellow
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:4901-4911, 2019.

Abstract

Neural networks are difficult to interpret and debug. We introduce testing techniques for neural networks that can discover errors occurring only for rare inputs. Specifically, we develop coverage-guided fuzzing (CGF) methods for neural networks. In CGF, random mutations of inputs are guided by a coverage metric toward the goal of satisfying user-specified constraints. We describe how approximate nearest neighbor (ANN) algorithms can provide this coverage metric for neural networks. We then combine these methods with techniques for property-based testing (PBT). In PBT, one asserts properties that a function should satisfy and the system automatically generates tests exercising those properties. We then apply this system to practical goals including (but not limited to) surfacing broken loss functions in popular GitHub repositories and making performance improvements to TensorFlow. Finally, we release an open source library called TensorFuzz that implements the described techniques.
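The abstract outlines the core loop: pick an input from a corpus, apply a random mutation, run the network, and use nearest-neighbor distance over activation vectors as the coverage signal that decides whether the mutant is "new" and worth keeping. Below is a minimal, self-contained sketch of that loop in plain Python/NumPy. It is not the TensorFuzz implementation: the toy model, the `objective_violated` property check, the `mutate` function, and the brute-force nearest-neighbor search (the paper uses an approximate nearest-neighbor index for speed) are all illustrative assumptions.

```python
# Minimal coverage-guided fuzzing sketch (NOT the TensorFuzz implementation).
# The toy "model", the mutation operator, and the property check are stand-ins;
# the paper backs the coverage check with an approximate nearest-neighbor index,
# while this sketch uses brute-force distances to keep the idea visible.

import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 16))  # fixed "weights" for a toy one-layer model


def model_activations(x):
    """Hypothetical stand-in for a network forward pass; returns an activation vector."""
    return np.tanh(W @ x)


def objective_violated(acts):
    """User-specified property check (property-based testing), e.g. flag NaN/Inf activations."""
    return not np.all(np.isfinite(acts))


def mutate(x, scale=0.1):
    """Random mutation of an input; real constraints would keep it near the original."""
    return x + scale * rng.standard_normal(x.shape)


def is_new_coverage(acts, corpus_acts, threshold=0.5):
    """Coverage check: is this activation vector far from everything seen so far?"""
    if not corpus_acts:
        return True
    dists = np.linalg.norm(np.stack(corpus_acts) - acts, axis=1)
    return dists.min() > threshold


def fuzz(seed_inputs, iterations=1000):
    corpus = list(seed_inputs)
    corpus_acts = [model_activations(x) for x in corpus]
    failures = []
    for _ in range(iterations):
        parent = corpus[rng.integers(len(corpus))]  # choose a corpus element
        child = mutate(parent)                      # apply a random mutation
        acts = model_activations(child)             # run the network
        if objective_violated(acts):                # property violated: record the input
            failures.append(child)
        if is_new_coverage(acts, corpus_acts):      # new coverage: keep for future mutation
            corpus.append(child)
            corpus_acts.append(acts)
    return corpus, failures


seeds = [rng.standard_normal(16) for _ in range(4)]
corpus, failures = fuzz(seeds)
print(f"corpus size: {len(corpus)}, property violations found: {len(failures)}")
```

With a real model, `model_activations` would be a forward pass over a chosen layer of a TensorFlow network, and the corpus-distance check would be served by an approximate nearest-neighbor library so it stays cheap as the corpus grows.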

Cite this Paper


BibTeX
@InProceedings{pmlr-v97-odena19a,
  title     = {{T}ensor{F}uzz: Debugging Neural Networks with Coverage-Guided Fuzzing},
  author    = {Odena, Augustus and Olsson, Catherine and Andersen, David and Goodfellow, Ian},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages     = {4901--4911},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume    = {97},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v97/odena19a/odena19a.pdf},
  url       = {https://proceedings.mlr.press/v97/odena19a.html}
}
Endnote
%0 Conference Paper
%T TensorFuzz: Debugging Neural Networks with Coverage-Guided Fuzzing
%A Augustus Odena
%A Catherine Olsson
%A David Andersen
%A Ian Goodfellow
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-odena19a
%I PMLR
%P 4901--4911
%U https://proceedings.mlr.press/v97/odena19a.html
%V 97
APA
Odena, A., Olsson, C., Andersen, D. & Goodfellow, I. (2019). TensorFuzz: Debugging Neural Networks with Coverage-Guided Fuzzing. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:4901-4911. Available from https://proceedings.mlr.press/v97/odena19a.html.
