Training-Free Neural Active Learning with Initialization-Robustness Guarantees

Apivich Hemachandra, Zhongxiang Dai, Jasraj Singh, See-Kiong Ng, Bryan Kian Hsiang Low
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:12931-12971, 2023.

Abstract

Existing neural active learning algorithms aim to optimize the predictive performance of neural networks (NNs) by selecting data for labelling. However, besides good predictive performance, robustness against random parameter initializations is also a crucial requirement in safety-critical applications. To this end, we introduce our expected variance with Gaussian processes (EV-GP) criterion for neural active learning, which is theoretically guaranteed to select data points that lead to trained NNs with both (a) good predictive performance and (b) initialization robustness. Importantly, our EV-GP criterion is training-free, i.e., it does not require any training of the NN during data selection, which makes it computationally efficient. We empirically demonstrate that our EV-GP criterion is highly correlated with both initialization robustness and generalization performance, and show that it consistently outperforms baseline methods on both desiderata, especially in situations with limited initial data or large batch sizes.
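To give a concrete sense of how a training-free, variance-based criterion of this kind can operate, the sketch below implements greedy batch selection that minimizes the average Gaussian-process posterior variance over the candidate pool. This is only an illustrative reading of the abstract, not the paper's exact algorithm: the kernel here is a stand-in linear kernel (the paper's construction would use an NN-derived kernel such as an empirical NTK), and the function names and noise parameter are hypothetical.

import numpy as np

def gp_posterior_variance(K_pool_pool, K_pool_sel, K_sel_sel, noise=1e-3):
    # GP posterior variance at every pool point, conditioned on the selected set.
    # With no points selected yet, this is just the prior variance.
    if K_sel_sel.shape[0] == 0:
        return np.diag(K_pool_pool).copy()
    A = K_sel_sel + noise * np.eye(K_sel_sel.shape[0])
    V = np.linalg.solve(A, K_pool_sel.T)           # A^{-1} k(S, x) for all pool x
    reduction = np.einsum('im,mi->i', K_pool_sel, V)
    return np.diag(K_pool_pool) - reduction        # k(x,x) - k(x,S) A^{-1} k(S,x)

def greedy_ev_selection(K, batch_size, noise=1e-3):
    # Greedily add the pool point whose inclusion most reduces the average
    # posterior variance over the pool -- no NN training is ever performed.
    selected = []
    for _ in range(batch_size):
        best_i, best_score = None, np.inf
        for i in range(K.shape[0]):
            if i in selected:
                continue
            cand = selected + [i]
            var = gp_posterior_variance(K, K[:, cand], K[np.ix_(cand, cand)], noise)
            score = var.mean()                     # expected variance over the pool
            if score < best_score:
                best_i, best_score = i, score
        selected.append(best_i)
    return selected

# Toy usage: a linear kernel stands in for an NN-derived (e.g. NTK) kernel.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))
K = X @ X.T
print(greedy_ev_selection(K, batch_size=5))

Because the posterior variance of a GP does not depend on the observed labels, the whole selection loop runs before any labels are acquired and without ever training the network, which is the sense in which such a criterion is training-free.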

Cite this Paper

BibTeX
@InProceedings{pmlr-v202-hemachandra23a,
  title     = {Training-Free Neural Active Learning with Initialization-Robustness Guarantees},
  author    = {Hemachandra, Apivich and Dai, Zhongxiang and Singh, Jasraj and Ng, See-Kiong and Low, Bryan Kian Hsiang},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {12931--12971},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/hemachandra23a/hemachandra23a.pdf},
  url       = {https://proceedings.mlr.press/v202/hemachandra23a.html}
}
Endnote
%0 Conference Paper
%T Training-Free Neural Active Learning with Initialization-Robustness Guarantees
%A Apivich Hemachandra
%A Zhongxiang Dai
%A Jasraj Singh
%A See-Kiong Ng
%A Bryan Kian Hsiang Low
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-hemachandra23a
%I PMLR
%P 12931--12971
%U https://proceedings.mlr.press/v202/hemachandra23a.html
%V 202
APA
Hemachandra, A., Dai, Z., Singh, J., Ng, S.-K., & Low, B. K. H. (2023). Training-Free Neural Active Learning with Initialization-Robustness Guarantees. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:12931-12971. Available from https://proceedings.mlr.press/v202/hemachandra23a.html.