Deep Active Learning: Unified and Principled Method for Query and Training

Changjian Shui, Fan Zhou, Christian Gagné, Boyu Wang
Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, PMLR 108:1308-1318, 2020.

Abstract

In this paper, we propose a unified and principled method for both the querying and training processes in deep batch active learning. We provide theoretical insight by modeling the interactive procedure in active learning as distribution matching under the Wasserstein distance. From this analysis we derive a new training loss, which decomposes into deep neural network parameter optimization and batch query selection via alternating optimization. In addition, by leveraging the unlabeled data, the network training loss is naturally formulated as a min-max optimization problem. The proposed principles also reveal an explicit uncertainty-diversity trade-off in query batch selection. Finally, we evaluate the method on several benchmarks, where it consistently achieves better empirical performance and a more time-efficient query strategy than the baselines.
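The uncertainty-diversity trade-off mentioned above can be made concrete with a toy selection rule. The sketch below is an illustration only, not the paper's derived criterion: it scores unlabeled candidates by predictive entropy (uncertainty) plus a greedy nearest-selected distance (diversity), weighted by a hypothetical `trade_off` parameter, whereas the paper obtains its trade-off from a Wasserstein-distance bound. All names (`features`, `probs`, `budget`, `trade_off`) are assumptions for this example.

```python
# Minimal sketch of an uncertainty-diversity batch query rule.
# NOT the paper's exact criterion; entropy + greedy k-center-style
# diversity stand in for the Wasserstein-derived trade-off.
import numpy as np

def select_batch(features: np.ndarray, probs: np.ndarray,
                 budget: int, trade_off: float = 1.0) -> list:
    """Greedily pick `budget` unlabeled points, balancing uncertainty
    (predictive entropy) against diversity (distance to points already
    chosen for the batch)."""
    # Predictive entropy of each candidate: high = uncertain.
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)

    selected = []
    # Distance to the nearest already-selected point; +inf before the
    # first pick, so the first point is chosen by entropy alone.
    min_dist = np.full(len(features), np.inf)
    for _ in range(budget):
        diversity = np.where(np.isinf(min_dist), 0.0, min_dist)
        score = entropy + trade_off * diversity
        score[selected] = -np.inf  # never pick the same point twice
        i = int(np.argmax(score))
        selected.append(i)
        # Update nearest-selected distances with the new pick.
        d = np.linalg.norm(features - features[i], axis=1)
        min_dist = np.minimum(min_dist, d)
    return selected
```

With `trade_off = 0` this reduces to pure uncertainty sampling; large values push the selection toward a coreset-style covering of the feature space. Making that balance explicit, rather than fixing one extreme, is the trade-off the abstract refers to.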

Cite this Paper


BibTeX
@InProceedings{pmlr-v108-shui20a,
  title     = {Deep Active Learning: Unified and Principled Method for Query and Training},
  author    = {Shui, Changjian and Zhou, Fan and Gagn\'e, Christian and Wang, Boyu},
  booktitle = {Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics},
  pages     = {1308--1318},
  year      = {2020},
  editor    = {Chiappa, Silvia and Calandra, Roberto},
  volume    = {108},
  series    = {Proceedings of Machine Learning Research},
  month     = {26--28 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v108/shui20a/shui20a.pdf},
  url       = {https://proceedings.mlr.press/v108/shui20a.html}
}
Endnote
%0 Conference Paper
%T Deep Active Learning: Unified and Principled Method for Query and Training
%A Changjian Shui
%A Fan Zhou
%A Christian Gagné
%A Boyu Wang
%B Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2020
%E Silvia Chiappa
%E Roberto Calandra
%F pmlr-v108-shui20a
%I PMLR
%P 1308--1318
%U https://proceedings.mlr.press/v108/shui20a.html
%V 108
APA
Shui, C., Zhou, F., Gagné, C. & Wang, B. (2020). Deep Active Learning: Unified and Principled Method for Query and Training. Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 108:1308-1318. Available from https://proceedings.mlr.press/v108/shui20a.html.
