Auxiliary Modality Learning with Generalized Curriculum Distillation

Yu Shen, Xijun Wang, Peng Gao, Ming Lin
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:31057-31076, 2023.

Abstract

Driven by the needs of real-world applications, Auxiliary Modality Learning (AML) makes it possible to exploit additional information from auxiliary modalities during training while requiring data from only one or a few modalities at test time, reducing both the overall computational cost and the amount of input data needed for inference. In this work, we formally define “Auxiliary Modality Learning” (AML), systematically classify the types of auxiliary modality (in visual computing) and the architectures for AML, and analyze their performance. We also analyze the conditions under which AML works well from the optimization and data-distribution perspectives. To guide the choices needed to achieve optimal performance with AML, we propose a novel method for selecting the best auxiliary modality and estimating an upper-bound performance before running AML. In addition, we propose a new AML method that uses generalized curriculum distillation to enable more effective curriculum learning. Our method outperforms other state-of-the-art (SOTA) methods.
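
To make the setup concrete, below is a minimal PyTorch-style sketch of the general AML pattern the abstract describes: a teacher network that sees both the primary and an auxiliary modality during training, and a student distilled from it that consumes only the primary modality at test time. This is an illustrative sketch under assumed details, not the paper's generalized curriculum distillation method; the names (Teacher, Student, aml_step), feature dimensions, and loss weighting are all hypothetical.

# Hedged sketch of a generic AML training step, NOT the authors' exact method.
# Teacher: primary + auxiliary modality. Student: primary modality only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Teacher(nn.Module):
    """Consumes primary (e.g. RGB) plus auxiliary (e.g. depth) features."""
    def __init__(self, primary_dim=128, aux_dim=64, num_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(primary_dim + aux_dim, 256), nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, primary, aux):
        return self.net(torch.cat([primary, aux], dim=-1))

class Student(nn.Module):
    """Consumes only the primary modality; this is what runs at test time."""
    def __init__(self, primary_dim=128, num_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(primary_dim, 256), nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, primary):
        return self.net(primary)

def aml_step(student, teacher, primary, aux, labels, T=4.0, alpha=0.5):
    """One AML step: task loss on labels plus a standard KD loss to the teacher."""
    with torch.no_grad():
        t_logits = teacher(primary, aux)   # teacher uses both modalities
    s_logits = student(primary)            # student uses the primary modality only
    task = F.cross_entropy(s_logits, labels)
    kd = F.kl_div(F.log_softmax(s_logits / T, -1),
                  F.softmax(t_logits / T, -1),
                  reduction="batchmean") * T * T
    return alpha * kd + (1 - alpha) * task

# Toy usage with random stand-in features.
teacher, student = Teacher(), Student()
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
primary = torch.randn(32, 128)             # e.g. RGB features
aux = torch.randn(32, 64)                  # e.g. depth features, training-only
labels = torch.randint(0, 10, (32,))
loss = aml_step(student, teacher, primary, aux, labels)
loss.backward(); opt.step()

Because the auxiliary input is consumed only by the teacher, it can be dropped entirely at inference, which is the source of the test-time data and compute savings the abstract mentions.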

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-shen23f,
  title     = {Auxiliary Modality Learning with Generalized Curriculum Distillation},
  author    = {Shen, Yu and Wang, Xijun and Gao, Peng and Lin, Ming},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {31057--31076},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/shen23f/shen23f.pdf},
  url       = {https://proceedings.mlr.press/v202/shen23f.html}
}
Endnote
%0 Conference Paper
%T Auxiliary Modality Learning with Generalized Curriculum Distillation
%A Yu Shen
%A Xijun Wang
%A Peng Gao
%A Ming Lin
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-shen23f
%I PMLR
%P 31057--31076
%U https://proceedings.mlr.press/v202/shen23f.html
%V 202
APA
Shen, Y., Wang, X., Gao, P. & Lin, M. (2023). Auxiliary Modality Learning with Generalized Curriculum Distillation. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:31057-31076. Available from https://proceedings.mlr.press/v202/shen23f.html.