Characterizing and Overcoming the Greedy Nature of Learning in Multi-modal Deep Neural Networks

Nan Wu, Stanislaw Jastrzebski, Kyunghyun Cho, Krzysztof J Geras
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:24043-24055, 2022.

Abstract

We hypothesize that due to the greedy nature of learning in multi-modal deep neural networks, these models tend to rely on just one modality while under-fitting the other modalities. Such behavior is counter-intuitive and hurts the models’ generalization, as we observe empirically. To estimate the model’s dependence on each modality, we compute the gain in accuracy when the model has access to it in addition to another modality. We refer to this gain as the conditional utilization rate. In the experiments, we consistently observe an imbalance in conditional utilization rates between modalities, across multiple tasks and architectures. Since the conditional utilization rate cannot be computed efficiently during training, we introduce a proxy for it based on the pace at which the model learns from each modality, which we refer to as the conditional learning speed. We propose an algorithm to balance the conditional learning speeds between modalities during training and demonstrate that it indeed addresses the issue of greedy learning. The proposed algorithm improves the model’s generalization on three datasets: Colored MNIST, ModelNet40, and NVIDIA Dynamic Hand Gesture.
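
To make the definition concrete, below is a minimal post-hoc sketch in Python (NumPy) of the conditional utilization rate as the abstract describes it: the accuracy gained when modality m1 is available in addition to modality m2. The callable predict_fn, the dictionary input convention, and the zero-masking ablation are illustrative assumptions, not the paper's implementation.

    import numpy as np

    def accuracy(predict_fn, inputs, labels):
        # Fraction of correct predictions; predict_fn maps a dict of
        # per-modality arrays to an array of predicted class labels.
        return float(np.mean(predict_fn(inputs) == labels))

    def conditional_utilization_rate(predict_fn, x_m1, x_m2, labels):
        # u(m1 | m2): accuracy gained when modality m1 is available in
        # addition to modality m2, per the abstract's description.
        # Modality m1 is ablated here by replacing it with an all-zero
        # input; this masking scheme is an assumption for illustration,
        # and the paper may ablate modalities differently.
        acc_both = accuracy(predict_fn, {"m1": x_m1, "m2": x_m2}, labels)
        acc_m2_only = accuracy(
            predict_fn, {"m1": np.zeros_like(x_m1), "m2": x_m2}, labels
        )
        return acc_both - acc_m2_only

An imbalance such as u(m1 | m2) near zero while u(m2 | m1) is large is the signature of greedy learning the abstract reports. Since this quantity is expensive to evaluate during training, the paper substitutes a cheaper proxy, the conditional learning speed, and balances it across modalities.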

Cite this Paper


BibTeX
@InProceedings{pmlr-v162-wu22d,
  title     = {Characterizing and Overcoming the Greedy Nature of Learning in Multi-modal Deep Neural Networks},
  author    = {Wu, Nan and Jastrzebski, Stanislaw and Cho, Kyunghyun and Geras, Krzysztof J},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {24043--24055},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/wu22d/wu22d.pdf},
  url       = {https://proceedings.mlr.press/v162/wu22d.html}
}
Endnote
%0 Conference Paper
%T Characterizing and Overcoming the Greedy Nature of Learning in Multi-modal Deep Neural Networks
%A Nan Wu
%A Stanislaw Jastrzebski
%A Kyunghyun Cho
%A Krzysztof J Geras
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-wu22d
%I PMLR
%P 24043--24055
%U https://proceedings.mlr.press/v162/wu22d.html
%V 162
APA
Wu, N., Jastrzebski, S., Cho, K., & Geras, K. J. (2022). Characterizing and Overcoming the Greedy Nature of Learning in Multi-modal Deep Neural Networks. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:24043-24055. Available from https://proceedings.mlr.press/v162/wu22d.html.
