AutoGluon-Multimodal (AutoMM): Supercharging Multimodal AutoML with Foundation Models

Zhiqiang Tang, Haoyang Fang, Su Zhou, Taojiannan Yang, Zihan Zhong, Cuixiong Hu, Katrin Kirchhoff, George Karypis
Proceedings of the Third International Conference on Automated Machine Learning, PMLR 256:15/1-35, 2024.

Abstract

AutoGluon-Multimodal (AutoMM) is introduced as an open-source AutoML library designed specifically for multimodal learning. Distinguished by its exceptional ease of use, AutoMM enables fine-tuning of foundational models with just three lines of code. Supporting various modalities including image, text, and tabular data, both independently and in combination, the library offers a comprehensive suite of functionalities spanning classification, regression, object detection, semantic matching, and image segmentation. Experiments across diverse datasets and tasks showcase AutoMM’s superior performance in basic classification and regression tasks compared to existing AutoML tools, while also demonstrating competitive results in advanced tasks, aligning with specialized toolboxes designed for such purposes.
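As a rough illustration of the "three lines of code" claim, the sketch below wraps AutoMM's documented `MultiModalPredictor` workflow in a helper function. The import is deferred so the module loads even where `autogluon.multimodal` is not installed; the CSV path and `label` column name are placeholders, not values from the paper.

```python
def finetune(train_data_path: str, label: str = "label"):
    """Fine-tune a foundation model on a labeled dataset with AutoMM.

    Assumes ``autogluon.multimodal`` is installed and that the training
    table (e.g. a CSV with image paths, text, and/or tabular columns)
    contains a column named ``label``.
    """
    # The three core lines: import, construct, fit.
    from autogluon.multimodal import MultiModalPredictor
    predictor = MultiModalPredictor(label=label)
    predictor.fit(train_data_path)
    return predictor
```

Usage would look like `predictor = finetune("train.csv")` followed by `predictor.predict("test.csv")`; AutoMM infers the problem type (classification vs. regression) and the input modalities from the data itself.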

Cite this Paper


BibTeX
@InProceedings{pmlr-v256-tang24a,
  title     = {AutoGluon-Multimodal (AutoMM): Supercharging Multimodal AutoML with Foundation Models},
  author    = {Tang, Zhiqiang and Fang, Haoyang and Zhou, Su and Yang, Taojiannan and Zhong, Zihan and Hu, Cuixiong and Kirchhoff, Katrin and Karypis, George},
  booktitle = {Proceedings of the Third International Conference on Automated Machine Learning},
  pages     = {15/1--35},
  year      = {2024},
  editor    = {Eggensperger, Katharina and Garnett, Roman and Vanschoren, Joaquin and Lindauer, Marius and Gardner, Jacob R.},
  volume    = {256},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--12 Sep},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v256/main/assets/tang24a/tang24a.pdf},
  url       = {https://proceedings.mlr.press/v256/tang24a.html},
  abstract  = {AutoGluon-Multimodal (AutoMM) is introduced as an open-source AutoML library designed specifically for multimodal learning. Distinguished by its exceptional ease of use, AutoMM enables fine-tuning of foundational models with just three lines of code. Supporting various modalities including image, text, and tabular data, both independently and in combination, the library offers a comprehensive suite of functionalities spanning classification, regression, object detection, semantic matching, and image segmentation. Experiments across diverse datasets and tasks showcase AutoMM’s superior performance in basic classification and regression tasks compared to existing AutoML tools, while also demonstrating competitive results in advanced tasks, aligning with specialized toolboxes designed for such purposes.}
}
Endnote
%0 Conference Paper
%T AutoGluon-Multimodal (AutoMM): Supercharging Multimodal AutoML with Foundation Models
%A Zhiqiang Tang
%A Haoyang Fang
%A Su Zhou
%A Taojiannan Yang
%A Zihan Zhong
%A Cuixiong Hu
%A Katrin Kirchhoff
%A George Karypis
%B Proceedings of the Third International Conference on Automated Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Katharina Eggensperger
%E Roman Garnett
%E Joaquin Vanschoren
%E Marius Lindauer
%E Jacob R. Gardner
%F pmlr-v256-tang24a
%I PMLR
%P 15/1--35
%U https://proceedings.mlr.press/v256/tang24a.html
%V 256
%X AutoGluon-Multimodal (AutoMM) is introduced as an open-source AutoML library designed specifically for multimodal learning. Distinguished by its exceptional ease of use, AutoMM enables fine-tuning of foundational models with just three lines of code. Supporting various modalities including image, text, and tabular data, both independently and in combination, the library offers a comprehensive suite of functionalities spanning classification, regression, object detection, semantic matching, and image segmentation. Experiments across diverse datasets and tasks showcase AutoMM’s superior performance in basic classification and regression tasks compared to existing AutoML tools, while also demonstrating competitive results in advanced tasks, aligning with specialized toolboxes designed for such purposes.
APA
Tang, Z., Fang, H., Zhou, S., Yang, T., Zhong, Z., Hu, C., Kirchhoff, K. & Karypis, G. (2024). AutoGluon-Multimodal (AutoMM): Supercharging Multimodal AutoML with Foundation Models. Proceedings of the Third International Conference on Automated Machine Learning, in Proceedings of Machine Learning Research 256:15/1-35. Available from https://proceedings.mlr.press/v256/tang24a.html.