A General Theory for Softmax Gating Multinomial Logistic Mixture of Experts

Huy Nguyen, Pedram Akbarian, Trungtin Nguyen, Nhat Ho
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:37617-37648, 2024.

Abstract

The mixture-of-experts (MoE) model incorporates the power of multiple submodels via gating functions to achieve greater performance in numerous regression and classification applications. From a theoretical perspective, while there have been previous attempts to understand the behavior of this model in regression settings through the convergence analysis of maximum likelihood estimation in the Gaussian MoE model, such an analysis in the classification setting has remained missing from the literature. We close this gap by establishing the convergence rates of density estimation and parameter estimation in the softmax gating multinomial logistic MoE model. Notably, when some of the expert parameters vanish, these rates are shown to be slower than polynomial rates owing to an inherent interaction between the softmax gating and expert functions via partial differential equations. To address this issue, we propose a novel class of modified softmax gating functions that transform the inputs before delivering them to the gating functions. As a result, the aforementioned interaction disappears and the parameter estimation rates improve significantly.
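For concreteness, a minimal sketch of the model class the abstract refers to is given below; the notation (number of experts k, number of classes K, gating parameters (\beta_{0i}, \beta_{1i}), expert parameters (a_{iy}, b_{iy}), and input transformation M) is illustrative and not taken from the paper itself. A softmax gating multinomial logistic MoE models the conditional probability of a label y in {1, ..., K} given an input x in R^d as

$$ \mathbb{P}(Y = y \mid x) \;=\; \sum_{i=1}^{k} \frac{\exp(\beta_{1i}^{\top} x + \beta_{0i})}{\sum_{j=1}^{k} \exp(\beta_{1j}^{\top} x + \beta_{0j})} \cdot \frac{\exp(a_{iy}^{\top} x + b_{iy})}{\sum_{\ell=1}^{K} \exp(a_{i\ell}^{\top} x + b_{i\ell})}, $$

where the first factor is the softmax gate assigning input-dependent weights to the k experts and the second factor is the i-th multinomial logistic expert. Under this same illustrative notation, the modified gating functions mentioned in the abstract can be read as feeding a transformed input M(x) to the gate, i.e. replacing \beta_{1i}^{\top} x by \beta_{1i}^{\top} M(x) in the gating weights while the experts still act on x, so that the gating and expert parameters no longer interact in the problematic way described above.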

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-nguyen24b,
  title     = {A General Theory for Softmax Gating Multinomial Logistic Mixture of Experts},
  author    = {Nguyen, Huy and Akbarian, Pedram and Nguyen, Trungtin and Ho, Nhat},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {37617--37648},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/nguyen24b/nguyen24b.pdf},
  url       = {https://proceedings.mlr.press/v235/nguyen24b.html},
  abstract  = {Mixture-of-experts (MoE) model incorporates the power of multiple submodels via gating functions to achieve greater performance in numerous regression and classification applications. From a theoretical perspective, while there have been previous attempts to comprehend the behavior of that model under the regression settings through the convergence analysis of maximum likelihood estimation in the Gaussian MoE model, such analysis under the setting of a classification problem has remained missing in the literature. We close this gap by establishing the convergence rates of density estimation and parameter estimation in the softmax gating multinomial logistic MoE model. Notably, when part of the expert parameters vanish, these rates are shown to be slower than polynomial rates owing to an inherent interaction between the softmax gating and expert functions via partial differential equations. To address this issue, we propose using a novel class of modified softmax gating functions which transform the input before delivering them to the gating functions. As a result, the previous interaction disappears and the parameter estimation rates are significantly improved.}
}
Endnote
%0 Conference Paper
%T A General Theory for Softmax Gating Multinomial Logistic Mixture of Experts
%A Huy Nguyen
%A Pedram Akbarian
%A Trungtin Nguyen
%A Nhat Ho
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-nguyen24b
%I PMLR
%P 37617--37648
%U https://proceedings.mlr.press/v235/nguyen24b.html
%V 235
%X Mixture-of-experts (MoE) model incorporates the power of multiple submodels via gating functions to achieve greater performance in numerous regression and classification applications. From a theoretical perspective, while there have been previous attempts to comprehend the behavior of that model under the regression settings through the convergence analysis of maximum likelihood estimation in the Gaussian MoE model, such analysis under the setting of a classification problem has remained missing in the literature. We close this gap by establishing the convergence rates of density estimation and parameter estimation in the softmax gating multinomial logistic MoE model. Notably, when part of the expert parameters vanish, these rates are shown to be slower than polynomial rates owing to an inherent interaction between the softmax gating and expert functions via partial differential equations. To address this issue, we propose using a novel class of modified softmax gating functions which transform the input before delivering them to the gating functions. As a result, the previous interaction disappears and the parameter estimation rates are significantly improved.
APA
Nguyen, H., Akbarian, P., Nguyen, T., & Ho, N. (2024). A General Theory for Softmax Gating Multinomial Logistic Mixture of Experts. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:37617-37648. Available from https://proceedings.mlr.press/v235/nguyen24b.html.
