Understanding Expert Structures on Minimax Parameter Estimation in Contaminated Mixture of Experts

Fanqi Yan, Huy Nguyen, Le Quang Dung, Pedram Akbarian, Nhat Ho
Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, PMLR 258:4501-4509, 2025.

Abstract

We conduct a convergence analysis of parameter estimation in the contaminated mixture of experts. This model is motivated by the prompt learning problem, where one utilizes prompts, which can be formulated as experts, to fine-tune a large-scale pre-trained model for downstream tasks. Two fundamental challenges emerge from this analysis: (i) the prompt's proportion in the mixture with the pre-trained model may converge to zero during training, leading to the prompt vanishing issue; (ii) an algebraic interaction among the parameters of the pre-trained model and the prompt can occur via certain partial differential equations and decelerate prompt learning. In response, we introduce a distinguishability condition to control the aforementioned parameter interaction. Additionally, we investigate various types of expert structures to understand their effects on the convergence behavior of parameter estimation. In each scenario, we provide comprehensive convergence rates of parameter estimation along with the corresponding minimax lower bounds. Finally, we run several numerical experiments to empirically justify our theoretical findings.
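As a rough sketch of the setup described above (the notation here is ours for illustration and is not taken verbatim from the paper): write $h_0(\cdot \mid x)$ for the conditional density of the frozen pre-trained model, $f(\cdot \mid x, \theta)$ for that of the prompt viewed as an expert with parameter $\theta$, and $\lambda \in [0, 1]$ for the prompt's mixture proportion. A contaminated mixture of experts then takes the form

\[
  p_{\lambda,\theta}(y \mid x) \;=\; (1 - \lambda)\, h_0(y \mid x) \;+\; \lambda\, f(y \mid x, \theta).
\]

In this notation, the prompt vanishing issue corresponds to the estimate of $\lambda$ drifting toward $0$, in which case the sample carries little information about $\theta$ and its estimation decelerates.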

Cite this Paper


BibTeX
@InProceedings{pmlr-v258-yan25c,
  title     = {Understanding Expert Structures on Minimax Parameter Estimation in Contaminated Mixture of Experts},
  author    = {Yan, Fanqi and Nguyen, Huy and Dung, Le Quang and Akbarian, Pedram and Ho, Nhat},
  booktitle = {Proceedings of The 28th International Conference on Artificial Intelligence and Statistics},
  pages     = {4501--4509},
  year      = {2025},
  editor    = {Li, Yingzhen and Mandt, Stephan and Agrawal, Shipra and Khan, Emtiyaz},
  volume    = {258},
  series    = {Proceedings of Machine Learning Research},
  month     = {03--05 May},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v258/main/assets/yan25c/yan25c.pdf},
  url       = {https://proceedings.mlr.press/v258/yan25c.html}
}
Endnote
%0 Conference Paper
%T Understanding Expert Structures on Minimax Parameter Estimation in Contaminated Mixture of Experts
%A Fanqi Yan
%A Huy Nguyen
%A Le Quang Dung
%A Pedram Akbarian
%A Nhat Ho
%B Proceedings of The 28th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2025
%E Yingzhen Li
%E Stephan Mandt
%E Shipra Agrawal
%E Emtiyaz Khan
%F pmlr-v258-yan25c
%I PMLR
%P 4501--4509
%U https://proceedings.mlr.press/v258/yan25c.html
%V 258
APA
Yan, F., Nguyen, H., Dung, L.Q., Akbarian, P. & Ho, N. (2025). Understanding Expert Structures on Minimax Parameter Estimation in Contaminated Mixture of Experts. Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 258:4501-4509. Available from https://proceedings.mlr.press/v258/yan25c.html.