Conditional Generative Learning from Invariant Representations in Multi-Source: Robustness and Efficiency

Guojun Zhu, Sanguo Zhang, Mingyang Ren
Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, PMLR 258:217-225, 2025.

Abstract

Multi-source generative models have gained significant attention due to their ability to capture complex data distributions across diverse domains. However, existing approaches often struggle with limitations such as negative transfer and an over-reliance on large pre-trained models. To address these challenges, we propose a novel method that effectively handles scenarios with outlier source domains, while making weaker assumptions about the data, thus ensuring broader applicability. Our approach enhances robustness and efficiency, supported by rigorous theoretical analysis, including non-asymptotic error bounds and asymptotic guarantees. In the experiments, we validate our methods through numerical simulations and real-world data experiments, showcasing their practical effectiveness and adaptability.

Cite this Paper


BibTeX
@InProceedings{pmlr-v258-zhu25a,
  title = {Conditional Generative Learning from Invariant Representations in Multi-Source: Robustness and Efficiency},
  author = {Zhu, Guojun and Zhang, Sanguo and Ren, Mingyang},
  booktitle = {Proceedings of The 28th International Conference on Artificial Intelligence and Statistics},
  pages = {217--225},
  year = {2025},
  editor = {Li, Yingzhen and Mandt, Stephan and Agrawal, Shipra and Khan, Emtiyaz},
  volume = {258},
  series = {Proceedings of Machine Learning Research},
  month = {03--05 May},
  publisher = {PMLR},
  pdf = {https://raw.githubusercontent.com/mlresearch/v258/main/assets/zhu25a/zhu25a.pdf},
  url = {https://proceedings.mlr.press/v258/zhu25a.html},
  abstract = {Multi-source generative models have gained significant attention due to their ability to capture complex data distributions across diverse domains. However, existing approaches often struggle with limitations such as negative transfer and an over-reliance on large pre-trained models. To address these challenges, we propose a novel method that effectively handles scenarios with outlier source domains, while making weaker assumptions about the data, thus ensuring broader applicability. Our approach enhances robustness and efficiency, supported by rigorous theoretical analysis, including non-asymptotic error bounds and asymptotic guarantees. In the experiments, we validate our methods through numerical simulations and real-world data experiments, showcasing their practical effectiveness and adaptability.}
}
Endnote
%0 Conference Paper
%T Conditional Generative Learning from Invariant Representations in Multi-Source: Robustness and Efficiency
%A Guojun Zhu
%A Sanguo Zhang
%A Mingyang Ren
%B Proceedings of The 28th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2025
%E Yingzhen Li
%E Stephan Mandt
%E Shipra Agrawal
%E Emtiyaz Khan
%F pmlr-v258-zhu25a
%I PMLR
%P 217--225
%U https://proceedings.mlr.press/v258/zhu25a.html
%V 258
%X Multi-source generative models have gained significant attention due to their ability to capture complex data distributions across diverse domains. However, existing approaches often struggle with limitations such as negative transfer and an over-reliance on large pre-trained models. To address these challenges, we propose a novel method that effectively handles scenarios with outlier source domains, while making weaker assumptions about the data, thus ensuring broader applicability. Our approach enhances robustness and efficiency, supported by rigorous theoretical analysis, including non-asymptotic error bounds and asymptotic guarantees. In the experiments, we validate our methods through numerical simulations and real-world data experiments, showcasing their practical effectiveness and adaptability.
APA
Zhu, G., Zhang, S. & Ren, M. (2025). Conditional Generative Learning from Invariant Representations in Multi-Source: Robustness and Efficiency. Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 258:217-225. Available from https://proceedings.mlr.press/v258/zhu25a.html.

Related Material