Reevaluating Meta-Learning Optimization Algorithms Through Contextual Self-Modulation

Roussel Desmond Nzoyem, David A.W. Barton, Tom Deakin
Proceedings of The 4th Conference on Lifelong Learning Agents, PMLR 330:500-524, 2026.

Abstract

Contextual Self-Modulation (CSM) (Nzoyem et al. 2025) is a potent regularization mechanism for Neural Context Flows (NCFs) which demonstrates powerful meta-learning on physical systems. However, CSM has limitations in its applicability across different modalities and in high-data regimes. In this work, we introduce two extensions: $i$CSM which expands CSM to infinite-dimensional variations by embedding the contexts into a function space, and StochasticNCF which improves scalability by providing a low-cost approximation of meta-gradient updates through a sampled set of nearest environments. These extensions are demonstrated through comprehensive experimentation on a range of tasks, including dynamical systems, computer vision challenges, and curve fitting problems. Additionally, we incorporate higher-order Taylor expansions via Taylor-Mode automatic differentiation, revealing that higher-order approximations do not necessarily enhance generalization. Finally, we demonstrate how CSM can be integrated into other meta-learning frameworks with FlashCAVIA, a computationally efficient extension of the CAVIA meta-learning framework (Zintgraf et al. 2019). Together, these contributions highlight the significant benefits of CSM and indicate that its strengths in meta-learning and out-of-distribution tasks are particularly well-suited to physical systems. Our open-source library, designed for modular integration of self-modulation into contextual meta-learning workflows, is available at https://anonymous.4open.science/r/contextual-self-mod.

Cite this Paper


BibTeX
@InProceedings{pmlr-v330-nzoyem26a,
  title     = {Reevaluating Meta-Learning Optimization Algorithms Through Contextual Self-Modulation},
  author    = {Nzoyem, Roussel Desmond and Barton, David A.W. and Deakin, Tom},
  booktitle = {Proceedings of The 4th Conference on Lifelong Learning Agents},
  pages     = {500--524},
  year      = {2026},
  editor    = {Chandar, Sarath and Pascanu, Razvan and Eaton, Eric and Liu, Bing and Mahmood, Rupam and Rannen-Triki, Amal},
  volume    = {330},
  series    = {Proceedings of Machine Learning Research},
  month     = {11--14 Aug},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v330/main/assets/nzoyem26a/nzoyem26a.pdf},
  url       = {https://proceedings.mlr.press/v330/nzoyem26a.html},
  abstract  = {Contextual Self-Modulation (CSM) (Nzoyem et al. 2025) is a potent regularization mechanism for Neural Context Flows (NCFs) which demonstrates powerful meta-learning on physical systems. However, CSM has limitations in its applicability across different modalities and in high-data regimes. In this work, we introduce two extensions: $i$CSM which expands CSM to infinite-dimensional variations by embedding the contexts into a function space, and StochasticNCF which improves scalability by providing a low-cost approximation of meta-gradient updates through a sampled set of nearest environments. These extensions are demonstrated through comprehensive experimentation on a range of tasks, including dynamical systems, computer vision challenges, and curve fitting problems. Additionally, we incorporate higher-order Taylor expansions via Taylor-Mode automatic differentiation, revealing that higher-order approximations do not necessarily enhance generalization. Finally, we demonstrate how CSM can be integrated into other meta-learning frameworks with FlashCAVIA, a computationally efficient extension of the CAVIA meta-learning framework (Zintgraf et al. 2019). Together, these contributions highlight the significant benefits of CSM and indicate that its strengths in meta-learning and out-of-distribution tasks are particularly well-suited to physical systems. Our open-source library, designed for modular integration of self-modulation into contextual meta-learning workflows, is available at https://anonymous.4open.science/r/contextual-self-mod.}
}
Endnote
%0 Conference Paper
%T Reevaluating Meta-Learning Optimization Algorithms Through Contextual Self-Modulation
%A Roussel Desmond Nzoyem
%A David A.W. Barton
%A Tom Deakin
%B Proceedings of The 4th Conference on Lifelong Learning Agents
%C Proceedings of Machine Learning Research
%D 2026
%E Sarath Chandar
%E Razvan Pascanu
%E Eric Eaton
%E Bing Liu
%E Rupam Mahmood
%E Amal Rannen-Triki
%F pmlr-v330-nzoyem26a
%I PMLR
%P 500--524
%U https://proceedings.mlr.press/v330/nzoyem26a.html
%V 330
%X Contextual Self-Modulation (CSM) (Nzoyem et al. 2025) is a potent regularization mechanism for Neural Context Flows (NCFs) which demonstrates powerful meta-learning on physical systems. However, CSM has limitations in its applicability across different modalities and in high-data regimes. In this work, we introduce two extensions: $i$CSM which expands CSM to infinite-dimensional variations by embedding the contexts into a function space, and StochasticNCF which improves scalability by providing a low-cost approximation of meta-gradient updates through a sampled set of nearest environments. These extensions are demonstrated through comprehensive experimentation on a range of tasks, including dynamical systems, computer vision challenges, and curve fitting problems. Additionally, we incorporate higher-order Taylor expansions via Taylor-Mode automatic differentiation, revealing that higher-order approximations do not necessarily enhance generalization. Finally, we demonstrate how CSM can be integrated into other meta-learning frameworks with FlashCAVIA, a computationally efficient extension of the CAVIA meta-learning framework (Zintgraf et al. 2019). Together, these contributions highlight the significant benefits of CSM and indicate that its strengths in meta-learning and out-of-distribution tasks are particularly well-suited to physical systems. Our open-source library, designed for modular integration of self-modulation into contextual meta-learning workflows, is available at https://anonymous.4open.science/r/contextual-self-mod.
APA
Nzoyem, R.D., Barton, D.A.W. & Deakin, T. (2026). Reevaluating Meta-Learning Optimization Algorithms Through Contextual Self-Modulation. Proceedings of The 4th Conference on Lifelong Learning Agents, in Proceedings of Machine Learning Research 330:500-524. Available from https://proceedings.mlr.press/v330/nzoyem26a.html.