Using Partition-Tree Weighting and MAML for Continual and Online Learning
Proceedings of The 4th Conference on Lifelong Learning Agents, PMLR 330:598-611, 2026.
Abstract
Learning from experience requires adapting and responding to errors over time. However, gradient-based deep learning can fail dramatically in the continual, online setting. In this work, we address this shortcoming by combining two meta-learning methods: the purely online Partition Tree Weighting (PTW) mixture-of-experts algorithm, and a novel variant of the Model-Agnostic Meta-Learning (MAML) initialization-learning procedure. We demonstrate our approach, Replay-MAML PTW, in a piecewise stationary classification task in which the task distribution is unknown and the context changes are unobserved and random. We refer to this continual, online, task-agnostic setting as experiential learning. In this setting, Replay-MAML PTW matches and even outperforms an augmented learner that is allowed to train offline from the environment's task distribution and is given explicit notification when the environment context changes. Replay-MAML PTW thus provides a base learner with the benefits of offline training, access to the true task distribution, and direct observation of context-switches, but requires only a O(log T) increase in computation and memory.