CD-IMM: The Benefits of Domain-based Mixture Models in Bayesian Continual Learning
Proceedings of the 1st ContinualAI Unconference, 2023, PMLR 249:25-36, 2024.
Abstract
Real-world streams of data are characterised by the continuous occurrence of new and old classes, possibly on novel domains. Bayesian non-parametric mixture models provide a natural solution for continual learning due to their ability to create new components on the fly when new data are observed. However, popular class-based and time-based mixtures are often tested on simplified streams (e.g., class-incremental), where shortcuts can be exploited to infer drifts. We hypothesise that domain-based mixtures are more effective on natural streams. Our proposed method, CD-IMM, exemplifies this approach by learning an infinite mixture of domains for each class. We validate our hypothesis on a natural scenario that mixes class repetitions with novel domains. The experimental results confirm our hypothesis: CD-IMM outperforms state-of-the-art Bayesian continual learning methods.
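To make the mixture-growing mechanism concrete, below is a minimal illustrative sketch, not the paper's CD-IMM implementation: a per-class infinite Gaussian mixture whose components are allocated by a Chinese Restaurant Process prior, so that a new component can open when data from an unseen domain arrive. The function name `crp_assign`, the spherical Gaussian likelihood, the broad base-measure approximation, and all parameter values are assumptions made for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

def crp_assign(x, clusters, alpha=1.0, var=1.0):
    """Pick a component for x under a Chinese Restaurant Process prior.

    clusters: list of (count, mean) summaries of existing components.
    Returns an index; index == len(clusters) means "open a new component".
    """
    n = sum(count for count, _ in clusters)
    scores = []
    for count, mean in clusters:
        # CRP prior (proportional to component size) times a Gaussian likelihood.
        lik = np.exp(-0.5 * np.sum((x - mean) ** 2) / var)
        scores.append(count / (n + alpha) * lik)
    # Probability of a brand-new component, with a broad base-measure likelihood.
    scores.append(alpha / (n + alpha) * np.exp(-0.5 * np.sum(x ** 2) / (10.0 * var)))
    scores = np.asarray(scores)
    return rng.choice(len(scores), p=scores / scores.sum())

# Tiny synthetic stream: two classes, each observed under two shifted "domains".
stream = [(rng.normal(loc=3.0 * d, scale=0.3, size=2), y)
          for y in (0, 1) for d in (0, 1) for _ in range(5)]

# One independent infinite mixture per class label (the per-class part).
mixtures = {}
for x, y in stream:
    clusters = mixtures.setdefault(y, [])
    k = crp_assign(x, clusters)
    if k == len(clusters):
        clusters.append((1, x.copy()))  # new domain component created on the fly
    else:
        count, mean = clusters[k]
        clusters[k] = (count + 1, mean + (x - mean) / (count + 1))  # online mean

for y, clusters in mixtures.items():
    print(f"class {y}: {len(clusters)} domain components")
```

With well-separated domains, this sketch typically allocates one component per domain within each class, which illustrates the behaviour the abstract describes: new components appear on the fly as data from a novel domain are observed, without a fixed component count set in advance.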