Patch-Based Contrastive Learning and Memory Consolidation for Online Unsupervised Continual Learning
Proceedings of The 3rd Conference on Lifelong Learning Agents, PMLR 274:938-958, 2025.
Abstract
In this study, we delve into a learning paradigm known as Online Unsupervised Continual Learning (O-UCL), in which a learner observes an evolving, unlabeled data stream and progressively learns to identify an increasing number of classes of objects. This paradigm is meant to more closely model many real-world applications, such as an AI-powered drone that explores a forest attempting to identify the presence of various wildlife species, both new and previously identified. In contrast with prior work in unsupervised, continual, or online learning, O-UCL combines all three areas into a single challenging and realistic learning environment where agents are evaluated frequently and must aim to have the best possible representation at any point throughout the stream, instead of at the end of each offline “task”. In this new setting, we assess our proposed approach, Patch-Based Contrastive Learning and Memory Consolidation (PCMC). PCMC builds a compositional understanding of data by identifying and clustering patch-level features. Embeddings for these patch-level features are extracted with an encoder trained via a novel patch-based contrastive learning approach. PCMC incorporates new data into its distribution while avoiding catastrophic forgetting, and it consolidates memory examples during a periodic sleep cycle. We evaluate PCMC’s performance on several streams created from the ImageNet and Places365 datasets. Additionally, we explore several variants of the PCMC algorithm and compare against existing methods and simple baselines.
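To make the patch-based contrastive idea concrete, the sketch below shows one way a patch-level contrastive objective could be set up: an image is split into patches, each patch is embedded by a small encoder, and an NT-Xent loss pulls together embeddings of the same patch under two light perturbations. This is an illustrative assumption only; the names (`patchify`, `PatchEncoder`, `nt_xent`) and the architecture are not the paper's actual implementation.

```python
# Illustrative sketch of a patch-level contrastive objective (assumed, not the
# paper's code): split images into patches, embed them, and apply NT-Xent
# between two perturbed views of the same patches.
import torch
import torch.nn as nn
import torch.nn.functional as F


def patchify(images, patch_size=8):
    """Split a batch (B, C, H, W) into non-overlapping patches
    of shape (B * num_patches, C, patch_size, patch_size)."""
    B, C, H, W = images.shape
    patches = images.unfold(2, patch_size, patch_size).unfold(3, patch_size, patch_size)
    patches = patches.permute(0, 2, 3, 1, 4, 5).reshape(-1, C, patch_size, patch_size)
    return patches


class PatchEncoder(nn.Module):
    """Tiny convolutional encoder mapping a patch to a normalized embedding."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, dim),
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)


def nt_xent(z1, z2, temperature=0.5):
    """Standard NT-Xent loss: z1[i] and z2[i] are embeddings of the same patch."""
    z = torch.cat([z1, z2], dim=0)                       # (2N, d)
    sim = z @ z.t() / temperature                        # cosine similarities
    N = z1.size(0)
    mask = torch.eye(2 * N, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float("-inf"))           # drop self-similarity
    targets = torch.cat([torch.arange(N, 2 * N), torch.arange(0, N)]).to(z.device)
    return F.cross_entropy(sim, targets)


# Usage: two lightly perturbed views of the same stream batch give positive
# pairs at the patch level; the encoder would be updated online as data arrives.
encoder = PatchEncoder()
images = torch.rand(4, 3, 32, 32)                        # stand-in for a stream batch
view1 = images + 0.05 * torch.randn_like(images)
view2 = images + 0.05 * torch.randn_like(images)
z1 = encoder(patchify(view1))
z2 = encoder(patchify(view2))
loss = nt_xent(z1, z2)
loss.backward()
```

In PCMC these patch embeddings would then feed the clustering and memory-consolidation stages described in the paper; the sketch covers only the contrastive pretext step.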