Improving Continual Learning Robustness in Medical Imaging via Illumination Adaptive Transformer

Thanh-Ngoc Phan, Quynh-Trang Thi Pham, Duc-Trong Le
Reliable and Trustworthy Artificial Intelligence 2025, PMLR 310:90-101, 2025.

Abstract

Continual learning (CL) refers to the capability of a model to learn progressively from an evolving stream of data, retaining previously acquired knowledge while integrating new information. This capability is pivotal in advancing medical image classification, especially when data availability fluctuates. Beyond investigating CL performance under standard clean-data conditions, this paper systematically evaluates the robustness of representative CL strategies in uncertain imaging contexts, where visual quality is degraded by varying degrees of low-light conditions and over- or under-exposure. To model these uncertain conditions, we augment the training and evaluation data with controlled, simulated low-light and contrast perturbations that mimic the real-world variability frequently encountered in clinical acquisition environments. Our method integrates an automatic illumination calibration module, termed the Illumination Adaptive Transformer (IAT), into existing CL frameworks to mitigate the adverse effects of such degradations. This module dynamically adjusts image illumination and contrast, aiming to enhance the visibility of critical features in a data-driven, end-to-end manner without requiring manual tuning or image-specific heuristics. Experiments demonstrate that incorporating the IAT module consistently improves final classification accuracy and robustness across multiple continual learning strategies under all simulated uncertainty levels on the PathMNIST dataset.
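
To make the setup concrete, the sketch below (ours, not the authors' released code) illustrates the two ingredients the abstract describes: simulating low-light and over-/under-exposure perturbations on PathMNIST batches, and placing a learnable illumination-calibration module in front of the continual-learning classifier so the two are trained end to end. The names perturb_illumination and CalibratedClassifier, the brightness/gamma parameters, and the placeholder enhancer are illustrative assumptions rather than the paper's implementation.

# Minimal sketch only; names and parameters below are illustrative assumptions.
import torch
import torch.nn as nn


def perturb_illumination(images: torch.Tensor,
                         brightness: float = 0.5,
                         gamma: float = 1.8) -> torch.Tensor:
    """Simulate degraded acquisition on float images in [0, 1], shape (N, C, H, W).

    brightness < 1 darkens (under-exposure), > 1 brightens (over-exposure);
    gamma != 1 warps contrast non-linearly.
    """
    return (images.clamp(0, 1).pow(gamma) * brightness).clamp(0, 1)


class CalibratedClassifier(nn.Module):
    """Illumination-calibration module followed by the task classifier.

    `enhancer` stands in for the Illumination Adaptive Transformer; any
    image-to-image network with matching input/output shapes would slot in,
    and both parts are optimised jointly by the usual CL training loop.
    """

    def __init__(self, enhancer: nn.Module, backbone: nn.Module):
        super().__init__()
        self.enhancer = enhancer
        self.backbone = backbone

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.backbone(self.enhancer(x))


# Example: a mildly under-exposed copy of a PathMNIST-sized batch (28x28 RGB,
# 9 classes), fed through a placeholder enhancer and a linear classifier.
batch = torch.rand(16, 3, 28, 28)
degraded = perturb_illumination(batch, brightness=0.4, gamma=1.8)
model = CalibratedClassifier(
    enhancer=nn.Identity(),  # replace with an IAT-style enhancement network
    backbone=nn.Sequential(nn.Flatten(), nn.Linear(3 * 28 * 28, 9)),
)
logits = model(degraded)  # shape (16, 9)

In a rehearsal- or regularization-based CL strategy, such a wrapped model simply replaces the bare backbone, so the illumination correction is updated on every task alongside the classifier.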

Cite this Paper


BibTeX
@InProceedings{pmlr-v310-phan25a,
  title     = {Improving Continual Learning Robustness in Medical Imaging via Illumination Adaptive Transformer},
  author    = {Phan, Thanh-Ngoc and Pham, Quynh-Trang Thi and Le, Duc-Trong},
  booktitle = {Reliable and Trustworthy Artificial Intelligence 2025},
  pages     = {90--101},
  year      = {2025},
  editor    = {Nguyen, Hoang D. and Le, Duc-Trong and Björklund, Johanna and Vu, Xuan-Son},
  volume    = {310},
  series    = {Proceedings of Machine Learning Research},
  month     = {12 Dec},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v310/main/assets/phan25a/phan25a.pdf},
  url       = {https://proceedings.mlr.press/v310/phan25a.html},
  abstract  = {Continual learning (CL) refers to the capability of a model to learn progressively from an evolving stream of data, retaining previously acquired knowledge while integrating new information. This capability is pivotal in advancing medical image classification, especially when data availability fluctuates. Beyond investigating CL performance under standard clean-data conditions, this paper systematically evaluates the robustness of representative CL strategies in uncertain imaging contexts, where visual quality is degraded by varying degrees of low-light conditions and over- or under exposure. In this paper, we augment the training and evaluation data with controlled, simulated low-light and contrast perturbations to model these uncertain conditions, which mimic real-world variability frequently encountered in clinical acquisition environments. Our method integrates an automatic illumination calibration module, termed the Illumination Adaptive Transformer (IAT), within existing CL frameworks to mitigate the adverse effects of such degradations. This module dynamically adjusts the image illumination and contrast, aiming to enhance the visibility of critical features in a data-driven, end-to-end manner without requiring manual tuning or image-specific heuristics. Experiments demonstrate that incorporating the IAT module consistently improves final classification accuracy and robustness across multiple continual learning strategies under all simulated uncertainty levels on the PathMNIST dataset.}
}
Endnote
%0 Conference Paper
%T Improving Continual Learning Robustness in Medical Imaging via Illumination Adaptive Transformer
%A Thanh-Ngoc Phan
%A Quynh-Trang Thi Pham
%A Duc-Trong Le
%B Reliable and Trustworthy Artificial Intelligence 2025
%C Proceedings of Machine Learning Research
%D 2025
%E Hoang D. Nguyen
%E Duc-Trong Le
%E Johanna Björklund
%E Xuan-Son Vu
%F pmlr-v310-phan25a
%I PMLR
%P 90--101
%U https://proceedings.mlr.press/v310/phan25a.html
%V 310
%X Continual learning (CL) refers to the capability of a model to learn progressively from an evolving stream of data, retaining previously acquired knowledge while integrating new information. This capability is pivotal in advancing medical image classification, especially when data availability fluctuates. Beyond investigating CL performance under standard clean-data conditions, this paper systematically evaluates the robustness of representative CL strategies in uncertain imaging contexts, where visual quality is degraded by varying degrees of low-light conditions and over- or under exposure. In this paper, we augment the training and evaluation data with controlled, simulated low-light and contrast perturbations to model these uncertain conditions, which mimic real-world variability frequently encountered in clinical acquisition environments. Our method integrates an automatic illumination calibration module, termed the Illumination Adaptive Transformer (IAT), within existing CL frameworks to mitigate the adverse effects of such degradations. This module dynamically adjusts the image illumination and contrast, aiming to enhance the visibility of critical features in a data-driven, end-to-end manner without requiring manual tuning or image-specific heuristics. Experiments demonstrate that incorporating the IAT module consistently improves final classification accuracy and robustness across multiple continual learning strategies under all simulated uncertainty levels on the PathMNIST dataset.
APA
Phan, T., Pham, Q.T. & Le, D. (2025). Improving Continual Learning Robustness in Medical Imaging via Illumination Adaptive Transformer. Reliable and Trustworthy Artificial Intelligence 2025, in Proceedings of Machine Learning Research 310:90-101. Available from https://proceedings.mlr.press/v310/phan25a.html.
