Multi-Scale Dual-Attention Unfolding Network for Compressed Sensing Image Reconstruction
Proceedings of the 16th Asian Conference on Machine Learning, PMLR 260:207-222, 2025.
Abstract
Deep Unfolding Networks have emerged as a prominent strategy for compressed sensing image reconstruction, effectively merging optimization techniques with deep learning through end-to-end training of truncated inference. Despite their advantages, these algorithms generally require many iterations and large numbers of parameters, which can be constrained by available storage. Additionally, the image-level transmission at each iterative step does not fully exploit the inter-scale feature information available. To address these issues, we introduce a novel approach in this paper: the Multi-Scale Dual-Attention Unfolding Network (MSDAUN) for compressed sensing image reconstruction. We propose a cross-stage multi-scale deep reconstruction module D as an iterative process, composed of multiple attention sub-modules. These include Cross Attention Transformer (CAT) modules, which enhance the reconstruction with multi-channel inertia, facilitating feature-level transmission and robust information exchange. Concurrently, Texture Attention Transformer (TAT) modules are designed to extract salient reconstruction information and feed it into the texture path, enabling precise prediction of textural regions and faithful restoration of textural detail. Our comprehensive experimental evaluation across diverse datasets confirms that MSDAUN surpasses existing state-of-the-art methods. This work presents significant potential for further advancements and applications in inverse imaging problems and optimization models.
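For readers unfamiliar with deep unfolding, the sketch below illustrates the general pattern the abstract refers to: each unfolded stage performs a gradient-descent data-fidelity step on the compressed measurements and then applies a learned deep reconstruction module D, with a fixed number of stages trained end to end. This is a minimal, assumed illustration in PyTorch; the simple convolutional `D`, the class names, and the learnable step size `rho` are placeholders and do not reproduce the paper's multi-scale dual-attention module (CAT + TAT) or its feature-level cross-stage transmission.

```python
import torch
import torch.nn as nn


class UnfoldingStage(nn.Module):
    """One unfolded iteration: a gradient step on the data-fidelity term
    followed by a learned refinement module D (placeholder for MSDAUN's
    multi-scale dual-attention module)."""

    def __init__(self, channels=32):
        super().__init__()
        self.rho = nn.Parameter(torch.tensor(0.5))   # learnable step size
        self.D = nn.Sequential(                      # placeholder for module D
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, x, y, Phi):
        # x: current estimate (B, 1, H, W); y: measurements (B, M); Phi: (M, N)
        b, _, h, w = x.shape
        x_flat = x.reshape(b, -1)                    # (B, N)
        grad = (x_flat @ Phi.t() - y) @ Phi          # Phi^T (Phi x - y)
        x_flat = x_flat - self.rho * grad            # data-fidelity update
        x = x_flat.reshape(b, 1, h, w)
        return x + self.D(x)                         # learned residual refinement


class UnfoldingNet(nn.Module):
    """Truncated inference: a fixed number of unfolded stages trained end to end."""

    def __init__(self, n_stages=9):
        super().__init__()
        self.stages = nn.ModuleList(UnfoldingStage() for _ in range(n_stages))

    def forward(self, y, Phi, h, w):
        x = (y @ Phi).reshape(y.shape[0], 1, h, w)   # initial estimate Phi^T y
        for stage in self.stages:
            x = stage(x, y, Phi)
        return x
```

In this generic scheme only the image-level estimate `x` is passed between stages; MSDAUN's contribution, as described above, is to also transmit multi-scale feature information across stages and to route salient information through a dedicated texture path.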