Constrained Implicit Learning Framework for Neural Network Sparsification
Proceedings of the 16th Asian Conference on Machine Learning, PMLR 260:1-16, 2025.
Abstract
This paper presents a novel approach to sparsifying neural networks by transforming them into implicit models characterized by an equilibrium equation rather than the conventional hierarchical layer structure. Unlike traditional sparsification techniques that rely on network structure or specific loss functions, our method reduces the task to a constrained least-squares problem with sparsity-inducing constraints or penalties. Additionally, we introduce a scalable, parallelizable algorithm that addresses the computational complexity of this transformation while maintaining efficiency. Experimental results on the CIFAR-100 and 20NewsGroup datasets demonstrate the effectiveness of our method, particularly at high pruning rates. This approach offers a versatile and efficient solution for reducing neural network parameters. Furthermore, we observe that a moderate subset of the training data suffices to achieve competitive performance, highlighting the robustness and information-capturing capability of our approach.
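To make the core idea concrete, the sketch below illustrates one plausible reading of the abstract (it is not the authors' code): in an implicit model with equilibrium equation x = phi(A x + B u), the rows of (A, B) can be refit by independent l1-penalized least-squares problems on states collected from training data, which is what makes the procedure trivially parallelizable. The synthetic data, the dimensions, and the use of scikit-learn's Lasso solver are illustrative assumptions.

```python
# Minimal sketch (assumed, not from the paper): sparsify the weight
# matrices (A, B) of an implicit model x = phi(A x + B u) by solving an
# l1-penalized least-squares fit of the pre-activation targets.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_state, n_input, n_samples = 64, 32, 500  # illustrative sizes

# Dense "teacher" weights, standing in for a trained network's weights.
A_dense = rng.normal(scale=0.1, size=(n_state, n_state))
B_dense = rng.normal(scale=0.1, size=(n_state, n_input))

# Inputs and equilibrium states collected on a subset of the training data
# (random placeholders here; in practice these come from the network).
U = rng.normal(size=(n_samples, n_input))
X = rng.normal(size=(n_samples, n_state))

# Pre-activation targets the sparse model should reproduce: Z = X A^T + U B^T.
Z = X @ A_dense.T + U @ B_dense.T

# One independent l1-penalized least-squares problem per state coordinate;
# the rows decouple, so this loop can be parallelized across workers.
features = np.hstack([X, U])                     # shape (n_samples, n_state + n_input)
AB_sparse = np.zeros((n_state, n_state + n_input))
for i in range(n_state):
    lasso = Lasso(alpha=0.01, fit_intercept=False, max_iter=5000)
    lasso.fit(features, Z[:, i])
    AB_sparse[i] = lasso.coef_

A_sparse, B_sparse = AB_sparse[:, :n_state], AB_sparse[:, n_state:]
print(f"fraction of zero weights: {np.mean(AB_sparse == 0):.2%}")
```

The per-row decoupling is the key design point: each coordinate's fit touches only one column of the targets, so a high pruning rate can be traded against fit error simply by tuning the penalty strength (here the hypothetical `alpha`), without reference to the original layer structure or loss function.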