Constrained Implicit Learning Framework for Neural Network Sparsification

Alicia Y. Tsai, Wenzhi Gao, Laurent Ghaoui
Proceedings of the 16th Asian Conference on Machine Learning, PMLR 260:1-16, 2025.

Abstract

This paper presents a novel approach to sparsify neural networks by transforming them into implicit models characterized by an equilibrium equation rather than the conventional hierarchical layer structure. Unlike traditional sparsification techniques reliant on network structure or specific loss functions, our method reduces the process to a constrained least-squares problem with sparsity-inducing constraints or penalties. Additionally, we introduce a scalable algorithm that can be parallelized, addressing the computational complexities associated with this transformation while maintaining efficiency. Experimental results on CIFAR-100 and 20NewsGroup datasets demonstrate the effectiveness of our method, particularly in scenarios with high pruning rates. This approach offers a versatile and efficient solution for neural network parameter reduction. Furthermore, we observe that a moderate subset of the training data suffices to achieve competitive performance, highlighting the robustness and information-capturing capability of our approach.
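
The sketch below is purely illustrative and not the authors' released code: assuming the implicit model takes the common equilibrium form x = φ(Ax + Bu), one way to read "a constrained least-squares problem with sparsity-inducing penalties" is a row-wise L1-penalized fit of the weight matrix W = [A B] to states and pre-activations collected from the trained network. The function names, shapes, and the choice of an L1 penalty (rather than an explicit constraint) are assumptions made for illustration; the row-wise structure is what makes the fit easy to parallelize.

```python
# Illustrative sketch (not the paper's implementation): sparsify W = [A B]
# by solving, for each row i, an independent L1-penalized least-squares problem
#     min_w  ||Z_i - w^T M||_2^2 + lam * ||w||_1,
# where M stacks collected states X and inputs U (one column per sample) and
# Z_i holds the corresponding target pre-activations for hidden unit i.
import numpy as np
from joblib import Parallel, delayed
from sklearn.linear_model import Lasso


def sparsify_rows(M, Z, lam=1e-3, n_jobs=-1):
    """Fit each row of W independently with an L1 penalty.

    M : (n + p, N) stacked states/inputs, one column per sample.
    Z : (n, N) target pre-activations, one row per hidden unit.
    Returns W of shape (n, n + p), typically with many exact zeros.
    """
    def fit_row(z_i):
        model = Lasso(alpha=lam, fit_intercept=False, max_iter=5000)
        model.fit(M.T, z_i)  # sklearn expects samples as rows
        return model.coef_

    # Rows are independent problems, so they can be solved in parallel.
    rows = Parallel(n_jobs=n_jobs)(delayed(fit_row)(z_i) for z_i in Z)
    return np.vstack(rows)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    M = rng.standard_normal((50, 200))              # 40 states + 10 inputs, 200 samples
    W_true = rng.standard_normal((40, 50)) * (rng.random((40, 50)) < 0.1)
    Z = W_true @ M + 0.01 * rng.standard_normal((40, 200))
    W_hat = sparsify_rows(M, Z, lam=0.05)
    print("nonzero fraction:", np.mean(W_hat != 0))
```

Raising `lam` trades reconstruction error for a higher pruning rate, which mirrors the abstract's observation that the method remains effective at high pruning rates.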

Cite this Paper


BibTeX
@InProceedings{pmlr-v260-tsai25a, title = {Constrained Implicit Learning Framework for Neural Network Sparsification}, author = {Tsai, Alicia Y. and Gao, Wenzhi and Ghaoui, Laurent}, booktitle = {Proceedings of the 16th Asian Conference on Machine Learning}, pages = {1--16}, year = {2025}, editor = {Nguyen, Vu and Lin, Hsuan-Tien}, volume = {260}, series = {Proceedings of Machine Learning Research}, month = {05--08 Dec}, publisher = {PMLR}, pdf = {https://raw.githubusercontent.com/mlresearch/v260/main/assets/tsai25a/tsai25a.pdf}, url = {https://proceedings.mlr.press/v260/tsai25a.html}, abstract = {This paper presents a novel approach to sparsify neural networks by transforming them into implicit models characterized by an equilibrium equation rather than the conventional hierarchical layer structure. Unlike traditional sparsification techniques reliant on network structure or specific loss functions, our method simplifies the process to a simple constrained least-squared problem with sparsity-inducing constraints or penalties. Additionally, we introduce a scalable algorithm that can be parallelized, addressing the computational complexities associated with this transformation while maintaining efficiency. Experimental results on CIFAR-100 and 20NewsGroup datasets demonstrate the high effectiveness of our method, particularly in scenarios with high pruning rates. This approach offers a versatile and efficient solution for neural network parameter reduction. Furthermore, we observe that a moderate subset of the training data suffices to achieve competitive performance, highlighting the robustness and information-capturing capability of our approach.} }
Endnote
%0 Conference Paper %T Constrained Implicit Learning Framework for Neural Network Sparsification %A Alicia Y. Tsai %A Wenzhi Gao %A Laurent Ghaoui %B Proceedings of the 16th Asian Conference on Machine Learning %C Proceedings of Machine Learning Research %D 2025 %E Vu Nguyen %E Hsuan-Tien Lin %F pmlr-v260-tsai25a %I PMLR %P 1--16 %U https://proceedings.mlr.press/v260/tsai25a.html %V 260 %X This paper presents a novel approach to sparsify neural networks by transforming them into implicit models characterized by an equilibrium equation rather than the conventional hierarchical layer structure. Unlike traditional sparsification techniques reliant on network structure or specific loss functions, our method simplifies the process to a simple constrained least-squared problem with sparsity-inducing constraints or penalties. Additionally, we introduce a scalable algorithm that can be parallelized, addressing the computational complexities associated with this transformation while maintaining efficiency. Experimental results on CIFAR-100 and 20NewsGroup datasets demonstrate the high effectiveness of our method, particularly in scenarios with high pruning rates. This approach offers a versatile and efficient solution for neural network parameter reduction. Furthermore, we observe that a moderate subset of the training data suffices to achieve competitive performance, highlighting the robustness and information-capturing capability of our approach.
APA
Tsai, A.Y., Gao, W. & Ghaoui, L. (2025). Constrained Implicit Learning Framework for Neural Network Sparsification. Proceedings of the 16th Asian Conference on Machine Learning, in Proceedings of Machine Learning Research 260:1-16. Available from https://proceedings.mlr.press/v260/tsai25a.html.
