Less is More – Towards parsimonious multi-task models using structured sparsity

Richa Upadhyay, Ronald Phlypo, Rajkumar Saini, Marcus Liwicki
Conference on Parsimony and Learning, PMLR 234:590-601, 2024.

Abstract

Model sparsification in deep learning promotes simpler, more interpretable models with fewer parameters. This not only reduces the model’s memory footprint and computational needs but also shortens inference time. This work focuses on creating sparse models optimized for multiple tasks with fewer parameters. These parsimonious models also possess the potential to match or outperform dense models in terms of performance. In this work, we introduce channel-wise $l_1/l_2$ group sparsity in the parameters (or weights) of the shared convolutional layers of the multi-task learning model. This approach facilitates the removal of extraneous groups, i.e., channels (due to $l_1$ regularization), and also imposes a penalty on the weights, further enhancing the learning of all tasks (due to $l_2$ regularization). We analyze the results of group sparsity in both single-task and multi-task settings on two widely used multi-task learning datasets: NYU-v2 and CelebAMask-HQ. On both datasets, each of which consists of three different computer vision tasks, multi-task models with approximately 70% sparsity outperform their dense equivalents. We also investigate how changing the degree of sparsification influences the model’s performance, the overall sparsity percentage, the patterns of sparsity, and the inference time.
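For illustration, the following is a minimal PyTorch sketch of a channel-wise $l_1/l_2$ group-sparsity (group lasso) penalty on shared convolutional weights, in the spirit of the approach described in the abstract; it is not the authors' released code. The function name channelwise_group_sparsity, the argument shared_encoder, the weighting lam, and the choice of grouping by output channel are assumptions made here for concreteness.

import torch
import torch.nn as nn

def channelwise_group_sparsity(shared_encoder, lam=1e-4):
    # l1 norm across channel groups of each group's l2 norm (group lasso):
    # the l1 part drives whole channels to zero, while the l2 part jointly
    # shrinks the weights within the surviving channels.
    penalty = 0.0
    for layer in shared_encoder.modules():
        if isinstance(layer, nn.Conv2d):
            w = layer.weight                             # (out_channels, in_channels, kH, kW)
            per_channel = w.flatten(1).norm(p=2, dim=1)  # l2 norm of each output-channel group
            penalty = penalty + per_channel.sum()        # l1 over the group norms
    return lam * penalty

# Hypothetical usage with a hard-parameter-sharing multi-task model:
# loss = sum(task_losses) + channelwise_group_sparsity(model.shared_encoder, lam=1e-4)

Adding this penalty to the sum of the task losses encourages channels that no task relies on to vanish during training, so they can subsequently be pruned from the shared backbone.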

Cite this Paper


BibTeX
@InProceedings{pmlr-v234-upadhyay24a,
  title     = {Less is More – Towards parsimonious multi-task models using structured sparsity},
  author    = {Upadhyay, Richa and Phlypo, Ronald and Saini, Rajkumar and Liwicki, Marcus},
  booktitle = {Conference on Parsimony and Learning},
  pages     = {590--601},
  year      = {2024},
  editor    = {Chi, Yuejie and Dziugaite, Gintare Karolina and Qu, Qing and Wang, Atlas and Zhu, Zhihui},
  volume    = {234},
  series    = {Proceedings of Machine Learning Research},
  month     = {03--06 Jan},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v234/upadhyay24a/upadhyay24a.pdf},
  url       = {https://proceedings.mlr.press/v234/upadhyay24a.html}
}
Endnote
%0 Conference Paper
%T Less is More – Towards parsimonious multi-task models using structured sparsity
%A Richa Upadhyay
%A Ronald Phlypo
%A Rajkumar Saini
%A Marcus Liwicki
%B Conference on Parsimony and Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Yuejie Chi
%E Gintare Karolina Dziugaite
%E Qing Qu
%E Atlas Wang
%E Zhihui Zhu
%F pmlr-v234-upadhyay24a
%I PMLR
%P 590--601
%U https://proceedings.mlr.press/v234/upadhyay24a.html
%V 234
APA
Upadhyay, R., Phlypo, R., Saini, R. & Liwicki, M. (2024). Less is More – Towards parsimonious multi-task models using structured sparsity. Conference on Parsimony and Learning, in Proceedings of Machine Learning Research 234:590-601. Available from https://proceedings.mlr.press/v234/upadhyay24a.html.