Accelerating the Low-Rank Decomposed Models

Habib Hajimolahoseini, Walid Ahmed, Shuangyue Wen, Yang Liu
Proceedings of The 4th NeurIPS Efficient Natural Language and Speech Processing Workshop, PMLR 262:222-231, 2024.

Abstract

Tensor decomposition is a mathematically supported technique for data compression. It consists of applying a low-rank decomposition to the tensors or matrices of a model in order to reduce the redundancy of the data. However, it is not a popular technique for compressing AI models due to the large number of new layers added to the architecture after decomposition. Although the number of parameters can shrink significantly, the decomposed model can be more than twice as deep, which adds latency to training and inference. In this paper, we present a comprehensive study of how to modify the low-rank decomposition technique in AI models so that we benefit from both high accuracy and low memory consumption while also speeding up training and inference.
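
To make the depth trade-off concrete, the sketch below shows the standard factorization the abstract refers to: a single linear layer replaced by two smaller layers obtained from a truncated SVD of its weight matrix. This is a minimal illustration in PyTorch, not the paper's proposed modification; the function name decompose_linear and the rank of 64 are illustrative choices.

# Minimal sketch: SVD-based low-rank decomposition of one linear layer.
# Assumptions: PyTorch, a plain nn.Linear, and an arbitrary illustrative rank.
import torch
import torch.nn as nn


def decompose_linear(layer: nn.Linear, rank: int) -> nn.Sequential:
    """Approximate `layer` with two low-rank linear layers via truncated SVD."""
    W = layer.weight.data                          # shape: (out_features, in_features)
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    U_r = U[:, :rank] * S[:rank]                   # (out_features, rank)
    V_r = Vh[:rank, :]                             # (rank, in_features)

    first = nn.Linear(layer.in_features, rank, bias=False)
    second = nn.Linear(rank, layer.out_features, bias=layer.bias is not None)
    first.weight.data = V_r
    second.weight.data = U_r
    if layer.bias is not None:
        second.bias.data = layer.bias.data
    # One layer becomes two: this is the extra depth the abstract mentions.
    return nn.Sequential(first, second)


# A 1024x1024 layer (~1.05M parameters) decomposed with rank 64 keeps
# roughly 0.13M parameters but doubles the number of layers on that path.
layer = nn.Linear(1024, 1024)
low_rank = decompose_linear(layer, rank=64)
x = torch.randn(8, 1024)
print(torch.dist(layer(x), low_rank(x)))           # approximation error

Applied across every layer of a network, this factorization shrinks the parameter count substantially but lengthens the sequential chain of operations, which is the source of the training and inference latency the paper addresses.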

Cite this Paper


BibTeX
@InProceedings{pmlr-v262-hajimolahoseini24b,
  title     = {Accelerating the Low-Rank Decomposed Models},
  author    = {Hajimolahoseini, Habib and Ahmed, Walid and Wen, Shuangyue and Liu, Yang},
  booktitle = {Proceedings of The 4th NeurIPS Efficient Natural Language and Speech Processing Workshop},
  pages     = {222--231},
  year      = {2024},
  editor    = {Rezagholizadeh, Mehdi and Passban, Peyman and Samiee, Soheila and Partovi Nia, Vahid and Cheng, Yu and Deng, Yue and Liu, Qun and Chen, Boxing},
  volume    = {262},
  series    = {Proceedings of Machine Learning Research},
  month     = {14 Dec},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v262/main/assets/hajimolahoseini24b/hajimolahoseini24b.pdf},
  url       = {https://proceedings.mlr.press/v262/hajimolahoseini24b.html},
  abstract  = {Tensor decomposition is a mathematically supported technique for data compression. It consists of applying a low-rank decomposition to the tensors or matrices of a model in order to reduce the redundancy of the data. However, it is not a popular technique for compressing AI models due to the large number of new layers added to the architecture after decomposition. Although the number of parameters can shrink significantly, the decomposed model can be more than twice as deep, which adds latency to training and inference. In this paper, we present a comprehensive study of how to modify the low-rank decomposition technique in AI models so that we benefit from both high accuracy and low memory consumption while also speeding up training and inference.}
}
Endnote
%0 Conference Paper
%T Accelerating the Low-Rank Decomposed Models
%A Habib Hajimolahoseini
%A Walid Ahmed
%A Shuangyue Wen
%A Yang Liu
%B Proceedings of The 4th NeurIPS Efficient Natural Language and Speech Processing Workshop
%C Proceedings of Machine Learning Research
%D 2024
%E Mehdi Rezagholizadeh
%E Peyman Passban
%E Soheila Samiee
%E Vahid Partovi Nia
%E Yu Cheng
%E Yue Deng
%E Qun Liu
%E Boxing Chen
%F pmlr-v262-hajimolahoseini24b
%I PMLR
%P 222--231
%U https://proceedings.mlr.press/v262/hajimolahoseini24b.html
%V 262
%X Tensor decomposition is a mathematically supported technique for data compression. It consists of applying a low-rank decomposition to the tensors or matrices of a model in order to reduce the redundancy of the data. However, it is not a popular technique for compressing AI models due to the large number of new layers added to the architecture after decomposition. Although the number of parameters can shrink significantly, the decomposed model can be more than twice as deep, which adds latency to training and inference. In this paper, we present a comprehensive study of how to modify the low-rank decomposition technique in AI models so that we benefit from both high accuracy and low memory consumption while also speeding up training and inference.
APA
Hajimolahoseini, H., Ahmed, W., Wen, S. & Liu, Y. (2024). Accelerating the Low-Rank Decomposed Models. Proceedings of The 4th NeurIPS Efficient Natural Language and Speech Processing Workshop, in Proceedings of Machine Learning Research 262:222-231. Available from https://proceedings.mlr.press/v262/hajimolahoseini24b.html.