COGNATE: Acceleration of Sparse Tensor Programs on Emerging Hardware using Transfer Learning

Chamika Sudusinghe, Gerasimos Gerogiannis, Damitha Lenadora, Charles Block, Josep Torrellas, Charith Mendis
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:57231-57248, 2025.

Abstract

Sparse tensor programs are essential in deep learning and graph analytics, driving the need for optimized processing. To meet this demand, specialized hardware accelerators are being developed. Optimizing these programs for accelerators is challenging for two reasons: program performance is highly sensitive to variations in sparse inputs, and early-stage accelerators rely on expensive simulators. Therefore, ML-based cost models used for optimizing such programs on general-purpose hardware are often ineffective for early-stage accelerators, as they require large datasets for proper training. To this end, we introduce COGNATE, a novel framework that leverages inexpensive data samples from general-purpose hardware (e.g., CPUs) to train cost models, followed by few-shot fine-tuning on emerging hardware. COGNATE exploits the homogeneity of input features across hardware platforms while effectively mitigating heterogeneity, enabling cost model training with just 5% of the data samples needed by accelerator-specific models to achieve comparable performance. We conduct extensive experiments to demonstrate that COGNATE outperforms existing techniques, achieving average speedups of 1.47$\times$ (up to 5.46$\times$) for SpMM and 1.39$\times$ (up to 4.22$\times$) for SDDMM.
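The abstract describes a pre-train-then-fine-tune workflow: a cost model is first trained on inexpensive CPU performance samples and then adapted to an emerging accelerator with only a few simulator-collected samples. The sketch below is not the authors' implementation; it is a minimal illustration of that general idea, assuming a shared feature encoder with a hardware-specific prediction head, and using purely synthetic stand-in data with hypothetical feature counts and sample sizes.

import torch
import torch.nn as nn

class CostModel(nn.Module):
    """Illustrative cost model: shared encoder + hardware-specific head (assumed design)."""
    def __init__(self, n_features: int = 16, hidden: int = 64):
        super().__init__()
        # Encoder over input features that are assumed homogeneous across hardware platforms.
        self.encoder = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Hardware-specific head predicting a scalar execution cost.
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        return self.head(self.encoder(x)).squeeze(-1)

def train(model, x, y, epochs, lr):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    return model

# Synthetic stand-ins for measured data (purely illustrative, not real measurements).
torch.manual_seed(0)
cpu_x, cpu_y = torch.randn(2000, 16), torch.randn(2000)   # plentiful, cheap CPU samples
acc_x, acc_y = torch.randn(100, 16), torch.randn(100)     # scarce accelerator (simulator) samples

model = CostModel()
train(model, cpu_x, cpu_y, epochs=200, lr=1e-3)            # pre-train on CPU data

# Few-shot fine-tuning on the accelerator samples: reset the hardware-specific
# head and adapt with a smaller learning rate (one plausible adaptation scheme).
model.head = nn.Linear(64, 1)
train(model, acc_x, acc_y, epochs=100, lr=1e-4)

Whether the head is reset, which layers are frozen, and how the hardware gap is mitigated are design choices of the actual framework; the sketch only conveys the data-efficiency idea of reusing CPU-trained components.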

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-sudusinghe25a,
  title     = {{COGNATE}: Acceleration of Sparse Tensor Programs on Emerging Hardware using Transfer Learning},
  author    = {Sudusinghe, Chamika and Gerogiannis, Gerasimos and Lenadora, Damitha and Block, Charles and Torrellas, Josep and Mendis, Charith},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {57231--57248},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/sudusinghe25a/sudusinghe25a.pdf},
  url       = {https://proceedings.mlr.press/v267/sudusinghe25a.html},
  abstract  = {Sparse tensor programs are essential in deep learning and graph analytics, driving the need for optimized processing. To meet this demand, specialized hardware accelerators are being developed. Optimizing these programs for accelerators is challenging for two reasons: program performance is highly sensitive to variations in sparse inputs, and early-stage accelerators rely on expensive simulators. Therefore, ML-based cost models used for optimizing such programs on general-purpose hardware are often ineffective for early-stage accelerators, as they require large datasets for proper training. To this end, we introduce COGNATE, a novel framework that leverages inexpensive data samples from general-purpose hardware (e.g., CPUs) to train cost models, followed by few-shot fine-tuning on emerging hardware. COGNATE exploits the homogeneity of input features across hardware platforms while effectively mitigating heterogeneity, enabling cost model training with just 5% of the data samples needed by accelerator-specific models to achieve comparable performance. We conduct extensive experiments to demonstrate that COGNATE outperforms existing techniques, achieving average speedups of 1.47$\times$ (up to 5.46$\times$) for SpMM and 1.39$\times$ (up to 4.22$\times$) for SDDMM.}
}
Endnote
%0 Conference Paper
%T COGNATE: Acceleration of Sparse Tensor Programs on Emerging Hardware using Transfer Learning
%A Chamika Sudusinghe
%A Gerasimos Gerogiannis
%A Damitha Lenadora
%A Charles Block
%A Josep Torrellas
%A Charith Mendis
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-sudusinghe25a
%I PMLR
%P 57231--57248
%U https://proceedings.mlr.press/v267/sudusinghe25a.html
%V 267
%X Sparse tensor programs are essential in deep learning and graph analytics, driving the need for optimized processing. To meet this demand, specialized hardware accelerators are being developed. Optimizing these programs for accelerators is challenging for two reasons: program performance is highly sensitive to variations in sparse inputs, and early-stage accelerators rely on expensive simulators. Therefore, ML-based cost models used for optimizing such programs on general-purpose hardware are often ineffective for early-stage accelerators, as they require large datasets for proper training. To this end, we introduce COGNATE, a novel framework that leverages inexpensive data samples from general-purpose hardware (e.g., CPUs) to train cost models, followed by few-shot fine-tuning on emerging hardware. COGNATE exploits the homogeneity of input features across hardware platforms while effectively mitigating heterogeneity, enabling cost model training with just 5% of the data samples needed by accelerator-specific models to achieve comparable performance. We conduct extensive experiments to demonstrate that COGNATE outperforms existing techniques, achieving average speedups of 1.47$\times$ (up to 5.46$\times$) for SpMM and 1.39$\times$ (up to 4.22$\times$) for SDDMM.
APA
Sudusinghe, C., Gerogiannis, G., Lenadora, D., Block, C., Torrellas, J. & Mendis, C. (2025). COGNATE: Acceleration of Sparse Tensor Programs on Emerging Hardware using Transfer Learning. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:57231-57248. Available from https://proceedings.mlr.press/v267/sudusinghe25a.html.