SHARP-Distill: A 68$\times$ Faster Recommender System with Hypergraph Neural Networks and Language Models

Saman Forouzandeh, Parham Moradi, Mahdi Jalili
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:17452-17488, 2025.

Abstract

This paper proposes SHARP-Distill (Speedy Hypergraph And Review-based Personalised Distillation), a novel knowledge distillation approach built on the teacher-student framework that combines Hypergraph Neural Networks (HGNNs) with language models to enhance recommendation quality while significantly reducing inference time. The teacher model leverages HGNNs to generate user and item embeddings from interaction data, capturing high-order and group relationships, and employs a pre-trained language model to extract rich semantic features from textual reviews. We use a contrastive learning mechanism to ensure structural consistency across the different representations. The student includes CompactGCN, a shallow, lightweight GCN designed to inherit high-order relationships while reducing computational complexity. Extensive experiments on real-world datasets demonstrate that SHARP-Distill achieves up to 68$\times$ faster inference than HGNN and 40$\times$ faster than LightGCN while maintaining competitive recommendation accuracy.
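
As a rough illustration of the distillation recipe the abstract describes, the sketch below pairs frozen teacher embeddings (standing in for the HGNN- and language-model-derived representations) with a shallow, LightGCN-style student trained to align with them. All names here (CompactGCNStudent, distillation_loss, the cosine-alignment objective, the toy adjacency) are illustrative assumptions rather than the authors' released code, and the paper's contrastive-learning and review-encoding components are not reproduced.

# Minimal sketch (not the authors' code) of the teacher-student distillation
# idea: a shallow GCN student trained to align with frozen teacher embeddings.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CompactGCNStudent(nn.Module):
    """Illustrative shallow student: ID embeddings plus a few parameter-free
    propagation steps over the normalized user-item graph, layer-averaged."""

    def __init__(self, num_users, num_items, dim=64, num_layers=2):
        super().__init__()
        self.user_emb = nn.Embedding(num_users, dim)
        self.item_emb = nn.Embedding(num_items, dim)
        self.num_layers = num_layers
        nn.init.xavier_uniform_(self.user_emb.weight)
        nn.init.xavier_uniform_(self.item_emb.weight)

    def forward(self, norm_adj):
        # norm_adj: sparse (num_users + num_items) square matrix, the
        # symmetrically normalized adjacency of the bipartite interaction graph.
        x = torch.cat([self.user_emb.weight, self.item_emb.weight], dim=0)
        layers = [x]
        for _ in range(self.num_layers):
            x = torch.sparse.mm(norm_adj, x)
            layers.append(x)
        x = torch.stack(layers, dim=0).mean(dim=0)
        return torch.split(x, [self.user_emb.num_embeddings,
                               self.item_emb.num_embeddings], dim=0)

def distillation_loss(student_u, student_i, teacher_u, teacher_i, alpha=0.5):
    # Cosine alignment between student and (frozen) teacher embeddings; one
    # plausible distillation objective, not necessarily the one used in the paper.
    align_u = 1.0 - F.cosine_similarity(student_u, teacher_u, dim=-1).mean()
    align_i = 1.0 - F.cosine_similarity(student_i, teacher_i, dim=-1).mean()
    return alpha * align_u + (1.0 - alpha) * align_i

if __name__ == "__main__":
    num_users, num_items, dim = 8, 10, 16
    n = num_users + num_items
    norm_adj = torch.eye(n).to_sparse()        # toy adjacency, just to keep the demo runnable
    teacher_u = torch.randn(num_users, dim)    # stand-in for HGNN + language-model teacher output
    teacher_i = torch.randn(num_items, dim)

    student = CompactGCNStudent(num_users, num_items, dim=dim)
    u, i = student(norm_adj)
    loss = distillation_loss(u, i, teacher_u, teacher_i)
    loss.backward()
    print(f"distillation loss: {loss.item():.4f}")

Keeping the student's only parameters in the ID embedding tables, with parameter-free propagation and layer averaging, is what makes this kind of student cheap at inference time relative to a full HGNN teacher.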

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-forouzandeh25a,
  title     = {{SHARP}-Distill: A 68$\times$ Faster Recommender System with Hypergraph Neural Networks and Language Models},
  author    = {Forouzandeh, Saman and Moradi, Parham and Jalili, Mahdi},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {17452--17488},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/forouzandeh25a/forouzandeh25a.pdf},
  url       = {https://proceedings.mlr.press/v267/forouzandeh25a.html}
}
APA
Forouzandeh, S., Moradi, P. & Jalili, M. (2025). SHARP-Distill: A 68$\times$ Faster Recommender System with Hypergraph Neural Networks and Language Models. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:17452-17488. Available from https://proceedings.mlr.press/v267/forouzandeh25a.html.
