SHARP-Distill: A 68$\times$ Faster Recommender System with Hypergraph Neural Networks and Language Models
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:17452-17488, 2025.
Abstract
This paper proposes SHARP-Distill (Speedy Hypergraph And Review-based Personalised Distillation), a novel knowledge distillation approach built on the teacher-student framework that combines Hypergraph Neural Networks (HGNNs) with language models to enhance recommendation quality while significantly reducing inference time. The teacher model leverages HGNNs to generate user and item embeddings from interaction data, capturing high-order and group relationships, and employs a pre-trained language model to extract rich semantic features from textual reviews. We apply a contrastive learning mechanism to ensure structural consistency between the different representations. The student model is a shallow, lightweight GCN, called CompactGCN, designed to inherit high-order relationships while reducing computational complexity. Extensive experiments on real-world datasets demonstrate that SHARP-Distill achieves up to 68$\times$ faster inference than HGNN and 40$\times$ faster than LightGCN while maintaining competitive recommendation accuracy.
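The abstract does not spell out the student architecture or the distillation objective, so the following is only a minimal sketch of one plausible reading: a one-layer, LightGCN-style student standing in for CompactGCN, trained against frozen teacher embeddings with an embedding-regression term plus an InfoNCE-style contrastive alignment term. The class names, propagation rule, and hyperparameters (`temperature`, `alpha`) are illustrative assumptions, not the paper's specification.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CompactGCNStudent(nn.Module):
    """Hypothetical shallow, lightweight GCN student (a stand-in for CompactGCN).

    A single sparse propagation over the normalized user-item interaction
    graph replaces deep HGNN message passing, which is where the inference
    speed-up would come from.
    """

    def __init__(self, num_users, num_items, dim=64):
        super().__init__()
        self.user_emb = nn.Embedding(num_users, dim)
        self.item_emb = nn.Embedding(num_items, dim)
        nn.init.xavier_uniform_(self.user_emb.weight)
        nn.init.xavier_uniform_(self.item_emb.weight)

    def forward(self, norm_adj):
        # norm_adj: sparse, symmetrically normalized adjacency of the
        # bipartite user-item graph, shape (U + I) x (U + I).
        x = torch.cat([self.user_emb.weight, self.item_emb.weight], dim=0)
        propagated = torch.sparse.mm(norm_adj, x)   # one propagation step
        out = 0.5 * (x + propagated)                # mix ego and neighbor signals
        return out.split(
            [self.user_emb.num_embeddings, self.item_emb.num_embeddings], dim=0
        )


def distillation_loss(student_u, student_i, teacher_u, teacher_i,
                      temperature=0.2, alpha=0.5):
    """Assumed objective: regression toward frozen teacher embeddings plus an
    InfoNCE contrastive term aligning student and teacher views of the same
    user/item (other in-batch users/items act as negatives)."""
    reg = F.mse_loss(student_u, teacher_u) + F.mse_loss(student_i, teacher_i)

    def info_nce(s, t):
        s, t = F.normalize(s, dim=-1), F.normalize(t, dim=-1)
        logits = s @ t.T / temperature
        labels = torch.arange(s.size(0), device=s.device)
        return F.cross_entropy(logits, labels)

    con = info_nce(student_u, teacher_u) + info_nce(student_i, teacher_i)
    return alpha * reg + (1 - alpha) * con
```

In this reading, the teacher (HGNN plus language model) is run once offline to produce `teacher_u` and `teacher_i`, and only the shallow student is executed at serving time.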