Towards Vector Optimization on Low-Dimensional Vector Symbolic Architecture

Shijin Duan, Yejia Liu, Gaowen Liu, Ramana Rao Kompella, Shaolei Ren, Xiaolin Xu
Conference on Parsimony and Learning, PMLR 280:1413-1432, 2025.

Abstract

Vector Symbolic Architecture (VSA) is emerging in machine learning due to its efficiency, but it is hindered by issues of hyperdimensionality and accuracy. As a promising mitigation, the Low-Dimensional Computing (LDC) method significantly reduces the vector dimension by $\sim$100 times while maintaining accuracy, by employing gradient-based optimization. Despite its potential, LDC optimization for VSA is still underexplored. Our investigation into vector updates underscores the importance of stable, adaptive dynamics in LDC training. We also reveal the overlooked yet critical roles of batch normalization (BN) and knowledge distillation (KD) in standard approaches. Besides the accuracy boost, BN adds no computational overhead during inference, and KD significantly enhances inference confidence. Through extensive experiments and ablation studies across multiple benchmarks, we provide a thorough evaluation of our approach and extend the interpretation to binary neural network (BNN) optimization, which is similar to LDC and previously unaddressed in the BNN literature.
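As a reading aid (not part of the paper), the claim that BN adds no inference overhead rests on a standard identity: a trained BN layer is a fixed per-feature affine transform, so it can be folded into the preceding linear (e.g., class-scoring) layer offline. A minimal NumPy sketch of that folding, with all names and shapes purely illustrative:

import numpy as np

# Illustrative shapes: d-dimensional encoded input, k class scores.
rng = np.random.default_rng(0)
d, k = 64, 10
W = rng.standard_normal((k, d))            # linear weights (e.g., class vectors)
b = rng.standard_normal(k)                 # linear bias
gamma, beta = rng.standard_normal(k), rng.standard_normal(k)      # BN affine params
mu, var, eps = rng.standard_normal(k), rng.random(k) + 0.1, 1e-5  # BN running stats

def forward_with_bn(x):
    # Training-style path: linear layer followed by batch normalization.
    y = W @ x + b
    return gamma * (y - mu) / np.sqrt(var + eps) + beta

# Fold BN into the linear layer once, offline, after training.
scale = gamma / np.sqrt(var + eps)
W_fold = scale[:, None] * W
b_fold = scale * (b - mu) + beta

def forward_folded(x):
    # Inference path: a single linear layer, so BN costs nothing extra.
    return W_fold @ x + b_fold

x = rng.standard_normal(d)
assert np.allclose(forward_with_bn(x), forward_folded(x))

Because the fold is exact, the folded model produces numerically identical scores to the BN model while keeping the original cost of a single linear/similarity computation at inference time.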

Cite this Paper


BibTeX
@InProceedings{pmlr-v280-duan25a, title = {Towards Vector Optimization on Low-Dimensional Vector Symbolic Architecture}, author = {Duan, Shijin and Liu, Yejia and Liu, Gaowen and Kompella, Ramana Rao and Ren, Shaolei and Xu, Xiaolin}, booktitle = {Conference on Parsimony and Learning}, pages = {1413--1432}, year = {2025}, editor = {Chen, Beidi and Liu, Shijia and Pilanci, Mert and Su, Weijie and Sulam, Jeremias and Wang, Yuxiang and Zhu, Zhihui}, volume = {280}, series = {Proceedings of Machine Learning Research}, month = {24--27 Mar}, publisher = {PMLR}, pdf = {https://raw.githubusercontent.com/mlresearch/v280/main/assets/duan25a/duan25a.pdf}, url = {https://proceedings.mlr.press/v280/duan25a.html}, abstract = {Vector Symbolic Architecture (VSA) is emerging in machine learning due to its efficiency, but they are hindered by issues of hyperdimensionality and accuracy. As a promising mitigation, the Low-Dimensional Computing (LDC) method significantly reduces the vector dimension by $\sim$100 times while maintaining accuracy, by employing a gradient-based optimization. Despite its potential, LDC optimization for VSA is still underexplored. Our investigation into vector updates underscores the importance of stable, adaptive dynamics in LDC training. We also reveal the overlooked yet critical roles of batch normalization (BN) and knowledge distillation (KD) in standard approaches. Besides the accuracy boost, BN does not add computational overhead during inference, and KD significantly enhances inference confidence. Through extensive experiments and ablation studies across multiple benchmarks, we provide a thorough evaluation of our approach and extend the interpretability of binary neural network optimization similar to LDC, previously unaddressed in BNN literature.} }
Endnote
%0 Conference Paper %T Towards Vector Optimization on Low-Dimensional Vector Symbolic Architecture %A Shijin Duan %A Yejia Liu %A Gaowen Liu %A Ramana Rao Kompella %A Shaolei Ren %A Xiaolin Xu %B Conference on Parsimony and Learning %C Proceedings of Machine Learning Research %D 2025 %E Beidi Chen %E Shijia Liu %E Mert Pilanci %E Weijie Su %E Jeremias Sulam %E Yuxiang Wang %E Zhihui Zhu %F pmlr-v280-duan25a %I PMLR %P 1413--1432 %U https://proceedings.mlr.press/v280/duan25a.html %V 280 %X Vector Symbolic Architecture (VSA) is emerging in machine learning due to its efficiency, but they are hindered by issues of hyperdimensionality and accuracy. As a promising mitigation, the Low-Dimensional Computing (LDC) method significantly reduces the vector dimension by $\sim$100 times while maintaining accuracy, by employing a gradient-based optimization. Despite its potential, LDC optimization for VSA is still underexplored. Our investigation into vector updates underscores the importance of stable, adaptive dynamics in LDC training. We also reveal the overlooked yet critical roles of batch normalization (BN) and knowledge distillation (KD) in standard approaches. Besides the accuracy boost, BN does not add computational overhead during inference, and KD significantly enhances inference confidence. Through extensive experiments and ablation studies across multiple benchmarks, we provide a thorough evaluation of our approach and extend the interpretability of binary neural network optimization similar to LDC, previously unaddressed in BNN literature.
APA
Duan, S., Liu, Y., Liu, G., Kompella, R. R., Ren, S. & Xu, X. (2025). Towards Vector Optimization on Low-Dimensional Vector Symbolic Architecture. Conference on Parsimony and Learning, in Proceedings of Machine Learning Research 280:1413-1432. Available from https://proceedings.mlr.press/v280/duan25a.html.