Random Linear Projections Loss for Hyperplane-Based Optimization in Neural Networks

Shyam Venkatasubramanian, Ahmed Aloui, Vahid Tarokh
Proceedings of the Fortieth Conference on Uncertainty in Artificial Intelligence, PMLR 244:3425-3447, 2024.

Abstract

Advancing loss function design is pivotal for optimizing neural network training and performance. This work introduces Random Linear Projections (RLP) loss, a novel approach that enhances training efficiency by leveraging geometric relationships within the data. Distinct from traditional loss functions that target minimizing pointwise errors, RLP loss operates by minimizing the distance between sets of hyperplanes connecting fixed-size subsets of feature-prediction pairs and feature-label pairs. Our empirical evaluations, conducted across benchmark datasets and synthetic examples, demonstrate that neural networks trained with RLP loss outperform those trained with traditional loss functions, achieving improved performance with fewer data samples, and exhibiting greater robustness to additive noise. We provide theoretical analysis supporting our empirical findings.
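The abstract's description of RLP loss can be sketched in code. The following is an illustrative reconstruction based only on the abstract, not the paper's reference implementation: the subset size of d+1 pairs (the minimum to determine an affine hyperplane in d dimensions), the least-squares fit, and the function name `rlp_loss` are all assumptions made for this sketch.

```python
import numpy as np

def rlp_loss(X, y_pred, y_true, n_subsets=8, rng=None):
    """Hypothetical sketch of Random Linear Projections (RLP) loss.

    For each random subset of d+1 samples (d = feature dimension), fit
    the affine hyperplane through the feature-prediction pairs and the
    one through the feature-label pairs, then penalize the squared
    distance between the two coefficient vectors, averaged over subsets.
    """
    rng = np.random.default_rng(rng)
    n, d = X.shape
    A = np.hstack([X, np.ones((n, 1))])  # affine design matrix [X | 1]
    total = 0.0
    for _ in range(n_subsets):
        idx = rng.choice(n, size=d + 1, replace=False)
        # Hyperplane through feature-prediction pairs of this subset
        w_pred, *_ = np.linalg.lstsq(A[idx], y_pred[idx], rcond=None)
        # Hyperplane through feature-label pairs of the same subset
        w_true, *_ = np.linalg.lstsq(A[idx], y_true[idx], rcond=None)
        total += np.mean((w_pred - w_true) ** 2)
    return total / n_subsets
```

Under this reading, the loss vanishes exactly when predictions and labels induce the same hyperplane on every sampled subset, which explains the abstract's contrast with pointwise losses: the penalty acts on geometric relationships among groups of samples rather than on individual errors.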

Cite this Paper


BibTeX
@InProceedings{pmlr-v244-venkatasubramanian24a,
  title     = {Random Linear Projections Loss for Hyperplane-Based Optimization in Neural Networks},
  author    = {Venkatasubramanian, Shyam and Aloui, Ahmed and Tarokh, Vahid},
  booktitle = {Proceedings of the Fortieth Conference on Uncertainty in Artificial Intelligence},
  pages     = {3425--3447},
  year      = {2024},
  editor    = {Kiyavash, Negar and Mooij, Joris M.},
  volume    = {244},
  series    = {Proceedings of Machine Learning Research},
  month     = {15--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v244/main/assets/venkatasubramanian24a/venkatasubramanian24a.pdf},
  url       = {https://proceedings.mlr.press/v244/venkatasubramanian24a.html},
  abstract  = {Advancing loss function design is pivotal for optimizing neural network training and performance. This work introduces Random Linear Projections (RLP) loss, a novel approach that enhances training efficiency by leveraging geometric relationships within the data. Distinct from traditional loss functions that target minimizing pointwise errors, RLP loss operates by minimizing the distance between sets of hyperplanes connecting fixed-size subsets of feature-prediction pairs and feature-label pairs. Our empirical evaluations, conducted across benchmark datasets and synthetic examples, demonstrate that neural networks trained with RLP loss outperform those trained with traditional loss functions, achieving improved performance with fewer data samples, and exhibiting greater robustness to additive noise. We provide theoretical analysis supporting our empirical findings.}
}
Endnote
%0 Conference Paper
%T Random Linear Projections Loss for Hyperplane-Based Optimization in Neural Networks
%A Shyam Venkatasubramanian
%A Ahmed Aloui
%A Vahid Tarokh
%B Proceedings of the Fortieth Conference on Uncertainty in Artificial Intelligence
%C Proceedings of Machine Learning Research
%D 2024
%E Negar Kiyavash
%E Joris M. Mooij
%F pmlr-v244-venkatasubramanian24a
%I PMLR
%P 3425--3447
%U https://proceedings.mlr.press/v244/venkatasubramanian24a.html
%V 244
%X Advancing loss function design is pivotal for optimizing neural network training and performance. This work introduces Random Linear Projections (RLP) loss, a novel approach that enhances training efficiency by leveraging geometric relationships within the data. Distinct from traditional loss functions that target minimizing pointwise errors, RLP loss operates by minimizing the distance between sets of hyperplanes connecting fixed-size subsets of feature-prediction pairs and feature-label pairs. Our empirical evaluations, conducted across benchmark datasets and synthetic examples, demonstrate that neural networks trained with RLP loss outperform those trained with traditional loss functions, achieving improved performance with fewer data samples, and exhibiting greater robustness to additive noise. We provide theoretical analysis supporting our empirical findings.
APA
Venkatasubramanian, S., Aloui, A. & Tarokh, V. (2024). Random Linear Projections Loss for Hyperplane-Based Optimization in Neural Networks. Proceedings of the Fortieth Conference on Uncertainty in Artificial Intelligence, in Proceedings of Machine Learning Research 244:3425-3447. Available from https://proceedings.mlr.press/v244/venkatasubramanian24a.html.