Self-similar Epochs: Value in arrangement

Eliav Buchnik, Edith Cohen, Avinatan Hasidim, Yossi Matias
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:841-850, 2019.

Abstract

Optimization of machine learning models is commonly performed through stochastic gradient updates on randomly ordered training examples. This practice means that each fraction of an epoch comprises an independent random sample of the training data that may not preserve informative structure present in the full data. We hypothesize that training can be more effective with self-similar arrangements that potentially allow each epoch to provide the benefits of multiple epochs. We study this for “matrix factorization” – the common task of learning metric embeddings of entities such as queries, videos, or words from example pairwise associations. We construct arrangements that preserve the weighted Jaccard similarities of rows and columns and experimentally observe training acceleration of 3%–37% on synthetic and recommendation datasets. Principled arrangements of training examples emerge as a novel and potentially powerful enhancement to SGD that merits further exploration.
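
The abstract names two concrete ingredients: the weighted Jaccard similarity of rows and columns, and SGD over an ordered stream of pairwise-association examples. The minimal Python sketch below makes the role of the arrangement concrete; the squared-loss factorization model, the (row, column, weight) triple format, and all function names here are illustrative assumptions rather than the paper's actual construction, which builds orderings so that each fraction of an epoch retains the structure of the full data.

import numpy as np

def weighted_jaccard(u, v):
    # Weighted Jaccard similarity of two nonnegative weight vectors:
    # sum_i min(u_i, v_i) / sum_i max(u_i, v_i).
    denom = np.maximum(u, v).sum()
    return np.minimum(u, v).sum() / denom if denom > 0 else 0.0

def sgd_mf_epoch(examples, P, Q, lr=0.01, reg=0.02):
    # One SGD epoch of squared-loss matrix factorization over `examples`,
    # an ordered sequence of (row, column, weight) association triples.
    # The order of `examples` is exactly the "arrangement" being studied.
    for i, j, w in examples:
        err = w - P[i] @ Q[j]             # residual of the current prediction
        grad_p = err * Q[j] - reg * P[i]  # negative gradient w.r.t. row embedding
        grad_q = err * P[i] - reg * Q[j]  # negative gradient w.r.t. column embedding
        P[i] += lr * grad_p
        Q[j] += lr * grad_q

# Baseline: the standard randomly ordered epoch that the abstract contrasts against.
rng = np.random.default_rng(0)
n_rows, n_cols, dim = 50, 40, 8
P = 0.1 * rng.standard_normal((n_rows, dim))
Q = 0.1 * rng.standard_normal((n_cols, dim))
examples = [(rng.integers(n_rows), rng.integers(n_cols), 1.0) for _ in range(500)]
rng.shuffle(examples)  # random arrangement; the paper replaces this step
sgd_mf_epoch(examples, P, Q)

Replacing the shuffle with a similarity-preserving ordering is where the paper's contribution lies; the SGD update itself is unchanged.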

Cite this Paper


BibTeX
@InProceedings{pmlr-v97-buchnik19a,
  title     = {Self-similar Epochs: Value in arrangement},
  author    = {Buchnik, Eliav and Cohen, Edith and Hasidim, Avinatan and Matias, Yossi},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages     = {841--850},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume    = {97},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v97/buchnik19a/buchnik19a.pdf},
  url       = {https://proceedings.mlr.press/v97/buchnik19a.html}
}
Endnote
%0 Conference Paper
%T Self-similar Epochs: Value in arrangement
%A Eliav Buchnik
%A Edith Cohen
%A Avinatan Hasidim
%A Yossi Matias
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-buchnik19a
%I PMLR
%P 841--850
%U https://proceedings.mlr.press/v97/buchnik19a.html
%V 97
APA
Buchnik, E., Cohen, E., Hasidim, A. & Matias, Y. (2019). Self-similar Epochs: Value in arrangement. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:841-850. Available from https://proceedings.mlr.press/v97/buchnik19a.html.
