MISSION: Ultra Large-Scale Feature Selection using Count-Sketches

Amirali Aghazadeh, Ryan Spring, Daniel Lejeune, Gautam Dasarathy, Anshumali Shrivastava, Richard Baraniuk
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:80-88, 2018.

Abstract

Feature selection is an important challenge in machine learning. It plays a crucial role in the explainability of machine-driven decisions that are rapidly permeating throughout modern society. Unfortunately, the explosion in the size and dimensionality of real-world datasets poses a severe challenge to standard feature selection algorithms. Today, it is not uncommon for datasets to have billions of dimensions. At such scale, even storing the feature vector is impossible, causing most existing feature selection methods to fail. Workarounds like feature hashing, a standard approach to large-scale machine learning, help with computational feasibility, but at the cost of losing the interpretability of features. In this paper, we present MISSION, a novel framework for ultra large-scale feature selection that performs stochastic gradient descent while maintaining an efficient representation of the features in memory using a Count-Sketch data structure. MISSION retains the simplicity of feature hashing without sacrificing the interpretability of the features while using only O(log^2(p)) working memory. We demonstrate that MISSION accurately and efficiently performs feature selection on real-world, large-scale datasets with billions of dimensions.
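To make the core data structure concrete: a Count-Sketch stores approximate signed counts for a huge key space in a small table, supporting the kind of weight updates and queries that SGD needs. Below is a minimal illustrative sketch in Python; the class name, parameters, and hashing scheme are hypothetical and simplified, not the paper's implementation.

```python
import numpy as np

class CountSketch:
    """Minimal Count-Sketch: `depth` rows of `width` buckets.

    Each key is mapped, per row, to one bucket and a random +/-1 sign.
    Updates add the signed value; queries take the median of the signed
    bucket contents, giving an unbiased estimate of the key's total.
    (Illustrative only; names and parameters are not from the paper.)
    """

    def __init__(self, depth=5, width=2**10, seed=0):
        rng = np.random.default_rng(seed)
        self.depth, self.width = depth, width
        self.table = np.zeros((depth, width))
        # Independent hash seeds per row, for bucket index and for sign.
        self.bucket_seeds = rng.integers(1, 2**31 - 1, size=depth)
        self.sign_seeds = rng.integers(1, 2**31 - 1, size=depth)

    def _bucket(self, key, row):
        return hash((key, int(self.bucket_seeds[row]))) % self.width

    def _sign(self, key, row):
        return 1 if hash((key, int(self.sign_seeds[row]))) % 2 else -1

    def update(self, key, value):
        # E.g. add one SGD gradient step for feature `key`.
        for r in range(self.depth):
            self.table[r, self._bucket(key, r)] += self._sign(key, r) * value

    def query(self, key):
        # Median across rows cancels collision noise.
        return float(np.median([
            self._sign(key, r) * self.table[r, self._bucket(key, r)]
            for r in range(self.depth)
        ]))
```

The point for feature selection is that memory scales with the sketch size (depth times width), not with the ambient dimension p, while `query` still lets one recover the identities and weights of the heaviest features.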

Cite this Paper


BibTeX
@InProceedings{pmlr-v80-aghazadeh18a,
  title     = {{MISSION}: Ultra Large-Scale Feature Selection using Count-Sketches},
  author    = {Aghazadeh, Amirali and Spring, Ryan and Lejeune, Daniel and Dasarathy, Gautam and Shrivastava, Anshumali and Baraniuk, Richard},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages     = {80--88},
  year      = {2018},
  editor    = {Dy, Jennifer and Krause, Andreas},
  volume    = {80},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--15 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v80/aghazadeh18a/aghazadeh18a.pdf},
  url       = {https://proceedings.mlr.press/v80/aghazadeh18a.html},
  abstract  = {Feature selection is an important challenge in machine learning. It plays a crucial role in the explainability of machine-driven decisions that are rapidly permeating throughout modern society. Unfortunately, the explosion in the size and dimensionality of real-world datasets poses a severe challenge to standard feature selection algorithms. Today, it is not uncommon for datasets to have billions of dimensions. At such scale, even storing the feature vector is impossible, causing most existing feature selection methods to fail. Workarounds like feature hashing, a standard approach to large-scale machine learning, help with computational feasibility, but at the cost of losing the interpretability of features. In this paper, we present MISSION, a novel framework for ultra large-scale feature selection that performs stochastic gradient descent while maintaining an efficient representation of the features in memory using a Count-Sketch data structure. MISSION retains the simplicity of feature hashing without sacrificing the interpretability of the features while using only O(log^2(p)) working memory. We demonstrate that MISSION accurately and efficiently performs feature selection on real-world, large-scale datasets with billions of dimensions.}
}
Endnote
%0 Conference Paper
%T MISSION: Ultra Large-Scale Feature Selection using Count-Sketches
%A Amirali Aghazadeh
%A Ryan Spring
%A Daniel Lejeune
%A Gautam Dasarathy
%A Anshumali Shrivastava
%A Richard Baraniuk
%B Proceedings of the 35th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jennifer Dy
%E Andreas Krause
%F pmlr-v80-aghazadeh18a
%I PMLR
%P 80--88
%U https://proceedings.mlr.press/v80/aghazadeh18a.html
%V 80
%X Feature selection is an important challenge in machine learning. It plays a crucial role in the explainability of machine-driven decisions that are rapidly permeating throughout modern society. Unfortunately, the explosion in the size and dimensionality of real-world datasets poses a severe challenge to standard feature selection algorithms. Today, it is not uncommon for datasets to have billions of dimensions. At such scale, even storing the feature vector is impossible, causing most existing feature selection methods to fail. Workarounds like feature hashing, a standard approach to large-scale machine learning, help with computational feasibility, but at the cost of losing the interpretability of features. In this paper, we present MISSION, a novel framework for ultra large-scale feature selection that performs stochastic gradient descent while maintaining an efficient representation of the features in memory using a Count-Sketch data structure. MISSION retains the simplicity of feature hashing without sacrificing the interpretability of the features while using only O(log^2(p)) working memory. We demonstrate that MISSION accurately and efficiently performs feature selection on real-world, large-scale datasets with billions of dimensions.
APA
Aghazadeh, A., Spring, R., Lejeune, D., Dasarathy, G., Shrivastava, A. & Baraniuk, R. (2018). MISSION: Ultra Large-Scale Feature Selection using Count-Sketches. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:80-88. Available from https://proceedings.mlr.press/v80/aghazadeh18a.html.

Related Material