Rounding Methods for Discrete Linear Classification

Yann Chevaleyre, Frédérick Koriche, Jean-Daniel Zucker
Proceedings of the 30th International Conference on Machine Learning, PMLR 28(1):651-659, 2013.

Abstract

Learning discrete linear functions is a notoriously difficult challenge. In this paper, the learning task is cast as a combinatorial optimization problem: given a set of positive and negative feature vectors in Euclidean space, the goal is to find a discrete linear function that minimizes the cumulative hinge loss of this training set. Since this problem is NP-hard, we propose two simple rounding algorithms that discretize the fractional solution of the problem. Generalization bounds are derived for two important classes of binary-weighted linear functions, by establishing the Rademacher complexity of these classes and proving approximation bounds for rounding methods. These methods are compared on both synthetic and real-world data.

Cite this Paper


BibTeX
@InProceedings{pmlr-v28-chevaleyre13,
  title = {Rounding Methods for Discrete Linear Classification},
  author = {Chevaleyre, Yann and Koriche, Frédérick and Zucker, Jean-Daniel},
  booktitle = {Proceedings of the 30th International Conference on Machine Learning},
  pages = {651--659},
  year = {2013},
  editor = {Dasgupta, Sanjoy and McAllester, David},
  volume = {28},
  number = {1},
  series = {Proceedings of Machine Learning Research},
  address = {Atlanta, Georgia, USA},
  month = {17--19 Jun},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v28/chevaleyre13.pdf},
  url = {https://proceedings.mlr.press/v28/chevaleyre13.html},
  abstract = {Learning discrete linear functions is a notoriously difficult challenge. In this paper, the learning task is cast as a combinatorial optimization problem: given a set of positive and negative feature vectors in Euclidean space, the goal is to find a discrete linear function that minimizes the cumulative hinge loss of this training set. Since this problem is NP-hard, we propose two simple rounding algorithms that discretize the fractional solution of the problem. Generalization bounds are derived for two important classes of binary-weighted linear functions, by establishing the Rademacher complexity of these classes and proving approximation bounds for rounding methods. These methods are compared on both synthetic and real-world data.}
}
Endnote
%0 Conference Paper
%T Rounding Methods for Discrete Linear Classification
%A Yann Chevaleyre
%A Frédérick Koriche
%A Jean-Daniel Zucker
%B Proceedings of the 30th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2013
%E Sanjoy Dasgupta
%E David McAllester
%F pmlr-v28-chevaleyre13
%I PMLR
%P 651--659
%U https://proceedings.mlr.press/v28/chevaleyre13.html
%V 28
%N 1
%X Learning discrete linear functions is a notoriously difficult challenge. In this paper, the learning task is cast as a combinatorial optimization problem: given a set of positive and negative feature vectors in Euclidean space, the goal is to find a discrete linear function that minimizes the cumulative hinge loss of this training set. Since this problem is NP-hard, we propose two simple rounding algorithms that discretize the fractional solution of the problem. Generalization bounds are derived for two important classes of binary-weighted linear functions, by establishing the Rademacher complexity of these classes and proving approximation bounds for rounding methods. These methods are compared on both synthetic and real-world data.
RIS
TY  - CPAPER
TI  - Rounding Methods for Discrete Linear Classification
AU  - Yann Chevaleyre
AU  - Frédérick Koriche
AU  - Jean-Daniel Zucker
BT  - Proceedings of the 30th International Conference on Machine Learning
DA  - 2013/02/13
ED  - Sanjoy Dasgupta
ED  - David McAllester
ID  - pmlr-v28-chevaleyre13
PB  - PMLR
DP  - Proceedings of Machine Learning Research
VL  - 28
IS  - 1
SP  - 651
EP  - 659
L1  - http://proceedings.mlr.press/v28/chevaleyre13.pdf
UR  - https://proceedings.mlr.press/v28/chevaleyre13.html
AB  - Learning discrete linear functions is a notoriously difficult challenge. In this paper, the learning task is cast as a combinatorial optimization problem: given a set of positive and negative feature vectors in Euclidean space, the goal is to find a discrete linear function that minimizes the cumulative hinge loss of this training set. Since this problem is NP-hard, we propose two simple rounding algorithms that discretize the fractional solution of the problem. Generalization bounds are derived for two important classes of binary-weighted linear functions, by establishing the Rademacher complexity of these classes and proving approximation bounds for rounding methods. These methods are compared on both synthetic and real-world data.
ER  -
APA
Chevaleyre, Y., Koriche, F. & Zucker, J.-D. (2013). Rounding Methods for Discrete Linear Classification. Proceedings of the 30th International Conference on Machine Learning, in Proceedings of Machine Learning Research 28(1):651-659. Available from https://proceedings.mlr.press/v28/chevaleyre13.html.

Related Material