Unconfused Ultraconservative Multiclass Algorithms

Ugo Louche, Liva Ralaivola
Proceedings of the 5th Asian Conference on Machine Learning, PMLR 29:309-324, 2013.

Abstract

We tackle the problem of learning linear classifiers from noisy datasets in a multiclass setting. The two-class version of this problem was studied a few years ago by, e.g., Bylander (1994) and Blum et al. (1996): in these contributions, the proposed approaches to fight the noise revolve around a Perceptron learning scheme fed with peculiar examples computed through a weighted average of points from the noisy training set. We propose to build upon these approaches and introduce a new algorithm called UMA (Unconfused Multiclass additive Algorithm), which may be seen as a generalization of the previous approaches to the multiclass setting. In order to characterize the noise, we use the confusion matrix as a multiclass extension of the classification noise studied in the aforementioned literature. Theoretically well-founded, UMA furthermore displays very good empirical noise robustness, as evidenced by numerical simulations conducted on both synthetic and real data.
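To make the two ingredients mentioned in the abstract concrete, here is a minimal, hypothetical Python/NumPy sketch, not the paper's UMA algorithm: it simulates label noise governed by a confusion matrix and runs a multiclass Perceptron whose updates use class-conditional averages of the noisy points, a crude stand-in for the weighted-average examples the abstract refers to. All names, sizes, and noise levels below are illustrative assumptions.

```python
# Illustrative sketch only (not the paper's UMA algorithm): confusion-matrix
# label noise + a multiclass Perceptron updated on averages of noisy points.
import numpy as np

rng = np.random.default_rng(0)
K, d, n = 3, 2, 600

# Linearly separable synthetic data: true class = argmax of W_true @ x.
W_true = rng.normal(size=(K, d))
X = rng.normal(size=(n, d))
y_true = np.argmax(X @ W_true.T, axis=1)

# Confusion-matrix noise: row C[k] gives P(observed label | true label = k).
C = np.full((K, K), 0.1)
np.fill_diagonal(C, 0.8)
y_noisy = np.array([rng.choice(K, p=C[k]) for k in y_true])

# Multiclass Perceptron fed with class-conditional averages of noisy points
# (a crude stand-in for the "peculiar examples" of the abstract).
W = np.zeros((K, d))
for _ in range(100):
    for k in range(K):
        z = X[y_noisy == k].mean(axis=0)  # weighted-average update point
        pred = np.argmax(W @ z)
        if pred != k:                     # ultraconservative-style update
            W[k] += z
            W[pred] -= z

acc = np.mean(np.argmax(X @ W.T, axis=1) == y_true)
print(f"accuracy w.r.t. true labels: {acc:.2f}")
```

Averaging the points of an observed class mixes contributions from all true classes according to the confusion weights, which is precisely why the paper's actual construction of update points is more involved than this naive average.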

Cite this Paper


BibTeX
@InProceedings{pmlr-v29-Louche13,
  title     = {Unconfused Ultraconservative Multiclass Algorithms},
  author    = {Louche, Ugo and Ralaivola, Liva},
  booktitle = {Proceedings of the 5th Asian Conference on Machine Learning},
  pages     = {309--324},
  year      = {2013},
  editor    = {Ong, Cheng Soon and Ho, Tu Bao},
  volume    = {29},
  series    = {Proceedings of Machine Learning Research},
  address   = {Australian National University, Canberra, Australia},
  month     = {13--15 Nov},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v29/Louche13.pdf},
  url       = {https://proceedings.mlr.press/v29/Louche13.html},
  abstract  = {We tackle the problem of learning linear classifiers from noisy datasets in a multiclass setting. The two-class version of this problem was studied a few years ago by, e.g. Bylander (1994) and Blum et al. (1996): in these contributions, the proposed approaches to fight the noise revolve around a Perceptron learning scheme fed with peculiar examples computed through a weighted average of points from the noisy training set. We propose to build upon these approaches and we introduce a new algorithm called \uma (for Unconfused Multiclass additive Algorithm) which may be seen as a generalization to the multiclass setting of the previous approaches. In order to characterize the noise we use the \em confusion matrix as a multiclass extension of the classification noise studied in the aforementioned literature. Theoretically well-founded, \uma furthermore displays very good empirical noise robustness, as evidenced by numerical simulations conducted on both synthetic and real data.}
}
Endnote
%0 Conference Paper
%T Unconfused Ultraconservative Multiclass Algorithms
%A Ugo Louche
%A Liva Ralaivola
%B Proceedings of the 5th Asian Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2013
%E Cheng Soon Ong
%E Tu Bao Ho
%F pmlr-v29-Louche13
%I PMLR
%P 309--324
%U https://proceedings.mlr.press/v29/Louche13.html
%V 29
%X We tackle the problem of learning linear classifiers from noisy datasets in a multiclass setting. The two-class version of this problem was studied a few years ago by, e.g. Bylander (1994) and Blum et al. (1996): in these contributions, the proposed approaches to fight the noise revolve around a Perceptron learning scheme fed with peculiar examples computed through a weighted average of points from the noisy training set. We propose to build upon these approaches and we introduce a new algorithm called \uma (for Unconfused Multiclass additive Algorithm) which may be seen as a generalization to the multiclass setting of the previous approaches. In order to characterize the noise we use the \em confusion matrix as a multiclass extension of the classification noise studied in the aforementioned literature. Theoretically well-founded, \uma furthermore displays very good empirical noise robustness, as evidenced by numerical simulations conducted on both synthetic and real data.
RIS
TY - CPAPER
TI - Unconfused Ultraconservative Multiclass Algorithms
AU - Ugo Louche
AU - Liva Ralaivola
BT - Proceedings of the 5th Asian Conference on Machine Learning
DA - 2013/10/21
ED - Cheng Soon Ong
ED - Tu Bao Ho
ID - pmlr-v29-Louche13
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 29
SP - 309
EP - 324
L1 - http://proceedings.mlr.press/v29/Louche13.pdf
UR - https://proceedings.mlr.press/v29/Louche13.html
AB - We tackle the problem of learning linear classifiers from noisy datasets in a multiclass setting. The two-class version of this problem was studied a few years ago by, e.g. Bylander (1994) and Blum et al. (1996): in these contributions, the proposed approaches to fight the noise revolve around a Perceptron learning scheme fed with peculiar examples computed through a weighted average of points from the noisy training set. We propose to build upon these approaches and we introduce a new algorithm called \uma (for Unconfused Multiclass additive Algorithm) which may be seen as a generalization to the multiclass setting of the previous approaches. In order to characterize the noise we use the \em confusion matrix as a multiclass extension of the classification noise studied in the aforementioned literature. Theoretically well-founded, \uma furthermore displays very good empirical noise robustness, as evidenced by numerical simulations conducted on both synthetic and real data.
ER -
APA
Louche, U. & Ralaivola, L. (2013). Unconfused Ultraconservative Multiclass Algorithms. Proceedings of the 5th Asian Conference on Machine Learning, in Proceedings of Machine Learning Research 29:309-324. Available from https://proceedings.mlr.press/v29/Louche13.html.
