Sample Compression for Multi-label Concept Classes

Rahim Samei, Pavel Semukhin, Boting Yang, Sandra Zilles
Proceedings of The 27th Conference on Learning Theory, PMLR 35:371-393, 2014.

Abstract

This paper studies labeled sample compression for multi-label concept classes. For a specific extension of the notion of VC-dimension to multi-label classes, we prove that every maximum multi-label class of dimension d has a sample compression scheme in which every sample is compressed to a subset of size at most d. We further show that every multi-label class of dimension 1 has a sample compression scheme using only sets of size at most 1. As opposed to the binary case, the latter result is not immediately implied by the former, since there are multi-label concept classes of dimension 1 that are not contained in maximum classes of dimension 1.
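
For orientation, here is a minimal sketch of a labeled sample compression scheme in the classical binary setting of Littlestone and Warmuth, which this paper extends to multi-label classes; the precise multi-label definitions, including the extended notion of dimension, are the paper's own and are given there. A scheme of size d for a concept class C over a domain X consists of a compression map \kappa and a reconstruction map \rho such that, for every finite sample S labeled consistently with some concept in C,

\[
\kappa(S) \subseteq S, \qquad |\kappa(S)| \le d, \qquad \text{and} \qquad \rho(\kappa(S))(x) = y \ \text{ for all } (x, y) \in S .
\]

In these terms, the paper's first result asserts the existence of such schemes of size d for every maximum multi-label class of dimension d.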

Cite this Paper


BibTeX
@InProceedings{pmlr-v35-samei14,
  title     = {Sample Compression for Multi-label Concept Classes},
  author    = {Samei, Rahim and Semukhin, Pavel and Yang, Boting and Zilles, Sandra},
  booktitle = {Proceedings of The 27th Conference on Learning Theory},
  pages     = {371--393},
  year      = {2014},
  editor    = {Balcan, Maria Florina and Feldman, Vitaly and Szepesvári, Csaba},
  volume    = {35},
  series    = {Proceedings of Machine Learning Research},
  address   = {Barcelona, Spain},
  month     = {13--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v35/samei14.pdf},
  url       = {https://proceedings.mlr.press/v35/samei14.html},
  abstract  = {This paper studies labeled sample compression for multi-label concept classes. For a specific extension of the notion of VC-dimension to multi-label classes, we prove that every maximum multi-label class of dimension d has a sample compression scheme in which every sample is compressed to a subset of size at most d. We further show that every multi-label class of dimension 1 has a sample compression scheme using only sets of size at most 1. As opposed to the binary case, the latter result is not immediately implied by the former, since there are multi-label concept classes of dimension 1 that are not contained in maximum classes of dimension 1.}
}
Endnote
%0 Conference Paper
%T Sample Compression for Multi-label Concept Classes
%A Rahim Samei
%A Pavel Semukhin
%A Boting Yang
%A Sandra Zilles
%B Proceedings of The 27th Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2014
%E Maria Florina Balcan
%E Vitaly Feldman
%E Csaba Szepesvári
%F pmlr-v35-samei14
%I PMLR
%P 371--393
%U https://proceedings.mlr.press/v35/samei14.html
%V 35
%X This paper studies labeled sample compression for multi-label concept classes. For a specific extension of the notion of VC-dimension to multi-label classes, we prove that every maximum multi-label class of dimension d has a sample compression scheme in which every sample is compressed to a subset of size at most d. We further show that every multi-label class of dimension 1 has a sample compression scheme using only sets of size at most 1. As opposed to the binary case, the latter result is not immediately implied by the former, since there are multi-label concept classes of dimension 1 that are not contained in maximum classes of dimension 1.
RIS
TY - CPAPER
TI - Sample Compression for Multi-label Concept Classes
AU - Rahim Samei
AU - Pavel Semukhin
AU - Boting Yang
AU - Sandra Zilles
BT - Proceedings of The 27th Conference on Learning Theory
DA - 2014/05/29
ED - Maria Florina Balcan
ED - Vitaly Feldman
ED - Csaba Szepesvári
ID - pmlr-v35-samei14
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 35
SP - 371
EP - 393
L1 - http://proceedings.mlr.press/v35/samei14.pdf
UR - https://proceedings.mlr.press/v35/samei14.html
AB - This paper studies labeled sample compression for multi-label concept classes. For a specific extension of the notion of VC-dimension to multi-label classes, we prove that every maximum multi-label class of dimension d has a sample compression scheme in which every sample is compressed to a subset of size at most d. We further show that every multi-label class of dimension 1 has a sample compression scheme using only sets of size at most 1. As opposed to the binary case, the latter result is not immediately implied by the former, since there are multi-label concept classes of dimension 1 that are not contained in maximum classes of dimension 1.
ER -
APA
Samei, R., Semukhin, P., Yang, B. & Zilles, S. (2014). Sample Compression for Multi-label Concept Classes. Proceedings of The 27th Conference on Learning Theory, in Proceedings of Machine Learning Research 35:371-393. Available from https://proceedings.mlr.press/v35/samei14.html.
