A New PAC-Bayesian Perspective on Domain Adaptation

Pascal Germain, Amaury Habrard, François Laviolette, Emilie Morvant
Proceedings of The 33rd International Conference on Machine Learning, PMLR 48:859-868, 2016.

Abstract

We study the issue of PAC-Bayesian domain adaptation: We want to learn, from a source domain, a majority vote model dedicated to a target one. Our theoretical contribution brings a new perspective by deriving an upper bound on the target risk in which the distributions’ divergence, expressed as a ratio, controls the trade-off between a source error measure and the target voters’ disagreement. Our bound suggests that one has to focus on regions where the source data is informative. From this result, we derive a PAC-Bayesian generalization bound and specialize it to linear classifiers. Then, we infer a learning algorithm and perform experiments on real data.
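Schematically, the trade-off described above can be read as follows (an illustrative sketch of the bound's shape, not the paper's exact statement; the symbols below are placeholder notation):

\[
\underbrace{R_{T}(G_\rho)}_{\text{target risk of the vote}}
\;\lesssim\;
\underbrace{\beta}_{\text{divergence as a ratio}} \cdot
\underbrace{e_{S}(\rho)}_{\text{source error measure}}
\;+\;
\underbrace{d_{T}(\rho)}_{\text{target voters' disagreement}}
\;+\;
\underbrace{\eta}_{\text{uninformative regions}}
\]

In this reading, the ratio-based divergence \(\beta\) weights how much the source error term may stand in for the target error, while \(\eta\) accounts for the part of the target domain that the source does not cover, which is why the bound suggests focusing on regions where the source data is informative.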

Cite this Paper


BibTeX
@InProceedings{pmlr-v48-germain16,
  title     = {A New PAC-Bayesian Perspective on Domain Adaptation},
  author    = {Germain, Pascal and Habrard, Amaury and Laviolette, François and Morvant, Emilie},
  booktitle = {Proceedings of The 33rd International Conference on Machine Learning},
  pages     = {859--868},
  year      = {2016},
  editor    = {Balcan, Maria Florina and Weinberger, Kilian Q.},
  volume    = {48},
  series    = {Proceedings of Machine Learning Research},
  address   = {New York, New York, USA},
  month     = {20--22 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v48/germain16.pdf},
  url       = {https://proceedings.mlr.press/v48/germain16.html},
  abstract  = {We study the issue of PAC-Bayesian domain adaptation: We want to learn, from a source domain, a majority vote model dedicated to a target one. Our theoretical contribution brings a new perspective by deriving an upper-bound on the target risk where the distributions’ divergence - expressed as a ratio - controls the trade-off between a source error measure and the target voters’ disagreement. Our bound suggests that one has to focus on regions where the source data is informative. From this result, we derive a PAC-Bayesian generalization bound, and specialize it to linear classifiers. Then, we infer a learning algorithm and perform experiments on real data.}
}
APA
Germain, P., Habrard, A., Laviolette, F., & Morvant, E. (2016). A New PAC-Bayesian Perspective on Domain Adaptation. Proceedings of The 33rd International Conference on Machine Learning, in Proceedings of Machine Learning Research 48:859-868. Available from https://proceedings.mlr.press/v48/germain16.html.

Related Material

Download PDF: http://proceedings.mlr.press/v48/germain16.pdf