Domain Adaptation: A Small Sample Statistical Approach

Ruslan Salakhutdinov, Sham Kakade, Dean Foster
Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics, PMLR 22:960-968, 2012.

Abstract

We study the prevalent problem in which a test distribution differs from the training distribution. We consider a setting where our training set consists of a small number of sample domains, but where we have many samples in each domain. Our goal is to generalize to a new domain. For example, we may want to learn a similarity function using only certain classes of objects, but we desire that this similarity function be applicable to object classes not present in our training sample (e.g. we might seek to learn that “dogs are similar to dogs” even though images of dogs were absent from our training set). Our theoretical analysis shows that we can select many more features than domains while avoiding overfitting by utilizing data-dependent variance properties. We present a greedy feature selection algorithm based on T-statistics. Our experiments validate this theory, showing that our T-statistic based greedy feature selection is more robust at avoiding overfitting than the classical greedy procedure.
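The T-statistic based greedy selection mentioned above can be pictured as forward stepwise regression in which a candidate feature is added only when the t-statistic of its coefficient clears a threshold, rather than whenever it reduces training error. The sketch below is purely illustrative: the function name, the univariate t-statistic computed against the current residual, and the threshold are our assumptions, not the authors' exact procedure (which exploits data-dependent variance properties across domains).

```python
import numpy as np

def greedy_tstat_selection(X, y, t_threshold=4.0, max_features=None):
    """Illustrative forward stepwise selection gated by t-statistics.

    At each step, regress the current residual on each unused feature
    (univariately) and add the feature with the largest |t|, but only
    if it exceeds t_threshold; otherwise stop. This is a sketch of the
    general idea, not the paper's algorithm.
    """
    n, p = X.shape
    if max_features is None:
        max_features = p
    selected = []
    residual = y - y.mean()
    for _ in range(max_features):
        best_j, best_t = None, t_threshold
        for j in range(p):
            if j in selected:
                continue
            x = X[:, j] - X[:, j].mean()
            sxx = x @ x
            if sxx == 0:
                continue
            beta = (x @ residual) / sxx          # univariate OLS slope
            r = residual - beta * x              # residual of that fit
            sigma2 = (r @ r) / (n - 2)           # error variance estimate
            se = np.sqrt(sigma2 / sxx)           # standard error of beta
            t = abs(beta) / se if se > 0 else 0.0
            if t > best_t:
                best_j, best_t = j, t
        if best_j is None:                       # nothing clears the bar
            break
        selected.append(best_j)
        # Refit jointly on all selected features to update the residual.
        Xs = np.column_stack([np.ones(n), X[:, selected]])
        coef, *_ = np.linalg.lstsq(Xs, y, rcond=None)
        residual = y - Xs @ coef
    return selected
```

The classical greedy procedure would instead add whichever feature most reduces residual error; the t-statistic gate is what lets the number of candidate features exceed the number of domains without overfitting.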

Cite this Paper


BibTeX
@InProceedings{pmlr-v22-salakhutdinov12,
  title     = {Domain Adaptation: A Small Sample Statistical Approach},
  author    = {Ruslan Salakhutdinov and Sham Kakade and Dean Foster},
  booktitle = {Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics},
  pages     = {960--968},
  year      = {2012},
  editor    = {Neil D. Lawrence and Mark Girolami},
  volume    = {22},
  series    = {Proceedings of Machine Learning Research},
  address   = {La Palma, Canary Islands},
  month     = {21--23 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v22/salakhutdinov12/salakhutdinov12.pdf},
  url       = {http://proceedings.mlr.press/v22/salakhutdinov12.html},
  abstract  = {We study the prevalent problem when a test distribution differs from the training distribution. We consider a setting where our training set consists of a small number of sample domains, but where we have many samples in each domain. Our goal is to generalize to a new domain. For example, we may want to learn a similarity function using only certain classes of objects, but we desire that this similarity function be applicable to object classes not present in our training sample (e.g. we might seek to learn that “dogs are similar to dogs” even though images of dogs were absent from our training set). Our theoretical analysis shows that we can select many more features than domains while avoiding overfitting by utilizing data-dependent variance properties. We present a greedy feature selection algorithm based on using T-statistics. Our experiments validate this theory showing that our T-statistic based greedy feature selection is more robust at avoiding overfitting than the classical greedy procedure.}
}
Endnote
%0 Conference Paper
%T Domain Adaptation: A Small Sample Statistical Approach
%A Ruslan Salakhutdinov
%A Sham Kakade
%A Dean Foster
%B Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2012
%E Neil D. Lawrence
%E Mark Girolami
%F pmlr-v22-salakhutdinov12
%I PMLR
%J Proceedings of Machine Learning Research
%P 960--968
%U http://proceedings.mlr.press
%V 22
%W PMLR
%X We study the prevalent problem when a test distribution differs from the training distribution. We consider a setting where our training set consists of a small number of sample domains, but where we have many samples in each domain. Our goal is to generalize to a new domain. For example, we may want to learn a similarity function using only certain classes of objects, but we desire that this similarity function be applicable to object classes not present in our training sample (e.g. we might seek to learn that “dogs are similar to dogs” even though images of dogs were absent from our training set). Our theoretical analysis shows that we can select many more features than domains while avoiding overfitting by utilizing data-dependent variance properties. We present a greedy feature selection algorithm based on using T-statistics. Our experiments validate this theory showing that our T-statistic based greedy feature selection is more robust at avoiding overfitting than the classical greedy procedure.
RIS
TY - CPAPER
TI - Domain Adaptation: A Small Sample Statistical Approach
AU - Ruslan Salakhutdinov
AU - Sham Kakade
AU - Dean Foster
BT - Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics
PY - 2012/04/21
DA - 2012/04/21
ED - Neil D. Lawrence
ED - Mark Girolami
ID - pmlr-v22-salakhutdinov12
PB - PMLR
DP - PMLR
SP - 960
EP - 968
L1 - http://proceedings.mlr.press/v22/salakhutdinov12/salakhutdinov12.pdf
UR - http://proceedings.mlr.press/v22/salakhutdinov12.html
AB - We study the prevalent problem when a test distribution differs from the training distribution. We consider a setting where our training set consists of a small number of sample domains, but where we have many samples in each domain. Our goal is to generalize to a new domain. For example, we may want to learn a similarity function using only certain classes of objects, but we desire that this similarity function be applicable to object classes not present in our training sample (e.g. we might seek to learn that “dogs are similar to dogs” even though images of dogs were absent from our training set). Our theoretical analysis shows that we can select many more features than domains while avoiding overfitting by utilizing data-dependent variance properties. We present a greedy feature selection algorithm based on using T-statistics. Our experiments validate this theory showing that our T-statistic based greedy feature selection is more robust at avoiding overfitting than the classical greedy procedure.
ER -
APA
Salakhutdinov, R., Kakade, S., & Foster, D. (2012). Domain Adaptation: A Small Sample Statistical Approach. Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics, in PMLR 22:960-968.