Internet Explorer: Targeted Representation Learning on the Open Web

Alexander Cong Li, Ellis Langham Brown, Alexei A Efros, Deepak Pathak
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:19385-19406, 2023.

Abstract

Vision models typically rely on fine-tuning general-purpose models pre-trained on large, static datasets. These general-purpose models only capture the knowledge within their pre-training datasets, which are tiny, out-of-date snapshots of the Internet—where billions of images are uploaded each day. We suggest an alternate approach: rather than hoping our static datasets transfer to our desired tasks after large-scale pre-training, we propose dynamically utilizing the Internet to quickly train a small-scale model that does extremely well on a target dataset. Our approach, called Internet Explorer, explores the web in a self-supervised manner to progressively find relevant examples that improve performance on a desired target dataset. It cycles between searching for images on the Internet with text queries, self-supervised training on downloaded images, determining which images were useful, and prioritizing what to search for next. We evaluate Internet Explorer across several datasets and show that it outperforms or matches CLIP oracle performance using just a single GPU desktop to actively query the Internet for 30-40 hours.
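
The cycle the abstract describes, searching with text queries, self-supervised training on the downloads, scoring which images were useful, and re-prioritizing queries, can be illustrated with a toy loop. The sketch below is a self-contained illustration under stated assumptions, not the paper's implementation: `fake_search` stands in for a real image search engine by returning random feature vectors, relevance is cosine similarity to the target dataset's mean feature, and the self-supervised training step is elided.

```python
# A toy stand-in for the explore / train / rank / prioritize cycle in the
# abstract. All names here are hypothetical; real images, a real search
# engine, and the self-supervised training update are replaced with stubs.
import math
import random

VOCAB = ["dog", "terrier", "sofa", "jet engine", "mushroom"]  # toy concept vocabulary

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / (norm + 1e-8)

def fake_search(query, n=8, dim=16):
    """Stub for an image search engine: deterministic random 'features' per query."""
    rng = random.Random(hash(query) % (2 ** 32))
    return [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n)]

def explore(target_features, iterations=50, temperature=0.1):
    """Cycle: sample a query, 'download' images, score them against the
    target, and raise the sampling weight of queries that scored well."""
    scores = {q: 0.0 for q in VOCAB}
    # Mean feature of the target dataset, used as a crude relevance signal.
    centroid = [sum(col) / len(target_features) for col in zip(*target_features)]
    for _ in range(iterations):
        # 1. Search: softmax-sample a query, favoring high past reward.
        weights = [math.exp(scores[q] / temperature) for q in VOCAB]
        query = random.choices(VOCAB, weights=weights, k=1)[0]
        images = fake_search(query)
        # 2. Train: a self-supervised update on `images` would happen here.
        # 3. Determine usefulness: mean similarity of downloads to the target.
        reward = sum(cosine(img, centroid) for img in images) / len(images)
        # 4. Prioritize: running average of each query's reward.
        scores[query] = 0.5 * scores[query] + 0.5 * reward
    return scores

if __name__ == "__main__":
    target = fake_search("target concept", n=32)  # pretend target dataset
    print(explore(target))
```

Over repeated iterations the sampling distribution concentrates on whichever concepts keep yielding high-scoring downloads, which mirrors the prioritization behavior the abstract describes, though here the reward is only a toy similarity score.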

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-li23c,
  title     = {Internet Explorer: Targeted Representation Learning on the Open Web},
  author    = {Li, Alexander Cong and Brown, Ellis Langham and Efros, Alexei A and Pathak, Deepak},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {19385--19406},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/li23c/li23c.pdf},
  url       = {https://proceedings.mlr.press/v202/li23c.html},
  abstract  = {Vision models typically rely on fine-tuning general-purpose models pre-trained on large, static datasets. These general-purpose models only capture the knowledge within their pre-training datasets, which are tiny, out-of-date snapshots of the Internet—where billions of images are uploaded each day. We suggest an alternate approach: rather than hoping our static datasets transfer to our desired tasks after large-scale pre-training, we propose dynamically utilizing the Internet to quickly train a small-scale model that does extremely well on a target dataset. Our approach, called Internet Explorer, explores the web in a self-supervised manner to progressively find relevant examples that improve performance on a desired target dataset. It cycles between searching for images on the Internet with text queries, self-supervised training on downloaded images, determining which images were useful, and prioritizing what to search for next. We evaluate Internet Explorer across several datasets and show that it outperforms or matches CLIP oracle performance using just a single GPU desktop to actively query the Internet for 30-40 hours.}
}
Endnote
%0 Conference Paper
%T Internet Explorer: Targeted Representation Learning on the Open Web
%A Alexander Cong Li
%A Ellis Langham Brown
%A Alexei A Efros
%A Deepak Pathak
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-li23c
%I PMLR
%P 19385--19406
%U https://proceedings.mlr.press/v202/li23c.html
%V 202
%X Vision models typically rely on fine-tuning general-purpose models pre-trained on large, static datasets. These general-purpose models only capture the knowledge within their pre-training datasets, which are tiny, out-of-date snapshots of the Internet—where billions of images are uploaded each day. We suggest an alternate approach: rather than hoping our static datasets transfer to our desired tasks after large-scale pre-training, we propose dynamically utilizing the Internet to quickly train a small-scale model that does extremely well on a target dataset. Our approach, called Internet Explorer, explores the web in a self-supervised manner to progressively find relevant examples that improve performance on a desired target dataset. It cycles between searching for images on the Internet with text queries, self-supervised training on downloaded images, determining which images were useful, and prioritizing what to search for next. We evaluate Internet Explorer across several datasets and show that it outperforms or matches CLIP oracle performance using just a single GPU desktop to actively query the Internet for 30-40 hours.
APA
Li, A.C., Brown, E.L., Efros, A.A. & Pathak, D. (2023). Internet Explorer: Targeted Representation Learning on the Open Web. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:19385-19406. Available from https://proceedings.mlr.press/v202/li23c.html.
