Abstract
We show that a curiosity-driven computer vision algorithm can learn to efficiently query Internet text-to-image search engines for images that improve the model’s performance on a specified dataset. In contrast to typical self-supervised computer vision algorithms, which learn from static datasets, our model actively expands its training set with the most relevant images. First, we calculate an image-level curiosity reward that encourages our model to find the most useful images for pre-training. Second, we use text similarity scores to propagate observed curiosity rewards to untried text queries. This efficiently identifies relevant semantic clusters without any need for class labels or label names from the targeted dataset. Our method significantly outperforms models that require 1–2 orders of magnitude more compute and data.
Alexander C. Li and E. Brown contributed equally to this work.
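To make the two steps in the abstract concrete, here is a minimal sketch in Python. The reward definition (mean similarity to an image's k nearest target-dataset features), the Sentence-BERT checkpoint `all-MiniLM-L6-v2`, the softmax-kernel propagation rule, and all function names are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch of the two reward steps described in the abstract. The reward and
# propagation rules below are assumptions for illustration only.
import numpy as np
from sentence_transformers import SentenceTransformer  # Sentence-BERT embeddings


def image_curiosity_reward(image_feats: np.ndarray,
                           target_feats: np.ndarray,
                           k: int = 5) -> np.ndarray:
    """Score each downloaded image by its mean cosine similarity to its
    k nearest target-dataset features (one plausible notion of 'usefulness')."""
    a = image_feats / np.linalg.norm(image_feats, axis=1, keepdims=True)
    b = target_feats / np.linalg.norm(target_feats, axis=1, keepdims=True)
    sims = a @ b.T                        # (num_images, num_target)
    topk = np.sort(sims, axis=1)[:, -k:]  # k most similar target features
    return topk.mean(axis=1)              # one reward per downloaded image


def propagate_rewards(tried_queries, tried_rewards, untried_queries,
                      temperature: float = 0.1) -> np.ndarray:
    """Estimate rewards for untried text queries as a similarity-weighted
    average of observed query rewards (softmax kernel over Sentence-BERT
    embeddings; the kernel form is an assumed propagation rule)."""
    encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed checkpoint
    e_tried = encoder.encode(tried_queries, normalize_embeddings=True)
    e_new = encoder.encode(untried_queries, normalize_embeddings=True)
    sims = e_new @ e_tried.T                      # cosine similarities
    weights = np.exp(sims / temperature)
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ np.asarray(tried_rewards)    # (num_untried,)


if __name__ == "__main__":
    # Toy usage with random features standing in for real encoder outputs.
    rng = np.random.default_rng(0)
    rewards = image_curiosity_reward(rng.normal(size=(8, 128)),
                                     rng.normal(size=(20, 128)))
    est = propagate_rewards(
        ["pictures of golden retrievers", "jet engine close-up"],
        rewards[:2],
        ["labrador puppy photos"],
    )
    print(rewards.round(3), est.round(3))
```

In this sketch, high-reward queries cluster in text-embedding space, so a query such as "labrador puppy photos" inherits reward from the semantically similar tried query rather than needing to be issued first, which is what lets the method identify relevant semantic clusters without target-dataset labels.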
Acknowledgements
AL is supported by the NSF GRFP under grants DGE1745016 and DGE2140739. This work is supported by NSF IIS-2024594 and ONR N00014-22-1-2096.
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Li, A.C., Brown, E., Efros, A.A., Pathak, D. (2023). Internet Curiosity: Directed Unsupervised Learning on Uncurated Internet Data. In: Karlinsky, L., Michaeli, T., Nishino, K. (eds) Computer Vision – ECCV 2022 Workshops. ECCV 2022. Lecture Notes in Computer Science, vol 13804. Springer, Cham. https://doi.org/10.1007/978-3-031-25069-9_7