DOI: 10.1145/2708463.2709044

DI-BOW: Domain Invariant Feature Descriptor Using Bag of Words

Published: 26 February 2015

ABSTRACT

This paper describes a method to learn a Bag-of-Words (BOW) descriptor for image representation that is robust to domain shift. Domain adaptation is necessary when a classifier trained on one dataset (the source) is applied for classification on a different dataset (the target): datasets acquired under different conditions have dissimilar feature distributions. The traditional approach of representing each image by a BOW descriptor over a vocabulary learnt on a reference dataset does not work well for such cross-dataset tasks. We propose a new method to learn an amended dictionary composed of class-specific atoms. The proposed Domain-Invariant BOW (DI-BOW) descriptor built from this dictionary has much better class discriminability and inherently attenuates domain-specific characteristics, making it more suitable for cross-domain tasks. Results based on the DI-BOW descriptor demonstrate its efficacy, outperforming state-of-the-art domain adaptation techniques for object recognition.
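To make the idea concrete, here is a minimal sketch of building a dictionary of class-specific atoms and computing a BOW histogram over it. This is an illustrative assumption, not the paper's actual method: per-class k-means stands in for the paper's dictionary learning (which builds on K-SVD-style techniques), and the function names, atom counts, and hard assignment are all invented for the example.

```python
import numpy as np

def learn_class_dictionary(features_by_class, atoms_per_class, seed=0):
    """Learn a dictionary of class-specific atoms.

    As a stand-in for the paper's dictionary learning, this sketch
    runs a few k-means iterations per class and concatenates the
    resulting centroids, so every atom is tied to exactly one class.
    """
    rng = np.random.default_rng(seed)
    atoms = []
    for feats in features_by_class:
        # Initialise centroids from random samples of this class.
        idx = rng.choice(len(feats), size=atoms_per_class, replace=False)
        centroids = feats[idx].astype(float)
        for _ in range(10):  # a few Lloyd iterations
            # Distance of every feature to every centroid: (N, K).
            d = np.linalg.norm(feats[:, None, :] - centroids[None], axis=2)
            assign = d.argmin(axis=1)
            for k in range(atoms_per_class):
                members = feats[assign == k]
                if len(members):
                    centroids[k] = members.mean(axis=0)
        atoms.append(centroids)
    # Shape: (n_classes * atoms_per_class, feature_dim).
    return np.vstack(atoms)

def bow_descriptor(local_features, dictionary):
    """Hard-assign each local feature (e.g. a SURF descriptor) to its
    nearest atom and return the L1-normalised assignment histogram."""
    d = np.linalg.norm(local_features[:, None, :] - dictionary[None], axis=2)
    assign = d.argmin(axis=1)
    hist = np.bincount(assign, minlength=len(dictionary)).astype(float)
    return hist / hist.sum()
```

Because each atom belongs to one class, the resulting histogram is a concatenation of per-class responses, which is what gives the descriptor its class discriminability in this toy setting.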


Published in

PerMIn '15: Proceedings of the 2nd International Conference on Perception and Machine Intelligence
February 2015, 269 pages
ISBN: 9781450320023
DOI: 10.1145/2708463
Copyright © 2015 ACM
Publisher: Association for Computing Machinery, New York, NY, United States