DOI: 10.1145/1873951.1874332

Short paper

Towards a universal detector by mining concepts with small semantic gaps

Published: 25 October 2010

ABSTRACT

Can we build a universal detector that recognizes unseen objects with no training exemplars available? Such a detector is highly desirable: the human vocabulary contains hundreds of thousands of object concepts, but labeled image examples exist for only a few of them. In this study, we attempt to build such a universal detector to predict concepts in the absence of training data. First, by considering both semantic relatedness and visual variance, we mine a set of realistic small-semantic-gap (SSG) concepts from a large-scale image corpus. Detectors of these concepts deliver reasonably satisfactory recognition accuracies. From these distinctive visual models, we then leverage semantic ontology knowledge and co-occurrence statistics of concepts to extend visual recognition to unseen concepts. To the best of our knowledge, this work is the first attempt to quantify the semantic gap for a large number of concepts and to leverage visually learnable concepts to predict those with no training images available. Experiments on the NUS-WIDE dataset demonstrate that the selected small-semantic-gap concepts can be modeled well, and that prediction of unseen concepts delivers promising results, with accuracy comparable to preliminary training-based methods.
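The mining step scores each candidate concept by combining semantic relatedness (estimated from co-occurrence statistics, as in the Normalized Google Distance of Cilibrasi and Vitányi) with intra-concept visual variance. The sketch below is a hedged illustration, not the paper's actual formulation: the NGD formula follows the published definition, but `ssg_score`, its weighting parameter `alpha`, and all argument names are hypothetical stand-ins for the selection criterion described in the abstract.

```python
import math

def ngd(hits_x, hits_y, hits_xy, total_pages):
    """Normalized Google Distance: a co-occurrence-based semantic
    relatedness measure; smaller values mean more related terms.
    hits_* are web page counts for each term and their conjunction."""
    fx, fy, fxy = math.log(hits_x), math.log(hits_y), math.log(hits_xy)
    n = math.log(total_pages)
    return (max(fx, fy) - fxy) / (n - min(fx, fy))

def ssg_score(visual_variance, mean_relatedness, alpha=0.5):
    """Hypothetical small-semantic-gap score: a concept with low
    intra-concept visual variance (its images look alike) and high
    average relatedness to its tag neighborhood ranks higher.
    Both inputs are assumed normalized to [0, 1]."""
    return alpha * (1.0 - visual_variance) + (1.0 - alpha) * mean_relatedness
```

Under this sketch, a visually coherent concept such as "sunset" would receive a high score and be kept as an SSG concept, while a visually diffuse one such as "happiness" would be filtered out.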

References

  1. T. Chua, J. Tang, R. Hong, H. Li, Z. Luo, and Y. Zheng. NUS-WIDE: A real-world web image database from National University of Singapore. In CIVR, 2009.
  2. R. Cilibrasi and P. Vitányi. The Google similarity distance. TKDE, 2007.
  3. J. Deng, W. Dong, R. Socher, L. Li, K. Li, and F. Li. ImageNet: A large-scale hierarchical image database. In CVPR, 2009.
  4. R. Duda, P. Hart, and D. Stork. Pattern Classification. John Wiley, 2000.
  5. F. Li, A. Iyer, C. Koch, and P. Perona. What do we perceive in a glance of a real-world scene? Journal of Vision, 2007.
  6. C. Fellbaum. WordNet: An Electronic Lexical Database. MIT Press, 1998.
  7. Y. Gao and J. Fan. Incorporating concept ontology to enable probabilistic concept reasoning for multi-level image annotation. In MIR, 2006.
  8. G. Griffin and P. Perona. Learning and using taxonomies for fast visual categorization. In CVPR, 2008.
  9. Y. Jiang, C. Ngo, and S. Chang. Semantic context transfer across heterogeneous sources for domain adaptive video search. In MM, 2009.
  10. D. Liu, X.-S. Hua, L. Yang, M. Wang, and H.-J. Zhang. Tag ranking. In WWW, 2009.
  11. E. Rosch and B. Lloyd. Cognition and Categorization. Hillsdale, NJ: Lawrence Erlbaum, 1978.
  12. B. Schölkopf and A. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization and Beyond. MIT Press, 2002.
  13. J. Tang, S. Yan, R. Hong, G. Qi, and T. Chua. Inferring semantic concepts from community-contributed images and noisy tags. In MM, 2009.
  14. B. Tversky and K. Hemenway. Categories of environmental scenes. Cognitive Psychology, 1983.
  15. Z. Wu and M. Palmer. Verb semantics and lexical selection. In ACL, 1994.
  16. J. Zhang, M. Marszalek, S. Lazebnik, and C. Schmid. Local features and kernels for classification of texture and object categories: A comprehensive study. IJCV, 2007.
  17. A. Zweig and D. Weinshall. Exploiting object hierarchy: Combining models from different category levels. In ICCV, 2007.

Published in

MM '10: Proceedings of the 18th ACM International Conference on Multimedia
October 2010, 1836 pages
ISBN: 9781605589336
DOI: 10.1145/1873951
Copyright © 2010 ACM

Publisher: Association for Computing Machinery, New York, NY, United States


Acceptance Rates

Overall acceptance rate: 995 of 4,171 submissions, 24%
