Selective Multi-cotraining for Video Concept Detection

Published: 01 April 2014

ABSTRACT

Research interest is growing in co-training, which combines information from (usually two) classifiers to iteratively enlarge the training resources and strengthen the classifiers. We select classifiers for co-training when more than two representations of the data are available. The classifier based on the selected representation, or data descriptor, is expected to provide the most complementary information in the form of new labels for the target classifier; these labels are critical for the next learning iteration. We present two criteria for selecting the complementary classifier, in which classification results on a validation set are used to compute statistics for all the available classifiers. These statistics are used not only to pick the best classifier but also to determine the number of new labels to add for the target classifier. We demonstrate the effectiveness of classifier selection on the semantic indexing task of the TRECVID 2013 dataset and compare it to self-training.
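The sketch below is one possible reading of the selection step, under assumptions not taken from the paper: plain validation accuracy serves as the selection statistic, the number of transferred labels is a simple heuristic tied to that accuracy, scikit-learn's LinearSVC stands in for the classifiers, and the data are synthetic. All names and constants are illustrative.

# Minimal sketch of selective multi-co-training (illustrative, not the paper's
# exact method): the selection criterion is assumed to be validation accuracy,
# and the number of transferred pseudo-labels is a simple heuristic.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic stand-in data: three "views" (feature representations) of the same samples.
n_lab, n_unlab, n_val, d = 100, 400, 100, 20
y_all = rng.integers(0, 2, n_lab + n_unlab + n_val)
views = [y_all[:, None] * 0.5 + rng.normal(size=(len(y_all), d)) for _ in range(3)]
val = slice(n_lab + n_unlab, None)          # held-out validation split

labeled_idx = list(range(n_lab))            # grows as pseudo-labels are added
unlabeled_idx = set(range(n_lab, n_lab + n_unlab))
pseudo_y = y_all.copy()                     # entries outside labeled_idx are never used for training
target = 0                                  # view whose classifier we want to strengthen

for it in range(3):
    # 1. Train one classifier per view on the current labeled pool.
    clfs = [LinearSVC().fit(v[labeled_idx], pseudo_y[labeled_idx]) for v in views]

    # 2. Selection criterion (assumed): validation accuracy of each helper view.
    accs = [accuracy_score(y_all[val], c.predict(v[val])) for c, v in zip(clfs, views)]
    helper = max((i for i in range(len(views)) if i != target), key=lambda i: accs[i])
    n_new = int(10 + 40 * accs[helper])     # heuristic: stronger helper -> more labels

    # 3. The selected helper labels its most confident unlabeled samples,
    #    which are handed to the target classifier's training pool.
    cand = np.array(sorted(unlabeled_idx))
    if cand.size == 0:
        break
    scores = clfs[helper].decision_function(views[helper][cand])
    order = np.argsort(-np.abs(scores))[:n_new]
    picked = cand[order]
    pseudo_y[picked] = (scores[order] > 0).astype(int)
    labeled_idx.extend(picked.tolist())
    unlabeled_idx.difference_update(picked.tolist())

    print(f"iteration {it}: helper view {helper}, added {picked.size} pseudo-labels, "
          f"target validation accuracy {accs[target]:.2f}")

In each iteration the strongest helper view, as measured on the validation set, contributes pseudo-labels for its most confident unlabeled samples, and the target view retrains on the enlarged pool in the next iteration.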


• Published in

  ICMR '14: Proceedings of International Conference on Multimedia Retrieval
  April 2014, 564 pages
  ISBN: 9781450327824
  DOI: 10.1145/2578726

        Copyright © 2014 ACM


        Publisher

        Association for Computing Machinery

        New York, NY, United States




Acceptance Rates

ICMR '14 Paper Acceptance Rate: 21 of 111 submissions, 19%
Overall Acceptance Rate: 254 of 830 submissions, 31%