ABSTRACT
Research interest is growing in cotraining, which combines information from, typically, two classifiers to iteratively enlarge the training resources and strengthen both classifiers. We address the problem of selecting classifiers for cotraining when more than two representations of the data are available. The classifier built on the selected representation, or data descriptor, is expected to provide the most complementary information in the form of new labels for the target classifier; these labels are critical for the next learning iteration. We present two criteria for selecting the complementary classifier, in which classification results on a validation set are used to compute statistics for all available classifiers. These statistics are used not only to pick the best classifier but also to determine the number of new labels to be added for the target classifier. We demonstrate the effectiveness of classifier selection on the semantic indexing task of the TRECVID 2013 dataset and compare it to self-training.
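The selection step described above can be sketched in code. The following is a minimal illustration, not the paper's actual method: it assumes toy one-dimensional views and a per-class centroid classifier, and the function names (`select_and_label`, `train_centroid`) as well as the accuracy-scaled count of new labels are illustrative assumptions standing in for the paper's two selection criteria.

```python
def train_centroid(points, labels):
    """Per-class centroid classifier for a single 1-D view (toy model)."""
    cents = {}
    for c in set(labels):
        vals = [p for p, l in zip(points, labels) if l == c]
        cents[c] = sum(vals) / len(vals)
    return cents

def predict(cents, x):
    # assign x to the class with the nearest centroid
    return min(cents, key=lambda c: abs(cents[c] - x))

def confidence(cents, x):
    # margin between the two nearest class centroids (larger = more confident)
    d = sorted(abs(cents[c] - x) for c in cents)
    return d[1] - d[0] if len(d) > 1 else d[0]

def select_and_label(views_train, labels, views_val, val_labels,
                     views_unlab, target_view, base_new=2):
    """Pick the non-target view whose classifier scores best on the
    validation set, then hand its most confident predictions on the
    unlabeled pool to the target view as new labels."""
    models = [train_centroid(v, labels) for v in views_train]
    accs = []
    for m, v in zip(models, views_val):
        correct = sum(predict(m, x) == y for x, y in zip(v, val_labels))
        accs.append(correct / len(val_labels))
    # best complementary classifier = highest validation accuracy
    # among all views other than the target view
    candidates = [i for i in range(len(models)) if i != target_view]
    best = max(candidates, key=lambda i: accs[i])
    # number of new labels scaled by validation accuracy (an assumption;
    # the paper derives this count from its own statistics)
    n_new = max(1, round(base_new * accs[best]))
    scored = sorted(range(len(views_unlab[best])),
                    key=lambda i: confidence(models[best], views_unlab[best][i]),
                    reverse=True)[:n_new]
    return best, [(i, predict(models[best], views_unlab[best][i]))
                  for i in scored]
```

In each cotraining iteration, the labels returned here would be appended to the target classifier's training set before retraining; the validation statistics are then recomputed for the next round.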
Index Terms: Selective Multi-cotraining for Video Concept Detection