Abstract
Supervised learning methods require sufficient labeled examples to learn a good model for classification or regression. However, available labeled data are insufficient in many applications. Active learning (AL) and domain adaptation (DA) are two strategies to minimize the amount of labeled data required for model training. AL asks a domain expert to label a small number of highly informative examples to facilitate classification, while DA adapts source domain knowledge for classification on the target domain. In this paper, we demonstrate how AL can efficiently minimize the amount of labeled data required for DA. Since the source and target domains usually have different distributions, the domain expert may not have sufficient knowledge to answer every query correctly. Our active DA framework therefore explicitly handles incorrect labels provided by domain experts. Experiments with multimedia data demonstrate the efficiency of the proposed framework for active DA with noisy labels.
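To make the general recipe concrete, the following is a minimal, hypothetical sketch of one common instantiation, not the authors' specific framework: a classifier trained on the source domain repeatedly queries the most uncertain target examples from a simulated noisy oracle and is retrained on the combined data. The synthetic data, the uncertainty criterion, the query budget, and the label-flip noise model are all illustrative assumptions.

```python
# Hypothetical sketch of active learning for domain adaptation with a noisy oracle.
# Everything here (data, noise model, budget) is an illustrative assumption.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)

# Synthetic source and target domains with a covariate shift (assumption).
X_src = rng.randn(500, 2)
y_src = (X_src[:, 0] + X_src[:, 1] > 0).astype(int)
X_tgt = rng.randn(500, 2) + np.array([1.0, -0.5])          # shifted target domain
y_tgt = (X_tgt[:, 0] + 0.5 * X_tgt[:, 1] > 0.5).astype(int)

def noisy_oracle(y_true, flip_prob=0.2):
    """Simulated domain expert who answers some queries incorrectly."""
    flip = rng.rand(len(y_true)) < flip_prob
    return np.where(flip, 1 - y_true, y_true)

# Start from a source-trained model, then query the most uncertain target points.
model = LogisticRegression().fit(X_src, y_src)
labeled_idx, labels = [], []

for _ in range(20):                                         # query budget (assumption)
    proba = model.predict_proba(X_tgt)[:, 1]
    uncertainty = -np.abs(proba - 0.5)                      # closest to decision boundary
    uncertainty[labeled_idx] = -np.inf                      # do not re-query labeled points
    q = int(np.argmax(uncertainty))
    labeled_idx.append(q)
    labels.append(noisy_oracle(y_tgt[[q]])[0])
    # Retrain on source data plus the (possibly noisy) target labels obtained so far.
    X_train = np.vstack([X_src, X_tgt[labeled_idx]])
    y_train = np.concatenate([y_src, labels])
    model = LogisticRegression().fit(X_train, y_train)

print("Target accuracy after active queries:",
      (model.predict(X_tgt) == y_tgt).mean())
```

A noise-robust variant could down-weight or filter queried labels that disagree strongly with the current model, which is the role the noisy-label handling plays in an active DA framework of this kind.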
Acknowledgments
This work was partially supported by the MIUR Cluster project Active Ageing at Home, the EC project xLiMe and A*STAR Singapore under the Human-Centered Cyber-physical Systems (HCCS) grant.
Cite this article
Liu, G., Yan, Y., Subramanian, R. et al. Active domain adaptation with noisy labels for multimedia analysis. World Wide Web 19, 199–215 (2016). https://doi.org/10.1007/s11280-015-0343-3