A transductive transfer learning approach for image classification

  • Original Article
  • International Journal of Machine Learning and Cybernetics

Abstract

Among machine learning paradigms, unsupervised transductive transfer learning is useful when no labeled data from the target domain are available at training time, but unlabeled target data are accessible during training. This paper proposes a novel unsupervised transductive transfer learning method that finds the specific and shared features across the source and target domains. The method then maps both domains into their respective subspaces while minimizing the divergence between the marginal and conditional distributions. Since discriminative learning across domains has been shown to boost model performance, the proposed method also separates the classes of both domains by maximizing the distance between sample pairs with different labels and minimizing the distance between sample pairs of the same class. We verified our approach on standard visual benchmarks: across 46 cross-domain experiments it achieves an average accuracy of 76.5%, which compares favorably with other state-of-the-art transfer learning methods.
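
The abstract describes two measurable ingredients: a divergence between the source and target feature distributions that the learned subspaces should minimize, and a pairwise discriminative criterion that pulls same-class samples together while pushing differently labeled samples apart. Since the full method is only described in the article itself, the following is a minimal NumPy sketch of those two quantities only, not the authors' algorithm; the linear-kernel MMD used as the marginal-divergence proxy, the function names, and the toy data are all illustrative assumptions.

```python
# Minimal sketch (not the authors' implementation) of two quantities the
# abstract refers to: (1) an empirical Maximum Mean Discrepancy (MMD) with a
# linear kernel as a proxy for marginal-distribution divergence between source
# and target features, and (2) mean within-class vs. between-class pairwise
# distances, which a discriminative projection would shrink and enlarge.
import numpy as np

def linear_mmd(Xs, Xt):
    """Squared MMD with a linear kernel: ||mean(Xs) - mean(Xt)||^2."""
    diff = Xs.mean(axis=0) - Xt.mean(axis=0)
    return float(diff @ diff)

def pairwise_class_distances(X, y):
    """Return (mean within-class, mean between-class) squared distances."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)  # all pairwise squared distances
    same = (y[:, None] == y[None, :]) & ~np.eye(len(y), dtype=bool)
    different = y[:, None] != y[None, :]
    return d2[same].mean(), d2[different].mean()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    Xs = rng.normal(0.0, 1.0, size=(100, 20))  # toy source features
    Xt = rng.normal(0.5, 1.0, size=(80, 20))   # toy target features (shifted domain)
    ys = rng.integers(0, 3, size=100)          # toy source labels (3 classes)

    print("marginal divergence (linear MMD):", linear_mmd(Xs, Xt))
    within, between = pairwise_class_distances(Xs, ys)
    print("mean within-class distance:", within)
    print("mean between-class distance:", between)
```

A projection learned along these lines would be chosen so that the MMD term (and its class-conditional analogue computed per class) decreases while the gap between between-class and within-class distances grows.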

Author information

Corresponding author

Correspondence to Jafar Tahmoresnezhad.

About this article

Cite this article

Rezaei, S., Tahmoresnezhad, J. & Solouk, V. A transductive transfer learning approach for image classification. Int. J. Mach. Learn. & Cyber. 12, 747–762 (2021). https://doi.org/10.1007/s13042-020-01200-9
