
Representation learning via an integrated autoencoder for unsupervised domain adaptation

  • Research Article
  • Published in: Frontiers of Computer Science

Abstract

The purpose of unsupervised domain adaptation is to use knowledge from a source domain, whose data distribution differs from that of the target domain, to promote the learning task in the target domain. The key bottleneck in unsupervised domain adaptation is obtaining higher-level, more abstract feature representations shared between the source and target domains that can bridge the domain discrepancy. Recently, deep learning methods based on autoencoders have achieved strong performance in representation learning, and many dual or serial autoencoder-based methods exploit different characteristics of the data to improve the effectiveness of unsupervised domain adaptation. However, most existing autoencoder-based methods simply connect the features generated by different autoencoders in series, which hinders discriminative representation learning and fails to uncover true cross-domain features. To address this problem, we propose a novel representation learning method based on an integrated autoencoder for unsupervised domain adaptation, called IAUDA. To capture the inter- and intra-domain features of the raw data, two different autoencoders, a marginalized autoencoder with maximum mean discrepancy (mAEMMD) and a convolutional autoencoder (CAE), are proposed to learn different feature representations. After higher-level features are obtained by these two autoencoders, a sparse autoencoder is introduced to compact the inter- and intra-domain representations. In addition, a whitening layer is embedded before the mAEMMD to reduce redundant features within a local area. Experimental results demonstrate the effectiveness of the proposed method compared with several state-of-the-art baselines.
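A central quantity named in the abstract is the maximum mean discrepancy (MMD), which the mAEMMD uses to measure the divergence between source- and target-domain feature distributions. As a rough illustration only, not the paper's implementation, the following minimal NumPy sketch computes a biased empirical estimate of the squared MMD with a Gaussian kernel; the function names and the bandwidth parameter `sigma` are our own assumptions:

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    """RBF kernel k(x, y) = exp(-||x - y||^2 / (2 * sigma^2))."""
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

def mmd_squared(Xs, Xt, sigma=1.0):
    """Biased empirical estimate of squared MMD between source samples
    Xs and target samples Xt (each an array of feature vectors)."""
    k_ss = np.mean([gaussian_kernel(a, b, sigma) for a in Xs for b in Xs])
    k_tt = np.mean([gaussian_kernel(a, b, sigma) for a in Xt for b in Xt])
    k_st = np.mean([gaussian_kernel(a, b, sigma) for a in Xs for b in Xt])
    return k_ss + k_tt - 2 * k_st
```

Identically distributed samples yield an estimate near zero, while a shift between the two samples yields a positive value; adding such a term to an autoencoder's reconstruction loss penalizes representations whose source and target distributions disagree.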



Acknowledgements

This research was partially supported by the National Natural Science Foundation of China (Grant Nos. 61906060, 62076217, 62120106008), the Yangzhou University Interdisciplinary Research Foundation for Animal Husbandry Discipline of Targeted Support (yzuxk202015), the Opening Foundation of Key Laboratory of Huizhou Architecture in Anhui Province (HPJZ-2020-02), and the Open Project Program of Joint International Research Laboratory of Agriculture and Agri-Product Safety (JILAR-KF202104).

Author information


Corresponding author

Correspondence to Yun Li.

Additional information

Yi Zhu is currently an assistant professor at the School of Information Engineering, Yangzhou University, China. He received the BS degree from Anhui University, China in 2006, the MS degree from the University of Science and Technology of China in 2012, and the PhD degree from Hefei University of Technology, China in 2018. His research interests include data mining and recommendation systems.

Xindong Wu is a Professor in the School of Computer Science and Information Engineering at the Hefei University of Technology, China, and a fellow of IEEE and AAAS. He received his BS and MS degrees in computer science from the Hefei University of Technology, China in 1984 and 1987, and his PhD degree in artificial intelligence from the University of Edinburgh, Britain in 1993. His research interests include data mining, big data analytics, and knowledge-based systems.

Jipeng Qiang is currently an associate professor at the School of Information Engineering, Yangzhou University, China. He received his PhD degree in computer science and technology from Hefei University of Technology, China in 2016. He was a visiting PhD student in the Artificial Intelligence Lab at the University of Massachusetts Boston, USA from 2014 to 2016. He has published more than 40 papers in venues including AAAI, TKDE, TKDD, and TASLP. His research interests mainly include natural language processing and data mining.

Yunhao Yuan is currently an associate professor in the School of Information Engineering, Yangzhou University, China. He received the MEng degree in computer science and technology from Yangzhou University, China in 2009, and the PhD degree in pattern recognition and intelligence system from Nanjing University of Science and Technology, China in 2013. His research interests include pattern recognition, data mining, and image processing.

Yun Li is currently a professor in the School of Information Engineering, Yangzhou University, China. He received the MS degree in computer science and technology from Hefei University of Technology, China in 1991, and the PhD degree in control theory and control engineering from Shanghai University, China in 2005. He has published more than 100 scientific papers. His research interests include data mining and cloud computing.


Cite this article

Zhu, Y., Wu, X., Qiang, J. et al. Representation learning via an integrated autoencoder for unsupervised domain adaptation. Front. Comput. Sci. 17, 175334 (2023). https://doi.org/10.1007/s11704-022-1349-5
