Laplacian regularized low-rank sparse representation transfer learning

Original Article

International Journal of Machine Learning and Cybernetics

Abstract

In unsupervised transfer learning, it is highly valuable to extract knowledge from the vast amount of unlabeled data available by utilizing labeled data from other, similar datasets. Real-world data often lie on a low-dimensional manifold embedded in a high-dimensional ambient space. However, current subspace transfer learning methods do not account for this nonlinear geometric structure, so local similarity information between samples may be lost during learning. To address this issue, we propose a new subspace transfer learning algorithm, Laplacian Regularized Low-Rank Sparse Representation Transfer Learning (LRLRSR-TL). Building on a low-rank representation with sparse constraints, the method incorporates a Laplacian regularization term so that the learned representation preserves the global low-dimensional structure while capturing the intrinsic nonlinear geometry of the data. Experiments on five cross-domain visual image datasets show that the proposed method achieves outstanding performance compared with several state-of-the-art transfer learning methods.
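To give a concrete picture of how the three ingredients named in the abstract typically fit together, the display below is a minimal sketch of a Laplacian-regularized low-rank sparse representation objective; it is an illustration of the general form, not the paper's exact formulation, and the symbols (source data X_s, target data X_t, representation matrix Z, error term E, graph Laplacian L, trade-off parameters λ, β, γ) are our notation.

$$
\min_{Z,\,E}\; \|Z\|_{*} + \lambda \|Z\|_{1} + \beta\,\mathrm{tr}\!\left(Z L Z^{\top}\right) + \gamma \|E\|_{2,1}
\quad \text{s.t.}\quad X_{t} = X_{s} Z + E,
$$

where the nuclear norm \(\|Z\|_{*}\) encourages a low-rank, globally block-structured representation, the \(\ell_1\) norm enforces sparsity, the trace term \(\mathrm{tr}(Z L Z^{\top})\) is the Laplacian (manifold) regularizer that keeps the representations of neighboring samples close, and the \(\ell_{2,1}\) norm models sample-specific noise.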

Acknowledgements

This work is supported by the National Key R&D Program of China (Grant Nos. 2018YFC2001600 and 2018YFC2001602) and the National Natural Science Foundation of China under Grant No. 61473150.

Author information

Corresponding author

Correspondence to Qun Dai.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Guo, L., Dai, Q. Laplacian regularized low-rank sparse representation transfer learning. Int. J. Mach. Learn. & Cyber. 12, 807–821 (2021). https://doi.org/10.1007/s13042-020-01203-6
