Transfer Learning with Manifold Regularized Convolutional Neural Network

  • Conference paper
Knowledge Science, Engineering and Management (KSEM 2017)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 10412)

Abstract

Deep learning has recently been applied to learn robust representations for various tasks and has delivered state-of-the-art performance in the past few years. Most researchers attribute this success to the substantially increased depth of deep learning models. However, training a deep model is time-consuming and requires a huge amount of data. Although techniques such as fine-tuning can ease these pains, generalization performance drops significantly in transfer learning settings with little or no target-domain data. Since the representations in higher layers must eventually transition from general to specific, generalization performance degrades without integrating sufficient label information from the target domain. To address this problem, we propose a transfer learning framework called manifold regularized convolutional neural networks (MRCNN). Specifically, MRCNN fine-tunes a very deep convolutional neural network on the source domain while simultaneously trying to preserve the manifold structure of the target domain. Extensive experiments demonstrate the effectiveness of MRCNN compared to several state-of-the-art baselines.
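
As a rough illustration of the idea described in the abstract, the sketch below combines a supervised fine-tuning loss on labeled source-domain data with a graph-Laplacian penalty that encourages nearby target-domain samples to receive similar feature representations. This is only a minimal sketch of manifold regularization in a CNN training loop, not the authors' released implementation (see the repository linked in the Notes); the names `backbone`, `classifier`, `manifold_weight`, and the use of an RBF affinity graph are assumptions made for this example.

```python
# Illustrative sketch only (not the authors' code): fine-tune on labeled
# source-domain batches while preserving the manifold structure of
# unlabeled target-domain batches via a graph-Laplacian penalty.
import torch
import torch.nn.functional as F


def rbf_affinity(feats: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Dense RBF affinity matrix W over one batch of target features."""
    sq_dists = torch.cdist(feats, feats) ** 2        # pairwise squared distances
    return torch.exp(-sq_dists / (2.0 * sigma ** 2))


def manifold_penalty(feats: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """tr(F^T L F): small when strongly connected samples have similar features."""
    W = rbf_affinity(feats, sigma)
    L = torch.diag(W.sum(dim=1)) - W                 # unnormalized graph Laplacian
    return torch.trace(feats.t() @ L @ feats) / feats.shape[0]


def mrcnn_style_loss(backbone, classifier, src_x, src_y, tgt_x,
                     manifold_weight: float = 0.1) -> torch.Tensor:
    """Supervised source loss plus a manifold regularizer on target features.

    `backbone`, `classifier`, and `manifold_weight` are placeholder names
    assumed for this sketch.
    """
    src_feats = backbone(src_x)                      # e.g. a pre-trained very deep CNN
    ce = F.cross_entropy(classifier(src_feats), src_y)
    tgt_feats = backbone(tgt_x)                      # unlabeled target-domain batch
    return ce + manifold_weight * manifold_penalty(tgt_feats)
```

The penalty tr(F^T L F) equals one half of the weighted sum of pairwise squared feature distances, so minimizing it pulls the features of strongly connected target samples together, while the cross-entropy term keeps the network discriminative on the source domain.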

Lang Huang: This work was finished while he was an intern (under the supervision of Fuzhen Zhuang) at the Institute of Computing Technology, Chinese Academy of Sciences.

Notes

  1. https://www.cs.toronto.edu/~kriz/cifar.html.

  2. http://archive.ics.uci.edu/ml/datasets/Corel+Image+Features.

  3. http://vikas.sindhwani.org/svmlin.html.

  4. The code is available at https://github.com/LayneH/MRCNN.


Acknowledgment

This work is supported by the National Natural Science Foundation of China (Nos. 61473273, 91546122, 61573335, 61602438), the Guangdong Provincial Science and Technology Plan Projects (No. 2015B010109005), and the Youth Innovation Promotion Association, CAS (No. 2017146).

Author information

Corresponding author

Correspondence to Lang Huang.

Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Zhuang, F., Huang, L., He, J., Ma, J., He, Q. (2017). Transfer Learning with Manifold Regularized Convolutional Neural Network. In: Li, G., Ge, Y., Zhang, Z., Jin, Z., Blumenstein, M. (eds) Knowledge Science, Engineering and Management. KSEM 2017. Lecture Notes in Computer Science (LNAI), vol 10412. Springer, Cham. https://doi.org/10.1007/978-3-319-63558-3_41

  • DOI: https://doi.org/10.1007/978-3-319-63558-3_41

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-63557-6

  • Online ISBN: 978-3-319-63558-3

  • eBook Packages: Computer Science, Computer Science (R0)
