
Self-taught Learning: Image Classification Using Stacked Autoencoders

  • Conference paper
  • Soft Computing for Problem Solving 2019
  • Part of the book series: Advances in Intelligent Systems and Computing (AISC, volume 1138)


Abstract

The availability of large amounts of unlabeled data, together with the practical challenges of annotating datasets for different domains, has led to the development of models that obtain knowledge from one domain (called the source domain) and apply it in a similar domain (called the target domain). This forms the core of transfer learning. Self-taught learning is a popular transfer learning paradigm frequently used for classification tasks. In a typical self-taught learning setting, we have a source domain with a large amount of unlabeled data instances and a target domain with limited labeled data instances. In this setting, self-taught learning proceeds as follows: given ample unlabeled data instances in the source domain, we learn an optimal representation of them; that is, we learn the transformation that maps unlabeled data instances to their optimal representation. The transformation learnt in the source domain is then used to transform target domain instances; the transformed target domain instances, along with their corresponding labels, are then used in supervised classification tasks. In our work, we apply self-taught learning to the image classification task. For this, we use stacked autoencoders (for grayscale images) and convolutional autoencoders (for color images) to obtain an optimal representation of the images in the source domain. The transformation function learnt in the source domain is then used to transform target domain images, and the transformed images, along with their labels, are used to build the supervised classifier (in our case, an SVM). Rigorous experiments on the MNIST, CIFAR10 and CIFAR100 datasets show that our self-taught learning approach performs well against the baseline model (where no transfer learning is used), even when the number of labeled data instances in the target domain is limited.
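The pipeline described in the abstract can be sketched in a few lines. This is a minimal illustration using scikit-learn, not the paper's implementation: random arrays (`X_source`, `X_target`, `y_target`) stand in for the image data, and a single-hidden-layer autoencoder (an `MLPRegressor` trained to reconstruct its input) stands in for the stacked and convolutional autoencoders used in the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Hypothetical stand-ins for the two domains: flattened "images" of 64 pixels.
X_source = rng.random((500, 64))           # large unlabeled source set
X_target = rng.random((100, 64))           # small labeled target set
y_target = rng.integers(0, 2, 100)         # target labels

# Step 1: learn a representation from unlabeled source data by training an
# autoencoder, i.e. a network that reconstructs its own input.
ae = MLPRegressor(hidden_layer_sizes=(16,), activation="relu",
                  max_iter=500, random_state=0)
ae.fit(X_source, X_source)

def encode(X):
    # Apply only the encoder half: the learned input->hidden transformation.
    return np.maximum(0, X @ ae.coefs_[0] + ae.intercepts_[0])

# Step 2: transform the labeled target data with the learned encoder and
# train a supervised SVM on the transformed instances.
clf = SVC(kernel="rbf").fit(encode(X_target), y_target)
predictions = clf.predict(encode(X_target[:5]))
```

Replacing the single hidden layer with a stack of such layers (each trained on the previous layer's codes) recovers the stacked-autoencoder variant the paper describes for grayscale images.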




Author information


Correspondence to Upendra Pratap Singh.


Copyright information

© 2020 Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Singh, U.P., Chavan, S., Hindwani, S., Singh, K.P. (2020). Self-taught Learning: Image Classification Using Stacked Autoencoders. In: Nagar, A., Deep, K., Bansal, J., Das, K. (eds) Soft Computing for Problem Solving 2019. Advances in Intelligent Systems and Computing, vol 1138. Springer, Singapore. https://doi.org/10.1007/978-981-15-3290-0_1
