
Comparing Sparse Autoencoders for Acquisition of More Robust Bases in Handwritten Characters

  • Conference paper
  • First Online:
Integrated Uncertainty in Knowledge Modelling and Decision Making (IUKM 2018)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 10758)


Abstract

Autoencoders that acquire a feature-space model from unlabeled data have become an important technique for designing systems based on neural networks. In this paper, we focus on the reusability of sparse autoencoders for handwritten characters. In existing studies, the training bias of sparse autoencoders is typically expressed as a constraint on the number of activated intermediate units, a constraint that other autoencoders do not impose. We investigate the role that the trained intermediate units play as another direction of training bias toward a more reusable autoencoder. As a basis for this investigation, we manually selected three autoencoders and compared their reusability in two experiments. The first is a letter-identification experiment on characters that have faded or blurred to the point that the structure of the original character has collapsed. The second is an experiment that distinguishes the line segments forming letters from line segments belonging to non-text parts of a document, such as figures and tables. As a result, we found that the intermediate units of the most reusable autoencoder in our experiments can be regarded as acting as binary functions.
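To make the sparsity constraint discussed in the abstract concrete, the following is a minimal sketch (not the authors' implementation) of one common formulation: a k-sparse autoencoder training step in which only the k largest hidden activations are kept and the weights are updated by gradient descent on the reconstruction error. The dimensions, learning rate, and tied-weight choice are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

input_dim, hidden_dim, k = 64, 32, 5   # illustrative: e.g. 8x8 binarized character patches
lr = 0.1                               # illustrative learning rate

# Tied weights: the decoder reuses the encoder matrix transposed (a common simplification).
W = rng.normal(0.0, 0.1, size=(hidden_dim, input_dim))
b_enc = np.zeros(hidden_dim)
b_dec = np.zeros(input_dim)

def train_step(x):
    """One gradient step on the squared reconstruction error for a single input x."""
    global W, b_enc, b_dec
    # Encode, then enforce the sparsity constraint: keep only the k largest activations.
    h = W @ x + b_enc
    mask = np.zeros_like(h)
    mask[np.argsort(h)[-k:]] = 1.0
    h_sparse = h * mask
    # Decode and compute the reconstruction error.
    x_hat = W.T @ h_sparse + b_dec
    err = x_hat - x
    # Gradients of 0.5 * ||x_hat - x||^2; the error only propagates through
    # the k selected units, so the mask reappears in the encoder gradient.
    grad_h = (W @ err) * mask
    grad_W = np.outer(grad_h, x) + np.outer(h_sparse, err)
    W -= lr * grad_W
    b_enc -= lr * grad_h
    b_dec -= lr * err
    return 0.5 * float(err @ err)

x = rng.random(input_dim)              # stand-in for one flattened character image
print(train_step(x))                   # reconstruction error for this sample
```

Lowering k tightens the bias on how many intermediate units may activate, which is the kind of training bias the experiments in the paper compare against the role the trained units themselves play.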



Author information


Corresponding author

Correspondence to Takuya Okada.


Copyright information

© 2018 Springer International Publishing AG, part of Springer Nature

About this paper


Cite this paper

Okada, T., Takeuchi, K. (2018). Comparing Sparse Autoencoders for Acquisition of More Robust Bases in Handwritten Characters. In: Huynh, V.N., Inuiguchi, M., Tran, D., Denoeux, T. (eds) Integrated Uncertainty in Knowledge Modelling and Decision Making. IUKM 2018. Lecture Notes in Computer Science (LNAI), vol 10758. Springer, Cham. https://doi.org/10.1007/978-3-319-75429-1_12


  • DOI: https://doi.org/10.1007/978-3-319-75429-1_12

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-75428-4

  • Online ISBN: 978-3-319-75429-1

  • eBook Packages: Computer Science, Computer Science (R0)
