Abstract
Recent research has shown that deep neural networks are very powerful for object recognition tasks. However, training a deep neural network with more than two hidden layers remains difficult even now because of overfitting, which calls for regularization. Techniques such as dropout and denoising were developed to address this problem. The philosophy behind denoising is to extract more robust features from the training data: randomly corrupted input data are used to train an autoencoder or Restricted Boltzmann Machine (RBM). In this paper, we propose unsupervised pre-training with a Self-Organizing Map (SOM) to increase the robustness and reliability of feature extraction. The basic idea is that, instead of applying random corruption, the proposed algorithm acts as a feature extractor, so the corrupted input preserves the main skeleton, or structure, of the original data. As a result, the proposed algorithm can extract features that are more robust to variations in the input data.
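The abstract does not give implementation details, but the core idea, replacing a denoising autoencoder's random corruption with a SOM-based, structure-preserving corruption, can be sketched as follows. This is a minimal illustration under assumed details: the grid size, learning-rate schedule, and the blending function `som_corrupt` are hypothetical choices, not the authors' specification.

```python
import numpy as np

class SimpleSOM:
    """Minimal Self-Organizing Map trained with the classic
    winner-take-all update and a Gaussian neighborhood."""

    def __init__(self, grid=(5, 5), dim=8, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(size=(grid[0] * grid[1], dim))  # prototype vectors
        # 2-D grid coordinates of each unit, used by the neighborhood function
        self.coords = np.array(
            [(i, j) for i in range(grid[0]) for j in range(grid[1])], dtype=float
        )

    def bmu(self, x):
        # best-matching unit: the prototype closest to x
        return int(np.argmin(np.linalg.norm(self.w - x, axis=1)))

    def train(self, X, epochs=20, lr0=0.5, sigma0=2.0):
        for t in range(epochs):
            lr = lr0 * (1 - t / epochs)            # decaying learning rate
            sigma = sigma0 * (1 - t / epochs) + 0.5  # shrinking neighborhood
            for x in X:
                b = self.bmu(x)
                d = np.linalg.norm(self.coords - self.coords[b], axis=1)
                h = np.exp(-(d ** 2) / (2 * sigma ** 2))  # neighborhood weights
                self.w += lr * h[:, None] * (x - self.w)

def som_corrupt(som, x, alpha=0.5):
    """Structure-preserving 'corruption': blend the input with its
    best-matching SOM prototype instead of adding random noise, so the
    corrupted sample keeps the main structure of the original data."""
    return (1 - alpha) * x + alpha * som.w[som.bmu(x)]

# Usage: the corrupted inputs would then feed a denoising autoencoder.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 8))
som = SimpleSOM(grid=(5, 5), dim=8)
som.train(X)
x_tilde = som_corrupt(som, X[0])
```

Because each corrupted sample is pulled toward a learned prototype rather than perturbed at random, the autoencoder sees inputs that stay on the data's structure, which is the robustness argument the abstract makes.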
© 2015 Springer International Publishing Switzerland
Lee, YM., Kim, JH. (2015). Robust and Reliable Feature Extractor Training by Using Unsupervised Pre-training with Self-Organization Map. In: Kim, JH., Yang, W., Jo, J., Sincak, P., Myung, H. (eds) Robot Intelligence Technology and Applications 3. Advances in Intelligent Systems and Computing, vol 345. Springer, Cham. https://doi.org/10.1007/978-3-319-16841-8_16
Print ISBN: 978-3-319-16840-1
Online ISBN: 978-3-319-16841-8