Abstract
The generalization capability of a multi-layer perceptron (MLP) depends on the initialization of its weights: if the weights are not initialized properly, the network may fail to generalize well. In this article, we propose a weight initialization technique for MLPs that improves generalization, based on a regularized stacked auto-encoder pre-training method. During pre-training, the weights between each pair of adjacent layers of the MLP, up to the penultimate layer, are trained layer-wise by an auto-encoder. The auto-encoder is trained with a weighted sum of two terms: (i) the mean squared error (MSE) and (ii) the sum of squares of the first-order derivatives of the outputs with respect to the inputs. The second term acts as a regularizer: it penalizes the auto-encoder during pre-training so that it generates better initial values of the weights for each successive layer of the MLP. To compare the proposed initialization technique with random weight initialization, we consider ten standard classification data sets. Empirical results show that the proposed initialization technique improves the generalization of the MLP.
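For concreteness, the objective and the layer-wise procedure described above can be sketched as follows. This is a minimal PyTorch sketch, not the authors' implementation; the sigmoid activations, the regularization weight lam, the layer sizes, the number of epochs, and the Adam optimizer are all illustrative assumptions.

    import torch
    import torch.nn as nn

    def regularized_ae_loss(ae, x, lam=0.1):
        # MSE plus the sum of squares of the first-order derivatives of the
        # auto-encoder outputs with respect to its inputs (the regularizer).
        x = x.detach().requires_grad_(True)
        r = ae(x)                                   # reconstruction of x
        mse = ((r - x) ** 2).mean()
        penalty = 0.0
        # Squared norm of the output-input Jacobian, accumulated one output
        # unit at a time; acceptable for the small layers sketched here.
        for j in range(r.shape[1]):
            g, = torch.autograd.grad(r[:, j].sum(), x, create_graph=True)
            penalty = penalty + (g ** 2).sum()
        return mse + lam * penalty / x.shape[0]

    def pretrain(layer_sizes, data, lam=0.1, epochs=50, lr=1e-3):
        # Greedy layer-wise pre-training: each weight matrix up to the
        # penultimate layer is initialized by a regularized auto-encoder;
        # the output layer is left for supervised fine-tuning.
        pretrained, h = [], data
        for d_in, d_out in zip(layer_sizes[:-1], layer_sizes[1:]):
            enc = nn.Sequential(nn.Linear(d_in, d_out), nn.Sigmoid())
            dec = nn.Sequential(nn.Linear(d_out, d_in), nn.Sigmoid())
            ae = nn.Sequential(enc, dec)
            opt = torch.optim.Adam(ae.parameters(), lr=lr)
            for _ in range(epochs):
                opt.zero_grad()
                loss = regularized_ae_loss(ae, h, lam)
                loss.backward()
                opt.step()
            pretrained.append(enc[0])               # keep the trained encoder
            h = enc(h).detach()                     # input to the next layer
        return pretrained

A hypothetical call such as pretrain([64, 32, 16], X) would then yield initial weights for the two hidden layers of a 64-32-16-c MLP; the output layer is initialized randomly and the whole network is trained by standard supervised back-propagation.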