Abstract
In this paper, a method for increasing the number of multilayer perceptron inputs is proposed. Three kinds of additional input variables have been tested. They enable the neurons in the first layer of a multilayer perceptron to separate data with hypercurves of various shapes in two selected dimensions. With the additional inputs, single neurons in the first hidden layer can solve some problems that are not linearly separable, e.g. the XOR function. Depending on their weight values, these neurons may, in some dimensions, perform transformations similar to those of hidden-layer neurons in RBF networks, or separate the data with hyperplanes or hyperparabolas. The proposed procedure does not require implementing a new network training algorithm from scratch. Classification results on three popular UCI benchmarks containing real-world data are presented.
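The effect of the extra input variables can be illustrated with a minimal sketch: augmenting the input vector with the product term x1*x2 lets a single neuron with a linear decision rule in the augmented space realise a curved decision boundary in the original two dimensions, enough to solve XOR. The weights and bias below are hand-picked for illustration and are not taken from the paper.

```python
def xor_neuron(x1, x2):
    """Single neuron on the augmented input vector (x1, x2, x1*x2).

    Illustrative weights only; the paper's training procedure would
    learn such weights with a standard MLP algorithm.
    """
    w = (1.0, 1.0, -2.0)  # weights for x1, x2 and the product term
    b = -0.5              # bias
    s = b + w[0] * x1 + w[1] * x2 + w[2] * (x1 * x2)
    return 1 if s > 0 else 0

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, "->", xor_neuron(x1, x2))
```

Without the product input, no single neuron can separate the XOR classes, since they are not linearly separable in (x1, x2) alone.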
© 2016 Springer International Publishing Switzerland
Cite this paper
Halawa, K. (2016). Method Enabling the First Hidden Layer of Multilayer Perceptrons to Make Division of Space with Various Hypercurves. In: Rutkowski, L., Korytkowski, M., Scherer, R., Tadeusiewicz, R., Zadeh, L., Zurada, J. (eds) Artificial Intelligence and Soft Computing. ICAISC 2016. Lecture Notes in Computer Science(), vol 9692. Springer, Cham. https://doi.org/10.1007/978-3-319-39378-0_10
Print ISBN: 978-3-319-39377-3
Online ISBN: 978-3-319-39378-0