Abstract
We discuss an algorithm that finds an explicit algebraic expression for a dependent variable as a function of an arbitrary number of independent variables, where the data may arise from experiments. The possibility of such an approximation is proved starting from the Universal Approximation Theorem (UAT). As opposed to the neural network (NN) approach with which it is frequently associated, the relationship between the variables is explicit, thus resolving the "black box" character of NNs. The UAT implies the use of a nonlinear function such as the logistic 1/(1 + e^{-x}); hence, any function is expressible as a combination of a set of logistics. We show that a close polynomial approximation of the logistic is possible using only a constant and monomials of odd degree. From this, an upper bound D on the degree of the polynomial may be found, and the form of the model resulting from D may be calculated. We discuss how to determine the best such set of monomials with a genetic algorithm, leading to the best L∞-L2 approximation: it finds the best approximating polynomial consisting of a fixed number of monomials and yields the degrees of the variables in every monomial together with the associated coefficients. Furthermore, we trained a multilayer perceptron to determine the most adequate number of such monomials for an arbitrary data set. We illustrate how to analyze the resulting explicit relationship between the variables using a well-known experimental database, and we show that our method yields better quantitative and qualitative measures than those of the human experts.
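As a concrete illustration of the odd-degree claim, here is a minimal sketch, not the paper's implementation: because the logistic equals 1/2 plus an odd function, a constant together with odd-degree monomials suffices for a close least-squares fit on a bounded interval. The interval [-5, 5], the degree bound D = 9, and the use of NumPy's least-squares solver are assumptions chosen for illustration.

```python
# Minimal sketch (assumptions: interval [-5, 5], degree bound D = 9,
# plain least squares). Fits the logistic 1/(1 + e^{-x}) with a constant
# plus odd-degree monomials only, as described in the abstract.
import numpy as np

x = np.linspace(-5.0, 5.0, 401)
y = 1.0 / (1.0 + np.exp(-x))

D = 9  # assumed upper bound on the polynomial degree
powers = [0] + [d for d in range(1, D + 1) if d % 2 == 1]  # constant + odd degrees

# Design matrix whose columns are the selected monomials x^d.
A = np.column_stack([x ** d for d in powers])

# Least-squares coefficients for the constant and odd monomials.
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
approx = A @ coeffs

print("powers used:", powers)
print("max |error| (L-infinity):", np.max(np.abs(y - approx)))
print("RMS error (L2):", np.sqrt(np.mean((y - approx) ** 2)))
```

The two printed errors correspond to the L∞ and L2 norms mentioned in the abstract; in the paper these are traded off by the genetic algorithm rather than by plain least squares.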