Removing the Black-Box from Machine Learning

  • Conference paper in: Pattern Recognition (MCPR 2023)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13902)

Abstract

We discuss an algorithm which allows us to find the algebraic expression of a dependent variable as a function of an arbitrary number of independent ones, where the data may have arisen from experiments. The possibility of such an approximation is proved starting from the Universal Approximation Theorem (UAT). As opposed to the neural network (NN) approach with which it is frequently associated, the relationship between the dependent and independent variables is explicit, thus resolving the “black box” character of NNs. The UAT implies the use of a nonlinear function such as the logistic 1/(1 + e^(-x)); thus, any function is expressible as a combination of a set of logistics. We show that a close polynomial approximation of the logistic is possible using only a constant and monomials of odd degree. Hence, an upper bound (D) on the degree of the polynomial may be found, and from D we may calculate the form of the model. We discuss how to determine the best such set of monomials by using a genetic algorithm, leading to the best L2 approximation: it finds the best approximating polynomial consisting of a fixed number of monomials, yielding the degrees of the variables in every monomial and the associated coefficients. Furthermore, we trained a multilayer perceptron network to determine the most adequate number of such monomials for a set of arbitrary data. We discuss how to analyze the explicit relationship between the variables by using a well-known experimental database. We show that our method yields better quantitative and qualitative measures than those of the human experts.
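
To make the kind of model described above concrete, the following is a minimal Python sketch, not the authors' implementation: a plain random search over the integer exponents of each variable stands in for the genetic algorithm, and ordinary least squares supplies the coefficients (the L2 criterion). The function names, the degree bound, and the number of monomials are illustrative assumptions only.

    import numpy as np

    def design_matrix(X, exponents):
        # One column per monomial: prod_j x_j ** e_j for each row of `exponents`.
        return np.column_stack([np.prod(X ** e, axis=1) for e in exponents])

    def fit_fixed_monomials(X, y, n_monomials=4, max_degree=7, trials=2000, seed=0):
        # Search for the exponent matrix (n_monomials x n_vars) whose least-squares
        # fit has the smallest L2 error; a genetic algorithm would explore this same
        # space far more efficiently than the blind sampling used here.
        rng = np.random.default_rng(seed)
        n_vars = X.shape[1]
        best = None
        for _ in range(trials):
            exps = rng.integers(0, max_degree + 1, size=(n_monomials, n_vars))
            A = design_matrix(X, exps)
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            err = np.linalg.norm(A @ coef - y)
            if best is None or err < best[0]:
                best = (err, exps, coef)
        return best  # (L2 error, exponents, coefficients)

    # Example: recover an explicit expression from noisy samples of a 2-variable function.
    rng = np.random.default_rng(1)
    X = rng.uniform(-1.0, 1.0, size=(200, 2))
    y = 1.5 * X[:, 0] ** 3 - 0.7 * X[:, 0] * X[:, 1] + 0.2 + rng.normal(0.0, 0.01, 200)
    err, exps, coef = fit_fixed_monomials(X, y)
    model = " + ".join(f"{c:.3f}*x1^{e[0]}*x2^{e[1]}" for e, c in zip(exps, coef))
    print(f"L2 error {err:.4f};  y ~ {model}")

The returned exponents and coefficients spell out the dependent variable as an explicit algebraic expression, which is the sense in which the “black box” is removed.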

Author information

Correspondence to Angel Fernando Kuri-Morales.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Kuri-Morales, A.F. (2023). Removing the Black-Box from Machine Learning. In: Rodríguez-González, A.Y., Pérez-Espinosa, H., Martínez-Trinidad, J.F., Carrasco-Ochoa, J.A., Olvera-López, J.A. (eds) Pattern Recognition. MCPR 2023. Lecture Notes in Computer Science, vol 13902. Springer, Cham. https://doi.org/10.1007/978-3-031-33783-3_4

  • DOI: https://doi.org/10.1007/978-3-031-33783-3_4

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-33782-6

  • Online ISBN: 978-3-031-33783-3
