
Algorithmically Transitive Network: Learning Padé Networks for Regression

  • Conference paper
Bio-Inspired Models of Network, Information, and Computing Systems (BIONETICS 2012)

Abstract

The learning capability of a network-based computation model named the “Algorithmically Transitive Network (ATN)” is studied extensively on symbolic regression problems. To represent a variety of functions uniformly, the ATN’s topological structure is designed in the form of a truncated power series or a Padé approximant. Because Padé approximation has better convergence properties than the Taylor expansion, an ATN in Padé form can construct an algebraic function with a relatively small number of parameters. The ATN learns with the standard back-propagation algorithm, which optimizes its intra-network parameters by steepest descent. Numerical experiments on benchmark problems show that the ATN in Padé form has better learning capability than linear regression in a power series, a standard multi-layer neural network trained with back-propagation, a support vector machine with a radial-basis-function kernel, or simple genetic programming.
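The core idea of the abstract can be illustrated outside the ATN framework. The sketch below (not the paper's implementation; target function, network order, and learning rate are illustrative assumptions) fits a [2/2] Padé approximant to sample data by steepest descent on the squared error, the same style of gradient-based parameter update the abstract describes:

```python
import numpy as np

# Minimal sketch, assuming a [2/2] Pade form
#   y(x) = (a0 + a1*x + a2*x^2) / (1 + b1*x + b2*x^2)
# fitted by steepest descent on the mean squared error, in the spirit of
# the ATN's back-propagation learning (this is NOT the paper's ATN code).

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 50)
y = np.exp(x)                       # hypothetical regression target

a = rng.normal(scale=0.1, size=3)   # numerator coefficients a0..a2
b = rng.normal(scale=0.1, size=2)   # denominator coefficients b1..b2
lr = 0.05                           # learning rate (assumed value)

X = np.vstack([np.ones_like(x), x, x**2]).T   # basis [1, x, x^2]
Xb = X[:, 1:]                                  # basis [x, x^2]

for _ in range(5000):
    num = X @ a
    den = 1.0 + Xb @ b
    err = num / den - y            # residual of the Pade prediction
    # Analytic gradients of the MSE with respect to a and b
    grad_a = (2.0 / len(x)) * X.T @ (err / den)
    grad_b = (2.0 / len(x)) * Xb.T @ (err * (-num / den**2))
    a -= lr * grad_a               # steepest-descent updates
    b -= lr * grad_b

mse = np.mean((X @ a / (1.0 + Xb @ b) - y) ** 2)
print(f"final MSE: {mse:.2e}")
```

With only five parameters the rational form fits exp(x) on [-1, 1] closely, which illustrates the abstract's point that a Padé-structured network can represent a function with relatively few parameters.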



Author information

Correspondence to Hideaki Suzuki.


Copyright information

© 2014 Institute for Computer Sciences, Social Informatics and Telecommunications Engineering

About this paper

Cite this paper

Suzuki, H. (2014). Algorithmically Transitive Network: Learning Padé Networks for Regression. In: Di Caro, G., Theraulaz, G. (eds) Bio-Inspired Models of Network, Information, and Computing Systems. BIONETICS 2012. Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, vol 134. Springer, Cham. https://doi.org/10.1007/978-3-319-06944-9_11


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-06943-2

  • Online ISBN: 978-3-319-06944-9

