
Extracting Symbolic Function Expressions by Means of Neural Networks

  • Conference paper
Image Processing and Communications Challenges 2

Part of the book series: Advances in Intelligent and Soft Computing ((AINSC,volume 84))


Summary

In this paper, a new neural network capable of extracting knowledge from empirical data [1]–[6] is presented. The network utilizes the idea proposed in [2] and developed in [3,4]. Two variants of the network are shown that differ in the activation functions of their neurons. One variant uses logarithmic and exponential activation functions, while the other is based on reciprocal activation functions. The first variant is similar to that proposed in [3]; the difference is that in our network the logarithmic activation function is applied to hidden-layer neurons, whereas in [3] it is applied to the input signals. In the second variant, all activation functions are of the 1/x type. To the authors' knowledge, such a network has not been published in the literature so far. Like that of [3], our network provides a real-valued symbolic relationship between input and output signals, derived from numerical data describing the signals. The relationship is a continuous function created from a given set of input–output numerical data during network training. Extraction of the symbolic function expression is carried out after the training is finished; the expression is formed by taking into account the network structure and the synaptic connection weights of its neurons. The ability to extract knowledge, also called law discovery, is a consequence of applying appropriate activation functions to the neurons in the hidden and output layers of the network. The network under consideration can also play the inverse role: instead of extracting a symbolic relation, it can serve as a neural realization of a continuous function expressed in symbolic form. The presented theory is illustrated by an example.
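The core idea behind log/exp activation networks of this kind (as in the product units of [2] and the law-discovery network of [3]) can be sketched in a few lines. This is a hypothetical illustration, not the authors' exact architecture: a network computing y = exp(w1·ln x1 + w2·ln x2) is equivalent to y = x1^w1 · x2^w2, so after training, the weights can be read off directly as the exponents of a symbolic product-of-powers law. In log space the model is linear, so the sketch fits the weights by ordinary least squares on synthetic data generated from a hidden law:

```python
import math
import random

# Hypothetical sketch (not the paper's exact code): a "log-exp" network
# computes y = exp(w1*ln(x1) + w2*ln(x2)) = x1^w1 * x2^w2, so the trained
# weights are directly readable as symbolic exponents.

random.seed(0)
pairs = [(random.uniform(1.0, 5.0), random.uniform(1.0, 5.0)) for _ in range(200)]
samples = [(x1, x2, x1 ** 2 / x2) for x1, x2 in pairs]  # hidden law: y = x1^2 / x2

# In log space the network is linear: ln y = w1*ln x1 + w2*ln x2.
# Fit w1, w2 by solving the 2x2 normal equations of least squares.
a11 = a12 = a22 = b1 = b2 = 0.0
for x1, x2, y in samples:
    u, v, t = math.log(x1), math.log(x2), math.log(y)
    a11 += u * u; a12 += u * v; a22 += v * v
    b1 += u * t;  b2 += v * t
det = a11 * a22 - a12 * a12
w1 = (b1 * a22 - b2 * a12) / det
w2 = (a11 * b2 - a12 * b1) / det

# "Extract" the symbolic relationship from the learned weights.
print(f"recovered law: y = x1^{w1:.2f} * x2^{w2:.2f}")
```

Because the training data exactly satisfy the assumed functional form, the recovered exponents are w1 ≈ 2 and w2 ≈ −1, i.e. the symbolic law y = x1²/x2. A reciprocal-activation (1/x) variant, as in the paper's second network, would extract expressions built from reciprocals instead of powers.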


References

  1. Fu, L.M.: Knowledge Discovery by Inductive Neural Networks. IEEE Trans. on Knowledge and Data Engineering 11(6) (1999)

  2. Durbin, R., Rumelhart, D.: Product Units: A Computationally Powerful and Biologically Plausible Extension to Backpropagation Networks. Neural Computation 1, 133–142 (1989)

  3. Saito, K., Nakano, R.: Law Discovery Using Neural Networks. In: Proc. of the 15th International Joint Conference on Artificial Intelligence, pp. 1078–1083 (1997)

  4. Ismail, A., Engelbrecht, A.P.: Training Product Units in Feedforward Neural Networks Using Particle Swarm Optimization. In: Bajic, V.B., Sha, D. (eds.) Proceedings of the International Conference on Artificial Intelligence: Development and Practice of Artificial Intelligence Techniques, Durban, South Africa, pp. 36–40 (1999)

  5. Tickle, A.B., Andrews, R., Golea, M., Diederich, J.: The Truth Will Come to Light: Directions and Challenges in Extracting the Knowledge Embedded Within Trained Artificial Neural Networks. IEEE Trans. on Neural Networks 9(6) (1998)

  6. Fu, L.M.: Learning in Certainty-Factor-Based Multilayer Neural Networks for Classification. IEEE Trans. on Neural Networks 9(1) (1998)


Copyright information

© 2010 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Majewski, J., Wojtyna, R. (2010). Extracting Symbolic Function Expressions by Means of Neural Networks. In: Choraś, R.S. (eds) Image Processing and Communications Challenges 2. Advances in Intelligent and Soft Computing, vol 84. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-16295-4_37


  • DOI: https://doi.org/10.1007/978-3-642-16295-4_37

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-16294-7

  • Online ISBN: 978-3-642-16295-4

  • eBook Packages: Engineering (R0)
