Abstract
Existing neural network approaches to optimisation are limited in the classes of problems they can solve: convergence theorems based on Lyapunov functions typically restrict these techniques to minimising quadratic objective functions. This paper proposes a new neural network approach that can solve a broad variety of continuous optimisation problems, since it makes no assumptions about the nature of the objective function. The approach comprises two stages: first, a feedforward neural network is trained to approximate the objective function from a sample of evaluated data points; then, a feedback neural network performs gradient descent on this approximation. The final solution is a local minimum of the approximated function, which should coincide with a true local minimum of the objective if the learning has been accurate. The proposed method is evaluated on the De Jong test suite: a collection of continuous optimisation problems featuring characteristics such as saddle points, discontinuities, and noise.
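The two-stage procedure can be illustrated with a minimal Python sketch. The sketch assumes a one-hidden-layer tanh network trained by batch gradient descent as the surrogate model, and replaces the paper's feedback network with explicit iterative gradient steps on the learned surrogate; the architecture, sample size, learning rates, and the choice of De Jong's sphere function as the objective are all illustrative assumptions, not the authors' published settings.

import numpy as np

# All hyperparameters below are illustrative assumptions, not the
# authors' published settings.
rng = np.random.default_rng(0)

def objective(x):                       # De Jong's sphere function as a stand-in
    return np.sum(x**2, axis=-1)

# --- Stage 1: sample the objective and fit a surrogate ---
d, n = 2, 400
X = rng.uniform(-5.12, 5.12, size=(n, d))
y = objective(X)

# One-hidden-layer MLP: f_hat(x) = w2 . tanh(W1 x + b1) + b2
h = 32
W1 = rng.normal(0, 0.5, (h, d)); b1 = np.zeros(h)
w2 = rng.normal(0, 0.5, h);      b2 = 0.0

def forward(X):
    Z = np.tanh(X @ W1.T + b1)          # hidden activations, shape (n, h)
    return Z @ w2 + b2, Z

lr = 1e-3
for epoch in range(3000):               # plain batch gradient descent on MSE
    pred, Z = forward(X)
    err = pred - y
    # Backpropagate the squared-error loss through the network.
    gw2 = Z.T @ err / n
    gb2 = err.mean()
    dZ = np.outer(err, w2) * (1 - Z**2) # tanh derivative
    gW1 = dZ.T @ X / n
    gb1 = dZ.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    w2 -= lr * gw2; b2 -= lr * gb2

# --- Stage 2: gradient descent on the learned surrogate ---
# (Emulates the paper's feedback network with explicit gradient steps.)
def surrogate_grad(x):
    z = np.tanh(W1 @ x + b1)
    # d f_hat / dx = W1^T (w2 * (1 - z^2)) by the chain rule.
    return W1.T @ (w2 * (1 - z**2))

x = rng.uniform(-5.12, 5.12, d)         # random starting point
for _ in range(500):
    x -= 0.05 * surrogate_grad(x)

print("surrogate minimiser:", x, "objective value there:", objective(x))

Because descent is performed on the surrogate rather than the objective itself, the quality of the final solution depends entirely on how accurately Stage 1 fits the sampled region, which is why the paper stresses accurate learning.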
References
K. A. Smith, “Neural Networks for Combinatorial Optimisation: a review of more than a decade of research”, INFORMS Journal on Computing, vol. 11, no. 1, pp. 15–34, 1999.
J. J. Hopfield and D. W. Tank, “‘Neural’ Computation of Decisions in Optimization Problems”, Biological Cybernetics, vol. 52, pp. 141–152, 1985.
R. Durbin and D. Willshaw, “An analogue approach to the travelling salesman problem using an elastic net method”, Nature, vol. 326, pp. 689–691, 1987.
K. Smith, M. Palaniswami and M. Krishnamoorthy, “Neural Techniques for Combinatorial Optimisation with Applications”, IEEE Transactions on Neural Networks, vol. 9, no. 6, pp. 1301–1318, 1998.
K. A. De Jong, “An analysis of the behavior of a class of genetic adaptive systems”, Doctoral Dissertation, University of Michigan, Dissertation Abstracts International, vol. 36, no. 10, 5140B, 1975.
K. Hornik, M. Stinchcombe and H. White, “Multilayer feedforward networks are universal approximators”, Neural Networks, vol. 2, pp. 359–366, 1989.
A. R. Barron, “Universal approximation bounds for superpositions of a sigmoidal function”, IEEE Transactions on Information Theory, vol. 39, no. 3, pp. 930–945, 1993.
Copyright information
© 2001 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Smith, K.A., Gupta, J.N.D. (2001). Continuous Function Optimisation via Gradient Descent on a Neural Network Approximation Function. In: Mira, J., Prieto, A. (eds) Connectionist Models of Neurons, Learning Processes, and Artificial Intelligence. IWANN 2001. Lecture Notes in Computer Science, vol 2084. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-45720-8_89
DOI: https://doi.org/10.1007/3-540-45720-8_89
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-42235-8
Online ISBN: 978-3-540-45720-6