Abstract
There is growing evidence that the human brain builds its neural circuits under environmental guidance, a process that increases its learning flexibility. Similarly, artificial neural networks with dynamic topologies have been shown to circumvent the problem of determining, in advance, the topology best suited to a given application. This paper presents a modular, structure-adaptable artificial neural network architecture for autonomous control systems, consisting of an unsupervised learning network, a reinforcement learning module, and a planning module. Finally, we extend the state representation of the environment with short-term memories to deal with the problem of partial observability in the real world.
A. Pérez-Uribe is supported by the Centre Suisse d'Électronique et de Microtechnique (CSEM), Neuchâtel, Switzerland.
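The abstract's idea of extending the state representation with short-term memories can be illustrated with a minimal sketch: a tabular Q-learning agent whose "state" is a sliding window of its most recent observations, so that histories rather than single (possibly ambiguous) observations index the value table. This is a generic illustration of the technique, not the paper's implementation; the class name, window size, and learning parameters below are all illustrative assumptions.

```python
from collections import defaultdict, deque
import random

class ShortTermMemoryAgent:
    """Tabular Q-learning over a short-term observation window.

    The memory window stands in for the true (hidden) environment
    state, a simple way to mitigate partial observability.
    All names and parameters are illustrative, not from the paper.
    """

    def __init__(self, actions, window=2, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)          # Q[(state, action)] -> value
        self.actions = actions
        self.memory = deque(maxlen=window)   # short-term observation memory
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def state(self):
        # The memory window itself serves as the agent's state.
        return tuple(self.memory)

    def observe(self, observation):
        self.memory.append(observation)

    def act(self):
        # Epsilon-greedy action selection over the windowed state.
        s = self.state()
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(s, a)])

    def update(self, prev_state, action, reward, new_state):
        # Standard one-step Q-learning backup.
        best_next = max(self.q[(new_state, a)] for a in self.actions)
        target = reward + self.gamma * best_next
        self.q[(prev_state, action)] += self.alpha * (target - self.q[(prev_state, action)])
```

Two identical observations reaching the agent in different histories map to different windowed states, which is precisely what lets the learner disambiguate perceptually aliased situations.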
Copyright information
© 1999 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Pérez-Uribe, A., Sanchez, E. (1999). Structure adaptation in artificial neural networks through adaptive clustering and through growth in state space. In: Mira, J., Sánchez-Andrés, J.V. (eds) Foundations and Tools for Neural Modeling. IWANN 1999. Lecture Notes in Computer Science, vol 1606. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0098213
DOI: https://doi.org/10.1007/BFb0098213
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-66069-9
Online ISBN: 978-3-540-48771-5