Abstract
The paper presents an application of an algorithm that solves the shortest path problem in an arbitrary deterministic environment in linear time, using the OFF ROUTE acting method and an emotional agent architecture. In general, the complexity of the algorithm does not depend on the number of states n but only on the length of the shortest path; in the worst case the complexity is at most O(n). The algorithm is applied to the Tower of Hanoi problem.
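The paper's emotional-agent algorithm itself is not reproduced in this preview. Purely as a point of reference for the problem setting, the sketch below uses a standard breadth-first search (not the paper's method) over the Tower of Hanoi state graph to compute a shortest move sequence; the state encoding and function names are illustrative assumptions.

```python
from collections import deque

def hanoi_neighbors(state):
    # state: a tuple of 3 tuples; each inner tuple lists the disks on a peg,
    # smallest disk last (i.e. the last element is the top of the peg).
    for src in range(3):
        if not state[src]:
            continue
        disk = state[src][-1]
        for dst in range(3):
            if dst == src:
                continue
            if state[dst] and state[dst][-1] < disk:
                continue  # illegal: cannot place a larger disk on a smaller one
            pegs = [list(p) for p in state]
            pegs[src].pop()
            pegs[dst].append(disk)
            yield tuple(tuple(p) for p in pegs)

def shortest_path_length(start, goal):
    # Plain BFS: linear in the size of the explored state graph,
    # unlike the paper's algorithm, whose cost is tied to the path length.
    frontier = deque([(start, 0)])
    seen = {start}
    while frontier:
        state, dist = frontier.popleft()
        if state == goal:
            return dist
        for nxt in hanoi_neighbors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, dist + 1))
    return None  # goal unreachable

n = 3
start = (tuple(range(n, 0, -1)), (), ())   # all disks on peg 0, largest at bottom
goal = ((), (), tuple(range(n, 0, -1)))    # all disks moved to peg 2
print(shortest_path_length(start, goal))   # 7, i.e. 2**n - 1 for n = 3
```

For n disks the shortest solution has 2^n - 1 moves, which this search recovers; the paper's contribution is reaching such paths with cost bounded by the path length rather than the full state space.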
Copyright information
© 2009 Springer-Verlag Berlin Heidelberg
Cite this paper
Petruseva, S. (2009). Forward Chaining Algorithm for Solving the Shortest Path Problem in Arbitrary Deterministic Environment in Linear Time - Applied for the Tower of Hanoi Problem. In: Mertsching, B., Hund, M., Aziz, Z. (eds) KI 2009: Advances in Artificial Intelligence. KI 2009. Lecture Notes in Computer Science, vol 5803. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-04617-9_84
DOI: https://doi.org/10.1007/978-3-642-04617-9_84
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-04616-2
Online ISBN: 978-3-642-04617-9