
Self-improving behavior arbitration

  • Chapter
Foundations of Computer Science

Part of the book series: Lecture Notes in Computer Science ((LNCS,volume 1337))

Abstract

During the last few years, the behavior-based approach has emerged as an efficient alternative to classical methods of designing robot control structures. Its success has largely resulted from the bottom-up development of a number of fast, tightly coupled control processes, each specifically designed for a particular agent-environment situation. This new approach, however, has some important limitations owing to its lack of goal-directedness and flexibility. In earlier work we presented a model for an architecture that deals with some of these problems. The architecture is based on two levels of arbitration: a local level, which enables the robot to survive in a particular real-world situation, and a global level, which ensures that the robot's reactions are consistent with the required goal. In this paper the emphasis is put on local arbitration. We show how the local priorities can be computed and learned, and present simulation results.
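The local-level arbitration described in the abstract can be pictured with a minimal sketch: behaviors compete through numeric priorities, the highest-priority behavior is selected to act, and the selected behavior's priority is adjusted from the reward it receives. This is an illustrative assumption only; the behavior names, the reward values, and the simple delta-rule update below are hypothetical and do not reproduce the chapter's actual priority computation.

```python
import random

class Behavior:
    """A behavior competing in local-level arbitration (illustrative)."""
    def __init__(self, name, priority=0.0):
        self.name = name
        self.priority = priority

def arbitrate(behaviors):
    # Local arbitration: the behavior with the highest learned priority acts.
    return max(behaviors, key=lambda b: b.priority)

def update_priority(behavior, reward, lr=0.1):
    # Delta-rule update (an assumed learning scheme): move the winner's
    # priority toward the reward observed in this situation.
    behavior.priority += lr * (reward - behavior.priority)

behaviors = [Behavior("avoid_obstacle"),
             Behavior("follow_wall"),
             Behavior("go_to_goal")]

# Hypothetical rewards for one fixed agent-environment situation.
rewards = {"avoid_obstacle": 1.0, "follow_wall": 0.2, "go_to_goal": 0.5}

random.seed(0)
for step in range(200):
    # Occasional random exploration so every behavior receives feedback.
    winner = random.choice(behaviors) if random.random() < 0.3 else arbitrate(behaviors)
    update_priority(winner, rewards[winner.name])

best = arbitrate(behaviors)
```

Under these assumed rewards, the priorities converge toward the reward each behavior earns, so arbitration comes to favor the consistently best-rewarded behavior in that situation.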



Editor information

Christian Freksa, Matthias Jantzen, Rüdiger Valk

Copyright information

© 1997 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Salah Hamdi, M., Kaiser, K. (1997). Self-improving behavior arbitration. In: Freksa, C., Jantzen, M., Valk, R. (eds) Foundations of Computer Science. Lecture Notes in Computer Science, vol 1337. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0052114

  • DOI: https://doi.org/10.1007/BFb0052114

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-63746-2

  • Online ISBN: 978-3-540-69640-7

  • eBook Packages: Springer Book Archive
