
Learning Situation Dependent Success Rates of Actions in a RoboCup Scenario

Conference paper
PRICAI 2000 Topics in Artificial Intelligence (PRICAI 2000)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 1886)


Abstract

A quickly changing, unpredictable environment complicates autonomous decision making in a system of mobile robots. To simplify action selection, we suggest a suitable reduction of the decision space by restricting the number of executable actions the agent can choose from. We use supervised neural learning to automatically learn the success rates of actions and thus facilitate decision making. To determine its probabilities of success, each agent relies on its sensory data. We show that our approach computes probabilities of success close to the real success rates of actions, and we also present results of games played by a RoboCup simulation team based on this approach.
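To make the learning step concrete, the following is a minimal sketch of training a situation-dependent success-rate estimator by supervised learning. The paper trains one neural network per action; here a logistic-regression stand-in on synthetic data illustrates the idea. The feature layout, data, and learning rate are assumptions for illustration, not taken from the paper.

```python
# Sketch: supervised learning of a success-rate estimator.
# Synthetic data and a logistic-regression stand-in for the
# paper's per-action neural network (assumptions, not the
# authors' implementation).
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training set: situation features (e.g. distances and
# angles from the agent's sensors) paired with observed outcomes,
# success (1) or failure (0), of executing the action.
X = rng.normal(size=(500, 4))
y = (X @ np.array([1.5, -1.0, 0.5, 0.0]) + rng.normal(size=500) > 0).astype(float)

w = np.zeros(4)
b = 0.0
for _ in range(2000):                       # plain gradient descent
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # estimated success rates
    grad = p - y                            # cross-entropy gradient
    w -= 0.1 * X.T @ grad / len(y)
    b -= 0.1 * grad.mean()

# Estimate the success rate for a new, unseen situation.
situation = rng.normal(size=4)
rate = 1.0 / (1.0 + np.exp(-(situation @ w + b)))
print(f"estimated success rate: {rate:.2f}")
```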

The RoboCup soccer server offers soccer agents a small set of low-level commands to choose from every 100 ms, mainly: turn(angle), dash(power), and kick(power, angle). If we treat the task of playing soccer as an optimization problem, the aim is to control our agents in the given environment such that they score more goals than the opponent team. We can estimate the number of possible policies by discretizing the angle and power values of the low-level commands: assuming 72 possible angles (5-degree steps) to turn or kick towards and 10 power levels to dash or kick with, a player possessing the ball chooses among 72 + 72 x 10 + 10 = 802 different commands at each time step. Over a period of five minutes (3000 decision cycles of 100 ms) this yields up to 802^3000 different policies for a single agent.

This forces us to reduce the number of possible choices per time step. To do so, we introduce a number of actions, such as pass, shoot2goal, or go2ball, from which the agent can choose. We compute explicit situation-dependent success rates for these actions using neural networks (one per action). From all promising actions (those whose estimated success rate exceeds a threshold), the one ranked highest in a priority list (shoot2goal is ranked higher than pass, ...) is chosen for execution; a sketch of this selection rule appears below.

To evaluate our concept we compared estimated success rates with real success rates and played simulation games against different teams. In addition to these statistics, our concept proved quite successful in official games of our team Karlsruhe Brainstormers against several simulator league teams of 1999. For further information please contact Sebastian Buck.
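The selection rule described above fits in a few lines. This is a minimal sketch, not the authors' implementation: the per-action estimators stand in for the trained networks, and the threshold value and exact priority order are illustrative assumptions.

```python
# Sketch: threshold-and-priority action selection over per-action
# success-rate estimators (stubs stand in for trained networks).
from typing import Callable, Dict, List

# One success-rate estimator per action, each mapping the agent's
# sensory situation to an estimated probability of success.
estimators: Dict[str, Callable[[List[float]], float]] = {
    "shoot2goal": lambda s: 0.3,  # stand-in for a trained neural net
    "pass":       lambda s: 0.7,
    "go2ball":    lambda s: 0.9,
}

# Actions ranked by preference: shoot2goal outranks pass, and so on.
PRIORITY = ["shoot2goal", "pass", "go2ball"]
THRESHOLD = 0.5  # assumed cut-off for a "promising" action

def select_action(situation: List[float]) -> str:
    """Return the highest-priority action whose estimated
    success rate exceeds the threshold."""
    for action in PRIORITY:
        if estimators[action](situation) > THRESHOLD:
            return action
    return PRIORITY[-1]  # no action is promising: fall back

print(select_action([0.0] * 8))  # -> "pass" with the stub estimators
```

Choosing by a fixed priority list rather than by the maximal estimate favors higher-value actions (e.g. a shot on goal) whenever they merely look promising enough, instead of always taking the safest option.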




Copyright information

© 2000 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Buck, S., Riedmiller, M. (2000). Learning Situation Dependent Success Rates of Actions in a RoboCup Scenario. In: Mizoguchi, R., Slaney, J. (eds) PRICAI 2000 Topics in Artificial Intelligence. PRICAI 2000. Lecture Notes in Computer Science (LNAI), vol 1886. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-44533-1_98


  • DOI: https://doi.org/10.1007/3-540-44533-1_98

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-67925-7

  • Online ISBN: 978-3-540-44533-3

  • eBook Packages: Springer Book Archive
