Exchanging Advice and Learning to Trust

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 2782)

Abstract

One of the most important features of “intelligent behaviour” is the ability to learn from experience. The introduction of Multiagent Systems brings new challenges to research in Machine Learning. New difficulties, but also new advantages, appear when learning takes place in an environment in which agents can communicate and cooperate. The main question that drives this work is: “How can agents benefit from communication with their peers during the learning process to improve their individual and global performance?” We are particularly interested in environments where speed and bandwidth limitations do not allow highly structured communication, and where learning agents may use different algorithms. The concept of advice-exchange, which started out as a mixture of reinforcement and supervised learning procedures, is developing into a meta-learning architecture that allows learning agents to improve their learning skills by exchanging information with their peers. This paper reports the latest experiments and results on this subject.
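The abstract outlines the core idea only at a high level; the following is a minimal, illustrative Python sketch of what such an advice-exchange loop could look like, not the authors' actual algorithm. It assumes tabular Q-learning as the local learner and a simple scalar trust value per advisor; the class name, parameters, and trust-update rule are illustrative assumptions introduced here.

    # Illustrative sketch of advice-exchange with a learned trust value.
    # Each agent learns on its own (tabular Q-learning here, though peers could
    # use different algorithms), occasionally asks a better-performing peer
    # which action it would take, and adjusts its trust in that peer based on
    # how the advised actions work out.

    import random
    from collections import defaultdict

    class AdviceExchangeAgent:
        def __init__(self, n_actions, alpha=0.1, gamma=0.9, epsilon=0.1):
            self.q = defaultdict(lambda: [0.0] * n_actions)  # state -> action values
            self.n_actions = n_actions
            self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
            self.trust = defaultdict(lambda: 0.5)            # advisor id -> trust in [0, 1]
            self.avg_reward = 0.0                            # running performance estimate

        def advise(self, state):
            """Return the action this agent would take in `state` (greedy policy)."""
            values = self.q[state]
            return values.index(max(values))

        def act(self, state, peers):
            """Choose an action, possibly deferring to the most trusted better-performing peer."""
            better = [p for p in peers if p.avg_reward > self.avg_reward]
            if better:
                advisor = max(better, key=lambda p: self.trust[id(p)])
                if random.random() < self.trust[id(advisor)]:
                    return advisor.advise(state), advisor
            # Otherwise fall back to ordinary epsilon-greedy exploration.
            if random.random() < self.epsilon:
                return random.randrange(self.n_actions), None
            return self.advise(state), None

        def learn(self, state, action, reward, next_state, advisor=None):
            """Standard Q-learning update, plus a crude trust update for the advisor."""
            target = reward + self.gamma * max(self.q[next_state])
            self.q[state][action] += self.alpha * (target - self.q[state][action])
            self.avg_reward += 0.01 * (reward - self.avg_reward)
            if advisor is not None:
                # Advice that pays off raises trust; advice that does not lowers it.
                delta = 0.05 if reward > self.avg_reward else -0.05
                self.trust[id(advisor)] = min(1.0, max(0.0, self.trust[id(advisor)] + delta))

In this sketch, a piece of advice is just the advisor's greedy action for the current state, which is the kind of lightweight, loosely structured message the abstract's bandwidth constraint suggests; peers may internally use entirely different learning algorithms as long as they can answer that query.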

Copyright information

© 2003 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Nunes, L., Oliveira, E. (2003). Exchanging Advice and Learning to Trust. In: Klusch, M., Omicini, A., Ossowski, S., Laamanen, H. (eds) Cooperative Information Agents VII. CIA 2003. Lecture Notes in Computer Science, vol 2782. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-45217-1_19

  • DOI: https://doi.org/10.1007/978-3-540-45217-1_19

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-40798-0

  • Online ISBN: 978-3-540-45217-1
