
Comparing Action Sets: Mutual Information as a Measure of Control

  • Conference paper
  • In: Artificial Neural Networks and Machine Learning – ICANN 2017 (ICANN 2017)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 10613)


Abstract

Finding good principles for choosing the actions of artificial agents such as robots, so that they can control their environment as beneficially as possible, is a central focus of current research on intelligent systems. This is especially true in reinforcement learning, where the agent learns through direct interaction with the environment and a good choice of actions is therefore essential. We propose a new approach that predictively ranks different action sets with regard to their influence on the learning performance of an artificial agent. The approach is based on a measure of control that builds on the concept of mutual information. To evaluate it, we investigate how well it predicts the effectiveness of different action sets in “mediated interaction” scenarios. Our results indicate that the mutual information-based measure can yield useful predictions about the suitability of action sets for the learning process.
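The abstract only outlines the idea; the sketch below is a rough illustration of the general principle rather than the authors' measure. It assumes a purely discrete setting, estimates the mutual information I(A; S') between an executed action A and the observed next state S' from sampled transitions using a simple plug-in (histogram) estimator, and then ranks candidate action sets by that score. All function and variable names are illustrative.

    # A minimal sketch (not the paper's implementation): rank candidate action
    # sets by an empirical, histogram-based estimate of the mutual information
    # I(A; S') between the executed action and the observed next state.
    from collections import Counter
    from math import log2


    def mutual_information(transitions):
        """Plug-in estimate of I(A; S') from a list of (action, next_state)
        pairs; both variables are assumed to be discrete."""
        n = len(transitions)
        p_joint = Counter(transitions)
        p_a = Counter(a for a, _ in transitions)
        p_s = Counter(s for _, s in transitions)
        mi = 0.0
        for (a, s), count in p_joint.items():
            p_as = count / n
            mi += p_as * log2(p_as / ((p_a[a] / n) * (p_s[s] / n)))
        return mi


    def rank_action_sets(samples_per_set):
        """Given {action_set_name: [(action, next_state), ...]} collected by
        executing each action set in the environment, return the sets ordered
        by decreasing estimated control (mutual information)."""
        scores = {name: mutual_information(ts) for name, ts in samples_per_set.items()}
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)


    if __name__ == "__main__":
        # Toy data: in set "B" the next state is fully determined by the action,
        # in set "A" it is independent of it, so "B" should rank first.
        samples = {
            "A": [("left", 0), ("left", 1), ("right", 0), ("right", 1)],
            "B": [("left", 0), ("left", 0), ("right", 1), ("right", 1)],
        }
        print(rank_action_sets(samples))

In this toy setting, the action set whose actions most reliably shape the next state receives the highest score, mirroring the abstract's claim that a mutual information-based measure of control can indicate which action set is better suited for learning.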



Acknowledgments

This work was supported by the Cluster of Excellence Cognitive Interaction Technology ‘CITEC’ (EXC 277) at Bielefeld University, which is funded by the German Research Foundation (DFG).

Author information


Corresponding author

Correspondence to Sascha Fleer.


Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Fleer, S., Ritter, H. (2017). Comparing Action Sets: Mutual Information as a Measure of Control. In: Lintas, A., Rovetta, S., Verschure, P., Villa, A. (eds) Artificial Neural Networks and Machine Learning – ICANN 2017. ICANN 2017. Lecture Notes in Computer Science, vol. 10613. Springer, Cham. https://doi.org/10.1007/978-3-319-68600-4_9


  • DOI: https://doi.org/10.1007/978-3-319-68600-4_9

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-68599-1

  • Online ISBN: 978-3-319-68600-4

  • eBook Packages: Computer Science (R0)
