Abstract
Finding principles for choosing the actions of artificial agents such as robots so that they control their environment as effectively as possible is a central topic of current research on intelligent systems. This is especially true in reinforcement learning, where the agent learns through direct interaction with its environment, making a good choice of actions essential. We propose a new approach that predictively ranks different action sets with regard to their influence on the learning performance of an artificial agent. Our approach is based on a measure of control that builds on the concept of mutual information. To evaluate this approach, we investigate how well it predicts the effectiveness of different action sets in “mediated interaction” scenarios. Our results indicate that the mutual information-based measure can yield useful predictions of an action set's aptitude for the learning process.
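The abstract does not spell out how the mutual information-based measure of control is computed; as a minimal illustrative sketch (not the authors' actual method), one can estimate the empirical mutual information I(A; S′) between an agent's discrete actions and the resulting discretized environment states from interaction samples, and rank action sets by this value — higher I(A; S′) suggesting that the action set exerts more control over the environment. The function name and the discretization are assumptions for illustration only.

```python
import numpy as np
from collections import Counter

def mutual_information(actions, outcomes):
    """Empirical mutual information I(A; S') in bits between a
    sequence of discrete actions and the resulting (discretized)
    environment states, estimated from paired samples."""
    n = len(actions)
    joint = Counter(zip(actions, outcomes))  # empirical joint p(a, s')
    p_a = Counter(actions)                   # empirical marginal p(a)
    p_s = Counter(outcomes)                  # empirical marginal p(s')
    mi = 0.0
    for (a, s), count in joint.items():
        p_as = count / n
        mi += p_as * np.log2(p_as / ((p_a[a] / n) * (p_s[s] / n)))
    return mi

# A fully deterministic action->state mapping yields maximal control,
# while actions independent of the outcome yield zero:
print(mutual_information([0, 1, 0, 1], [0, 1, 0, 1]))  # 1.0 bit
print(mutual_information([0, 0, 1, 1], [0, 1, 0, 1]))  # 0.0 bits
```

Note that this plug-in estimator is biased for small sample sizes, a point treated in detail in the entropy-estimation literature (e.g. Paninski, 2003, cited by the paper).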
Acknowledgments
This research/work was supported by the Cluster of Excellence Cognitive Interaction Technology ‘CITEC’ (EXC 277) at Bielefeld University, which is funded by the German Research Foundation (DFG).
Copyright information
© 2017 Springer International Publishing AG
About this paper
Cite this paper
Fleer, S., Ritter, H. (2017). Comparing Action Sets: Mutual Information as a Measure of Control. In: Lintas, A., Rovetta, S., Verschure, P., Villa, A. (eds.) Artificial Neural Networks and Machine Learning – ICANN 2017. Lecture Notes in Computer Science, vol. 10613. Springer, Cham. https://doi.org/10.1007/978-3-319-68600-4_9
Print ISBN: 978-3-319-68599-1
Online ISBN: 978-3-319-68600-4