
Incorporating Adaptive RNN-Based Action Inference and Sensory Perception

  • Conference paper
Artificial Neural Networks and Machine Learning – ICANN 2019: Text and Time Series (ICANN 2019)

Part of the book series: Lecture Notes in Computer Science ((LNTCS,volume 11730))


Abstract

In this paper we investigate how directional distance signals can be incorporated into RNN-based adaptive goal-directed behavior inference mechanisms, which are closely related to formalizations of active inference. It was shown previously that RNNs can effectively infer goal-directed action control policies online. This is achieved by projecting hypothetical environmental interactions, dependent on anticipated motor neural activities, into the future and back-projecting the discrepancies between predicted and desired future states onto the motor neural activities. Here, we integrate distance signals surrounding a simulated robot flying in a 2D space into this active motor inference process. As a result, local obstacle avoidance emerges in a natural manner. We demonstrate in several experiments with static as well as dynamic obstacle constellations that a simulated flying robot controlled by our RNN-based procedure automatically avoids collisions while pursuing goal-directed behavior. Moreover, we show that flight-direction-dependent regulation of the sensory sensitivity facilitates fast and smooth traversals through tight, maze-like environments. In conclusion, it appears that local and global objectives can be integrated seamlessly into RNN-based, model-predictive active inference processes, as long as the objectives do not yield competing gradients.
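The forward/back-projection scheme described in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: a trivial 2D integrator stands in for the trained RNN forward model, and all parameters (horizon, step size, obstacle margin, penalty weight) are hypothetical. The sketch shows the core loop only: roll the anticipated actions forward, compare the predicted final state with the desired goal, add a local penalty derived from distance to an obstacle, and back-project the combined discrepancy onto the motor commands by gradient descent.

```python
import numpy as np

# Illustrative active-inference-style action optimisation.
# A simple integrator s_{t+1} = s_t + dt * a_t replaces the trained RNN;
# all constants below are assumptions chosen for this sketch.
T, DT = 20, 0.05                    # planning horizon and integration step
start = np.zeros(2)
goal = np.array([1.0, 0.0])
obstacle = np.array([0.5, 0.05])    # placed just off the direct path
margin, w_obs = 0.2, 1.0            # safety radius and penalty weight

def rollout(actions):
    """Forward model: project anticipated actions into future states."""
    states = [start]
    for a in actions:
        states.append(states[-1] + DT * a)
    return np.array(states)

def penalty_grad(s):
    """Gradient of the obstacle penalty max(0, margin - dist)^2 w.r.t. s."""
    diff = s - obstacle
    d = np.linalg.norm(diff) + 1e-9
    if d >= margin:
        return np.zeros(2)
    return -2.0 * (margin - d) * diff / d

actions = np.zeros((T, 2))
for _ in range(400):
    states = rollout(actions)
    # discrepancy between predicted and desired final state (+ local penalty)
    grad_s = 2.0 * (states[-1] - goal) + w_obs * penalty_grad(states[-1])
    grad_a = np.zeros_like(actions)
    for t in range(T - 1, -1, -1):
        grad_a[t] = DT * grad_s                      # ds_{t+1}/da_t = dt * I
        grad_s = grad_s + w_obs * penalty_grad(states[t])
    actions -= 0.5 * grad_a                          # back-project onto actions

states = rollout(actions)
print("goal error:", np.linalg.norm(states[-1] - goal))
print("min obstacle distance:",
      min(np.linalg.norm(s - obstacle) for s in states))
```

After optimisation, the inferred action sequence reaches the goal while the trajectory detours around the obstacle: the global (goal) and local (avoidance) gradients combine additively on the same motor variables, which is the seamless integration the abstract refers to.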



Author information

Corresponding author

Correspondence to Sebastian Otte.



Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Otte, S., Stoll, J., Butz, M.V. (2019). Incorporating Adaptive RNN-Based Action Inference and Sensory Perception. In: Tetko, I., Kůrková, V., Karpov, P., Theis, F. (eds) Artificial Neural Networks and Machine Learning – ICANN 2019: Text and Time Series. ICANN 2019. Lecture Notes in Computer Science(), vol 11730. Springer, Cham. https://doi.org/10.1007/978-3-030-30490-4_44


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-30489-8

  • Online ISBN: 978-3-030-30490-4

  • eBook Packages: Computer Science, Computer Science (R0)
