
Learning Passive Policies

  • Conference paper
European Robotics Forum 2024 (ERF 2024)

Part of the book series: Springer Proceedings in Advanced Robotics (SPAR, volume 32)


Abstract

We merge the techniques of passivity-based control (PBC) and reinforcement learning (RL) in a robotic context, with the goal of learning passive control policies. We frame our contribution in a scenario where PBC is implemented by means of virtual energy tanks, a control technique developed to achieve closed-loop passivity for an arbitrary control input. The use of RL in combination with energy tanks makes it possible to learn control policies that, under proper conditions, are structurally passive. Simulations show the validity of the approach and point to novel research directions in energy-aware robotics.
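To make the tank-based passivation concrete, the following is a minimal sketch under assumptions of our own (the class name EnergyTank, the threshold e_min, and the gating variable alpha are illustrative, not the authors' exact formulation): a virtual tank with state x_t stores energy E = 0.5 * x_t**2, the action proposed by a learned policy is routed through the tank, and injection is disabled when the tank is nearly empty, so the plant-plus-tank interconnection remains passive regardless of how the policy was trained.

```python
import numpy as np

class EnergyTank:
    """Minimal virtual energy tank (illustrative sketch, not the paper's exact scheme).

    The tank stores E = 0.5 * x_t**2. Whatever power the gated action injects
    into the plant is withdrawn from the tank, so the controller never creates
    energy and the closed loop stays passive as long as the tank is not depleted.
    """

    def __init__(self, e_init=1.0, e_min=1e-3):
        self.x_t = np.sqrt(2.0 * e_init)  # tank state, E = 0.5 * x_t**2
        self.e_min = e_min                # depletion threshold

    def energy(self):
        return 0.5 * self.x_t ** 2

    def passify(self, tau_des, q_dot, dt):
        """Gate the desired action tau_des given the joint velocities q_dot."""
        power_out = float(np.dot(tau_des, q_dot))  # power tau_des would inject into the plant
        # Allow injection only while the tank has energy; extraction (power_out <= 0) refills it.
        alpha = 1.0 if (self.energy() > self.e_min or power_out <= 0.0) else 0.0
        # Tank dynamics: dE/dt = -alpha * power_out, i.e. dx_t/dt = -alpha * power_out / x_t
        self.x_t += dt * (-alpha * power_out / self.x_t)
        return alpha * tau_des


# Illustrative usage: a learned policy proposes tau_des at every step, and the
# tank returns the action that is actually applied to the robot.
tank = EnergyTank(e_init=2.0)
policy = lambda obs: np.tanh(obs[:2])            # stand-in for a trained RL actor
obs = np.array([0.3, -0.1, 0.05, 0.02])          # e.g. [q, q_dot] for a 2-DoF system
tau = tank.passify(policy(obs), q_dot=obs[2:], dt=0.01)
```

Because passivity is enforced by the tank rather than by the policy, training can in principle use any off-the-shelf RL algorithm; how the tank level enters the learning problem (e.g. through the reward or the observation) is a design choice, and the paper's precise conditions for structural passivity may differ from this sketch.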

The research leading to these results has received funding from the European Union’s Horizon Europe Framework Programme under grant agreement No 101070596 (euROBIN).



Author information


Corresponding author

Correspondence to Riccardo Zanella.



Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Zanella, R., Califano, F., Secchi, C., Stramigioli, S. (2024). Learning Passive Policies. In: Secchi, C., Marconi, L. (eds) European Robotics Forum 2024. ERF 2024. Springer Proceedings in Advanced Robotics, vol 32. Springer, Cham. https://doi.org/10.1007/978-3-031-76424-0_60

