Abstract
We merge the techniques of passivity-based control (PBC) and reinforcement learning (RL) in a robotic context, with the goal of learning passive control policies. We frame our contribution in a scenario where PBC is implemented by means of virtual energy tanks, a control technique developed to achieve closed-loop passivity for any arbitrary control input. Combining RL with energy tanks makes it possible to learn control policies that, under proper conditions, are structurally passive. Simulations validate the approach and point to novel research directions in energy-aware robotics.
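The core mechanism behind a virtual energy tank can be illustrated with a minimal sketch: the tank stores a scalar energy budget, the controller (here a stand-in for an RL policy) may inject energy into the plant only while the tank holds enough energy, and injections above the threshold are blocked. This is a generic, hedged illustration of the tank principle, not the authors' implementation; the function name, thresholds, and the random "policy" are all illustrative assumptions.

```python
import numpy as np

def tank_gated_control(u, v, E, E_min=0.05, E_max=2.0, dt=0.01):
    """Gate a control force u through a virtual energy tank of energy E.

    P = u * v is the power the controller injects into the plant
    (v is the plant velocity). Injecting energy (P > 0) drains the
    tank; extracting it (P < 0) refills the tank up to a cap E_max.
    If an injection would push the tank below E_min, the action is
    zeroed, so the closed loop can never extract more energy than
    the tank provides -- the essence of tank-based passivation.
    """
    P = u * v
    if P > 0 and E - P * dt < E_min:
        u, P = 0.0, 0.0          # tank nearly empty: block injection
    E = min(E - P * dt, E_max)   # update tank energy, capped at E_max
    return u, E

# 1-DOF unit mass driven by an arbitrary action sequence
# (a random signal standing in for a learned policy output)
m, v, E, dt = 1.0, 0.0, 1.0, 0.01
rng = np.random.default_rng(0)
for _ in range(1000):
    u_raw = rng.normal(scale=2.0)
    u, E = tank_gated_control(u_raw, v, E, dt=dt)
    v += (u / m) * dt
```

Regardless of what the policy outputs, the tank energy never falls below its threshold, which is what makes the gated closed loop passive by construction.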
The research leading to these results has received funding from the European Union’s Horizon Europe Framework Programme under grant agreement No 101070596 (euROBIN).
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Zanella, R., Califano, F., Secchi, C., Stramigioli, S. (2024). Learning Passive Policies. In: Secchi, C., Marconi, L. (eds) European Robotics Forum 2024. ERF 2024. Springer Proceedings in Advanced Robotics, vol 32. Springer, Cham. https://doi.org/10.1007/978-3-031-76424-0_60
Print ISBN: 978-3-031-76423-3
Online ISBN: 978-3-031-76424-0
eBook Packages: Intelligent Technologies and Robotics (R0)