Multi-agent Reinforcement Learning-Based Energy Orchestrator for Cyber-Physical Systems

  • Conference paper
Algorithmic Aspects of Cloud Computing (ALGOCLOUD 2023)

Abstract

To reach a low-emission future we need to change our behaviour and habits, and advances in embedded systems and artificial intelligence can support that change. The smart building concept and energy management are key to increasing the use of renewable sources over fossil fuels. In addition, Cyber-Physical Systems (CPS) provide an abstraction of service management that allows virtual and physical systems to be integrated. In this paper, we propose to use Multi-Agent Reinforcement Learning (MARL) to model the CPS services control plane in a smart house, with the aim of minimising the use of non-renewable energy (a fuel generator) by shifting or shutting down services so as to exploit solar production and batteries. Moreover, our proposal dynamically adapts its behaviour in real time according to current and historical energy production, and can therefore cope with occasional changes in energy production caused by meteorological phenomena or with unexpected energy consumption. To evaluate our proposal, we have developed an open-source smart building energy simulator and deployed our use case in it. Finally, several simulations are run to verify the performance, showing that the reinforcement learning solution outperforms the heuristic-based solution in both power consumption and adaptability.
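As a rough illustration of the control-plane idea described above, the sketch below shows one tabular Q-learning agent for a single controllable service: it observes the current solar production and battery level and chooses whether to run, shift, or shut the service down, with a reward that penalises any energy that would have to come from the fuel generator. This is only a minimal sketch of the general MARL approach, not the paper's implementation; the names (ServiceAgent, step_reward), the action set, and the reward weights are assumptions, and the authors' open-source simulator (Sim-PowerCS) is not used here.

```python
import random
from collections import defaultdict

ACTIONS = ("run", "shift", "shutdown")


class ServiceAgent:
    """Tabular Q-learning controller for a single CPS service (illustrative)."""

    def __init__(self, alpha=0.1, gamma=0.95, epsilon=0.1):
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        # Q-table: state -> {action: value}, filled lazily on first visit.
        self.q = defaultdict(lambda: {a: 0.0 for a in ACTIONS})

    @staticmethod
    def state(solar_kw, battery_soc):
        # Discretise the observation so a tabular Q-function is enough.
        return (round(solar_kw), round(battery_soc * 10))

    def act(self, s):
        if random.random() < self.epsilon:            # explore
            return random.choice(ACTIONS)
        return max(self.q[s], key=self.q[s].get)      # exploit

    def learn(self, s, a, reward, s_next):
        # Standard one-step Q-learning update.
        best_next = max(self.q[s_next].values())
        self.q[s][a] += self.alpha * (reward + self.gamma * best_next - self.q[s][a])


def step_reward(action, demand_kw, solar_kw, battery_kw):
    """Penalise energy that would have to come from the fuel generator, plus a
    small comfort penalty for shifting or shutting down a service
    (weights are illustrative assumptions)."""
    served = demand_kw if action == "run" else 0.0
    generator_kw = max(0.0, served - solar_kw - battery_kw)
    comfort = {"run": 0.0, "shift": 0.3, "shutdown": 1.0}[action]
    return -generator_kw - comfort


# One illustrative control step for a 2 kW service.
agent = ServiceAgent()
s = ServiceAgent.state(solar_kw=1.2, battery_soc=0.6)
a = agent.act(s)
r = step_reward(a, demand_kw=2.0, solar_kw=1.2, battery_kw=0.5)
s_next = ServiceAgent.state(solar_kw=1.0, battery_soc=0.55)
agent.learn(s, a, r, s_next)
```

In a full multi-agent setup, one such agent would be instantiated per service and trained against simulated or historical solar-production and consumption traces.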

Acknowledgements

This work was supported by the FPI Grant 21463/FPI/20 of the Seneca Foundation in the Region of Murcia (Spain) and partially funded by the FLUIDOS project of the European Union’s Horizon Europe Research and Innovation Programme under Grant Agreement No. 101070473, and by the ONOFRE project (Grant No. PID2020-112675RB-C44) funded by MCIN/AEI/10.13039/501100011033.

Author information

Corresponding author

Correspondence to Alberto Robles-Enciso.

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Robles-Enciso, A., Robles-Enciso, R., Skarmeta, A.F. (2024). Multi-agent Reinforcement Learning-Based Energy Orchestrator for Cyber-Physical Systems. In: Chatzigiannakis, I., Karydis, I. (eds) Algorithmic Aspects of Cloud Computing. ALGOCLOUD 2023. Lecture Notes in Computer Science, vol 14053. Springer, Cham. https://doi.org/10.1007/978-3-031-49361-4_6

  • DOI: https://doi.org/10.1007/978-3-031-49361-4_6

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-49360-7

  • Online ISBN: 978-3-031-49361-4

  • eBook Packages: Computer Science, Computer Science (R0)
