Research article · BuildSys '23 Conference Proceedings · DOI: 10.1145/3600100.3625684

Smart Home Energy Cost Minimisation Using Energy Trading with Deep Reinforcement Learning

Published: 15 November 2023

Abstract

Dynamic pricing for electrical energy provision has recently been adopted in several countries. While it is primarily intended to help Distribution System Operators (DSOs) improve load balancing, it also gives smart homes equipped with power generation units such as photovoltaics (PV) and energy storage systems an opportunity to exploit price changes and thus reduce overall costs. However, the latter case remains largely unexplored in the literature, as most research focuses on community-based optimisation or on managing energy consumption by controlling smart home appliances. To address this gap, we propose a Deep Reinforcement Learning (DRL)-based approach to single-house energy trading that can reduce energy costs in a smart home. We demonstrate in a simulated environment, under two energy pricing schemes, that our solution can minimise the total cost of energy provision. Moreover, we show that the learned energy trading approach leads to up to 47.66% lower costs for end users compared to conventional rule-based approaches.
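
The abstract describes the approach only at a high level; the full problem formulation and training algorithm are given in the paper itself. Purely as an illustration of how this class of problem is commonly set up, the sketch below frames single-house energy trading as a reinforcement-learning environment in which the agent observes PV generation, household load, battery state of charge and the current buy/sell tariffs, and is rewarded with the negative energy cost. All class names, parameters and price values here are hypothetical and are not taken from the paper.

```python
import numpy as np


class SingleHomeTradingEnv:
    """Toy hourly environment for smart-home energy trading with PV and a battery.

    Hypothetical formulation for illustration only; not the paper's actual model.
    """

    def __init__(self, pv, load, buy_price, sell_price,
                 capacity_kwh=10.0, max_power_kw=3.0):
        # Hourly PV generation and household load (kWh) and hourly tariffs (cost per kWh).
        self.pv, self.load = pv, load
        self.buy_price, self.sell_price = buy_price, sell_price
        self.capacity, self.max_power = capacity_kwh, max_power_kw
        self.reset()

    def reset(self):
        self.t = 0
        self.soc = 0.5 * self.capacity          # start with a half-charged battery
        return self._obs()

    def _obs(self):
        return np.array([self.pv[self.t], self.load[self.t], self.soc,
                         self.buy_price[self.t], self.sell_price[self.t]])

    def step(self, action):
        # action in [-1, 1]: positive charges the battery, negative discharges it.
        battery_kwh = float(np.clip(action, -1.0, 1.0)) * self.max_power
        battery_kwh = float(np.clip(battery_kwh, -self.soc, self.capacity - self.soc))
        self.soc += battery_kwh

        # Energy drawn from the grid (positive) or exported to it (negative).
        net = self.load[self.t] - self.pv[self.t] + battery_kwh
        cost = net * self.buy_price[self.t] if net > 0 else net * self.sell_price[self.t]
        reward = -cost                          # minimise the total energy bill

        self.t += 1
        done = self.t >= len(self.load)
        return (None if done else self._obs()), reward, done, {}


# Example roll-out over one simulated day with made-up data and a random policy
# standing in for a trained DRL agent.
rng = np.random.default_rng(0)
env = SingleHomeTradingEnv(pv=rng.uniform(0.0, 2.0, 24), load=rng.uniform(0.2, 1.5, 24),
                           buy_price=np.full(24, 0.30), sell_price=np.full(24, 0.10))
obs, total_reward, done = env.reset(), 0.0, False
while not done:
    obs, reward, done, _ = env.step(rng.uniform(-1.0, 1.0))
    total_reward += reward
print(f"Total cost over the day: {-total_reward:.2f}")
```

A standard DRL agent, for example DQN or an actor-critic method, could then be trained against such an environment, with rule-based charging and discharging strategies serving as the cost baseline, mirroring the comparison reported in the abstract.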

Cited By

  • (2024) Smart Homes, Smarter Savings: Energy Trading with Deep Reinforcement Learning. 2024 IEEE 22nd Mediterranean Electrotechnical Conference (MELECON), 19–24. https://doi.org/10.1109/MELECON56669.2024.10608536. Online publication date: 25 June 2024.

    Published In

    BuildSys '23: Proceedings of the 10th ACM International Conference on Systems for Energy-Efficient Buildings, Cities, and Transportation
    November 2023
    567 pages
    ISBN: 9798400702303
    DOI: 10.1145/3600100
    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 15 November 2023

    Author Tags

    1. Deep Reinforcement Learning
    2. Energy Trading
    3. HEMS

    Qualifiers

    • Research-article
    • Research
    • Refereed limited

    Conference

    BuildSys '23

    Acceptance Rates

    Overall Acceptance Rate 148 of 500 submissions, 30%

    Article Metrics

    • Downloads (last 12 months): 61
    • Downloads (last 6 weeks): 4

    Reflects downloads up to 15 Feb 2025
