Impact Statement:
Optimizing energy scheduling in MGs offers a substantial opportunity to decrease overall energy usage. However, the growing complexity of real-time scheduling models, driven by fluctuations in supply and demand, presents significant obstacles. This study introduces a deep reinforcement learning (DRL)-based energy management model that addresses these challenges. By learning from interactions between agents and the environment, the model derives optimal strategies without relying on an intricate system model. The RMADDPG algorithm, which integrates a long short-term memory (LSTM) neural network in its hidden layer, is used to develop energy dispatch strategies, compensating for data gaps caused by privacy constraints by leveraging historical data to infer present conditions. Compared with conventional strategies, the RMADDPG-driven approach significantly lowers the system's electrical consumption, underscoring the effectiveness of DRL techniques in enhancing the energy efficiency of MG systems.
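The core idea above is that a recurrent policy can roll an LSTM over the history of partial observations so the hidden state stands in for unobservable quantities such as SOC. The following is a minimal NumPy sketch of that idea only; the class, layer sizes, and action bounds are illustrative assumptions, not the paper's actual RMADDPG architecture:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RecurrentActor:
    """Hypothetical sketch: an LSTM cell over the observation history,
    followed by a linear policy head. Dimensions are illustrative."""

    def __init__(self, obs_dim, hidden_dim, action_dim, seed=0):
        rng = np.random.default_rng(seed)
        z = obs_dim + hidden_dim
        # One weight matrix and bias per LSTM gate:
        # input (i), forget (f), cell candidate (c), output (o).
        self.W = {g: rng.standard_normal((hidden_dim, z)) * 0.1
                  for g in ("i", "f", "c", "o")}
        self.b = {g: np.zeros(hidden_dim) for g in ("i", "f", "c", "o")}
        self.W_pi = rng.standard_normal((action_dim, hidden_dim)) * 0.1
        self.hidden_dim = hidden_dim

    def act(self, obs_history):
        """Roll the LSTM over a sequence of partial observations and map
        the final hidden state to a deterministic action in [-1, 1]."""
        h = np.zeros(self.hidden_dim)
        c = np.zeros(self.hidden_dim)
        for obs in obs_history:
            x = np.concatenate([obs, h])
            i = sigmoid(self.W["i"] @ x + self.b["i"])
            f = sigmoid(self.W["f"] @ x + self.b["f"])
            g = np.tanh(self.W["c"] @ x + self.b["c"])
            o = sigmoid(self.W["o"] @ x + self.b["o"])
            c = f * c + i * g
            h = o * np.tanh(c)
        return np.tanh(self.W_pi @ h)

# Example: an agent observing 6 features (no SOC) over 24 hourly steps.
actor = RecurrentActor(obs_dim=6, hidden_dim=16, action_dim=2)
history = [np.random.default_rng(t).standard_normal(6) for t in range(24)]
action = actor.act(history)
print(action.shape)  # (2,)
```

In a full MADDPG-style setup each agent would have its own such actor, with a centralized critic used during training; here only the recurrent forward pass is sketched.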
Abstract:
Microgrids (MGs) are essential for enhancing energy efficiency and minimizing power usage through the regulation of energy storage systems. Nevertheless, privacy-related concerns obstruct precise real-time regulation of these systems because state-of-charge (SOC) data are unavailable. This article introduces a self-adaptive energy scheduling optimization framework for MGs that operates without SOC information, utilizing a partially observable Markov game (POMG) to decrease energy usage. Furthermore, to develop an optimal energy scheduling strategy, an MG system optimization approach using recurrent multiagent deep deterministic policy gradient (RMADDPG) is presented. This method is evaluated against existing techniques such as MADDPG, deterministic recurrent policy gradient (DRPG), and independent Q-learning (IQL), demonstrating reductions in electrical energy consumption of 4.29%, 5.56%, and 12.95%, respectively, according to simulation outcomes.
Published in: IEEE Transactions on Artificial Intelligence (Volume: 5, Issue: 11, November 2024)