Comparative analysis of model-free and model-based HVAC control for residential demand response
- ORNL
In this paper, we present a comparative analysis of model-free reinforcement learning (RL) and model predictive control (MPC) approaches for intelligent control of heating, ventilation, and air-conditioning (HVAC). A deep Q-network (DQN) is used as the model-free RL algorithm. The two control strategies were developed for a residential demand-response (DR) HVAC system. We considered MPC as our gold standard against which to compare DQN's performance. The question we tried to answer through this work was: what percentage of MPC's performance can be achieved by a model-free RL approach for intelligent HVAC control? Based on our test results, RL achieved on average ≈62% of MPC's daily cost savings. Considering the pure-optimization, model-based nature of MPC methods, the RL approach showed very promising performance. We believe that the interpretations derived from this comparative analysis provide useful insights for choosing among various DR approaches and for further enhancing the performance of RL-based methods for building energy management.
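To make the model-free side of the comparison concrete, the sketch below shows how a DQN-style agent for DR HVAC control might be structured. This is not the authors' implementation: the thermal dynamics, price schedule, comfort penalty, and all hyperparameters are invented for illustration, and a linear Q-function approximator stands in for the deep network to keep the example self-contained.

```python
import numpy as np

class ToyHVACEnv:
    """Toy single-zone cooling environment (illustrative, not from the paper).
    Observation: (indoor temp in C, electricity price in $/kWh).
    Action: 0 = compressor off, 1 = compressor on."""
    def __init__(self):
        self.reset()

    def reset(self):
        self.temp, self.t = 26.0, 0
        return self._obs()

    def _price(self):
        # Hypothetical time-of-use tariff: peak price from noon to 6 pm.
        return 0.30 if 12 <= (self.t % 24) < 18 else 0.10

    def _obs(self):
        return np.array([self.temp, self._price()])

    def step(self, action):
        outdoor = 32.0
        # Simple first-order dynamics: drift toward outdoor temp, cooling pulls down.
        self.temp += 0.1 * (outdoor - self.temp) - 1.5 * action
        energy_cost = action * 1.0 * self._price()            # 1 kWh per on-step
        discomfort = 0.05 * max(0.0, self.temp - 24.0) ** 2   # penalty above 24 C
        reward = -(energy_cost + discomfort)
        self.t += 1
        return self._obs(), reward, self.t >= 24              # 24-hour episode

class LinearQAgent:
    """Q(s, a) = w[a] . phi(s), trained with a TD(0) update and
    epsilon-greedy exploration -- a linear stand-in for a DQN."""
    def __init__(self, n_features=3, n_actions=2, lr=0.05, gamma=0.95, eps=0.1, seed=0):
        self.w = np.zeros((n_actions, n_features))
        self.lr, self.gamma, self.eps = lr, gamma, eps
        self.rng = np.random.default_rng(seed)

    def phi(self, obs):
        return np.array([1.0, obs[0] / 30.0, obs[1]])  # bias, scaled temp, price

    def q(self, obs):
        return self.w @ self.phi(obs)

    def act(self, obs):
        if self.rng.random() < self.eps:
            return int(self.rng.integers(self.w.shape[0]))
        return int(np.argmax(self.q(obs)))

    def update(self, obs, a, r, obs_next, done):
        target = r + (0.0 if done else self.gamma * np.max(self.q(obs_next)))
        td_error = target - self.q(obs)[a]
        self.w[a] += self.lr * td_error * self.phi(obs)

def train(episodes=200):
    env, agent = ToyHVACEnv(), LinearQAgent()
    returns = []
    for _ in range(episodes):
        obs, done, total = env.reset(), False, 0.0
        while not done:
            a = agent.act(obs)
            obs_next, r, done = env.step(a)
            agent.update(obs, a, r, obs_next, done)
            obs, total = obs_next, total + r
        returns.append(total)
    return agent, returns
```

In the paper's setting, the reward would combine the DR electricity cost with an occupant-comfort term, and the DQN would replace the linear approximator with a neural network plus experience replay and a target network; the control loop itself has the same shape as this sketch.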
- Research Organization:
- Oak Ridge National Laboratory (ORNL), Oak Ridge, TN (United States)
- Sponsoring Organization:
- USDOE Office of Energy Efficiency and Renewable Energy (EERE)
- DOE Contract Number:
- AC05-00OR22725
- OSTI ID:
- 1837403
- Resource Relation:
- Conference: Second SIGEnergy Workshop on Reinforcement Learning for Energy Management in Buildings & Cities (RLEM), Coimbra, Portugal, 11/17/2021
- Country of Publication:
- United States
- Language:
- English