Predictive control of power demand peak regulation based on deep reinforcement learning
Journal of Building Engineering (IF 6.4), Pub Date: 2023-06-02, DOI: 10.1016/j.jobe.2023.106992
Qiming Fu, Lu Liu, Lifan Zhao, Yunzhe Wang, Yi Zheng, You Lu, Jianping Chen

As urbanization continues to accelerate, effectively managing peak electricity demand becomes increasingly critical to avoid the power outages and system overloads that can negatively impact both buildings and power systems. To tackle this challenge, we propose a novel model-free predictive control method, "Dynamic Dual Predictive Control-Deep Deterministic Policy Gradient" (D2PC-DDPG), based on a deep reinforcement learning framework. Our method employs the Deep Forest-Deep Q-Network (DF-DQN) model to predict electricity demand across multiple buildings and, based on the DF-DQN output, applies the Deep Deterministic Policy Gradient (DDPG) algorithm to optimize the coordinated control of energy storage systems, including hot and chilled water storage tanks in multiple buildings. Experimental results show that the proposed DF-DQN model outperforms traditional machine learning, deep learning, and reinforcement learning methods in prediction accuracy, as measured by mean absolute error (MAE), mean absolute percentage error (MAPE), and root mean square error (RMSE). Moreover, our D2PC-DDPG method achieves superior control performance and peak load reduction compared with other reinforcement learning methods and a rule-based control (RBC) method. Specifically, our method reduced peak load by 27.1% and 21.4% over a two-week period in the same regions. To demonstrate the generalizability of D2PC-DDPG, we tested it in five different regions against the RBC method; on average, it reduced ramping, 1-load_factor, average_daily_peak, and peak_demand by 16.6%, 7%, 9.2%, and 11%, respectively. These findings demonstrate the effectiveness and practicality of the proposed method for addressing critical energy management issues in diverse urban environments.
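For readers unfamiliar with the control half of the pipeline, the following is a minimal, illustrative sketch of a DDPG update step, not the authors' implementation: the state is assumed to concatenate DF-DQN demand forecasts with storage tank states, and the continuous actions are taken to be charge/discharge rates for the hot and chilled water tanks. All dimensions, network sizes, and hyperparameters below are assumptions for illustration only.

```python
# Minimal DDPG sketch for storage control (illustrative; not the paper's code).
# Assumed state: DF-DQN demand forecasts + tank states; assumed actions:
# charge/discharge rates for hot and chilled water tanks, scaled to [-1, 1].
import torch
import torch.nn as nn

STATE_DIM = 8    # assumed: forecast demand for several buildings + tank states
ACTION_DIM = 2   # assumed: one rate per storage tank (hot, chilled)

class Actor(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, ACTION_DIM), nn.Tanh())  # bound actions to [-1, 1]

    def forward(self, s):
        return self.net(s)

class Critic(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.ReLU(),
            nn.Linear(64, 1))  # Q(s, a)

    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))

actor, critic = Actor(), Critic()
target_actor, target_critic = Actor(), Critic()
target_actor.load_state_dict(actor.state_dict())
target_critic.load_state_dict(critic.state_dict())
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)
GAMMA, TAU = 0.99, 0.005  # assumed discount factor and Polyak rate

def ddpg_update(batch):
    s, a, r, s2 = batch  # transitions sampled from a replay buffer
    # Critic: regress Q(s, a) toward the bootstrapped target.
    with torch.no_grad():
        q_target = r + GAMMA * target_critic(s2, target_actor(s2))
    critic_loss = nn.functional.mse_loss(critic(s, a), q_target)
    critic_opt.zero_grad()
    critic_loss.backward()
    critic_opt.step()
    # Actor: ascend Q under the current deterministic policy.
    actor_loss = -critic(s, actor(s)).mean()
    actor_opt.zero_grad()
    actor_loss.backward()
    actor_opt.step()
    # Polyak averaging of target networks.
    for tgt, src in ((target_actor, actor), (target_critic, critic)):
        for tp, p in zip(tgt.parameters(), src.parameters()):
            tp.data.mul_(1 - TAU).add_(TAU * p.data)

# One illustrative update on random stand-in data (a real run would use
# transitions collected from the building simulation environment):
batch = (torch.randn(32, STATE_DIM), torch.rand(32, ACTION_DIM) * 2 - 1,
         torch.randn(32, 1), torch.randn(32, STATE_DIM))
ddpg_update(batch)
```

In the actual method, the reward would penalize peak demand and ramping so that the learned policy shifts tank charging away from forecast peaks; the sketch above only shows the generic DDPG mechanics.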




Updated: 2023-06-07