Deep Reinforcement Learning for Smart Home Energy Management
IEEE Internet of Things Journal (IF 10.6) Pub Date: 2019-12-03, DOI: 10.1109/jiot.2019.2957289
Liang Yu, Weiwei Xie, Di Xie, Yulong Zou, Dengyin Zhang, Zhixin Sun, Linghua Zhang, Yue Zhang, Tao Jiang

In this article, we investigate an energy cost minimization problem for a smart home in the absence of a building thermal dynamics model with the consideration of a comfortable temperature range. Due to the existence of model uncertainty, parameter uncertainty (e.g., renewable generation output, nonshiftable power demand, outdoor temperature, and electricity price), and temporally coupled operational constraints, it is very challenging to design an optimal energy management algorithm for scheduling heating, ventilation, and air conditioning systems and energy storage systems in the smart home. To address the challenge, we first formulate the above problem as a Markov decision process, and then propose an energy management algorithm based on deep deterministic policy gradients. It is worth mentioning that the proposed algorithm does not require the prior knowledge of uncertain parameters and building the thermal dynamics model. The simulation results based on real-world traces demonstrate the effectiveness and robustness of the proposed algorithm.
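To make the abstract's formulation more concrete, below is a minimal, hypothetical sketch of the kind of MDP and deep deterministic policy gradient (DDPG) setup it describes: a deterministic actor that outputs HVAC and energy storage actions, a critic that estimates the action value, and a reward that trades off energy cost against comfort-range violations. The state layout, network sizes, comfort bounds, and reward weights are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch of a DDPG-style agent for HVAC + energy storage scheduling.
# All dimensions, bounds, and weights below are assumptions for illustration.
import numpy as np
import torch
import torch.nn as nn

# Assumed state: [indoor temp, outdoor temp, renewable output,
#                 nonshiftable demand, electricity price, battery SoC]
STATE_DIM = 6
# Assumed action: [HVAC power level in [-1, 1], ESS charge/discharge in [-1, 1]]
ACTION_DIM = 2


class Actor(nn.Module):
    """Deterministic policy mu(s) -> a, as used in DDPG."""

    def __init__(self, state_dim: int, action_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim), nn.Tanh(),  # bounded actions
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)


class Critic(nn.Module):
    """Action-value network Q(s, a) used to train the actor."""

    def __init__(self, state_dim: int, action_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([state, action], dim=-1))


def reward(energy_cost: float, indoor_temp: float,
           t_min: float = 19.0, t_max: float = 24.0,
           comfort_weight: float = 0.5) -> float:
    """Illustrative reward: negative energy cost minus a penalty for leaving
    the comfortable temperature range (assumed bounds and weight)."""
    violation = max(0.0, t_min - indoor_temp) + max(0.0, indoor_temp - t_max)
    return -energy_cost - comfort_weight * violation


if __name__ == "__main__":
    actor, critic = Actor(STATE_DIM, ACTION_DIM), Critic(STATE_DIM, ACTION_DIM)
    s = torch.tensor(np.random.rand(1, STATE_DIM), dtype=torch.float32)
    a = actor(s)            # HVAC power and ESS charge/discharge decisions
    q = critic(s, a)        # value estimate used for actor updates in DDPG
    print(a.detach().numpy(), q.item(), reward(1.2, 25.5))
```

In a full DDPG training loop, transitions (s, a, r, s') would be stored in a replay buffer and target networks would be updated slowly, so that the policy learns directly from observed traces without requiring a building thermal dynamics model.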

Updated: 2020-04-22