Deep reinforcement learning for energy management in a microgrid with flexible demand
Sustainable Energy Grids & Networks ( IF 5.4 ) Pub Date : 2020-11-27 , DOI: 10.1016/j.segan.2020.100413
Taha Abdelhalim Nakabi , Pekka Toivanen

In this paper, we study the performance of various deep reinforcement learning algorithms for enhancing the energy management system of a microgrid. We propose a novel microgrid model that consists of a wind turbine generator, an energy storage system, a set of thermostatically controlled loads, a set of price-responsive loads, and a connection to the main grid. The proposed energy management system is designed to coordinate the different sources of flexibility by defining resource priorities, direct demand-control signals, and electricity prices. Seven deep reinforcement learning algorithms are implemented and empirically compared in this paper. The numerical results show that the algorithms differ widely in their ability to converge to optimal policies. By adding an experience replay and a semi-deterministic training phase to the well-known asynchronous advantage actor–critic algorithm, we achieved the highest model performance as well as convergence to near-optimal policies.
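The paper's environment and training code are not reproduced here. As a rough illustration of the experience-replay idea the abstract describes grafting onto A3C, the following is a minimal replay-buffer sketch; the class and parameter names are hypothetical and do not come from the paper:

```python
import random
from collections import deque


class ReplayBuffer:
    """Fixed-capacity FIFO store of (state, action, reward, next_state, done)
    transitions, sampled uniformly for off-policy updates."""

    def __init__(self, capacity=10000):
        # deque with maxlen evicts the oldest transition once full
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Uniformly sample a minibatch of past transitions
        batch = random.sample(self.buffer, batch_size)
        states, actions, rewards, next_states, dones = zip(*batch)
        return states, actions, rewards, next_states, dones

    def __len__(self):
        return len(self.buffer)
```

In an A3C variant like the one the authors describe, each worker would push its transitions into such a buffer and periodically draw minibatches from it, rather than learning only from the most recent on-policy rollout.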




Updated: 2020-12-05