A Multi-Agent Deep Reinforcement Learning Approach for a Distributed Energy Marketplace in Smart Grids
arXiv - CS - Systems and Control Pub Date : 2020-09-23 , DOI: arxiv-2009.10905 Arman Ghasemi, Amin Shojaeighadikolaei, Kailani Jones, Morteza Hashemi, Alexandru G. Bardas, Reza Ahmadi
This paper presents a Reinforcement Learning (RL) based energy market for a
prosumer-dominated microgrid. The proposed market model facilitates a real-time,
demand-dependent dynamic pricing environment, which reduces grid costs and
improves the economic benefits for prosumers. Furthermore, this market model
enables the grid operator to leverage prosumers' storage capacity as a
dispatchable asset for grid support applications. Simulation results based on
the Deep Q-Network (DQN) framework demonstrate significant improvements in the
24-hour cumulative profit for both prosumers and the grid operator, as well
as major reductions in grid reserve power utilization.
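The abstract describes prosumer agents that learn when to dispatch stored energy under demand-dependent prices. The paper uses a DQN; as a loose illustration only, the sketch below substitutes tabular Q-learning for the neural network and uses an invented price curve, marginal cost, and reward structure (none of these numbers come from the paper). A single agent learns, per hour of day, whether to hold or sell stored energy.

```python
import math
import random

# Hypothetical setup (not the paper's model): state = hour of day (0-23),
# actions = 0 (hold) or 1 (sell stored energy).
random.seed(0)
HOURS, ACTIONS = 24, 2

# Invented demand-dependent price curve with an evening peak around 18:00.
price = [0.10 + 0.15 * math.exp(-((h - 18) ** 2) / 8.0) for h in range(HOURS)]
MARGINAL_COST = 0.12  # assumed cost of discharging storage, per unit

def step_reward(hour, action):
    # Selling earns the current price minus the assumed marginal cost;
    # holding earns nothing this step.
    return price[hour] - MARGINAL_COST if action == 1 else 0.0

# Tabular Q-learning (a simplification of the paper's DQN approach).
q = [[0.0, 0.0] for _ in range(HOURS)]
alpha, gamma, eps = 0.1, 0.95, 0.1  # learning rate, discount, exploration
for _ in range(500):  # episodes over the 24-hour cycle
    for h in range(HOURS):
        # Epsilon-greedy action selection.
        if random.random() < eps:
            a = random.randrange(ACTIONS)
        else:
            a = 0 if q[h][0] >= q[h][1] else 1
        r = step_reward(h, a)
        nxt = (h + 1) % HOURS  # hours advance cyclically
        # Standard Q-learning update toward the bootstrapped target.
        q[h][a] += alpha * (r + gamma * max(q[nxt]) - q[h][a])

# Greedy policy: the agent should learn to sell near the evening price peak
# (where price exceeds the marginal cost) and hold otherwise.
policy = [0 if qh[0] >= qh[1] else 1 for qh in q]
```

In the paper's multi-agent setting each prosumer would run its own learner and the grid operator's pricing would respond to aggregate behavior; this single-agent, fixed-price-curve sketch only shows the basic value-learning loop.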
Updated: 2020-09-24