Self-Dispatch of Wind-Storage Integrated System: A Deep Reinforcement Learning Approach
IEEE Transactions on Sustainable Energy (IF 8.6), Pub Date: 2022-03-07, DOI: 10.1109/tste.2022.3156426
Xiangyu Wei, Yue Xiang, Junlong Li, Xin Zhang

The uncertainty of wind power and electricity price restricts the profitability of a wind-storage integrated system (WSS) participating in the real-time market (RTM). This paper presents a self-dispatch model for the WSS based on deep reinforcement learning (DRL). The designed model learns the integrated bidding and charging policy of the WSS from historical data. In addition, the model adopts a maximum-entropy framework together with distributed prioritized experience replay, known as Ape-X. Ape-X decouples acting from learning during training through a central shared replay memory, improving the efficiency and performance of the DRL procedure. Meanwhile, the maximum-entropy framework encourages the agent to explore multiple near-optimal actions, so the learned policy remains more stable under the uncertainty of wind power and electricity price. Compared with traditional methods, the model yields higher profits for the wind farm while ensuring robustness.
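
The abstract names two ingredients: Ape-X style decoupling of acting from learning through a central shared prioritized replay memory, and a maximum-entropy objective that keeps the bidding/charging policy exploratory under price and wind uncertainty. The paper does not publish code here; the minimal Python sketch below only illustrates these two generic ideas, and every name in it (SharedPrioritizedReplay, entropy_regularized_return, the toy WSS transitions, alpha) is an illustrative assumption rather than the authors' implementation.

```python
import heapq
import itertools
import math
import random
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass(order=True)
class _Entry:
    priority: float                      # min-heap: lowest-priority transition is evicted first
    count: int                           # tie-breaker so transitions themselves are never compared
    transition: Tuple = field(compare=False, default=())


class SharedPrioritizedReplay:
    """Central replay memory shared by all actors (the Ape-X pattern)."""

    def __init__(self, capacity: int = 10_000):
        self.capacity = capacity
        self.heap: List[_Entry] = []
        self._counter = itertools.count()

    def push(self, transition: Tuple, priority: float) -> None:
        entry = _Entry(priority, next(self._counter), transition)
        if len(self.heap) >= self.capacity:
            heapq.heappushpop(self.heap, entry)   # drop the lowest-priority transition
        else:
            heapq.heappush(self.heap, entry)

    def sample(self, batch_size: int) -> List[Tuple]:
        # Sample with probability proportional to priority (proportional PER).
        weights = [e.priority for e in self.heap]
        picks = random.choices(self.heap, weights=weights, k=batch_size)
        return [e.transition for e in picks]


def entropy_regularized_return(rewards: List[float],
                               action_probs: List[float],
                               alpha: float = 0.2) -> float:
    """Maximum-entropy objective: reward plus alpha times the negative log-prob of each action."""
    return sum(r - alpha * math.log(max(p, 1e-8)) for r, p in zip(rewards, action_probs))


if __name__ == "__main__":
    buffer = SharedPrioritizedReplay()
    # Two toy "actors" push fake WSS transitions (state, bid/charge action, reward).
    for actor_id in range(2):
        for t in range(50):
            transition = (f"state_{actor_id}_{t}", "bid_and_charge", random.random())
            td_error = random.random()            # priority proxy, as in Ape-X
            buffer.push(transition, priority=td_error)
    batch = buffer.sample(8)                      # the learner pulls a prioritized batch
    print(len(batch), "transitions sampled for one learner update")
    print("entropy-regularized return:",
          entropy_regularized_return([1.0, 0.5], [0.7, 0.4]))
```

The decoupling is what matters in this sketch: actors only push experience with a priority, the learner only samples, so the two can run at different speeds or on different machines, which is the efficiency argument the abstract makes for Ape-X.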

Updated: 2022-03-07