Real-Time Optimal Power Flow: A Lagrangian based Deep Reinforcement Learning Approach
IEEE Transactions on Power Systems (IF 6.6) Pub Date: 2020-07-01, DOI: 10.1109/tpwrs.2020.2987292
Ziming Yan, Yan Xu

High penetration of intermittent renewable energy sources has introduced significant uncertainty and variability into modern power systems. To respond rapidly and economically to changes in the power system operating state, this letter proposes a real-time optimal power flow (RT-OPF) approach using Lagrangian-based deep reinforcement learning (DRL) in the continuous action domain. A DRL agent that determines RT-OPF decisions is constructed and optimized using the deep deterministic policy gradient. The DRL action-value function is designed to model the RT-OPF objective and constraints simultaneously. Instead of using a critic network, the deterministic policy gradient is derived analytically. The proposed method is tested on the IEEE 118-bus system. Compared with state-of-the-art methods, it achieves high solution optimality and constraint compliance in real time.
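To make the mechanism concrete, below is a minimal sketch (in Python with PyTorch, not taken from the letter) of the core idea: the action-value is written as an analytic Lagrangian of the control action, i.e., generation cost plus a multiplier-weighted constraint violation, so the deterministic policy gradient can be obtained by differentiating it directly through the actor, with no critic network to learn. All dimensions, cost coefficients, limits, and the multiplier value are illustrative assumptions.

import torch
import torch.nn as nn

STATE_DIM, N_GEN = 20, 5                  # hypothetical problem sizes
a_cost = torch.rand(N_GEN)                # assumed quadratic cost coefficients
b_cost = torch.rand(N_GEN)                # assumed linear cost coefficients
p_max = torch.ones(N_GEN)                 # assumed per-unit generation limits
lam = 10.0                                # assumed fixed Lagrange multiplier

# Deterministic policy: maps an operating state to generator set-points.
actor = nn.Sequential(
    nn.Linear(STATE_DIM, 64), nn.ReLU(),
    nn.Linear(64, N_GEN), nn.Sigmoid())   # output in [0, 1], scaled by p_max

def lagrangian_q(state, action):
    """Analytic action-value: negative generation cost minus a
    multiplier-weighted power-balance violation. Both terms are known
    in closed form, so no critic network has to be learned."""
    p = action * p_max
    cost = (a_cost * p ** 2 + b_cost * p).sum(dim=-1)
    demand = state[..., 0] * N_GEN        # assumed: first state feature ~ load
    violation = (p.sum(dim=-1) - demand).abs()
    return -(cost + lam * violation)

optimizer = torch.optim.Adam(actor.parameters(), lr=1e-3)
states = torch.rand(32, STATE_DIM)        # a batch of sampled operating states

# Deterministic policy gradient: differentiate the analytic Q through the
# actor's output, replacing the critic update of standard DDPG.
loss = -lagrangian_q(states, actor(states)).mean()
optimizer.zero_grad()
loss.backward()
optimizer.step()

In the letter itself the Lagrangian would cover the full set of OPF network constraints rather than the single power-balance term used here; this sketch only illustrates how folding objective and constraints into one analytic action-value removes the need for a critic network.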
