Adaptive Load Shedding for Grid Emergency Control via Deep Reinforcement Learning
arXiv - CS - Systems and Control Pub Date : 2021-02-25 , DOI: arxiv-2102.12908
Ying Zhang, Meng Yue, Jianhui Wang

Emergency control, such as under-voltage load shedding (UVLS), is widely used to address low-voltage and voltage-instability issues in practical power systems under contingencies. However, existing emergency control schemes are rule-based and cannot adapt to uncertain and varying operating conditions. This paper proposes an adaptive UVLS algorithm for emergency control via deep reinforcement learning (DRL) and expert systems. We first construct dynamic components that model power system operation as the environment. The transient voltage recovery criterion, which imposes time-varying requirements on UVLS, is integrated into the states and the reward function to guide the learning of the deep neural networks. The proposed approach avoids the reward-coefficient tuning problem that has been regarded as a deficiency of existing DRL-based algorithms. Extensive case studies show that the proposed method outperforms the traditional UVLS relay in both timeliness and efficacy of emergency control.
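To make the abstract's design concrete, the sketch below shows one generic way a time-varying transient voltage recovery criterion can enter a DRL reward signal. All numbers (the recovery envelope breakpoints and the load-shedding penalty weight) are illustrative assumptions, not values from the paper; in particular, the paper claims a formulation free of reward-coefficient tuning, whereas this minimal sketch still uses an explicit penalty weight for clarity.

```python
def voltage_threshold(t_after_clear):
    """Hypothetical piecewise transient-voltage-recovery envelope (p.u.).

    Bus voltages are required to recover above this time-varying level
    after fault clearing. Breakpoints are illustrative only.
    """
    if t_after_clear < 0.33:
        return 0.70
    elif t_after_clear < 0.50:
        return 0.80
    else:
        return 0.90


def reward(bus_voltages, t_after_clear, load_shed_frac):
    """Reward at one simulation step: penalize envelope violations and
    the amount of load shed (0.1 is an illustrative weight)."""
    v_min = voltage_threshold(t_after_clear)
    violation = sum(max(0.0, v_min - v) for v in bus_voltages)
    return -violation - 0.1 * load_shed_frac
```

Feeding `t_after_clear` (or the current envelope value) into the state vector alongside bus voltages is what lets a learned policy track the time-varying requirement rather than a fixed undervoltage setpoint.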

Updated: 2021-02-26