Control of battery charging based on reinforcement learning and long short-term memory networks
Computers & Electrical Engineering (IF 4.3), Pub Date: 2020-07-01, DOI: 10.1016/j.compeleceng.2020.106670
Fangyuan Chang , Tao Chen , Wencong Su , Qais Alsafasfeh

Abstract In an electricity market with time-varying pricing, uncontrolled charging of energy storage systems (ESSs) may increase charging costs. This paper proposes a novel battery charging control methodology based on reinforcement learning (RL) to minimize charging costs. A significant characteristic of the method is that it is model-free, requiring no high-accuracy battery/ESS model; it therefore overcomes the challenges posed by the limited types of available battery models and by non-negligible parametric uncertainties in practice. Additionally, since accurate prediction of fluctuating electricity prices improves control performance, a long short-term memory (LSTM) network is leveraged to increase prediction precision. The final control objective is to find an optimal charging portfolio that minimizes charging costs. Moreover, the presented control algorithm provides a basic framework for more complicated electricity markets in which various types of ESSs, generators, and loads coexist.
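To illustrate the model-free idea described in the abstract, the sketch below uses tabular Q-learning to learn when to charge a battery over a 24-hour horizon under an hourly price profile. It is a minimal sketch, not the paper's algorithm: the synthetic price array, the idle/charge action set, the state-of-charge discretization, the end-of-day shortfall penalty, and all hyperparameters are assumptions made for this example.

```python
import numpy as np

# Hypothetical hourly price profile (24 h); in the paper's setting this role
# would be played by an LSTM price forecast rather than a fixed synthetic curve.
rng = np.random.default_rng(0)
prices = 0.10 + 0.08 * np.sin(np.linspace(0, 2 * np.pi, 24)) + 0.01 * rng.standard_normal(24)

N_SOC = 11              # state of charge discretized into 0%, 10%, ..., 100%
ACTIONS = [0, 1]        # 0 = idle, 1 = charge one 10% increment
ENERGY_PER_STEP = 1.0   # kWh drawn per charging increment (assumed)
TARGET_SOC = N_SOC - 1  # require a full battery by the end of the horizon

# Q-table indexed by (hour, SoC level, action); learned purely from rewards,
# with no battery/ESS model required.
Q = np.zeros((24, N_SOC, len(ACTIONS)))
alpha, gamma, eps = 0.1, 0.95, 0.1

def step(hour, soc, action):
    """Return (next_soc, reward); reward is negative charging cost,
    plus an assumed penalty if the battery is not full at hour 23."""
    next_soc = min(soc + action, N_SOC - 1)
    reward = -prices[hour] * ENERGY_PER_STEP * action
    if hour == 23 and next_soc < TARGET_SOC:
        reward -= 1.0 * (TARGET_SOC - next_soc)
    return next_soc, reward

for episode in range(5000):
    soc = 0
    for hour in range(24):
        # epsilon-greedy action selection
        if rng.random() < eps:
            a = int(rng.integers(len(ACTIONS)))
        else:
            a = int(np.argmax(Q[hour, soc]))
        next_soc, r = step(hour, soc, a)
        target = r if hour == 23 else r + gamma * np.max(Q[hour + 1, next_soc])
        Q[hour, soc, a] += alpha * (target - Q[hour, soc, a])
        soc = next_soc

# Extract the greedy charging schedule from the learned Q-table.
soc, schedule = 0, []
for hour in range(24):
    a = int(np.argmax(Q[hour, soc]))
    schedule.append(a)
    soc, _ = step(hour, soc, a)
print("charging hours:", [h for h, a in enumerate(schedule) if a])
```

For the price-prediction component, a minimal LSTM forecaster might look as follows. The PriceLSTM class, the 24-hour input window, and the synthetic training series are hypothetical; the sketch only indicates how an LSTM could supply the price forecast consumed by a controller like the one above.

```python
import torch
import torch.nn as nn

class PriceLSTM(nn.Module):
    """Minimal LSTM that maps a window of past hourly prices to the next price."""
    def __init__(self, hidden_size=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):             # x: (batch, window, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])  # predict the next hour's price

# Toy training loop on a synthetic price series (assumed data, not the paper's).
series = torch.sin(torch.linspace(0, 20, 500)) * 0.05 + 0.12
window = 24
X = torch.stack([series[i:i + window] for i in range(len(series) - window)]).unsqueeze(-1)
y = series[window:].unsqueeze(-1)

model = PriceLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(50):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), y)
    loss.backward()
    opt.step()
```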

Updated: 2020-07-01