Effective Charging Planning Based on Deep Reinforcement Learning for Electric Vehicles
IEEE Transactions on Intelligent Transportation Systems (IF 8.5) Pub Date: 2020-01-01, DOI: 10.1109/tits.2020.3002271
Cong Zhang, Yuanan Liu, Fan Wu, Bihua Tang, Wenhao Fan

Electric vehicles (EVs) are viewed as an attractive option for reducing carbon emissions and fuel consumption, but their popularization has been hindered by limited cruising range and an inconvenient charging process. At public charging stations, EVs often spend a great deal of time queuing, especially during peak charging hours. Building an effective charging planning system is therefore a crucial task for reducing the total charging time of EVs. In this paper, we first introduce the EV charging scheduling problem and prove its NP-hardness. We then formalize the problem as a Markov Decision Process and propose deep reinforcement learning algorithms to solve it. The objective of the proposed algorithms is to minimize the total charging time of EVs while keeping the origin-destination travel distance as short as possible. Finally, we conduct experiments on real-world data and compare against two baseline algorithms to demonstrate the effectiveness of our approach. The results show that the proposed algorithms significantly reduce the charging time of EVs compared with the EST and NNCR algorithms.
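To make the MDP framing concrete, the sketch below gives a minimal, hypothetical environment for charging-station selection. It is not the authors' implementation: the state features (travel time, queue time, detour distance), the fixed charging duration, and the detour_penalty weight are assumptions chosen only to mirror the stated objective of minimizing total charging time with a small origin-destination detour; a deep RL policy (e.g. a DQN) would be trained on top of such an environment.

```python
# Illustrative sketch only (not the paper's implementation): a minimal MDP-style
# environment for EV charging-station selection. Feature choices and weights
# below are assumptions made for demonstration.
import random
from dataclasses import dataclass

@dataclass
class Station:
    station_id: int
    travel_time: float   # minutes to reach the station from the EV's position
    queue_time: float    # expected waiting time at the station
    detour_km: float     # extra origin-destination distance caused by the stop

class ChargingEnv:
    """One decision step: the agent picks a station for the arriving EV."""

    def __init__(self, stations, charge_time=30.0, detour_penalty=0.5):
        self.stations = stations
        self.charge_time = charge_time        # assumed fixed charging duration
        self.detour_penalty = detour_penalty  # weight on extra travel distance

    def state(self):
        # Flatten per-station features into the observation vector a
        # deep RL policy network would consume.
        return [x for s in self.stations
                for x in (s.travel_time, s.queue_time, s.detour_km)]

    def step(self, action):
        s = self.stations[action]
        total_time = s.travel_time + s.queue_time + self.charge_time
        # Negative cost as reward: a shorter total charging time and a smaller
        # detour yield a higher reward.
        reward = -(total_time + self.detour_penalty * s.detour_km)
        return self.state(), reward

if __name__ == "__main__":
    stations = [Station(i, random.uniform(5, 20), random.uniform(0, 40),
                        random.uniform(0, 5)) for i in range(4)]
    env = ChargingEnv(stations)
    # A trained policy network would replace this random choice.
    obs, reward = env.step(random.randrange(len(stations)))
    print("reward:", round(reward, 2))
```

The single scalar reward combining time and detour distance is one plausible way to encode the paper's two objectives; the actual reward shaping used by the authors is not specified in the abstract.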

Updated: 2020-01-01