EV charging bidding by Multi-DQN reinforcement learning in electricity auction market
Neurocomputing (IF 5.5), Pub Date: 2020-07-01, DOI: 10.1016/j.neucom.2019.08.106
Yang Zhang, Zhengfeng Zhang, Qingyu Yang, Dou An, Donghe Li, Ce Li

Abstract: In this paper, we address the problem of selecting an optimal bidding strategy for Electric Vehicle (EV) charging in an auction market. EV charging has attracted growing attention as EVs become increasingly popular. We consider a scenario in which EV owners submit charging bids to a charging station, and the charging station then determines the winning EVs that are admitted to charge, together with their payments, based on an online continuous progressive second price (OCPSP) auction mechanism. In this setting, formulating an optimal bidding strategy that maximizes economic benefit is crucial for EV owners. To this end, we propose a Multi-Deep-Q-Network (Multi-DQN) reinforcement learning bidding strategy, in which each agent maintains a value-evaluation network and a target network to learn the optimal bidding strategy. Extensive experimental results show that our bidding strategy achieves better economic benefits and reduces EV owners' charging time compared with a Q-learning-based approach and a random approach.
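The per-agent structure the abstract describes (a value-evaluation network trained against a periodically synced target network, choosing among discrete charging bids) can be sketched as below. This is a minimal illustrative sketch, not the authors' implementation: the state features, the number of bid levels, and the linear approximator standing in for the deep network are all assumptions made for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

N_BIDS = 5     # discrete bid levels an EV agent may submit (assumption)
STATE_DIM = 3  # e.g. battery level, time slot, last clearing price (assumption)

class BiddingAgent:
    """One agent's value-evaluation network and frozen target network,
    in the style of standard DQN (linear Q stands in for the deep net)."""

    def __init__(self, lr=0.01, gamma=0.95):
        self.w_eval = rng.normal(0.0, 0.1, size=(STATE_DIM, N_BIDS))
        self.w_target = self.w_eval.copy()
        self.lr, self.gamma = lr, gamma

    def q_eval(self, s):
        return s @ self.w_eval

    def q_target(self, s):
        return s @ self.w_target

    def act(self, s, eps=0.1):
        # epsilon-greedy selection among the discrete bid levels
        if rng.random() < eps:
            return int(rng.integers(N_BIDS))
        return int(np.argmax(self.q_eval(s)))

    def update(self, s, a, r, s_next):
        # TD target computed with the frozen target network (the DQN trick
        # that stabilizes learning)
        y = r + self.gamma * np.max(self.q_target(s_next))
        td_err = y - self.q_eval(s)[a]
        self.w_eval[:, a] += self.lr * td_err * s  # gradient step on one action

    def sync_target(self):
        # periodically copy evaluation weights into the target network
        self.w_target = self.w_eval.copy()

# one illustrative interaction step
agent = BiddingAgent()
s = rng.random(STATE_DIM)
a = agent.act(s)
agent.update(s, a, r=1.0, s_next=rng.random(STATE_DIM))
agent.sync_target()
```

In a multi-agent run, each EV owner would hold its own `BiddingAgent`, with rewards derived from the OCPSP auction outcome (admission and payment).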
