Enhanced Q-learning for real-time hybrid electric vehicle energy management with deterministic rule
Measurement and Control (IF 2) Pub Date: 2020-08-01, DOI: 10.1177/0020294020944952
Yang Li 1,2, Jili Tao 1, Liang Xie 2, Ridong Zhang 3, Longhua Ma 1, Zhijun Qiao 4

Power allocation plays an important and challenging role in fuel cell and supercapacitor hybrid electric vehicles because it significantly influences fuel economy. We present a novel Q-learning strategy with a deterministic rule for real-time hybrid electric vehicle energy management between the fuel cell and the supercapacitor. The Q-learning controller (agent) observes the state of charge of the supercapacitor, provides the energy split coefficient satisfying the power demand, and obtains the corresponding rewards for these actions. By processing the accumulated experience, the agent iteratively learns an optimal energy control policy and maintains the best Q-table with minimal fuel consumption. To enhance adaptability to different driving cycles, the deterministic rule is used as a complement to the control policy so that the hybrid electric vehicle can achieve better real-time power allocation. Simulation experiments have been carried out using MATLAB and the Advanced Vehicle Simulator, and the results show that the proposed method minimizes fuel consumption while reducing current fluctuations of the fuel cell.
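The control loop the abstract describes (observe supercapacitor SoC, pick a power-split coefficient, receive a fuel-consumption reward, update a Q-table, and let a rule-based layer override the learned action at SoC extremes) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the SoC discretization, action set, reward signal, learning constants, and the rule thresholds are all invented assumptions.

```python
import random

# Illustrative tabular Q-learning energy split for a fuel cell /
# supercapacitor hybrid. State = discretized supercapacitor state of
# charge (SoC); action = share of the power demand supplied by the
# fuel cell; reward = negative fuel consumption (to be minimized).
# All constants below are hypothetical, chosen only for the sketch.

SOC_BINS = 10                            # number of discrete SoC states
ACTIONS = [0.0, 0.25, 0.5, 0.75, 1.0]    # fuel-cell share of demand
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1        # learning rate, discount, exploration

# Q-table: one row per SoC state, one column per split coefficient.
Q = [[0.0] * len(ACTIONS) for _ in range(SOC_BINS)]

def soc_state(soc):
    """Map a SoC in [0, 1] to a discrete state index."""
    return min(int(soc * SOC_BINS), SOC_BINS - 1)

def choose_action(state):
    """Epsilon-greedy selection over the split coefficients."""
    if random.random() < EPS:
        return random.randrange(len(ACTIONS))
    row = Q[state]
    return row.index(max(row))

def deterministic_rule(soc, action_idx):
    """Rule-based complement (hypothetical thresholds): at low SoC the
    fuel cell carries the full load to protect the supercapacitor; at
    high SoC the supercapacitor supplies the demand."""
    if soc < 0.2:
        return len(ACTIONS) - 1   # fuel cell supplies everything
    if soc > 0.9:
        return 0                  # supercapacitor supplies everything
    return action_idx             # otherwise keep the learned action

def update(state, action_idx, reward, next_state):
    """Standard Q-learning temporal-difference update."""
    best_next = max(Q[next_state])
    Q[state][action_idx] += ALPHA * (
        reward + GAMMA * best_next - Q[state][action_idx])
```

In use, each simulation step would map the measured SoC to a state, pick an action, pass it through `deterministic_rule`, apply the resulting split to meet the power demand, and call `update` with the observed fuel-consumption reward.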

Updated: 2020-08-01