Ensemble Reinforcement Learning-Based Supervisory Control of Hybrid Electric Vehicle for Fuel Economy Improvement
IEEE Transactions on Transportation Electrification ( IF 7 ) Pub Date : 2020-06-01 , DOI: 10.1109/tte.2020.2991079
Bin Xu , Xiaosong Hu , Xiaolin Tang , Xianke Lin , Huayi Li , Dhruvang Rathod , Zoran Filipi

This study proposes an ensemble reinforcement learning (RL) strategy to improve fuel economy. A parallel hybrid electric vehicle model is first presented, followed by an introduction of the ensemble RL strategy. The base RL algorithm is $Q$-learning, which is used to form multiple agents with different state combinations. Two common energy management strategies, namely the thermostatic strategy and the equivalent consumption minimization strategy, are used as two single agents in the proposed ensemble. During the learning process, the multiple RL agents make action decisions jointly by taking a weighted average. After each driving-cycle iteration, the $Q$-learning agents update their state-action values. A single RL agent is used as a reference for the proposed strategy. The results show that the fuel economy of the proposed ensemble strategy is 3.2% higher than that of the best single agent.
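The core mechanism described above (multiple tabular $Q$-learning agents, each seeing a different state combination, whose chosen actions are fused by a weighted average) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the state/action discretization, weights, and the `QAgent`/`ensemble_action` names are all hypothetical, and the thermostatic and ECMS agents of the paper are omitted for brevity.

```python
import numpy as np

class QAgent:
    """Tabular Q-learning agent over one (hypothetical) discretized state subset."""
    def __init__(self, n_states, n_actions, alpha=0.1, gamma=0.95, seed=0):
        self.q = np.zeros((n_states, n_actions))  # state-action value table
        self.alpha, self.gamma = alpha, gamma
        self.rng = np.random.default_rng(seed)

    def act(self, s, eps=0.1):
        # Epsilon-greedy action selection over the agent's own Q-table.
        if self.rng.random() < eps:
            return int(self.rng.integers(self.q.shape[1]))
        return int(np.argmax(self.q[s]))

    def update(self, s, a, r, s_next):
        # Standard Q-learning temporal-difference update.
        td_target = r + self.gamma * np.max(self.q[s_next])
        self.q[s, a] += self.alpha * (td_target - self.q[s, a])

def ensemble_action(agents, states, weights):
    """Joint decision: weighted average of each agent's proposed action index,
    rounded back to the nearest discrete action (an assumed fusion rule)."""
    proposals = np.array([ag.act(s) for ag, s in zip(agents, states)])
    return int(round(float(np.average(proposals, weights=weights))))
```

In this sketch, each agent indexes its table with its own view of the vehicle state (e.g. SOC, vehicle speed, power demand in some combination), and the weighted average plays the role of the paper's joint action decision; after each driving-cycle iteration every agent would call `update` along the visited trajectory.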
