A Study on an Enhanced Autonomous Driving Simulation Model Based on Reinforcement Learning Using a Collision Prevention Model
Electronics ( IF 2.9 ) Pub Date : 2021-09-16 , DOI: 10.3390/electronics10182271
Jong-Hoon Kim , Jun-Ho Huh , Se-Hoon Jung , Chun-Bo Sim

This paper revises and improves existing autonomous driving models using reinforcement learning, proposing a reinforcement-learning-based autonomous driving prediction model. The model was trained with DQN (Deep Q-Network), a reinforcement learning algorithm. The main aims were to reduce training time and to improve self-driving performance. Rewards for the reinforcement learning agent were designed to mimic human driving behavior as closely as possible: higher rewards were given for greater distance traveled within the lane and for higher speed, while negative rewards were given when the vehicle crossed into another lane or collided. Performance was evaluated in urban environments without pedestrians. The test results show that the model incorporating the collision prevention model improved faster within the same training time than the model without it. However, vulnerabilities to pedestrians and to vehicles approaching from the side were not addressed, and the study noted instability in the definition of the reward function as well as limitations from excessive memory use.
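The reward scheme described above (positive reward for in-lane distance and speed, negative reward for lane departure or collision) can be sketched as a simple shaped reward function. This is a minimal illustration, not the authors' implementation; all weights, penalty values, and parameter names here are assumed for the example.

```python
def compute_reward(distance_in_lane: float, speed: float,
                   crossed_lane: bool, collided: bool,
                   w_distance: float = 1.0, w_speed: float = 0.5,
                   lane_penalty: float = -1.0,
                   collision_penalty: float = -10.0) -> float:
    """Shaped reward mimicking the scheme in the abstract.

    Positive terms reward in-lane progress and speed; lane departure
    adds a penalty, and a collision overrides everything with a large
    negative reward. All coefficients are illustrative assumptions.
    """
    if collided:
        return collision_penalty  # terminal penalty dominates
    reward = w_distance * distance_in_lane + w_speed * speed
    if crossed_lane:
        reward += lane_penalty  # discourage leaving the lane
    return reward
```

In a DQN training loop, such a function would be called once per simulation step to produce the scalar reward stored in the replay buffer alongside the state transition.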

Updated: 2021-09-16