Visual driving assistance system based on few-shot learning
Multimedia Systems (IF 3.5), Pub Date: 2021-07-22, DOI: 10.1007/s00530-021-00830-5
Shan Liu, Hansong Su, Yichao Tang, Ying Tian

With the increase in vehicles and the diversification of road conditions, people pay more attention to driving safety. In recent years, autonomous driving technology by Franke et al. (IEEE Intell Syst Their Appl 13(6):40–48, 1998) and unmanned driving technology by Zhang et al. (CAAI Trans Intell Technol 1(1):4–13, 2016) have entered our field of vision. Both automatic driving by Levinson et al. (Towards fully autonomous driving: systems and algorithms, 2011) and unmanned driving by Im et al. (Unmanned driving of intelligent robotic vehicle, 2009) use a variety of sensors to perceive the environment around the vehicle and a variety of decision and control algorithms to control the vehicle in motion. The visual driving assistance system by Watanabe et al. (Driving assistance system for appropriately making the driver recognize another vehicle behind or next to present vehicle, 2010), used in conjunction with the target recognition algorithm by Pantofaru et al. (Object recognition by integrating multiple image segmentations, 2008), provides drivers with a real-time view of the environment around the vehicle. In recent years, few-shot learning by Li et al. (Comput Electron Agric 2:2, 2020) has become a new direction in target recognition, as it reduces the difficulty of collecting training samples. In this paper, on the one hand, several low-light cameras with fish-eye lenses are used to capture and reconstruct the environment around the vehicle; on the other hand, an infrared camera and lidar are used to capture the environment in front of the vehicle. We then apply few-shot learning to identify vehicles and pedestrians in the forward-view image. In addition, we develop the system on embedded devices to meet miniaturization requirements. In conclusion, the system adapts to the needs of most drivers at this stage and can effectively support the development of automatic driving and unmanned driving.
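The abstract does not give implementation details, but a common first step in reconstructing a surround view from fish-eye cameras is undistorting each frame with a calibrated fisheye model. The following is a minimal sketch using OpenCV's fisheye module; the intrinsic matrix K, the distortion coefficients D, and the synthetic frame are placeholder values for illustration, not taken from the paper.

```python
# Sketch: undistort one fish-eye camera frame as a step toward surround-view
# reconstruction. K and D are placeholder calibration values, not the paper's.
import cv2
import numpy as np

K = np.array([[400.0,   0.0, 640.0],
              [  0.0, 400.0, 360.0],
              [  0.0,   0.0,   1.0]])            # example intrinsic matrix
D = np.array([[-0.05], [0.01], [0.0], [0.0]])    # example fisheye distortion

# Stand-in for one low-light camera frame (replace with a real capture).
frame = np.zeros((720, 1280, 3), dtype=np.uint8)
h, w = frame.shape[:2]

# Estimate a new camera matrix, build the undistortion maps, and remap.
new_K = cv2.fisheye.estimateNewCameraMatrixForUndistortRectify(
    K, D, (w, h), np.eye(3), balance=0.0)
map1, map2 = cv2.fisheye.initUndistortRectifyMap(
    K, D, np.eye(3), new_K, (w, h), cv2.CV_16SC2)
undistorted = cv2.remap(frame, map1, map2, interpolation=cv2.INTER_LINEAR)
```

In a full system, the undistorted frames from each camera would then be projected and stitched into a single top-down or panoramic view; that stitching step is not shown here.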
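Likewise, a common way to realize few-shot recognition of vehicles and pedestrians is a metric-based classifier in the style of prototypical networks: embed a handful of labelled support crops per class, average them into class prototypes, and label query crops by distance to the nearest prototype. The PyTorch sketch below illustrates that idea only; the encoder, input shapes, and two-class episode are hypothetical and not the authors' implementation.

```python
# Sketch: prototypical-network-style few-shot classification of image crops
# (e.g. 'vehicle' vs. 'pedestrian'). All shapes and the encoder are illustrative.
import torch
import torch.nn as nn

class ConvEncoder(nn.Module):
    """Small CNN mapping an image crop to an embedding vector."""
    def __init__(self, out_dim: int = 64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, out_dim)

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))

def prototype_logits(encoder, support, support_labels, query, n_classes):
    """Score query crops by (negative) distance to per-class prototypes
    built from a few labelled support crops."""
    z_support = encoder(support)                  # (n_support, d)
    z_query = encoder(query)                      # (n_query, d)
    protos = torch.stack([
        z_support[support_labels == c].mean(0) for c in range(n_classes)
    ])                                            # (n_classes, d)
    return -torch.cdist(z_query, protos)          # higher = closer prototype

# Usage with dummy data: a 2-way, 5-shot episode.
encoder = ConvEncoder()
support = torch.randn(10, 3, 84, 84)              # 5 vehicle + 5 pedestrian crops
labels = torch.tensor([0] * 5 + [1] * 5)
query = torch.randn(4, 3, 84, 84)                 # crops from the forward view
pred = prototype_logits(encoder, support, labels, query, n_classes=2).argmax(1)
```

Because the class prototypes come from only a few labelled examples, such a classifier can be adapted to new object categories without the large annotated datasets that conventional detectors require, which is the practical appeal of few-shot learning noted in the abstract.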



