A Multimodal Perception-Driven Self-Evolving Autonomous Ground Vehicle
IEEE Transactions on Cybernetics (IF 9.4), Pub Date: 2021-10-10, DOI: 10.1109/tcyb.2021.3113804
Jamie Roche, Varuna De-Silva, Ahmet Kondoz

Increasingly complex automated driving functions, particularly those associated with free space detection (FSD), are delegated to convolutional neural networks (CNNs). If the dataset used to train the network lacks diversity, modality, or sufficient quantity, the driving policy that controls the vehicle may induce safety risks. Although most autonomous ground vehicles (AGVs) perform well in structured surroundings, the need for human intervention rises significantly in unstructured niche environments. To this end, we developed an AGV for seamless indoor and outdoor navigation that collects realistic multimodal data streams. We demonstrate one application of the AGV: a self-evolving FSD framework that leverages online active machine-learning (ML) paradigms and sensor data fusion. In essence, the self-evolving AGV queries image data against a reliable data stream, ultrasound, before fusing the sensor data to improve robustness. We compare the proposed framework to DeepLabV3+ [1], one of the most prominent free space segmentation methods: a state-of-the-art semantic segmentation model built on a CNN encoder-decoder architecture. The results show that the proposed framework outperforms DeepLabV3+ [1], a performance we attribute to its ability to self-learn free space. This combination of online and active ML removes the need for the large datasets a CNN typically requires. Moreover, the technique provides case-specific free space classifications based on the information gathered from the scenario at hand.
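The online active-learning loop described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the feature vectors, perceptron learner, clearance threshold, and uncertainty margin are all assumptions chosen for clarity. It captures the core idea that an image-based free-space classifier queries the trusted ultrasound stream for a label only when its own prediction is uncertain, then updates itself online.

```python
# Illustrative sketch (hypothetical names and parameters): an image-based
# free-space classifier that actively queries a trusted ultrasound stream
# for labels when uncertain, and learns online from those labels.

def ultrasound_label(distance_m, clearance_m=1.5):
    """Trusted modality: treat a cell as 'free' (1) if the echo distance
    exceeds the clearance threshold, else 'occupied' (0)."""
    return 1 if distance_m > clearance_m else 0

class OnlineFreeSpaceClassifier:
    """Minimal online perceptron over hand-crafted image features."""
    def __init__(self, n_features, lr=0.1):
        self.w = [0.0] * n_features
        self.b = 0.0
        self.lr = lr

    def score(self, x):
        return sum(wi * xi for wi, xi in zip(self.w, x)) + self.b

    def predict(self, x):
        return 1 if self.score(x) >= 0.0 else 0

    def update(self, x, y):
        # Perceptron rule: adjust weights only on a misclassification.
        if self.predict(x) != y:
            sign = 2 * y - 1  # +1 for free, -1 for occupied
            for i, xi in enumerate(x):
                self.w[i] += self.lr * sign * xi
            self.b += self.lr * sign

def drive_loop(clf, frames, margin=0.5):
    """For each (image_features, echo_distance) frame, query the ultrasound
    oracle only when the image score falls inside the uncertainty margin,
    and fuse its label back into the classifier. Returns the query count."""
    queries = 0
    for features, echo_distance in frames:
        if abs(clf.score(features)) < margin:      # uncertain -> active query
            y = ultrasound_label(echo_distance)    # trusted label from fusion
            clf.update(features, y)
            queries += 1
    return queries
```

The design choice this illustrates is the one the abstract claims: labels come from the scenario at hand rather than a large offline dataset, so the classifier's decisions are case-specific and improve as the vehicle drives.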
