UAV environmental perception and autonomous obstacle avoidance: A deep learning and depth camera combined solution
Computers and Electronics in Agriculture (IF 8.3), Pub Date: 2020-08-01, DOI: 10.1016/j.compag.2020.105523
Dashuai Wang, Wei Li, Xiaoguang Liu, Nan Li, Chunlong Zhang

Abstract In agriculture, Unmanned Aerial Vehicles (UAVs) have shown great potential for plant protection. Uncertain obstacles randomly distributed in unstructured farmland usually pose significant collision risks to flight safety. In order to improve the UAV's intelligence and minimize the adverse impacts of obstacles on operating safety and efficiency, we put forward a comprehensive solution that consists of deep-learning-based object detection, image processing, RGB-D information fusion and a Task Control System (TCS). Taking full advantage of both deep learning and the depth camera, this solution allows the UAV to perceive not only the presence of obstacles but also their attributes, such as category, profile and 3D spatial position. Based on the object detection results, the collision avoidance strategy generation method and the corresponding calculation approach for the optimal collision avoidance flight path are elaborated in detail. A series of experiments is conducted to verify the UAV's environmental perception ability and autonomous obstacle avoidance performance. Results show that the average detection accuracy of the CNN model is 75.4% and the mean time cost for processing a single image is 53.33 ms. Additionally, we find that the prediction accuracy of an obstacle's profile and position depends heavily on the relative distance between the object and the depth camera. When the distance is between 4.5 m and 8.0 m, the errors of the object's depth, width and height are −0.53 m, −0.26 m and −0.24 m, respectively. Outcomes of simulated flight experiments indicate that the UAV can autonomously determine the optimal obstacle avoidance strategy and generate a distance-minimized flight path based on the results of RGB-D information fusion. The proposed solution has extensive potential to enhance the UAV's environmental perception and autonomous obstacle avoidance abilities.
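To make the RGB-D information fusion step more concrete, the following minimal sketch illustrates one common way a detection bounding box can be combined with an aligned depth map to estimate an obstacle's distance and physical size under the pinhole camera model. The function name, the helper parameters and the focal-length values are illustrative assumptions for this example, not the authors' implementation.

import numpy as np

# Sketch only: fuse a 2D detection box with an aligned depth frame to obtain
# the obstacle's distance and approximate metric width/height.
# fx, fy are assumed camera focal lengths in pixels (not from the paper).
def estimate_obstacle_geometry(bbox, depth_map, fx=615.0, fy=615.0):
    """bbox = (x_min, y_min, x_max, y_max) in pixels; depth_map in metres."""
    x_min, y_min, x_max, y_max = bbox
    roi = depth_map[y_min:y_max, x_min:x_max]

    # Use the median of valid depth readings as the obstacle distance;
    # this is more robust to depth holes and outliers than the mean.
    valid = roi[np.isfinite(roi) & (roi > 0)]
    if valid.size == 0:
        return None
    distance = float(np.median(valid))

    # Back-project the pixel extent of the box to metric size at that
    # distance (pinhole model: size = pixel_extent * depth / focal_length).
    width = (x_max - x_min) * distance / fx
    height = (y_max - y_min) * distance / fy
    return {"distance_m": distance, "width_m": width, "height_m": height}

if __name__ == "__main__":
    depth = np.full((480, 640), 6.2, dtype=np.float32)  # synthetic depth frame
    print(estimate_obstacle_geometry((260, 90, 380, 390), depth))

Given such distance, width and height estimates for each detected obstacle, a planner could then choose the avoidance direction that minimizes the extra flight distance, which is the role the abstract assigns to the collision avoidance strategy generation step.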

Updated: 2020-08-01