An RGB-D Camera Based Visual Positioning System for Assistive Navigation by a Robotic Navigation Aid
IEEE/CAA Journal of Automatica Sinica (IF 15.3), Pub Date: 2021-06-17, DOI: 10.1109/jas.2021.1004084
He Zhang, Lingqiu Jin, Cang Ye

There are about 253 million people with visual impairment worldwide. Many of them use a white cane and/or a guide dog as their mobility tool for daily travel. Despite decades of effort, an electronic navigation aid that can replace the white cane remains a work in progress. In this paper, we propose an RGB-D camera based visual positioning system (VPS) for real-time localization of a robotic navigation aid (RNA) in an architectural floor plan for assistive navigation. The core of the system is the combination of a new 6-DOF depth-enhanced visual-inertial odometry (DVIO) method and a particle filter localization (PFL) method. DVIO estimates the RNA's pose by using the data from an RGB-D camera and an inertial measurement unit (IMU). It extracts the floor plane from the camera's depth data and tightly couples the floor plane, the visual features (with and without depth data), and the IMU's inertial data in a graph optimization framework to estimate the device's 6-DOF pose. Owing to the use of the floor plane and the depth data from the RGB-D camera, DVIO achieves better pose estimation accuracy than the conventional VIO method. To reduce the accumulated pose error of DVIO for navigation in a large indoor space, we developed the PFL method to locate the RNA in the floor plan. PFL leverages the geometric information of the architectural CAD drawing of an indoor space to further reduce the error of the DVIO-estimated pose. Based on the VPS, an assistive navigation system is developed for the RNA prototype to assist a visually impaired person in navigating a large indoor space. Experimental results demonstrate that: 1) the DVIO method achieves better pose estimation accuracy than the state-of-the-art VIO method and performs real-time pose estimation (18 Hz pose update rate) on a UP Board computer; 2) PFL reduces the DVIO-accrued pose error by 82.5% on average and allows for accurate wayfinding (endpoint position error ≤ 45 cm) in large indoor spaces.
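The abstract describes PFL only at a high level. As a rough, illustrative sketch (not the authors' implementation), the Python code below shows how DVIO odometry increments and a floor plan might be combined in a particle filter: particles are propagated with the odometry increment and down-weighted when they leave the free space of a 2-D occupancy grid derived from the CAD drawing. All class names, parameters, and the free-space likelihood are assumptions introduced for illustration.

```python
# Illustrative sketch of floor-plan-based particle filter localization (PFL).
# Assumptions (not from the paper): a 2-D occupancy grid from the CAD drawing,
# DVIO pose increments projected to (dx, dy, dtheta), and a simple free-space
# likelihood. Names and noise parameters are placeholders.
import numpy as np

class FloorPlanPFL:
    def __init__(self, occupancy, resolution, n_particles=500):
        self.grid = occupancy          # 2-D bool array: True = free space
        self.res = resolution          # metres per grid cell
        # Spread particles [x, y, theta] uniformly over free cells.
        free = np.argwhere(occupancy)
        cells = free[np.random.choice(len(free), n_particles)]
        self.p = np.column_stack([
            cells[:, 1] * resolution,                       # x from column
            cells[:, 0] * resolution,                       # y from row
            np.random.uniform(-np.pi, np.pi, n_particles),  # heading
        ])
        self.w = np.full(n_particles, 1.0 / n_particles)

    def predict(self, dx, dy, dtheta, sigma_xy=0.05, sigma_th=0.02):
        """Propagate particles with a DVIO odometry increment plus noise."""
        c, s = np.cos(self.p[:, 2]), np.sin(self.p[:, 2])
        n = len(self.p)
        self.p[:, 0] += c * dx - s * dy + np.random.normal(0, sigma_xy, n)
        self.p[:, 1] += s * dx + c * dy + np.random.normal(0, sigma_xy, n)
        self.p[:, 2] += dtheta + np.random.normal(0, sigma_th, n)

    def update(self):
        """Down-weight particles outside free space, then resample."""
        col = (self.p[:, 0] / self.res).astype(int).clip(0, self.grid.shape[1] - 1)
        row = (self.p[:, 1] / self.res).astype(int).clip(0, self.grid.shape[0] - 1)
        like = np.where(self.grid[row, col], 1.0, 1e-3)
        self.w *= like
        self.w /= self.w.sum()
        idx = np.random.choice(len(self.p), len(self.p), p=self.w)
        self.p = self.p[idx]
        self.w[:] = 1.0 / len(self.p)

    def estimate(self):
        """Mean pose of the resampled particle set (heading average is naive)."""
        return self.p.mean(axis=0)
```

In this toy formulation, the floor plan acts only as a free-space constraint on the particles; the paper's PFL presumably exploits richer geometric cues from the CAD drawing (e.g., walls and corridors) to correct the DVIO pose drift.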

Updated: 2021-06-17