Robot Localization and Navigation Using Visible Light Positioning and SLAM Fusion
Journal of Lightwave Technology (IF 4.1), Pub Date: 2021-09-20, DOI: 10.1109/jlt.2021.3113358
Weipeng Guan, Linyi Huang, Shang-sheng Wen, Zihong Yan, Wanlin Liang, Chen Yang, Ziyu Liu

Visible light positioning (VLP) is a promising technology because it can provide high-accuracy indoor localization using the existing lighting infrastructure. However, existing approaches often require dense LED distributions and a persistent line-of-sight (LOS) between transmitter and receiver. Moreover, sensors are imperfect, and their measurements are prone to error. Through multi-sensor fusion, we can compensate for the deficiencies of stand-alone sensors and provide more reliable pose estimates. In this work, we propose a loosely coupled multi-sensor fusion method based on VLP and Simultaneous Localization and Mapping (SLAM), using light detection and ranging (LiDAR), odometry, and a rolling-shutter camera. Our multi-sensor localizer can provide accurate and robust robot localization and navigation in LED-shortage/outage situations. The experimental results show that the proposed scheme achieves an average accuracy of 2.5 cm with an average positioning latency of about 42 ms.
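The abstract describes a loosely coupled fusion, in which absolute VLP position fixes are combined with relative odometry/SLAM estimates at the pose level rather than at the raw-sensor level. The paper's actual fusion pipeline is not given here; the following is only a minimal generic sketch of the idea, using a 2-D Kalman-style correction step. All variable names and noise values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def predict(x, P, odom_delta, Q):
    # Propagate the position estimate with a relative odometry displacement;
    # uncertainty grows because odometry drifts over time.
    return x + odom_delta, P + Q

def update(x, P, z_vlp, R):
    # Correct the estimate with an absolute VLP position fix.
    K = P @ np.linalg.inv(P + R)          # Kalman gain
    x = x + K @ (z_vlp - x)               # pull estimate toward the VLP fix
    P = (np.eye(2) - K) @ P               # uncertainty shrinks after the fix
    return x, P

# Illustrative run: one odometry step, then one VLP correction.
x = np.zeros(2)                           # initial (x, y) position in metres
P = np.eye(2) * 0.01                      # initial covariance (assumed)
Q = np.eye(2) * 0.02                      # odometry noise per step (assumed)
R = np.eye(2) * 0.001                     # VLP noise, cm-level (assumed)

x, P = predict(x, P, np.array([0.10, 0.0]), Q)
x, P = update(x, P, np.array([0.12, 0.01]), R)
print(np.round(x, 3))
```

Because the VLP measurement noise `R` is much smaller than the accumulated odometry uncertainty, the gain is close to 1 and the fused estimate lands near the VLP fix; when no LED is visible (LOS outage), the filter simply keeps running predict steps on odometry/SLAM alone, which is the robustness the paper targets.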

Updated: 2021-09-20