Monocular Visual SLAM with Points and Lines for Ground Robots in Particular Scenes: Parameterization for Lines on Ground

  • Short Paper
  • Journal of Intelligent & Robotic Systems

Abstract

Visual simultaneous localization and mapping (V-SLAM) has recently attracted considerable attention from the robotics community due to its importance and wide range of applications. This paper addresses V-SLAM with points and lines in particular scenes that contain many lines on an approximately planar ground. Most V-SLAM systems with lines treat all the lines in such scenes as 3D lines with four degrees of freedom (DoF); however, lines on ground have only two DoF, and the redundant parameters increase the estimation uncertainty of these lines. To restrict lines on ground to the correct solution space, we propose two parameterization methods for them. The first method still treats lines on ground as 3D lines and introduces a planar constraint on the 3D line representation to loosely constrain the lines to the ground plane. The second method, in order to strictly constrain these lines to the ground plane, treats them as 2D lines in a plane; we derive the corresponding parameterization and the geometric computations from initialization to bundle adjustment. Building on these parameterization methods, we propose a graph optimization-based monocular V-SLAM system with points and lines that handles lines on ground differently from general 3D lines, so that they are better exploited during localization and mapping. Assisted by wheel encoders, the proposed system generates a structural map. Experiments on both simulated and real-world data demonstrate that the two proposed parameterization methods exploit lines on ground better than the 3D line parameterization used to represent such lines in state-of-the-art V-SLAM works with lines.
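
To make the DoF counting concrete (this is only an illustration; the paper's own parameterizations are developed in the full text): a line confined to the ground plane z = 0 can be described by two parameters, e.g., the angle-distance form \(x \cos\theta + y \sin\theta = d\), whereas a general 3D line requires four parameters (e.g., in the orthonormal representation). A minimal Python sketch, with all names chosen for this example:

```python
# Illustrative DoF counting only; the paper's actual parameterizations
# are developed in the full text. A line confined to the ground plane
# z = 0 needs two parameters (theta, d): x*cos(theta) + y*sin(theta) = d.
import numpy as np

def ground_line_point(theta, d, t):
    """Return the 3D point at signed arc-length t along the 2-DoF
    ground line (theta, d), with the ground plane at z = 0."""
    n = np.array([np.cos(theta), np.sin(theta)])   # unit normal of the line
    v = np.array([-np.sin(theta), np.cos(theta)])  # unit direction of the line
    xy = d * n + t * v                             # closest point to origin + offset
    return np.array([xy[0], xy[1], 0.0])
```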


Author information


Contributions

M. Quan: Conceptualization, Methodology, Software, Validation, Investigation, Writing-Original Draft, Writing-Review and Editing, Visualization. S. Piao: Writing-Review and Editing, Supervision. Y. He: Conceptualization, Validation, Writing-Review and Editing. X. Liu: Validation, Supervision. M. Z. Qadir: Writing-Review and Editing.

Corresponding author

Correspondence to Songhao Piao.

Ethics declarations

Conflict of Interest

The authors declare that there are no conflicts of interest.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix: Preintegrated Wheel Odometer Measurements

Each wheel encoder measures the displacement \({\Delta} \tilde{d}_{k}\) traveled by the wheel between consecutive time-steps k − 1 and k, which is assumed to be corrupted by a discrete-time zero-mean Gaussian noise \(\eta_{w}\) with variance \(\sigma_{w}\):

$$ {\Delta} \tilde{d}_{l_{k}} = {\Delta} d_{l_{k}} + \eta_{w_{l}} , \ \ {\Delta} \tilde{d}_{r_{k}} = {\Delta} d_{r_{k}} + \eta_{w_{r}} $$
(41)

where the subscripts \((\cdot)_{l}\) and \((\cdot)_{r}\) denote the left and right wheel, respectively.

We assume that the robot undergoes planar motion between consecutive wheel encoder readings. Based on the circular motion constraint of each wheel, the relative rotation vector and translation between two consecutive wheel frames {O_{k−1}} and {O_k} measured by the wheel encoders are:

$$ \begin{array}{ll} \tilde{\boldsymbol{\theta}}^{O_{k-1}}_{O_{k}} &= \left[ \begin{array}{c} 0 \\ 0 \\ {\Delta} \tilde{\theta}_{k} \end{array} \right] = \boldsymbol{\theta}^{O_{k-1}}_{O_{k}} + \boldsymbol{\eta}_{\theta_{k}}\\ \tilde{\mathbf{p}}^{O_{k-1}}_{O_{k}} &= \left[ \begin{array}{c} {\Delta} \tilde{d}_{k} \cos \frac{\Delta \tilde{\theta}_{k}}{2} \\ {\Delta} \tilde{d}_{k} \sin \frac{\Delta \tilde{\theta}_{k}}{2} \\ 0 \end{array} \right] = \mathbf{p}^{O_{k-1}}_{O_{k}} + \boldsymbol{\eta}_{p_{k}} \end{array} $$
(42)

where \({\Delta} \tilde{\theta}_{k} = \frac{{\Delta} \tilde{d}_{r_{k}} - {\Delta} \tilde{d}_{l_{k}}}{b}\) and \({\Delta} \tilde{d}_{k} = \frac{{\Delta} \tilde{d}_{r_{k}} + {\Delta} \tilde{d}_{l_{k}}}{2}\) are the measured rotation angle and traveled distance, and b is the baseline length between the two wheels.
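
As a minimal sketch of how Eqs. (41)-(42) turn raw encoder displacements into a planar relative motion (the function and variable names here are assumptions for illustration, not the authors' implementation):

```python
# A minimal sketch of the measurement model in Eqs. (41)-(42);
# function and variable names are illustrative assumptions.
import numpy as np

def relative_motion(delta_d_l, delta_d_r, baseline):
    """Relative rotation vector and translation between consecutive
    wheel frames {O_{k-1}} and {O_k} from encoder displacements."""
    delta_theta = (delta_d_r - delta_d_l) / baseline  # rotation angle
    delta_d = (delta_d_r + delta_d_l) / 2.0           # traveled distance
    theta = np.array([0.0, 0.0, delta_theta])         # planar rotation vector
    p = np.array([delta_d * np.cos(0.5 * delta_theta),
                  delta_d * np.sin(0.5 * delta_theta),
                  0.0])                               # circular-motion translation
    return theta, p
```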

We define the transformation increment between non-consecutive frames i and j, expressed in the wheel frame {O_i}, as:

$$ \begin{array}{ll} \boldsymbol{\Delta} \mathbf{R}_{ij} &= \prod\limits_{k=i+1}^{j} \text{Exp}\left( \boldsymbol{\theta}^{O_{k-1}}_{O_{k}} \right) \\ \boldsymbol{\Delta} \mathbf{p}_{ij} &= \sum\limits_{k=i+1}^{j} \boldsymbol{\Delta} \mathbf{R}_{ik-1} \mathbf{p}^{O_{k-1}}_{O_{k}} \end{array} $$
(43)

From Eq. 43, we can obtain the preintegrated wheel odometer measurements as:

$$ \begin{array}{ll} \boldsymbol{\Delta} \tilde{\mathbf{R}}_{ij} &= \prod\limits_{k=i+1}^{j} \!\text{Exp}\left( \tilde{\boldsymbol{\theta}}^{O_{k-1}}_{O_{k}} \right) \\ \boldsymbol{\Delta} \tilde{\mathbf{p}}_{ij} &= \sum\limits_{k=i+1}^{j} \boldsymbol{\Delta} \tilde{\mathbf{R}}_{ik-1} \tilde{\mathbf{p}}^{O_{k-1}}_{O_{k}} \end{array} $$
(44)
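
A minimal sketch of the accumulation in Eq. (44) follows, assuming a helper so3_exp for the exponential map Exp(·) (Rodrigues' formula); all names are illustrative rather than the authors' code:

```python
# A minimal sketch of the preintegration in Eq. (44); names are
# illustrative assumptions, not the authors' implementation.
import numpy as np

def so3_exp(theta):
    """SO(3) exponential map Exp(.) of a rotation vector (Rodrigues)."""
    angle = np.linalg.norm(theta)
    if angle < 1e-10:
        return np.eye(3)
    a = theta / angle
    K = np.array([[0.0, -a[2], a[1]],
                  [a[2], 0.0, -a[0]],
                  [-a[1], a[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

def preintegrate(measurements):
    """Accumulate (theta_k, p_k) pairs from Eq. (42) into the
    preintegrated rotation and translation of Eq. (44)."""
    R_ij = np.eye(3)    # Delta R~_ii
    p_ij = np.zeros(3)  # Delta p~_ii
    for theta_k, p_k in measurements:
        p_ij = p_ij + R_ij @ p_k        # translation uses Delta R~_{i,k-1}
        R_ij = R_ij @ so3_exp(theta_k)  # then update to Delta R~_{ik}
    return R_ij, p_ij
```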

Then, the iterative propagation of the preintegrated measurement noise can be written in matrix form as:

$$ \begin{array}{ll} &\left[ \! \begin{array}{c} \boldsymbol{\delta} \boldsymbol{\xi}_{ik+1} \\ \boldsymbol{\delta} \boldsymbol{p}_{ik+1} \end{array} \! \right] = \left[ \! \begin{array}{cc} \boldsymbol{\Delta} \tilde{\mathbf{R}}_{kk+1}^{\text{T}} \! & \! \mathbf{0}_{3 \times 3} \\ -\boldsymbol{\Delta} \tilde{\mathbf{R}}_{ik} \left[ \tilde{\mathbf{p}}^{O_{k}}_{O_{k+1}} \right]_{\times} \! & \! \mathbf{I}_{3 \times 3} \end{array} \! \right] \left[ \! \begin{array}{c} \boldsymbol{\delta} \boldsymbol{\xi}_{ik} \\ \boldsymbol{\delta} \boldsymbol{p}_{ik} \end{array} \! \right] \\ &+ \left[ \! \begin{array}{cc} \mathbf{J}_{r_{k+1}} \! & \! \mathbf{0}_{3 \times 3} \\ \mathbf{0}_{3 \times 3} \! & \! \boldsymbol{\Delta} \tilde{\mathbf{R}}_{ik} \end{array} \! \right] \left[ \! \begin{array}{c} \boldsymbol{\eta}_{\theta_{k+1}} \\ \boldsymbol{\eta}_{p_{k+1}} \end{array} \! \right] = \mathbf{A}_{k} \mathbf{n}_{ik} + \mathbf{B}_{k} \boldsymbol{\eta}_{k+1} \end{array} $$
(45)

Therefore, given the covariance \(\boldsymbol{\Sigma}_{\eta_{k+1}} \in \mathbb{R}^{6 \times 6}\) of the measurement noise \(\boldsymbol{\eta}_{k+1}\), we can compute the covariance of the preintegrated wheel odometer measurement noise iteratively:

$$ \boldsymbol{\Sigma}_{O_{ik+1}} = \mathbf{A}_{k} \boldsymbol{\Sigma}_{O_{ik}} \mathbf{A}_{k}^{\text{T}} + \mathbf{B}_{k} \boldsymbol{\Sigma}_{\eta_{k+1}} \mathbf{B}_{k}^{\text{T}} $$
(46)

with initial condition \(\boldsymbol {\Sigma }_{O_{ii}} = \mathbf {0}_{6 \times 6}\).
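
A minimal sketch of one step of Eqs. (45)-(46) follows, assuming the state ordering \([\boldsymbol{\delta}\boldsymbol{\xi};\ \boldsymbol{\delta}\mathbf{p}]\); the helper names (skew, right_jacobian, propagate_covariance) are assumptions for illustration:

```python
# A minimal sketch of the noise propagation in Eqs. (45)-(46), assuming
# the state ordering [delta_xi; delta_p]; names are illustrative.
import numpy as np

def skew(v):
    """Skew-symmetric matrix [v]_x of a 3-vector."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def right_jacobian(theta):
    """Right Jacobian of SO(3) evaluated at rotation vector theta."""
    angle = np.linalg.norm(theta)
    K = skew(theta)
    if angle < 1e-10:
        return np.eye(3) - 0.5 * K
    return (np.eye(3)
            - (1.0 - np.cos(angle)) / angle**2 * K
            + (angle - np.sin(angle)) / angle**3 * (K @ K))

def propagate_covariance(Sigma_ik, R_ik, R_kk1, theta_kk1, p_kk1, Sigma_eta):
    """One step of Eq. (46): covariance of the preintegrated noise at k+1.

    Sigma_ik  : 6x6 covariance of [delta_xi_ik; delta_p_ik]
    R_ik      : preintegrated rotation Delta R~_ik
    R_kk1     : relative rotation Delta R~_{k,k+1}
    theta_kk1 : rotation vector measurement for step k -> k+1
    p_kk1     : translation measurement p~ between frames O_k and O_{k+1}
    Sigma_eta : 6x6 covariance of the raw noise [eta_theta; eta_p]
    """
    A = np.zeros((6, 6))
    A[:3, :3] = R_kk1.T              # rotation error propagation
    A[3:, :3] = -R_ik @ skew(p_kk1)  # rotation error leaks into position
    A[3:, 3:] = np.eye(3)
    B = np.zeros((6, 6))
    B[:3, :3] = right_jacobian(theta_kk1)
    B[3:, 3:] = R_ik
    return A @ Sigma_ik @ A.T + B @ Sigma_eta @ B.T  # Eq. (46)
```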

About this article

Cite this article

Quan, M., Piao, S., He, Y. et al. Monocular Visual SLAM with Points and Lines for Ground Robots in Particular Scenes: Parameterization for Lines on Ground. J Intell Robot Syst 101, 72 (2021). https://doi.org/10.1007/s10846-021-01315-3
