
Real-time dense 3D reconstruction and camera tracking via embedded planes representation

  • Original article
  • The Visual Computer

Abstract

This paper proposes a novel approach to robust plane matching and real-time RGB-D fusion based on a plane-parameter-space representation. In contrast to previous plane-based SLAM algorithms, which estimate correspondences for each plane pair independently, our method exploits the holistic topology of all relevant planes. By adopting a low-dimensional parameter-space representation, plane matching can be intuitively reformulated and solved as a point cloud registration problem. Beyond estimating plane correspondences, we contribute an efficient optimization framework that employs both frame-to-frame and frame-to-model planar consistency constraints. We also propose a global plane map that dynamically represents the reconstructed scene and alleviates the accumulated error in camera pose tracking. We validate the proposed algorithm on standard benchmark datasets and on additional challenging real-world environments. The experimental results demonstrate that it outperforms current state-of-the-art methods in tracking robustness and reconstruction fidelity.
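The reformulation of plane matching as point cloud registration can be sketched as follows. This is an illustrative approximation, not the authors' implementation: the helpers `planes_to_points` and `match_planes` are hypothetical names, the embedding of a plane (n, d) as the point d·n is one common low-dimensional parameterization (the foot of the perpendicular from the origin), and the greedy nearest-neighbor matching stands in for a full registration step.

```python
import numpy as np

def planes_to_points(normals, offsets):
    """Embed each plane (n, d), with unit normal n and offset d,
    as the 3D point d * n — the foot of the perpendicular from
    the origin onto the plane."""
    return offsets[:, None] * normals

def match_planes(src_pts, dst_pts, max_dist=0.2):
    """Greedy nearest-neighbor matching of embedded plane points.

    Returns a list of (src_index, dst_index) pairs whose embedded
    points lie within max_dist of each other, each destination
    plane used at most once."""
    matches = []
    used = set()
    for i, p in enumerate(src_pts):
        dists = np.linalg.norm(dst_pts - p, axis=1)
        j = int(np.argmin(dists))
        if dists[j] < max_dist and j not in used:
            matches.append((i, j))
            used.add(j)
    return matches

# Example: three axis-aligned planes matched against themselves.
normals = np.eye(3)                      # unit normals along x, y, z
offsets = np.array([1.0, 2.0, 3.0])      # plane offsets from the origin
src = planes_to_points(normals, offsets)
print(match_planes(src, src))            # → [(0, 0), (1, 1), (2, 2)]
```

Once correspondences are found in this point representation, a standard rigid registration (e.g. an ICP-style alignment) over the matched points yields the relative camera motion, which is the intuition behind treating plane matching as a registration problem.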





Acknowledgements

Funding was provided by NSFC (Grant Nos. 61972298, 61672390), the National Key Research and Development Program of China (Grant No. 2017YFB1002600), and the Key Technological Innovation Projects of Hubei Province (Grant No. 2018AAA062).

Author information


Corresponding author

Correspondence to Chunxia Xiao.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supporting Information: 371_2020_1899_MOESM1_ESM.pdf


About this article


Cite this article

Fu, Y., Yan, Q., Liao, J. et al. Real-time dense 3D reconstruction and camera tracking via embedded planes representation. Vis Comput 36, 2215–2226 (2020). https://doi.org/10.1007/s00371-020-01899-1

