
Algorithm for the Design of a Three-Dimensional Map of the Environment with a Depth Camera

MATHEMATICAL MODELS AND COMPUTATIONAL METHODS

Journal of Communications Technology and Electronics

Abstract—A new algorithm for three-dimensional reconstruction of the surrounding environment is proposed. The algorithm creates accurate three-dimensional maps in real time with the help of an RGB-D depth camera. It can be used in autonomous mobile robotics, where a robot must localize itself in unknown environments by processing onboard sensor data without external reference systems such as a global positioning system. We analyze various combinations of common detectors and descriptors of visual features in terms of their recognition efficiency. To match the point clouds between consecutive frames, the iterative closest point (ICP) method is used. To improve the quality of the three-dimensional reconstruction, an adaptive approach to estimating the camera position is suggested. The proposed system efficiently processes complex scenes at a high frame rate, and its performance on available benchmark databases is comparable with that of state-of-the-art systems.
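The frame-to-frame alignment step mentioned in the abstract relies on the iterative closest point (ICP) method. The sketch below is a minimal point-to-point ICP in Python with NumPy, not the authors' implementation: it alternates nearest-neighbour correspondence search with a closed-form rigid-body fit (Kabsch/SVD). The function names and the brute-force matching are illustrative assumptions; a practical system would use a KD-tree, outlier rejection, and possibly a point-to-plane error metric.

```python
import numpy as np

def best_rigid_transform(src, dst):
    # Least-squares rigid transform (Kabsch/SVD) mapping src onto dst,
    # given one-to-one correspondences between the two point sets.
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

def icp(src, dst, iters=30, tol=1e-9):
    # Point-to-point ICP: repeat (match nearest neighbours, fit rigid
    # transform, apply it) until the mean residual stops improving.
    R_total, t_total = np.eye(3), np.zeros(3)
    cur = src.copy()
    prev_err = np.inf
    for _ in range(iters):
        # Brute-force nearest neighbours (a KD-tree is used in practice).
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(axis=-1)
        nn = d2.argmin(axis=1)
        err = np.sqrt(d2[np.arange(len(cur)), nn]).mean()
        R, t = best_rigid_transform(cur, dst[nn])
        cur = cur @ R.T + t
        # Compose the increment with the accumulated transform.
        R_total, t_total = R @ R_total, R @ t_total + t
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return R_total, t_total
```

For small inter-frame motion, as between consecutive RGB-D frames, the nearest-neighbour correspondences are mostly correct and the loop converges in a few iterations; a feature-based initial estimate (as the paper's detector/descriptor analysis suggests) is what makes this assumption safe.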



Funding

The work was supported by the Russian Foundation for Basic Research, project nos. 18-08-00782 and 18-07-00963.

Author information

Correspondence to V. I. Kober.

Additional information

Translated by E. Smirnova


Cite this article

Ortiz-Gonzalez, A., Kober, V.I., Karnaukhov, V.N. et al. Algorithm for the Design of a Three-Dimensional Map of the Environment with a Depth Camera. J. Commun. Technol. Electron. 65, 690–697 (2020). https://doi.org/10.1134/S1064226920060224
