Metric learning with generator for closed loop detection in VSLAM

  • Special Issue Paper
  • Published in: Journal of Real-Time Image Processing

Abstract

The development of driverless cars, unmanned aerial vehicles, human–computer interaction, and artificial intelligence has promoted the Internet of Things (IoT) industry, in which Visual Simultaneous Localization and Mapping (VSLAM) is an important localization and mapping technique. Closed loop detection alleviates the error that accumulates during the operation of VSLAM. Traditional closed loop detection methods mostly rely on manually defined features, which are subjective and unstable and therefore struggle to cope with complex and repetitive scenarios. Thus, triplet loss-based metric learning has been considered a better solution for closed loop detection. In this paper, a generator is first constructed to produce feature vectors of hard negative samples. Second, a triplet loss and a generative loss are combined to construct the loss function. The trained model converts keyframes into feature vectors, and the similarity of two keyframes is evaluated by the distance between their feature vectors, which determines whether a closed loop is formed. Finally, the TUM dataset is introduced to evaluate the precision and recall of the proposed metric learning, and the trained model is applied to the loop closing thread of a VSLAM system. The experimental results illustrate the feasibility and effectiveness of metric learning-based closed loop detection, which can be further applied to practical VSLAM systems.
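The core mechanics the abstract describes can be sketched in a few lines: a hinge-style triplet loss that pulls an anchor keyframe toward a positive (same place) embedding and pushes it away from a negative (different place) embedding, plus a distance test on keyframe embeddings to flag loop-closure candidates. This is a minimal illustration, not the paper's implementation; the Euclidean distance, the margin of 0.2, and the threshold of 0.5 are assumed values chosen for the example.

```python
import math


def euclidean(u, v):
    """Euclidean distance between two embedding vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))


def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge loss: penalize when the positive is not closer to the
    anchor than the negative by at least `margin`."""
    return max(0.0, euclidean(anchor, positive) - euclidean(anchor, negative) + margin)


def is_loop_closure(f1, f2, threshold=0.5):
    """Declare a loop-closure candidate when two keyframe embeddings
    are closer than the (assumed) threshold."""
    return euclidean(f1, f2) < threshold


# An "easy" negative far from the anchor contributes zero loss;
# a hard negative nearly as close as the positive yields a positive loss,
# which is why the paper generates hard negatives to keep training informative.
easy = triplet_loss([0.0, 0.0], [0.0, 1.0], [3.0, 0.0])   # 0.0
hard = triplet_loss([0.0, 0.0], [0.0, 1.0], [0.0, 1.1])   # ~0.1
```

In the paper's setting, the generator synthesizes hard-negative feature vectors so that `triplet_loss` stays informative, and the generative loss is added on top of the triplet term during training.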




Acknowledgements

This work was supported by the National Natural Science Foundation of China under Grant No. 61773282. The authors sincerely thank the anonymous reviewers for their valuable comments.

Author information

Corresponding author: Na Dong.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Chang, J., Dong, N., Li, D. et al. Metric learning with generator for closed loop detection in VSLAM. J Real-Time Image Proc 18, 1025–1036 (2021). https://doi.org/10.1007/s11554-020-01067-7

