Qualitative vision-based navigation based on sloped funnel lane concept

  • Original Research Paper
  • Published in: Intelligent Service Robotics

Abstract

Funnel lane is a map-less visual navigation technique that qualitatively follows a path previously recorded by a camera. Unlike some other methods, it requires no computation relating world coordinates to image coordinates. However, the funnel lane has two shortcomings. First, it provides no information about the robot's radius of rotation. This reduces the robot's maneuverability and, on some occasions, prevents it from correcting its path when a deviation occurs. Second, the funnel lane constraints sometimes cannot distinguish between forward and turning motion while the robot is inside the funnel lane, and simply command the robot to go forward. This keeps the robot from following the desired path and leads to failure of its mission. This paper introduces the sloped funnel lane technique to address these shortcomings. It sets the rotation radius based on the observed frames, and it reduces the ambiguity between translation and rotation. As a result, the robot can follow any desired path, leading to more robust and accurate navigation. Experimental results in challenging scenarios on a real ground robot demonstrate the effectiveness of the sloped funnel lane technique.
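For context, the baseline funnel lane test (due to Chen and Birchfield) compares the horizontal image coordinate of each tracked feature in the live frame against its coordinate in the destination keyframe of the taught path: the robot is inside the funnel lane when every feature stays on the same side of the image centre as in the keyframe and closer to the centre than it was there. The sketch below illustrates that qualitative decision rule only; the per-feature voting, the dead_zone_px threshold, and the left/right sign convention are assumptions of this sketch, not details from the paper.

```python
import numpy as np

def funnel_lane_command(u_current, u_destination, dead_zone_px=5.0):
    """Qualitative funnel-lane steering (simplified sketch, not the paper's code).

    u_current     -- horizontal pixel coordinates (image centre = 0) of the
                     tracked features in the live camera frame
    u_destination -- coordinates of the same features in the stored
                     destination keyframe of the taught path
    Returns 'forward', 'left', or 'right'.
    """
    votes = []
    for uc, ud in zip(u_current, u_destination):
        same_side = np.sign(uc) == np.sign(ud)   # funnel-lane constraint 1
        inside = abs(uc) < abs(ud)               # funnel-lane constraint 2
        if same_side and inside:
            votes.append(0.0)        # feature satisfied: no correction needed
        else:
            votes.append(uc - ud)    # signed violation of the funnel lane
    if not votes:
        return 'forward'             # no features tracked: keep current behaviour
    mean_violation = float(np.mean(votes))
    if abs(mean_violation) < dead_zone_px:
        return 'forward'
    # Sign convention is an assumption of this sketch: features sitting to the
    # right of their keyframe positions are taken to mean "turn right".
    return 'right' if mean_violation > 0 else 'left'
```

In a teach-and-repeat loop, a rule like this would run once per frame on features tracked (e.g., with KLT) between the live image and the current keyframe, advancing to the next keyframe as the features converge. Note that the command is purely qualitative, which is exactly the limitation the paper targets: it says nothing about the radius of the turn.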



Acknowledgements

The authors would like to thank the members of the Artificial Intelligence laboratory for their support.

Author information


Correspondence to Mohamad Mahdi Kassir.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.



About this article


Cite this article

Kassir, M.M., Palhang, M. & Ahmadzadeh, M.R. Qualitative vision-based navigation based on sloped funnel lane concept. Intel Serv Robotics 13, 235–250 (2020). https://doi.org/10.1007/s11370-019-00308-4


Keywords

Navigation