
PDCAT: a framework for fast, robust, and occlusion resilient fiducial marker tracking

  • Original Research Paper
  • Published in: Journal of Real-Time Image Processing

A Correction to this article was published on 18 September 2020


Abstract

Square binary patterns have become the de facto fiducial marker for most computer vision applications. Existing tracking solutions suffer from a number of limitations, such as low frame rates and sensitivity to partial occlusion. This work aims at overcoming these limitations by exploiting temporal information in video sequences. We propose a parallel detection, compensation, and tracking (PDCAT) framework, which can be integrated into any binary marker system. Our solution is capable of recovering markers even when they become mostly occluded. Furthermore, the low processing time of the tracking task makes PDCAT more than an order of magnitude faster than a track-by-detect solution. This is particularly important for embedded computer vision applications, in which detection runs at a very low frame rate. In the experiments conducted on an embedded computer, the track-by-detect solution achieved merely 11 FPS, whereas our solution processed more than 100 FPS.
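The abstract only names the parallel detection, compensation, and tracking scheme without detailing its implementation. The toy sketch below illustrates the general idea such a scheme rests on: a slow, accurate detector runs in a background thread while a cheap tracker updates the marker estimate every frame, and finished detections are folded back into the tracked state. All class and function names here are illustrative assumptions, not the paper's API, and the state is a scalar stand-in for real marker corners.

```python
import queue
import threading
import time


class ParallelDetectTrack:
    """Toy sketch (not the paper's implementation): a slow detector runs
    in a background thread while a cheap per-frame tracker updates the
    marker estimate.  Whenever a fresh detection finishes, it replaces
    the tracked state before the next tracking step."""

    def __init__(self, detect_fn, track_fn):
        self.detect_fn = detect_fn   # slow, accurate: frame -> state or None
        self.track_fn = track_fn     # fast, drift-prone: (state, frame) -> state
        self.state = None            # current marker estimate (None = lost)
        self._jobs = queue.Queue(maxsize=1)    # frames handed to the detector
        self._results = queue.Ueue() if False else queue.Queue()  # finished detections
        threading.Thread(target=self._detector_loop, daemon=True).start()

    def _detector_loop(self):
        # Background thread: consumes frames and publishes detections.
        while True:
            frame = self._jobs.get()
            detection = self.detect_fn(frame)  # may take many frame periods
            if detection is not None:
                self._results.put(detection)

    def process(self, frame):
        # 1. Fold in any detection that completed since the last frame.
        while not self._results.empty():
            self.state = self._results.get()
        # 2. Cheap per-frame tracking step on the current estimate.
        if self.state is not None:
            self.state = self.track_fn(self.state, frame)
        # 3. Hand the detector a new frame if it is idle.
        try:
            self._jobs.put_nowait(frame)
        except queue.Full:
            pass  # detector still busy; skip this frame
        return self.state
```

Because `process` never blocks on `detect_fn`, the per-frame cost is only the tracking step, which is why a scheme like this can run an order of magnitude faster than detecting in every frame. A real system would also compensate for the frames that elapse while a detection is in flight, which this sketch omits.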


Figs. 1–15




Author information


Corresponding author

Correspondence to Oualid Araar.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

The original version of the article was revised: “In the original publication of the article, the family name of the 1st author has been changed to Araar”.


About this article


Cite this article

Araar, O., Mokhtari, I.E. & Bengherabi, M. PDCAT: a framework for fast, robust, and occlusion resilient fiducial marker tracking. J Real-Time Image Proc 18, 691–702 (2021). https://doi.org/10.1007/s11554-020-01010-w



Keywords

Navigation