
Motion Guided LiDAR-Camera Self-calibration and Accelerated Depth Upsampling for Autonomous Vehicles

Published in: Journal of Intelligent & Robotic Systems

Abstract

This work proposes a novel motion-guided method for targetless self-calibration of a LiDAR and a camera, and uses the re-projection of LiDAR points onto the image reference frame for real-time depth upsampling. The calibration parameters are estimated by optimizing an objective function that penalizes distances between 2D and re-projected 3D motion vectors obtained from time-synchronized image and point-cloud sequences. For upsampling, a simple yet effective and time-efficient formulation is proposed that minimizes depth gradients subject to an equality constraint involving the LiDAR measurements. Validation on real data recorded in urban environments demonstrates that both methods are effective and suitable for mobile robotics and autonomous vehicle applications with real-time requirements.
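
The abstract states the two optimization problems without giving formulas. As a rough illustration only, and not the authors' implementation, the sketch below shows one plausible shape of the calibration objective: a 6-DoF extrinsic parameter vector is scored by how far the motion of re-projected 3D LiDAR points falls from the corresponding 2D image motion vectors. All names here (project, calibration_cost, theta, flow_2d) are hypothetical, and per-point 2D/3D motion correspondences are assumed to be given.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

def project(points_3d, R, t, K):
    """Re-project (N, 3) LiDAR points into the image plane (pinhole model)."""
    cam = points_3d @ R.T + t            # LiDAR frame -> camera frame
    uvw = cam @ K.T                      # camera frame -> homogeneous pixels
    return uvw[:, :2] / uvw[:, 2:3]      # perspective division

def calibration_cost(theta, pts_t0, pts_t1, flow_2d, K):
    """Sum of distances between 2D image motion vectors and the motion of
    re-projected 3D points, as a function of the 6-DoF extrinsics theta."""
    R = Rotation.from_rotvec(theta[:3]).as_matrix()
    t = theta[3:]
    motion_3d = project(pts_t1, R, t, K) - project(pts_t0, R, t, K)
    return np.sum(np.linalg.norm(motion_3d - flow_2d, axis=1))

# A derivative-free optimizer is a natural fit for this non-smooth cost, e.g.:
# theta_hat = minimize(calibration_cost, theta0,
#                      args=(pts_t0, pts_t1, flow_2d, K),
#                      method="Nelder-Mead").x
```

Similarly, the depth-upsampling formulation (minimize depth gradients subject to equality with the LiDAR samples) can be read as constrained Laplace smoothing. The plain Jacobi-style diffusion below is an assumption for illustration; the accelerated solver referenced in the title is not reproduced here.

```python
import numpy as np

def upsample_depth(sparse_depth, mask, n_iter=500):
    """Dense depth map from sparse LiDAR samples:
    minimize ||grad(D)||^2  subject to  D[mask] == sparse_depth[mask].
    Boundaries are handled naively via np.roll wrap-around; a real
    solver would treat them properly."""
    D = np.where(mask, sparse_depth, sparse_depth[mask].mean())
    for _ in range(n_iter):
        # 4-neighborhood average drives the discrete Laplacian toward zero
        avg = 0.25 * (np.roll(D, 1, 0) + np.roll(D, -1, 0) +
                      np.roll(D, 1, 1) + np.roll(D, -1, 1))
        D = np.where(mask, sparse_depth, avg)  # re-impose equality constraint
    return D
```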



Author information

Corresponding author

Correspondence to Juan Castorena.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Castorena, J., Puskorius, G.V. & Pandey, G. Motion Guided LiDAR-Camera Self-calibration and Accelerated Depth Upsampling for Autonomous Vehicles. J Intell Robot Syst 100, 1129–1138 (2020). https://doi.org/10.1007/s10846-020-01233-w
