ST-CSNN: a novel method for vehicle counting

  • Original Paper
  • Published:
Machine Vision and Applications

Abstract

Vehicle counting using computer vision techniques has the potential to alleviate traffic congestion in intelligent transportation systems. In this paper, we propose a novel method that counts vehicles in a human-like manner. The paper makes two main contributions. First, we propose ST-CSNN, an efficient, lightweight vehicle counting method. It counts vehicles by comparing their identities so that duplicate instances are omitted, and by combining this comparison with the spatio-temporal information between frames it both speeds up counting and improves its accuracy. Second, we strengthen the method by proposing an improved loss function built on a Siamese neural network. In addition, we conduct experiments on several datasets to evaluate both the proposed loss function for verification and the complete method for counting. The experimental results demonstrate the practicability of the method in real counting scenes.
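
The abstract describes counting by identity comparison: each detected vehicle is embedded by a shared-weight Siamese network, and a detection is counted only when it does not match a vehicle already seen in a neighbouring frame. The sketch below illustrates that general idea in PyTorch; it is not the authors' ST-CSNN implementation, and all names (SiameseBranch, contrastive_loss, count_new_vehicles), the margin, and the matching threshold are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SiameseBranch(nn.Module):
    """Shared-weight CNN branch mapping a vehicle crop to an L2-normalised embedding."""

    def __init__(self, embed_dim: int = 128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, embed_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.features(x).flatten(1)
        return F.normalize(self.fc(z), dim=1)


def contrastive_loss(z1, z2, same_vehicle, margin: float = 1.0):
    """Classic contrastive loss: pull same-identity pairs together and push
    different-identity pairs at least `margin` apart."""
    d = F.pairwise_distance(z1, z2)
    pos = same_vehicle * d.pow(2)
    neg = (1.0 - same_vehicle) * F.relu(margin - d).pow(2)
    return (pos + neg).mean()


def count_new_vehicles(net, crops_per_frame, threshold: float = 0.5):
    """Count a crop only if its embedding is farther than `threshold` from every
    embedding of the previous frame, i.e. it matches no vehicle already counted."""
    total, prev = 0, None
    net.eval()
    with torch.no_grad():
        for crops in crops_per_frame:          # crops: (N, 3, H, W) tensor per frame
            embeds = net(crops)
            if prev is None or prev.size(0) == 0:
                total += embeds.size(0)
            else:
                dists = torch.cdist(embeds, prev)   # (N_curr, N_prev) pairwise distances
                total += int((dists.min(dim=1).values > threshold).sum())
            prev = embeds
    return total


if __name__ == "__main__":
    net = SiameseBranch()
    # Two frames of random "vehicle crops": 3 in the first frame, 4 in the second.
    frames = [torch.randn(3, 3, 64, 64), torch.randn(4, 3, 64, 64)]
    print(count_new_vehicles(net, frames))
```

Comparing each crop only against the previous frame is a simplification; the spatio-temporal information mentioned in the abstract would further restrict which earlier detections a new crop is matched against, and the threshold would be tuned on a verification set.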

Author information

Corresponding author

Correspondence to Liantao Wang.

Ethics declarations

Conflicts of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This work was supported in part by the Fundamental Research Funds for the Central Universities (No. B200202213) and the Natural Science Foundation of Jiangsu Province (No. BK20201160). The corresponding author is Liantao Wang.

About this article

Cite this article

Yin, K., Wang, L. & Zhang, J. ST-CSNN: a novel method for vehicle counting. Machine Vision and Applications 32, 108 (2021). https://doi.org/10.1007/s00138-021-01233-2
