Predicting vehicle collisions using data collected from video games

  • Original Paper
  • Published in Machine Vision and Applications

Abstract

Training a deep learning model to identify dangerous vehicles requires a large amount of labeled accident data, yet sufficient accident data are difficult to collect in the real world. To address this challenge, we introduce a driving-simulator-based data generator that can produce a wide variety of accident scenarios on demand. Furthermore, to reduce the gap between synthetic and real data, we propose a new domain adaptation algorithm that refines both features and labels. Extensive experiments on real data demonstrate that our dangerous-vehicle classifier reduces the missed detection rate by at least 23.2%, compared to classifiers trained only on scarce real data, in a scenario of interest where the time-to-collision is 1.6–1.8 s. We also find that our algorithm can identify various accident-related factors (such as wheel angles, vehicle orientations, and velocities of nearby vehicles), enabling high prediction accuracy for complex accident scenes.
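
For readers unfamiliar with the quantities reported above, the sketch below illustrates, under a simple constant-velocity assumption, how a time-to-collision (TTC) value, a binary danger label, and a missed-detection rate can be computed. The function names, the 1.8-s threshold, and the constant-velocity heuristic are illustrative assumptions for exposition only; they are not the authors' implementation, which is available in the repository linked in Note 1.

```python
import numpy as np

def time_to_collision(rel_position, rel_velocity):
    """Constant-velocity time-to-collision (in seconds) with a nearby vehicle.

    rel_position: 2-D vector from the ego vehicle to the other vehicle (m).
    rel_velocity: 2-D velocity of the other vehicle relative to ego (m/s).
    Returns np.inf when the vehicles are not closing in on each other.
    """
    distance = float(np.linalg.norm(rel_position))
    # Closing speed: the rate at which the separation distance shrinks.
    closing_speed = -float(np.dot(rel_position, rel_velocity)) / max(distance, 1e-6)
    return distance / closing_speed if closing_speed > 0.0 else np.inf

def label_dangerous(rel_position, rel_velocity, ttc_threshold=1.8):
    """Binary danger label: 1 if the predicted TTC falls within the threshold."""
    return int(time_to_collision(rel_position, rel_velocity) <= ttc_threshold)

def missed_detection_rate(y_true, y_pred):
    """Fraction of truly dangerous vehicles that the classifier fails to flag."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    dangerous = y_true == 1
    return float(np.mean(y_pred[dangerous] == 0)) if dangerous.any() else 0.0

# A vehicle 20 m ahead closing at 12 m/s: TTC ~ 1.67 s, inside the 1.6-1.8 s window.
print(time_to_collision(np.array([20.0, 0.0]), np.array([-12.0, 0.0])))  # ~1.667
print(label_dangerous(np.array([20.0, 0.0]), np.array([-12.0, 0.0])))    # 1
print(missed_detection_rate([1, 1, 0, 1], [1, 0, 0, 1]))                 # ~0.333
```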

Notes

  1. Code and dataset: https://github.com/gnsrla12/predicting-vehicle-collisions-using-data-collected-from-video-games.

  2. More precisely, each scene may contain at most 18 samples per vehicle, since samples are removed when the dangerous vehicle is not visible in the scene (e.g., due to occlusion).

References

  1. Bojarski, M., Testa, D.D., Dworakowski, D., Firner, B., Flepp, B., Goyal, P., Jackel, L.D., Monfort, M., Muller, U., Zhang, J., Zhang, X., Zhao, J., Zieba, K.: End to end learning for self-driving cars. arXiv:1604.07316 (2016)

  2. Bojarski, M., Yeres, P., Choromanska, A., Choromanski, K., Firner, B., Jackel, L., Muller, U.: Explaining how a deep neural network trained with end-to-end learning steers a car. arXiv:1704.07911 (2017)

  3. Bousmalis, K., Silberman, N., Dohan, D., Erhan, D., Krishnan, D.: Unsupervised pixel-level domain adaptation with generative adversarial networks. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017)

  4. CarCrashesTime: Car Crashes Time. https://www.youtube.com/user/CarCrashesTime (2014). Accessed 1 Sept 2018

  5. Chan, F.H., Chen, Y.T., Xiang, Y., Sun, M.: Anticipating accidents in dashcam videos. In: Computer Vision—ACCV 2016, pp. 136–153 (2017)

  6. Chen, C., Seff, A., Kornhauser, A., Xiao, J.: Deepdriving: learning affordance for direct perception in autonomous driving. In: IEEE International Conference on Computer Vision (ICCV), pp. 2722–2730 (2015)

  7. Deo, N., Rangesh, A., Trivedi, M.M.: How would surround vehicles move? A unified framework for maneuver classification and motion prediction. IEEE Trans. Intell. Veh. 3(2), 129–140 (2018)

  8. Geiger, A., Lenz, P., Urtasun, R.: Are we ready for autonomous driving? the KITTI vision benchmark suite. In: 2012 IEEE Conference on Computer Vision and Pattern Recognition, pp. 3354–3361 (2012)

  9. Herzig, R., Levi, E., Xu, H., Gao, H., Brosh, E., Wang, X., Globerson, A., Darrell, T.: Spatio-temporal action graph networks. In: Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops (2019)

  10. Houénou, A., Bonnifait, P., Cherfaoui, V.: Risk assessment for collision avoidance systems. In: IEEE 17th International Conference on Intelligent Transportation Systems (ITSC), pp. 386–391 (2014)

  11. Kahn, G., Villaflor, A., Pong, V., Abbeel, P., Levine, S.: Uncertainty-aware reinforcement learning for collision avoidance. arXiv:1702.01182 (2017)

  12. Kim, H., Lee, K., Hwang, G., Suh, C.: Crash to not crash: learn to identify dangerous vehicles using a simulator. Proceedings of the AAAI Conference on Artificial Intelligence 33, 978–985 (2019)

  13. Kim, H., Lee, K., Hwang, G., Suh, C.: Crash to not crash: learn to identify dangerous vehicles using a simulator. https://sites.google.com/view/crash-to-not-crash (2018). Accessed 1 Nov 2018

  14. Kingma, D.P., Ba, J.: Adam: a method for stochastic optimization. In: International Conference on Learning Representations (ICLR) (2015)

  15. Le, T.N., Ono, S., Sugimoto, A., Kawasaki, H.: Attention R-CNN for accident detection. In: 2020 IEEE Intelligent Vehicles Symposium (IV), pp. 313–320 (2020)

  16. Lee, K., Kim, H., Suh, C.: Simulated+unsupervised learning with adaptive data generation and bidirectional mappings. In: International Conference on Learning Representations (ICLR) (2018)

  17. Lefèvre, S., Vasquez, D., Laugier, C.: A survey on motion prediction and risk assessment for intelligent vehicles. ROBOMECH J 1(1), 1 (2014)

  18. Liu, W., Anguelov, D., Erhan, D., Szegedy, C., Reed, S., Fu, C.Y., Berg, A.C.: SSD: single shot multibox detector. In: Computer Vision—ECCV 2016, pp. 21–37 (2016)

  19. Pomerleau, D.A.: ALVINN: an autonomous land vehicle in a neural network. In: Advances in Neural Information Processing Systems, pp. 305–313 (1989)

  20. Raut, S.B., Malik, L.G.: Survey on vehicle collision prediction in VANET. In: IEEE International Conference on Computational Intelligence and Computing Research (ICCIC), pp. 1–5 (2014)

  21. Redmon, J., Farhadi, A.: YOLO9000: better, faster, stronger. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 6517–6525 (2017)

  22. Ren, S., He, K., Girshick, R., Sun, J.: Faster R-CNN: towards real-time object detection with region proposal networks. Adv. Neural Inf. Process. Syst. 28, 91–99 (2015)

  23. Richter, S.R., Vineet, V., Roth, S., Koltun, V.: Playing for data: ground truth from computer games. In: Computer Vision—ECCV 2016, pp. 102–118 (2016)

  24. Schubert, R., Richter, E., Wanielik, G.: Comparison and evaluation of advanced motion models for vehicle tracking. In: 11th International Conference on Information Fusion, pp. 1–6 (2008)

  25. Shrivastava, A., Pfister, T., Tuzel, O., Susskind, J., Wang, W., Webb, R.: Learning from simulated and unsupervised images through adversarial training. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2242–2251 (2017)

  26. Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv:1409.1556 (2014)

  27. Sivaraman, S., Trivedi, M.M.: Looking at vehicles on the road: a survey of vision-based vehicle detection, tracking, and behavior analysis. IEEE Trans. Intell. Transp. Syst. 14(4), 1773–1795 (2013)

  28. Suzuki, T., Aoki, Y., Kataoka, H.: Pedestrian near-miss analysis on vehicle-mounted driving recorders. In: Fifteenth IAPR International Conference on Machine Vision Applications (MVA), pp. 416–419 (2017)

  29. Thorsson, J.L., Steinert, O.: Neural networks for collision avoidance. Master’s thesis, Chalmers University of Technology (2016)

  30. Tzutalin: LabelImg. https://github.com/tzutalin/labelImg (2015). Accessed 1 Sept 2018

  31. Wang, N., Yeung, D.Y.: Learning a deep compact image representation for visual tracking. Adv. Neural Inf. Process. Syst. 26, 809–817 (2013)

  32. Wang, Y., Kato, J.: Collision risk rating of traffic scene from dashboard cameras. In: International Conference on Digital Image Computing: Techniques and Applications (DICTA), pp. 1–6 (2017)

  33. Yao, Y., Xu, M., Wang, Y., Crandall, D.J., Atkins, E.M.: Unsupervised traffic accident detection in first-person videos. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 273–280 (2019)

  34. You, T., Han, B.: Traffic accident benchmark for causality recognition. In: European Conference on Computer Vision, pp. 540–556 (2020)

  35. Zhu, J.Y., Park, T., Isola, P., Efros, A.A.: Unpaired image-to-image translation using cycle-consistent adversarial networks. In: IEEE International Conference on Computer Vision (ICCV), pp. 2242–2251 (2017)

Acknowledgements

This material is based upon work supported by the Air Force Office of Scientific Research under Award Number FA2386-19-1-4050.

Author information

Corresponding author

Correspondence to Changho Suh.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This work was presented in part at the Association for the Advancement of Artificial Intelligence (AAAI) Conference, 2019 [12].

About this article

Cite this article

Kim, H., Lee, K., Hwang, G. et al. Predicting vehicle collisions using data collected from video games. Machine Vision and Applications 32, 93 (2021). https://doi.org/10.1007/s00138-021-01217-2
