
Nighttime trajectory extraction framework for traffic investigations at intersections based on improved SSD and DeepSort

  • Original Paper
  • Published:
Signal, Image and Video Processing

Abstract

Nighttime trajectory data of traffic objects at intersections are of great value for traffic investigations, but nighttime videos recorded by traffic surveillance typically contain many interferences that make trajectory information difficult to extract. This paper presents a nighttime trajectory extraction framework that combines a Single Shot Multi-Box Detector (SSD) with improved modules and the Simple Online and Real-time Tracking with a Deep Association Metric (DeepSort) algorithm to collect trajectory data from nighttime traffic videos recorded by roadside surveillance cameras at intersections. The improved SSD detects traffic objects at the intersection under various degrees of visibility; the object locations and bounding boxes it provides are post-processed and then used as the input to DeepSort. Performance is evaluated on several datasets (MOT16, 120 m-visibility, 100 m-visibility, 70 m-visibility and 50 m-visibility). On MOT16, the framework achieves 12.80% ML and 366 IDs; on the 120 m-visibility dataset, 14.72% ML and 430 IDs; on the 100 m-visibility dataset, 15.22% ML and 466 IDs; on the 70 m-visibility dataset, 43.06% MOTA, 27.74% MT, 16.56% ML and 512 IDs; and on the 50 m-visibility dataset, 32.86% MOTA, 20.08% MT, 18.48% ML and 562 IDs. The framework also performs well in several challenging situations. These results indicate that the presented framework is well suited to traffic investigations at night.
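For readers unfamiliar with the tracking measures quoted above: MOTA, MT, ML and IDs are the standard CLEAR MOT / MOTChallenge metrics. MOTA = 1 − (FN + FP + IDsw) / GT, where FN, FP and IDsw count missed objects, false detections and identity switches and GT counts ground-truth objects; MT (mostly tracked) is the fraction of ground-truth trajectories covered for at least 80% of their length, ML (mostly lost) the fraction covered for at most 20%, and IDs the number of identity switches.

The sketch below is not the authors' implementation; it is a minimal, self-contained Python illustration of the per-frame detect, post-process and track loop the abstract describes. The improved SSD is stubbed out as plain detection records, post-processing is reduced to a confidence filter plus greedy non-maximum suppression, and a greedy IoU matcher stands in for DeepSort's Kalman-filter and appearance-based association. All identifiers (Detection, post_process, SimpleTracker) are hypothetical.

# Minimal sketch (not the authors' code) of a detect -> post-process -> track loop
# in the spirit of the paper's improved SSD + DeepSort pipeline.
from dataclasses import dataclass
from typing import Dict, List, Tuple
import itertools

@dataclass
class Detection:
    box: Tuple[float, float, float, float]  # (x1, y1, x2, y2) in pixels
    score: float
    label: str

def iou(a, b) -> float:
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

def post_process(dets: List[Detection], score_thr=0.5, nms_thr=0.45) -> List[Detection]:
    """Confidence filtering followed by greedy non-maximum suppression."""
    dets = sorted((d for d in dets if d.score >= score_thr),
                  key=lambda d: d.score, reverse=True)
    kept: List[Detection] = []
    for d in dets:
        if all(iou(d.box, k.box) < nms_thr for k in kept):
            kept.append(d)
    return kept

class SimpleTracker:
    """Greedy IoU tracker: a simplified stand-in for DeepSort's association step."""
    def __init__(self, iou_thr=0.3):
        self.iou_thr = iou_thr
        self.tracks: Dict[int, Detection] = {}   # track_id -> last matched detection
        self._next_id = itertools.count(1)

    def update(self, dets: List[Detection]) -> Dict[int, Detection]:
        assigned: Dict[int, Detection] = {}
        unmatched = list(dets)
        for tid, last in self.tracks.items():
            if not unmatched:
                break
            best = max(unmatched, key=lambda d: iou(d.box, last.box))
            if iou(best.box, last.box) >= self.iou_thr:
                assigned[tid] = best
                unmatched.remove(best)
        for d in unmatched:                      # start new tracks for leftovers
            assigned[next(self._next_id)] = d
        self.tracks = assigned
        return assigned

In an actual deployment, the detections for each frame would come from the improved SSD, and the track_id to detection mapping returned per frame would be accumulated over time to form the object trajectories used in the traffic investigation.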


Data availability

The experimental data used to support the findings of this study are available from the corresponding author upon request.


Funding

This work was supported in part by the National Natural Science Foundation of China (grant number 52272344), the Key Research and Development Program of Jiangsu Province (grant numbers BE2019713 and BE2018754) and the Fundamental Research Funds for the Central Universities.

Author information

Authors and Affiliations

Authors

Contributions

Acquisition of data: XH, QZ. Conception and design of study: QZ. Analysis and/or interpretation of data: XH, QZ. Drafting the manuscript: XH, QZ. We confirm that the manuscript has been read and approved by all named authors. All authors reviewed the manuscript.

Corresponding author

Correspondence to Qiang Zhang.

Ethics declarations

Conflict of interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Hu, X., Zhang, Q. Nighttime trajectory extraction framework for traffic investigations at intersections based on improved SSD and DeepSort. SIViP 17, 2907–2914 (2023). https://doi.org/10.1007/s11760-023-02511-4


  • Received:

  • Revised:

  • Accepted:

  • Published:

  • Issue Date:

  • DOI: https://doi.org/10.1007/s11760-023-02511-4

