Efficient and Fast Traffic Congestion Classification Based on Video Dynamics and Deep Residual Network

  • Conference paper
  • In: Frontiers of Computer Vision (IW-FCV 2020)

Abstract

Real-time operation and robustness against illumination variation are two essential, and still challenging, requirements for traffic congestion classification systems. This paper proposes an efficient automated system for traffic congestion classification based on a compact image representation and deep residual networks. The proposed system comprises three steps: video dynamics extraction, feature extraction, and classification. In the first step, we propose two approaches for modeling the dynamics of each video and producing a compact representation: the first aggregates the optical flow in the forward direction, while the second uses a temporal pooling method to generate a dynamic image that describes the input video. In the second step, we use a deep residual neural network to extract texture features from the compact representation of each video. In the third step, we build a classification model to discriminate between the traffic congestion classes (low, medium, or high). We assess the performance of the proposed method on the UCSD and NU1 traffic congestion datasets, which contain different illumination and shadow variations. The proposed method gives excellent results compared with state-of-the-art methods, and it classifies the input video quickly (37 fps), so it can be used in real-time applications.
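
To make the three-step pipeline concrete, below is a minimal sketch, assuming OpenCV, PyTorch/torchvision, and scikit-learn are available. It collapses a clip into a single flow-aggregation image (step 1), extracts a descriptor with a ResNet-50 trunk (step 2), and fits a simple classifier over the three congestion classes (step 3). The function names, the Farnebäck flow estimator, ResNet-50, and the logistic-regression classifier are illustrative choices, not the authors' implementation; the paper's second variant would replace step 1 with a temporal-pooling (dynamic image) representation.

```python
# Minimal sketch of the three-step pipeline (illustrative, not the authors' code).
import cv2
import numpy as np
import torch
import torchvision
from sklearn.linear_model import LogisticRegression


def aggregate_flow(video_path, size=(224, 224)):
    """Step 1: collapse a video into one image by accumulating optical-flow magnitude."""
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    prev = cv2.cvtColor(cv2.resize(frame, size), cv2.COLOR_BGR2GRAY)
    acc = np.zeros(size[::-1], dtype=np.float32)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(cv2.resize(frame, size), cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        acc += np.linalg.norm(flow, axis=2)  # per-pixel motion magnitude
        prev = gray
    cap.release()
    acc = cv2.normalize(acc, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    return cv2.cvtColor(acc, cv2.COLOR_GRAY2RGB)  # compact video representation


# Step 2: ResNet-50 trunk (final fc removed) as a fixed feature extractor.
# Weight-loading API names vary with the torchvision version.
resnet = torchvision.models.resnet50(weights="IMAGENET1K_V1")
resnet.fc = torch.nn.Identity()
resnet.eval()


def extract_features(image):
    x = torch.from_numpy(image).permute(2, 0, 1).float().unsqueeze(0) / 255.0
    with torch.no_grad():
        return resnet(x).squeeze(0).numpy()  # 2048-D descriptor per clip


# Step 3: any lightweight classifier over the descriptors (logistic regression here).
# train_videos and train_labels (0 = low, 1 = medium, 2 = high) are placeholders.
train_videos, train_labels = ["clip_low.avi", "clip_high.avi"], [0, 2]
X = np.stack([extract_features(aggregate_flow(v)) for v in train_videos])
clf = LogisticRegression(max_iter=1000).fit(X, train_labels)
```

Because the whole clip is reduced to one image before the CNN, only a single forward pass is needed per video, which is consistent with the short per-video classification time reported above.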


Author information

Corresponding author

Correspondence to Mohamed A. Abdelwahab.

Copyright information

© 2020 Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Abdelwahab, M.A., Abdel-Nasser, M., Taniguchi, Ri. (2020). Efficient and Fast Traffic Congestion Classification Based on Video Dynamics and Deep Residual Network. In: Ohyama, W., Jung, S. (eds) Frontiers of Computer Vision. IW-FCV 2020. Communications in Computer and Information Science, vol 1212. Springer, Singapore. https://doi.org/10.1007/978-981-15-4818-5_1

  • DOI: https://doi.org/10.1007/978-981-15-4818-5_1

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-15-4817-8

  • Online ISBN: 978-981-15-4818-5

  • eBook Packages: Computer Science (R0)
