
Improving Surveillance Object Detection with Adaptive Omni-Attention over Both Inter-frame and Intra-frame Context

  • Conference paper

Part of the book series: Lecture Notes in Computer Science ((LNCS,volume 13842))

Abstract

Surveillance object detection is a challenging and practical sub-branch of object detection. Factors such as lighting variations, smaller objects, and motion blur in video frames degrade detection results; on the other hand, the temporal information and stable background of a surveillance video are major advantages that do not exist in generic object detection. In this paper, we propose an adaptive omni-attention model for surveillance object detection, which effectively and efficiently integrates inter-frame contextual information to improve detection on low-quality frames and intra-frame attention to suppress false-positive detections in background regions. In addition, training of the proposed network converges quickly, in fewer epochs, because during the multi-frame fusion stage the pre-trained weights of the single-frame network can be reused, so that back-propagation updates the single-frame and multi-frame feature maps simultaneously. Experimental results on the UA-DETRAC and UAVDT datasets demonstrate the promising performance of our proposed detector in both accuracy and speed. (Code is available at https://github.com/Yubzsz/Omni-Attention-VOD.)
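The abstract combines two mechanisms: inter-frame fusion (aggregating context from neighboring frames into the key frame) and intra-frame attention (gating feature maps to suppress background responses). As a rough, hypothetical illustration of those two ideas only, here is a minimal NumPy sketch; the function names, the similarity-based fusion weights, and the sigmoid spatial gate are assumptions for exposition, not the authors' actual architecture (see the linked repository for the real implementation).

```python
import numpy as np

def cosine_weight(key, ref):
    # Cosine similarity between flattened feature maps, used as an adaptive fusion weight.
    k, r = key.ravel(), ref.ravel()
    return float(np.dot(k, r) / (np.linalg.norm(k) * np.linalg.norm(r) + 1e-8))

def fuse_inter_frame(key_feat, neighbor_feats):
    """Inter-frame context: blend neighboring-frame features into the key frame,
    weighting each neighbor by its similarity to the key frame (softmax-normalized)."""
    w = np.array([cosine_weight(key_feat, n) for n in neighbor_feats])
    w = np.exp(w) / np.exp(w).sum()
    fused = sum(wi * n for wi, n in zip(w, neighbor_feats))
    return 0.5 * key_feat + 0.5 * fused

def intra_frame_attention(feat):
    """Intra-frame attention: a spatial gate that emphasizes high-activation
    (likely foreground) regions and damps background responses."""
    energy = feat.mean(axis=0)                              # (H, W), averaged over channels
    mask = 1.0 / (1.0 + np.exp(-(energy - energy.mean())))  # sigmoid gate in (0, 1)
    return feat * mask[None, :, :]

# Toy example: C=4 channels, 8x8 feature maps for a key frame and two neighbors.
rng = np.random.default_rng(0)
key = rng.standard_normal((4, 8, 8))
neighbors = [key + 0.1 * rng.standard_normal((4, 8, 8)) for _ in range(2)]
out = intra_frame_attention(fuse_inter_frame(key, neighbors))
print(out.shape)  # (4, 8, 8)
```

In this toy version the fusion weights adapt to frame similarity, so a blurred or occluded neighbor contributes less; the spatial gate plays the role of suppressing background false positives.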



Acknowledgements

This work was supported by the National Natural Science Foundation of China (62172227) and National Key R &D Program of China (2021YFF0602101).

Author information

Correspondence to Xiyuan Hu.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Yu, T., Chen, C., Zhou, Y., Hu, X. (2023). Improving Surveillance Object Detection with Adaptive Omni-Attention over Both Inter-frame and Intra-frame Context. In: Wang, L., Gall, J., Chin, TJ., Sato, I., Chellappa, R. (eds) Computer Vision – ACCV 2022. ACCV 2022. Lecture Notes in Computer Science, vol 13842. Springer, Cham. https://doi.org/10.1007/978-3-031-26284-5_14


  • DOI: https://doi.org/10.1007/978-3-031-26284-5_14


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-26283-8

  • Online ISBN: 978-3-031-26284-5

  • eBook Packages: Computer Science (R0)
