Research Article

E-detector: Asynchronous Spatio-temporal for Event-based Object Detection in Intelligent Transportation System

Published: 27 September 2023

Abstract

In intelligent transportation systems, various sensors, including radar and conventional frame cameras, are used to improve robustness in challenging scenarios. The event camera is a novel bio-inspired sensor that has attracted growing research interest: it provides a form of neuromorphic vision, capturing motion information asynchronously and at high speed. It therefore offers advantages for intelligent transportation systems that conventional frame cameras cannot match, such as high temporal resolution, high dynamic range, sparse output, and minimal motion blur. This study proposes an E-detector, an event-camera-based framework that detects moving objects asynchronously. The main innovation of our framework is that the spatiotemporal domain of the event stream can be adjusted according to different object velocities and scenarios, which overcomes the inherent challenges that traditional cameras face when detecting moving objects in complex environments, such as high speed, complex lighting, and motion blur. Moreover, our approach adopts filter models and transfer learning to improve the performance of event-based object detection. Experiments show that our method detects high-speed moving objects better than conventional cameras paired with state-of-the-art detection algorithms. Our proposed approach is highly competitive and extensible, as it can be applied to other scenarios involving high-speed moving objects. The study findings are expected to unlock the potential of event cameras in intelligent transportation system applications.
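
The abstract describes adjusting the spatiotemporal domain of the event stream to the scene's velocity and filtering noisy events before detection. The paper's actual pipeline is not reproduced here; the following is a minimal Python sketch, under assumed interfaces, of how asynchronous events (x, y, t, polarity) might be accumulated over a speed-dependent temporal window and denoised with a simple neighborhood filter. The function names, thresholds, and sensor resolution are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' implementation) of two ideas from the abstract:
# (1) accumulating asynchronous events over a temporal window whose length adapts
# to apparent motion speed, and (2) a crude neighborhood filter that suppresses
# isolated noise events. All names and thresholds are illustrative assumptions.
import numpy as np

def accumulate_events(events, t_start, window_ms, height, width):
    """Sum events with timestamps in [t_start, t_start + window_ms) into a
    per-pixel count map. `events` is an (N, 4) array of (x, y, t_ms, polarity)."""
    mask = (events[:, 2] >= t_start) & (events[:, 2] < t_start + window_ms)
    frame = np.zeros((height, width), dtype=np.int32)
    xs = events[mask, 0].astype(int)
    ys = events[mask, 1].astype(int)
    np.add.at(frame, (ys, xs), 1)
    return frame

def adaptive_window_ms(pixel_speed, base_ms=10.0, min_ms=1.0, max_ms=50.0):
    """Use a shorter accumulation window for faster apparent motion so that
    moving objects stay sharp; slower scenes get longer windows for density."""
    if pixel_speed <= 0:
        return max_ms
    return float(np.clip(base_ms * (10.0 / pixel_speed), min_ms, max_ms))

def filter_isolated_events(frame, min_neighbors=2):
    """Zero out pixels whose 3x3 neighborhood (excluding the pixel itself)
    contains fewer than `min_neighbors` active pixels."""
    active = (frame > 0).astype(np.int32)
    padded = np.pad(active, 1)
    h, w = frame.shape
    neighbors = sum(
        padded[1 + dy : 1 + dy + h, 1 + dx : 1 + dx + w]
        for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0)
    )
    return np.where(neighbors >= min_neighbors, frame, 0)

# Example: 200k synthetic events on a 346x260 sensor, fast motion (~40 px/ms).
rng = np.random.default_rng(0)
events = np.column_stack([
    rng.integers(0, 346, 200_000),   # x
    rng.integers(0, 260, 200_000),   # y
    rng.uniform(0, 100, 200_000),    # timestamp in ms
    rng.integers(0, 2, 200_000),     # polarity
])
win = adaptive_window_ms(pixel_speed=40.0)
frame = filter_isolated_events(accumulate_events(events, 0.0, win, 260, 346))
```

The resulting count map could then be fed to a frame-based detector, which is one plausible way the event representation and a conventional detection backbone might be combined.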

REFERENCES

  1. [1] Argyriou Andreas, Evgeniou Theodoros, and Pontil Massimiliano. 2006. Multi-task feature learning. Advances in Neural Information Processing Systems, vol. 19. MIT Press.Google ScholarGoogle Scholar
  2. [2] Badue Claudine, Guidolini Rânik, Carneiro Raphael Vivacqua, Azevedo Pedro, Cardoso Vinicius B., Forechi Avelino, Jesus Luan, Berriel Rodrigo, Paixao Thiago M., Mutz Filipe et al. 2021. Self-driving cars: A survey. Expert Syst. Appl. 165 (2021), 113816.Google ScholarGoogle ScholarCross RefCross Ref
  3. [3] Bardow Patrick, Davison Andrew J., and Leutenegger Stefan. 2016. Simultaneous optical flow and intensity estimation from an event camera. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR’16). 884892.Google ScholarGoogle ScholarCross RefCross Ref
  4. [4] Bochkovskiy Alexey, Wang Chien-Yao, and Liao Hong-Yuan Mark. 2020. YOLOV4: Optimal speed and accuracy of object detection. Retrieved from https://arXiv:2004.10934.Google ScholarGoogle Scholar
  5. [5] Brandli Christian, Berner Raphael, Yang Minhao, Liu Shih-Chii, and Delbruck Tobi. 2014. A 240 \(\times\) 180 130 db 3 \(\mu\)s latency global shutter spatiotemporal vision sensor. IEEE J. Solid-State Circ. 49, 10 (2014), 23332341.Google ScholarGoogle ScholarCross RefCross Ref
  6. [6] Carion Nicolas, Massa Francisco, Synnaeve Gabriel, Usunier Nicolas, Kirillov Alexander, and Zagoruyko Sergey. 2020. End-to-end object detection with transformers. In Proceedings of the European Conference on Computer Vision (ECCV’20). Springer, 213229.Google ScholarGoogle ScholarDigital LibraryDigital Library
  7. [7] Carneiro João, Ieng Sio-Hoi, Posch Christoph, and Benosman Ryad. 2013. Event-based 3D reconstruction from neuromorphic retinas. Neural Netw. 45 (2013), 2738.Google ScholarGoogle ScholarDigital LibraryDigital Library
  8. [8] Chan Vincent, Liu Shih-Chii, and van Schaik Andr. 2007. AER EAR: A matched silicon cochlea pair with address event representation interface. IEEE Trans. Circ. Syst. I: Reg. Papers 54, 1 (2007), 4859.Google ScholarGoogle ScholarCross RefCross Ref
  9. [9] Chen Long, Ding Qiwei, Zou Qin, Chen Zhaotang, and Li Lingxi. 2020. DenseLightNet: A light-weight vehicle detection network for autonomous driving. IEEE Trans. Industr. Electr. 67, 12 (2020), 1060010609.Google ScholarGoogle ScholarCross RefCross Ref
  10. [10] Chen Long, Fan Lei, Xie Guodong, Huang Kai, and Nüchter Andreas. 2017. Moving-object detection from consecutive stereo pairs using slanted plane smoothing. IEEE Trans. Intell. Transport. Syst. 18, 11 (2017), 30933102.Google ScholarGoogle ScholarDigital LibraryDigital Library
  11. [11] Chen Shoushun and Guo Menghan. 2019. Live demonstration: CeleX-V: A 1M pixel multi-mode event-based sensor. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW’19). IEEE, 16821683.Google ScholarGoogle ScholarCross RefCross Ref
  12. [12] Cheng Wensheng, Luo Hao, Yang Wen, Yu Lei, Chen Shoushun, and Li Wei. 2019. DET: A high-resolution dvs dataset for lane extraction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW’19). 00.Google ScholarGoogle ScholarCross RefCross Ref
  13. [13] Chin Tat-Jun, Bagchi Samya, Eriksson Anders, and Schaik Andre Van. 2019. Star tracking using an event camera. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW’19).Google ScholarGoogle ScholarCross RefCross Ref
  14. [14] de Tournemire Pierre, Nitti Davide, Perot Etienne, Migliore Davide, and Sironi Amos. 2020. A large scale event-based detection dataset for automotive. Retrieved from https://arXiv:2001.08499.Google ScholarGoogle Scholar
  15. [15] Deng Jia, Dong Wei, Socher Richard, Li Li-Jia, Li Kai, and Fei-Fei Li. 2009. Imagenet: A large-scale hierarchical image database. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR’09). IEEE, 248255.Google ScholarGoogle ScholarCross RefCross Ref
  16. [16] Devlin Jacob, Chang Ming-Wei, Lee Kenton, and Toutanova Kristina. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. Retrieved from https://arXiv:1810.04805.Google ScholarGoogle Scholar
  17. [17] Duan Kaiwen, Bai Song, Xie Lingxi, Qi Honggang, Huang Qingming, and Tian Qi. 2019. CenterNet: Keypoint triplets for object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV’19). 65696578.Google ScholarGoogle ScholarCross RefCross Ref
  18. [18] Fu Cheng-Yang, Liu Wei, Ranga Ananth, Tyagi Ambrish, and Berg Alexander C.. 2017. DSSD: Deconvolutional single shot detector. Retrieved from https://arXiv:1701.06659.Google ScholarGoogle Scholar
  19. [19] Gallego Guillermo, Delbruck Tobi, Orchard Garrick Michael, Bartolozzi Chiara, Taba Brian, Censi Andrea, Leutenegger Stefan, Davison Andrew, Conradt Jorg, Daniilidis Kostas, and Scaramuzza Davide. 2020. Event-based Vision: A Survey. IEEE Trans. Pattern Anal. Mach. Intell. 44, 1 (2020), 154180.Google ScholarGoogle ScholarDigital LibraryDigital Library
  20. [20] Gallego Guillermo, Gehrig Mathias, and Scaramuzza Davide. 2019. Focus is all you need: Loss functions for event-based vision. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR’19). 1228012289.Google ScholarGoogle ScholarCross RefCross Ref
  21. [21] Gallego Guillermo, Lund Jon E. A., Mueggler Elias, Rebecq Henri, Delbruck Tobi, and Scaramuzza Davide. 2017. Event-based, 6-DOF camera tracking from photometric depth maps. IEEE Trans. Pattern Anal. Mach. Intell. 40, 10 (2017), 24022412.Google ScholarGoogle ScholarDigital LibraryDigital Library
  22. [22] Gallego Guillermo, Rebecq Henri, and Scaramuzza Davide. 2018. A unifying contrast maximization framework for event cameras, with applications to motion, depth, and optical flow estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR’18). 38673876.Google ScholarGoogle ScholarCross RefCross Ref
  23. [23] Gao Zan, Li Yinming, and Wan Shaohua. 2020. Exploring deep learning for view-based 3D model retrieval. ACM Trans. Multimedia Comput., Commun. Appl. 16, 1 (2020), 121.Google ScholarGoogle ScholarDigital LibraryDigital Library
  24. [24] Gehrig Mathias, Shrestha Sumit Bam, Mouritzen Daniel, and Scaramuzza Davide. 2020. Event-based angular velocity regression with spiking networks. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA). IEEE, 41954202.Google ScholarGoogle ScholarCross RefCross Ref
  25. [25] Girshick Ross. 2015. Fast R-CNN. In Proceedings of the IEEE International Conference on Computer Vision (ICCV’15). 14401448.Google ScholarGoogle ScholarDigital LibraryDigital Library
  26. [26] Girshick Ross, Donahue Jeff, Darrell Trevor, and Malik Jitendra. 2014. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR’14). 580587.Google ScholarGoogle ScholarDigital LibraryDigital Library
  27. [27] Gou Jianping, Sun Liyuan, Yu Baosheng, Wan Shaohua, and Tao Dacheng. 2022. Hierarchical multi-attention transfer for knowledge distillation. ACM Trans. Multimedia Comput., Commun. Appl. (2022). Google ScholarGoogle ScholarDigital LibraryDigital Library
  28. [28] He Kaiming, Gkioxari Georgia, Dollár Piotr, and Girshick Ross. 2017. Mask R-CNN. In Proceedings of the IEEE International Conference on Computer Vision (ICCV’17). 29612969.Google ScholarGoogle ScholarCross RefCross Ref
  29. [29] Huang Xiaoming and Zhang Yu-Jin. 2021. Fast video saliency detection via maximally stable region motion and object repeatability. IEEE Trans. Multimedia 24 (2021), 44584470.Google ScholarGoogle ScholarCross RefCross Ref
  30. [30] Jiang Zhuangyi, Xia Pengfei, Huang Kai, Stechele Walter, Chen Guang, Bing Zhenshan, and Knoll Alois. 2019. Mixed frame-/event-driven fast pedestrian detection. In Proceedings of the International Conference on Robotics and Automation (ICRA’19). IEEE, 83328338.Google ScholarGoogle ScholarDigital LibraryDigital Library
  31. [31] Kim Hanme, Leutenegger Stefan, and Davison Andrew J.. 2016. Real-time 3D reconstruction and 6-DoF tracking with an event camera. In Proceedings of the European Conference on Computer Vision (ECCV’16). Springer, 349364.Google ScholarGoogle ScholarCross RefCross Ref
  32. [32] Kong Tao, Sun Fuchun, Liu Huaping, Jiang Yuning, Li Lei, and Shi Jianbo. 2020. Foveabox: Beyound anchor-based object detection. IEEE Trans. Image Process. 29 (2020), 73897398.Google ScholarGoogle ScholarDigital LibraryDigital Library
  33. [33] Krizhevsky Alex, Sutskever Ilya, and Hinton Geoffrey E.. 2012. Imagenet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems 25 (2012), 10971105.Google ScholarGoogle ScholarDigital LibraryDigital Library
  34. [34] Lee Youngwan and Park Jongyoul. 2020. Centermask: Real-time anchor-free instance segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR’20). 1390613915.Google ScholarGoogle ScholarCross RefCross Ref
  35. [35] Li Honglei, Wang Wenmin, Yu Cheng, and Zhang Shixiong. 2021. SwapInpaint: Identity-specific face inpainting with identity swapping. IEEE Trans. Circ. Syst. Video Technol. 32, 7 (2021), 42714281.Google ScholarGoogle ScholarDigital LibraryDigital Library
  36. [36] Li Jiachen, Cheng Bowen, Feris Rogerio, Xiong Jinjun, Huang Thomas S., Hwu Wen-Mei, and Shi Humphrey. 2021. Pseudo-IoU: Improving label assignment in anchor-free object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR’21). 23782387.Google ScholarGoogle ScholarCross RefCross Ref
  37. [37] Li Qingquan, Chen Long, Li Ming, Shaw Shih-Lung, and Nüchter Andreas. 2013. A sensor-fusion drivable-region and lane-detection system for autonomous vehicle navigation in challenging road scenarios. IEEE Trans. Vehic. Technol. 63, 2 (2013), 540555.Google ScholarGoogle ScholarCross RefCross Ref
  38. [38] Lichtsteiner Patrick and Delbruck Tobi. 2005. A 64 \(\times\) 64 AER logarithmic temporal derivative silicon retina. In Research in Microelectronics and Electronics, 2005 PhD, Vol. 2. IEEE, 202205.Google ScholarGoogle Scholar
  39. [39] Lin Che-Tsung, Chen Shu-Ping, Santoso Patrisia Sherryl, Lin Hung-Jin, and Lai Shang-Hong. 2019. Real-time single-stage vehicle detector optimized by multi-stage image-based online hard example mining. IEEE Trans. Vehic. Technol. 69, 2 (2019), 15051518.Google ScholarGoogle ScholarCross RefCross Ref
  40. [40] Lin Tsung-Yi, Maire Michael, Belongie Serge, Hays James, Perona Pietro, Ramanan Deva, Dollár Piotr, and Zitnick C. Lawrence. 2014. Microsoft COCO: Common objects in context. In Proceedings of the European Conference on Computer Vision. Springer, 740755.Google ScholarGoogle ScholarCross RefCross Ref
  41. [41] Liu Wei, Anguelov Dragomir, Erhan Dumitru, Szegedy Christian, Reed Scott, Fu Cheng-Yang, and Berg Alexander C.. 2016. SSD: Single shot multibox detector. In Proceedings of the European Conference on Computer Vision (ECCV’16). Springer, 2137.Google ScholarGoogle ScholarCross RefCross Ref
  42. [42] Mitrokhin Anton, Fermüller Cornelia, Parameshwara Chethan, and Aloimonos Yiannis. 2018. Event-based moving object detection and tracking. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS’18). IEEE, 19.Google ScholarGoogle ScholarDigital LibraryDigital Library
  43. [43] Mondal Anindya, Giraldo Jhony H., Bouwmans Thierry, Chowdhury Ananda S., et al. 2021. Moving object detection for event-based vision using graph spectral clustering. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV’21). 876884.Google ScholarGoogle ScholarCross RefCross Ref
  44. [44] Pan Liyuan, Scheerlinck Cedric, Yu Xin, Hartley Richard, Liu Miaomiao, and Dai Yuchao. 2019. Bringing a blurry frame alive at high frame-rate with an event camera. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR’19). 68206829.Google ScholarGoogle ScholarCross RefCross Ref
  45. [45] Pan Sinno Jialin and Yang Qiang. 2009. A survey on transfer learning. IEEE Trans. Knowl. Data Eng. 22, 10 (2009), 13451359.Google ScholarGoogle ScholarDigital LibraryDigital Library
  46. [46] Perot Etienne, de Tournemire Pierre, Nitti Davide, Masci Jonathan, and Sironi Amos. 2020. Learning to detect objects with a 1 megapixel event camera. Advances in Neural Information Processing Systems 33 (2020), 1663916652.Google ScholarGoogle Scholar
  47. [47] Rebecq Henri, Ranftl René, Koltun Vladlen, and Scaramuzza Davide. 2019. High speed and high dynamic range video with an event camera. IEEE Trans. Pattern Anal. Mach. Intell. 43, 6 (2019), 19641980.Google ScholarGoogle ScholarCross RefCross Ref
  48. [48] Redmon Joseph, Divvala Santosh, Girshick Ross, and Farhadi Ali. 2016. You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR’16). 779788.Google ScholarGoogle ScholarCross RefCross Ref
  49. [49] Redmon Joseph and Farhadi Ali. 2017. YOLO9000: Better, faster, stronger. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR’17). 72637271.Google ScholarGoogle ScholarCross RefCross Ref
  50. [50] Redmon Joseph and Farhadi Ali. 2018. YOLOV3: An incremental improvement. Retrieved from https://arXiv:1804.02767.Google ScholarGoogle Scholar
  51. [51] Ren Shaoqing, He Kaiming, Girshick Ross, and Sun Jian. 2015. Faster R-CNN: Towards real-time object detection with region proposal networks. Advances in Neural Information Processing Systems 28 (2015), 9199.Google ScholarGoogle ScholarDigital LibraryDigital Library
  52. [52] Rogers Anna, Kovaleva Olga, and Rumshisky Anna. 2020. A primer in bertology: What we know about how bert works. Trans. Assoc. Comput. Linguist. 8 (2020), 842866.Google ScholarGoogle ScholarCross RefCross Ref
  53. [53] Rozumnyi Denys, Kotera Jan, Sroubek Filip, Novotny Lukas, and Matas Jiri. 2017. The world of fast moving objects. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR’17). 52035211.Google ScholarGoogle ScholarCross RefCross Ref
  54. [54] Rozumnyi Denys, Matas Jiri, Sroubek Filip, Pollefeys Marc, and Oswald Martin R.. 2021. FMODetect: Robust detection of fast moving objects. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV’21). 35413549.Google ScholarGoogle ScholarCross RefCross Ref
  55. [55] Russakovsky Olga, Deng Jia, Su Hao, Krause Jonathan, Satheesh Sanjeev, Ma Sean, Huang Zhiheng, Karpathy Andrej, Khosla Aditya, Bernstein Michael et al. 2015. Imagenet large scale visual recognition challenge. Int. J. Comput. Vision 115, 3 (2015), 211252.Google ScholarGoogle ScholarDigital LibraryDigital Library
  56. [56] Scheerlinck Cedric, Barnes Nick, and Mahony Robert. 2018. Continuous-time intensity estimation using event cameras. In Proceedings of the Asian Conference on Computer Vision. Springer, 308324.Google ScholarGoogle Scholar
  57. [57] Stoffregen Timo and Kleeman Lindsay. 2019. Event cameras, contrast maximization and reward functions: An analysis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR’19). 1230012308.Google ScholarGoogle ScholarCross RefCross Ref
  58. [58] Sultana Maryam, Mahmood Arif, and Jung Soon Ki. 2020. Unsupervised moving object detection in complex scenes using adversarial regularizations. IEEE Trans. Multimedia 23 (2020), 20052018.Google ScholarGoogle ScholarCross RefCross Ref
  59. [59] Tian Zhi, Shen Chunhua, Chen Hao, and He Tong. 2019. FCOS: Fully convolutional one-stage object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). 96279636.Google ScholarGoogle ScholarCross RefCross Ref
  60. [60] Wang Zhou, Bovik Alan C., Sheikh Hamid R., and Simoncelli Eero P.. 2004. Image quality assessment: from error visibility to structural similarity. IEEE Trans. Image Process. 13, 4 (2004), 600612.Google ScholarGoogle ScholarDigital LibraryDigital Library
  61. [61] Wu Yue and Ji Qiang. 2016. Constrained deep transfer feature learning and its applications. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR’16). 51015109.Google ScholarGoogle ScholarCross RefCross Ref
  62. [62] Xu Xin, Wang Shiqin, Wang Zheng, Zhang Xiaolong, and Hu Ruimin. 2021. Exploring image enhancement for salient object detection in low light images. ACM Trans. Multimedia Comput., Commun. Appl. 17, 1s (2021), 119.Google ScholarGoogle ScholarDigital LibraryDigital Library
  63. [63] Yan Yichao, Li Jinpeng, Qin Jie, Bai Song, Liao Shengcai, Liu Li, Zhu Fan, and Shao Ling. 2021. Anchor-free person search. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR’21). 76907699.Google ScholarGoogle ScholarCross RefCross Ref
  64. [64] Yazdi Mehran and Bouwmans Thierry. 2018. New trends on moving object detection in video images captured by a moving camera: A survey. Comput. Sci. Rev. 28 (2018), 157177.Google ScholarGoogle ScholarCross RefCross Ref
  65. [65] Zhang Shixiong, Wang Wenmin, Li Honglei, and Zhang Shenyong. 2022. EVtracker: An event-driven spatiotemporal method for dynamic object tracking. Sensors 22, 16 (2022), 6090.Google ScholarGoogle ScholarCross RefCross Ref
  66. [66] Zhang Shifeng, Wen Longyin, Bian Xiao, Lei Zhen, and Li Stan Z.. 2018. Single-shot refinement neural network for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR’18). 42034212.Google ScholarGoogle ScholarCross RefCross Ref
  67. [67] Zhang Xiaosong, Wan Fang, Liu Chang, Ji Xiangyang, and Ye Qixiang. 2021. Learning to match anchors for visual object detection. IEEE Trans. Pattern Anal. Mach. Intell. 44, 6 (2021), 30963109.Google ScholarGoogle ScholarCross RefCross Ref
  68. [68] Zhang Yue, Zhang Fanghui, Jin Yi, Cen Yigang, Voronin Viacheslav, and Wan Shaohua. 2023. Local correlation ensemble with GCN based on attention features for cross-domain person Re-ID. ACM Trans. Multimedia Comput., Commun. Appl. 19, 1 (2023), 122.Google ScholarGoogle ScholarDigital LibraryDigital Library
  69. [69] Zhao Jiang, Ji Shilong, Cai Zhihao, Zeng Yiwen, and Wang Yingxun. 2022. Moving object detection and tracking by event frame from neuromorphic vision sensors. Biomimetics 7, 1 (2022), 31.Google ScholarGoogle ScholarCross RefCross Ref
  70. [70] Zhao Xiangmo, Sun Pengpeng, Xu Zhigang, Min Haigen, and Yu Hongkai. 2020. Fusion of 3D LIDAR and camera data for object detection in autonomous vehicle applications. IEEE Sensors J. 20, 9 (2020), 49014913.Google ScholarGoogle ScholarCross RefCross Ref
  71. [71] Zhao Zhong-Qiu, Zheng Peng, Xu Shou-tao, and Wu Xindong. 2019. Object detection with deep learning: A review. IEEE Trans. Neural Netw. Learn. Syst. 30, 11 (2019), 32123232.Google ScholarGoogle ScholarCross RefCross Ref
  72. [72] Zhen Peining, Wang Shuqi, Zhang Suming, Yan Xiaotao, Wang Wei, Ji Zhigang, and Chen Hai-Bao. 2023. Towards accurate oriented object detection in aerial images with adaptive multi-level feature fusion. ACM Trans. Multimedia Comput., Commun. Appl. 19, 1 (2023), 122.Google ScholarGoogle ScholarDigital LibraryDigital Library
  73. [73] Zheng Zhedong, Ruan Tao, Wei Yunchao, Yang Yi, and Mei Tao. 2020. VehicleNet: Learning robust visual representation for vehicle re-identification. IEEE Trans. Multimedia 23 (2020), 26832693.Google ScholarGoogle ScholarDigital LibraryDigital Library
  74. [74] Zhong Yuanyi, Wang Jianfeng, Peng Jian, and Zhang Lei. 2020. Anchor box optimization for object detection. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. 12861294.Google ScholarGoogle ScholarCross RefCross Ref
  75. [75] Zhou Daquan, Kang Bingyi, Jin Xiaojie, Yang Linjie, Lian Xiaochen, Jiang Zihang, Hou Qibin, and Feng Jiashi. 2021. Deepvit: Towards deeper vision transformer. Retrieved from https://arXiv:2103.11886.Google ScholarGoogle Scholar
  76. [76] Zhou Jie, Gao Dashan, and Zhang David. 2007. Moving vehicle detection for automatic traffic monitoring. IEEE Trans. Vehic. Technol. 56, 1 (2007), 5159.Google ScholarGoogle ScholarCross RefCross Ref
  77. [77] Zhou Xingyi, Wang Dequan, and Krähenbühl Philipp. 2019. Objects as points. Retrieved from https://arXiv:1904.07850.Google ScholarGoogle Scholar
  78. [78] Zhu Chenchen, He Yihui, and Savvides Marios. 2019. Feature selective anchor-free module for single-shot object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR’19). 840849.Google ScholarGoogle ScholarCross RefCross Ref
  79. [79] Zhu Xizhou, Su Weijie, Lu Lewei, Li Bin, Wang Xiaogang, and Dai Jifeng. 2020. Deformable DETR: Deformable transformers for end-to-end object detection. Retrieved from https://arXiv:2010.04159.Google ScholarGoogle Scholar

Published in

ACM Transactions on Multimedia Computing, Communications, and Applications, Volume 20, Issue 2 (February 2024), 548 pages.
ISSN: 1551-6857; EISSN: 1551-6865
DOI: 10.1145/3613570
Editor: Abdulmotaleb El Saddik

Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

• Published: 27 September 2023
• Online AM: 17 February 2023
• Accepted: 10 February 2023
• Revised: 7 February 2023
• Received: 29 April 2022
