
Towards Real-Time Edge Detection for Event Cameras Based on Lifetime and Dynamic Slicing

  • Conference paper

Part of the book series: Advances in Intelligent Systems and Computing (AISC, volume 1153)

Abstract

Retinal cameras, such as dynamic vision sensors (DVS), transmit asynchronous events with ultra-low latency (~10 µs) only at significant luminance changes, unlike traditional CMOS cameras, which transmit the absolute brightness of all pixels, including redundant background. These characteristics give event cameras great potential for efficient localization of high-speed, agile platforms. Moreover, they have a high dynamic range (~140 dB), which makes them suitable both for platforms operating indoors in low-light scenarios and for outdoor environments where the camera might point at a strong light source, e.g. the sun. In this paper, we propose an algorithm to detect edges in event streams coming from retinal cameras. Edges are extracted by augmenting each batch of events with the events' lifetimes, where the lifetime of an event is computed using a local plane fitting technique. We use a batching technique to increase the frame rate of the generated images, since the high sample rate of events makes processing each single event computationally expensive. The batch size is adjusted based on the mean optical flow of the previously generated batch. The experimental results show that the proposed technique significantly reduces response time while producing edges of the same sharpness.
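The abstract names two mechanisms: per-event lifetimes obtained from a local plane fit, and a batch size driven by the previous batch's mean optical flow. As an illustration only, the sketch below shows one plausible reading of both: events in a small spatiotemporal neighbourhood are fit with a plane t = a·x + b·y + c, whose gradient magnitude √(a² + b²) is the time an edge needs to cross one pixel (a common definition of the event's lifetime), and the next batch shrinks when motion is fast so that accumulated edges stay sharp. The function names and constants (event_lifetime, next_batch_size, radius, k, n_min, n_max) are assumptions, not the paper's implementation.

```python
import numpy as np

def event_lifetime(events, idx, radius=3, dt_window=0.03):
    """Estimate the lifetime of events[idx] from a least-squares plane
    fit t = a*x + b*y + c over its spatiotemporal neighbours.
    events: (N, 3) array of (x, y, t) rows. Returns seconds, or None
    if the neighbourhood is too small to define a plane."""
    x0, y0, t0 = events[idx]
    # Neighbours within a spatial radius and a temporal window.
    mask = ((np.abs(events[:, 0] - x0) <= radius) &
            (np.abs(events[:, 1] - y0) <= radius) &
            (np.abs(events[:, 2] - t0) <= dt_window))
    nbrs = events[mask]
    if len(nbrs) < 3:                       # a plane needs >= 3 points
        return None
    A = np.column_stack([nbrs[:, 0], nbrs[:, 1], np.ones(len(nbrs))])
    (a, b, _), *_ = np.linalg.lstsq(A, nbrs[:, 2], rcond=None)
    # The plane gradient (a, b) is the inverse edge velocity: its
    # magnitude is the time the edge takes to move one pixel.
    return float(np.hypot(a, b))

def next_batch_size(mean_flow, k=5000.0, n_min=500, n_max=20000):
    """Choose the next batch size from the previous batch's mean
    optical flow (px/s): fast motion -> fewer events per batch (sharp
    edges), slow motion -> more events per batch (fewer frames)."""
    if mean_flow <= 0.0:
        return n_max
    return int(np.clip(k / mean_flow, n_min, n_max))

# Synthetic check: an edge sweeping at 100 px/s along x gives the
# plane t = x/100, so every event's lifetime should be ~0.01 s.
rng = np.random.default_rng(0)
x, y = rng.uniform(0, 10, (2, 200))
events = np.column_stack([x, y, x / 100.0])
print(event_lifetime(events, idx=0, radius=10, dt_window=1.0))  # ~0.01
print(next_batch_size(mean_flow=100.0))                         # 500
```

In this reading, the lifetime lets each event be displayed for exactly as long as its edge occupies a pixel, which is what keeps batch-accumulated edges from smearing at high speeds.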



Author information

Correspondence to Sherif A. S. Mohamed.


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Mohamed, S.A.S., Haghbayan, MH., Heikkonen, J., Tenhunen, H., Plosila, J. (2020). Towards Real-Time Edge Detection for Event Cameras Based on Lifetime and Dynamic Slicing. In: Hassanien, AE., Azar, A., Gaber, T., Oliva, D., Tolba, F. (eds) Proceedings of the International Conference on Artificial Intelligence and Computer Vision (AICV2020). AICV 2020. Advances in Intelligent Systems and Computing, vol 1153. Springer, Cham. https://doi.org/10.1007/978-3-030-44289-7_55

