ABSTRACT
Integrating event cameras are asynchronous sensors in which incident light values are measured directly through continuous integration, and each pixel's light sensitivity can be adjusted in real time, enabling extremely high frame rate and high dynamic range video capture. This paper builds on lessons learned from previous attempts to compress event data and presents a new event compression scheme with many analogues to traditional framed video compression techniques. We show how traditional framed video can be transcoded to an event-based representation, and we describe how motion data is encoded directly in that representation. Finally, we present experimental results showing that our simple scheme already approaches state-of-the-art compression performance for slow-motion object tracking. The system introduces an application "in the loop" framework, in which the application dynamically informs the camera how sensitive each pixel should be, based on the efficacy of the most recently received data.
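As a rough illustration (not the paper's actual encoder), the sketch below transcodes a framed grayscale video into integrating-event tuples under a simplified model: each pixel accumulates its intensity over time and emits an event when the running integral crosses a per-pixel threshold, so the elapsed time between a pixel's events encodes its average intensity. The function name `transcode_to_events` and all parameters are hypothetical.

```python
import numpy as np

def transcode_to_events(frames, frame_dt=1.0, initial_threshold=64.0):
    """Transcode framed grayscale video into integrating-event tuples.

    `frames` is an iterable of 2D intensity arrays. Each pixel integrates
    its intensity over time and fires an event (x, y, t, threshold) when
    the running integral crosses its per-pixel threshold; the time since
    the pixel's previous event then encodes its average intensity.
    """
    frames = np.asarray(frames, dtype=np.float64)
    _, height, width = frames.shape
    integral = np.zeros((height, width))                     # running light integral
    threshold = np.full((height, width), initial_threshold)  # per-pixel sensitivity
    events = []

    for i, frame in enumerate(frames):
        t = (i + 1) * frame_dt
        integral += frame * frame_dt   # continuous integration, discretized per frame
        ys, xs = np.nonzero(integral >= threshold)
        for y, x in zip(ys, xs):
            events.append((int(x), int(y), t, float(threshold[y, x])))
            integral[y, x] -= threshold[y, x]  # keep the remainder for the next event
    return events
```

In an application "in the loop" setting, the consuming application (e.g., an object tracker) could then lower `threshold` for pixels in its regions of interest and raise it elsewhere, trading data rate for temporal precision where the most recent data proved useful.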
Index Terms
- Motion segmentation and tracking for integrating event cameras