ABSTRACT
Standard cameras are frame-based sensors that capture the scene at a fixed rate. They provide no information between consecutive frames and suffer from motion blur in high-speed robotic and vision applications. By contrast, event-based cameras are a novel type of sensor that asynchronously generates "events" whenever the intensity at a pixel changes. The data produced by these two sensors are fundamentally different. In this paper, we leverage the complementarity of event-based and standard frame-based cameras and propose a fusion strategy for feature tracking. Features are extracted from frames, tracked by the event stream, and updated whenever a new frame arrives: the event camera tracks features and predicts their positions in new frames, and the standard camera corrects them. This paradigm differs from existing fusion-based tracking methods, which use only the first frame for initialization and discard all subsequent frames. We evaluate our method on the Event-Camera Dataset [17] and show that the feature update process improves tracking quality.
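The frame/event fusion paradigm described above can be sketched in a few lines. The sketch below is an illustrative toy, not the paper's actual implementation: the class name, the per-event local-displacement model, and the nearest-detection correction rule are all assumptions made for the example. It only shows the control flow the abstract describes: detect in a frame, propagate asynchronously with events, then correct when the next frame arrives.

```python
# Toy sketch of the fusion paradigm: frame-based detection, event-based
# asynchronous tracking, frame-based correction. All names and the simple
# displacement/gating models are illustrative assumptions.

class FusionTracker:
    def __init__(self, patch_radius=5.0):
        self.patch_radius = patch_radius
        self.features = []  # list of [x, y] feature positions

    def init_from_frame(self, detections):
        # Frame step 1: initialize features from a frame-based detector
        # (the real system would run a corner detector such as Shi-Tomasi).
        self.features = [list(p) for p in detections]

    def on_event(self, ex, ey, dx, dy):
        # Event step: an event at (ex, ey) carrying an estimated local
        # displacement (dx, dy) nudges every feature whose patch it hits.
        # Events arrive asynchronously, so features move between frames.
        for f in self.features:
            if (abs(f[0] - ex) <= self.patch_radius
                    and abs(f[1] - ey) <= self.patch_radius):
                f[0] += dx
                f[1] += dy

    def on_frame(self, detections, gate=3.0):
        # Frame step 2: when a new frame arrives, snap each event-predicted
        # feature to the nearest fresh detection within a gating distance,
        # removing drift accumulated during event-based tracking.
        for f in self.features:
            best = min(detections,
                       key=lambda d: (d[0] - f[0]) ** 2 + (d[1] - f[1]) ** 2)
            if (best[0] - f[0]) ** 2 + (best[1] - f[1]) ** 2 <= gate ** 2:
                f[0], f[1] = best[0], best[1]
```

In this toy, the event prediction keeps features alive between frames, while each new frame plays the corrective role the abstract assigns to the standard camera; methods that use only the first frame would skip `on_frame` entirely.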
- I. Alzugaray and M. Chli. 2018. Asynchronous Corner Detection and Tracking for Event Cameras in Real Time. IEEE Robotics and Automation Letters 3, 4 (2018), 3177–3184. https://doi.org/10.1109/LRA.2018.2849882
- I. Alzugaray and M. Chli. 2019. Asynchronous Multi-Hypothesis Tracking of Features with Event Cameras. In 2019 International Conference on 3D Vision (3DV). 269–278. https://doi.org/10.1109/3DV.2019.00038
- C. Brandli, R. Berner, M. Yang, S. Liu, and T. Delbruck. 2014. A 240 × 180 130 dB 3 µs Latency Global Shutter Spatiotemporal Vision Sensor. IEEE Journal of Solid-State Circuits 49, 10 (2014), 2333–2341. https://doi.org/10.1109/JSSC.2014.2342715
- C. Brändli, J. Strubel, S. Keller, D. Scaramuzza, and T. Delbruck. 2016. ELiSeD — An event-based line segment detector. In 2016 Second International Conference on Event-based Control, Communication, and Signal Processing (EBCCSP). 1–7. https://doi.org/10.1109/EBCCSP.2016.7605244
- J. Canny. 1986. A Computational Approach to Edge Detection. IEEE Transactions on Pattern Analysis and Machine Intelligence PAMI-8, 6 (1986), 679–698. https://doi.org/10.1109/TPAMI.1986.4767851
- A. Censi, J. Strubel, C. Brandli, T. Delbruck, and D. Scaramuzza. 2013. Low-latency localization by active LED markers tracking using a dynamic vision sensor. In 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems. 891–898. https://doi.org/10.1109/IROS.2013.6696456
- G. D. Evangelidis and E. Z. Psarakis. 2008. Parametric Image Alignment Using Enhanced Correlation Coefficient Maximization. IEEE Transactions on Pattern Analysis and Machine Intelligence 30, 10 (2008), 1858–1865. https://doi.org/10.1109/TPAMI.2008.113
- L. Everding and J. Conradt. 2018. Low-Latency Line Tracking Using Event-Based Dynamic Vision Sensors. Frontiers in Neurorobotics 12 (2018), 4. https://doi.org/10.3389/fnbot.2018.00004
- G. Gallego, T. Delbruck, G. M. Orchard, C. Bartolozzi, B. Taba, A. Censi, S. Leutenegger, A. Davison, J. Conradt, K. Daniilidis, et al. 2020. Event-based Vision: A Survey. IEEE Transactions on Pattern Analysis and Machine Intelligence (2020), 1–1. https://doi.org/10.1109/tpami.2020.3008413
- D. Gehrig, H. Rebecq, G. Gallego, and D. Scaramuzza. 2020. EKLT: Asynchronous Photometric Feature Tracking Using Events and Frames. International Journal of Computer Vision 128, 3 (2020), 601–618. https://doi.org/10.1007/s11263-019-01209-w
- B. Kueng, E. Mueggler, G. Gallego, and D. Scaramuzza. 2016. Low-latency visual odometry using event-based feature tracks. In 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). 16–23. https://doi.org/10.1109/IROS.2016.7758089
- K. Li, D. Shi, Y. Zhang, R. Li, W. Qin, and R. Li. 2019. Feature Tracking Based on Line Segments With the Dynamic and Active-Pixel Vision Sensor (DAVIS). IEEE Access 7 (2019), 110874–110883. https://doi.org/10.1109/ACCESS.2019.2933594
- R. X. Li, D. X. Shi, Y. J. Zhang, K. Y. Li, and R. H. Li. 2019. FA-Harris: A Fast and Asynchronous Corner Detector for Event Cameras. arXiv:1906.10925 [cs.CV]
- H. Liu, D. P. Moeys, G. Das, D. Neil, S. Liu, and T. Delbrück. 2016. Combined frame- and event-based detection and tracking. In 2016 IEEE International Symposium on Circuits and Systems (ISCAS). 2511–2514. https://doi.org/10.1109/ISCAS.2016.7539103
- B. D. Lucas and T. Kanade. 1981. An Iterative Image Registration Technique with an Application to Stereo Vision. In Proceedings of the 7th International Joint Conference on Artificial Intelligence (IJCAI'81), Vancouver, BC, Canada. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 674–679.
- E. Mueggler, B. Huber, and D. Scaramuzza. 2014. Event-based, 6-DOF pose tracking for high-speed maneuvers. In 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems. 2761–2768. https://doi.org/10.1109/IROS.2014.6942940
- E. Mueggler, H. Rebecq, G. Gallego, T. Delbruck, and D. Scaramuzza. 2017. The event-camera dataset and simulator: Event-based data for pose estimation, visual odometry, and SLAM. The International Journal of Robotics Research 36, 2 (2017), 142–149. https://doi.org/10.1177/0278364917691115
- E. Mueggler, C. Bartolozzi, and D. Scaramuzza. 2017. Fast event-based corner detection. In British Machine Vision Conference (BMVC), London. 1–8. https://doi.org/10.5167/uzh-138925
- P. J. Besl and N. D. McKay. 1992. Method for registration of 3-D shapes. In Sensor Fusion IV: Control Paradigms and Data Structures, Paul S. Schenker (Ed.), Vol. 1611. International Society for Optics and Photonics, SPIE, 586–606. https://doi.org/10.1117/12.57955
- C. Posch, T. Serrano-Gotarredona, B. Linares-Barranco, and T. Delbruck. 2014. Retinomorphic Event-Based Vision Sensors: Bioinspired Cameras With Spiking Output. Proc. IEEE 102, 10 (2014), 1470–1484. https://doi.org/10.1109/JPROC.2014.2346153
- T. Serrano-Gotarredona and B. Linares-Barranco. 2013. A 128 × 128 1.5% Contrast Sensitivity 0.9% FPN 3 µs Latency 4 mW Asynchronous Frame-Free Dynamic Vision Sensor Using Transimpedance Preamplifiers. IEEE Journal of Solid-State Circuits 48, 3 (2013), 827–838. https://doi.org/10.1109/JSSC.2012.2230553
- J. Shi and C. Tomasi. 1994. Good features to track. In 1994 Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. 593–600. https://doi.org/10.1109/CVPR.1994.323794
- D. R. Valeiras, X. Clady, S. H. Ieng, and R. Benosman. 2018. Event-Based Line Fitting and Segment Detection Using a Neuromorphic Visual Sensor. IEEE Transactions on Neural Networks and Learning Systems (September 2018). https://doi.org/10.1109/tnnls.2018.2807983
- A. R. Vidal, H. Rebecq, T. Horstschaefer, and D. Scaramuzza. 2018. Ultimate SLAM? Combining Events, Images, and IMU for Robust Visual SLAM in HDR and High-Speed Scenarios. IEEE Robotics and Automation Letters 3, 2 (2018), 994–1001. https://doi.org/10.1109/LRA.2018.2793357
- A. Z. Zhu, N. Atanasov, and K. Daniilidis. 2017. Event-based feature tracking with probabilistic data association. In 2017 IEEE International Conference on Robotics and Automation (ICRA). 4465–4470. https://doi.org/10.1109/ICRA.2017.7989517