Abstract
We describe an optical flow extraction method for high-speed, high-brightness targets based on a pulse array image sensor (PAIS). PAIS is a retina-like image sensor whose pixels are triggered by light; it converts light intensity into a series of pulse intervals. The method obtains optical flow directly from the pulse data by accumulating continuous pulses. When the target is brighter than the background, the triggered points can be used to filter out redundant data, taking full advantage of the rapid response of PAIS to high-brightness targets. We applied the method to extract the optical flow of a high-speed turntable under different background brightness levels, using both a sensor model and real captured data. At a sampling rate of 2×10⁴ frames/s, optical flow could be extracted from a turntable rotating at 1000 r/min, and more than 90% of the redundant points were filtered out. Experimental results showed that the pulse-data-based optical flow extraction algorithm can efficiently extract the optical flow information of high-brightness objects without the need to reconstruct images.
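The pipeline the abstract describes — accumulate continuous pulses into per-pixel counts, discard points the light never triggered strongly, and estimate flow on what remains — can be illustrated roughly as follows. This is a minimal Python sketch under our own simplifying assumptions: the function names, the fixed count threshold, and the single-translation Lucas-Kanade solve are ours for illustration, not the paper's implementation.

```python
import numpy as np

def accumulate_pulses(events, shape, t0, t1):
    """Accumulate pulse counts per pixel over the window [t0, t1).

    `events` holds (t, y, x) pulse timestamps; brighter pixels fire
    pulses at shorter intervals, so the count is a brightness proxy.
    """
    frame = np.zeros(shape, dtype=np.float64)
    for t, y, x in events:
        if t0 <= t < t1:
            frame[int(y), int(x)] += 1.0
    return frame

def flow_from_pulse_frames(f0, f1, count_thresh):
    """Estimate one (vx, vy) translation between two accumulated frames.

    Pixels whose pulse count never exceeds count_thresh are treated as
    redundant background and excluded, mimicking filtering by triggered
    points when the target outshines the background.
    """
    mask = (f0 > count_thresh) | (f1 > count_thresh)
    gy, gx = np.gradient((f0 + f1) / 2.0)   # np.gradient: axis 0 (y) first
    gt = f1 - f0
    gx, gy, gt = gx[mask], gy[mask], gt[mask]
    # Lucas-Kanade normal equations over the triggered points only.
    A = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                  [np.sum(gx * gy), np.sum(gy * gy)]])
    b = -np.array([np.sum(gx * gt), np.sum(gy * gt)])
    vx, vy = np.linalg.solve(A, b)
    return vx, vy, mask

# Demo: a bright Gaussian spot on a dim background, shifted by +1 pixel in x.
yy, xx = np.mgrid[0:32, 0:32]
def _spot(cx, cy):
    return 1.0 + 20.0 * np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / 18.0)
f0, f1 = _spot(15, 16), _spot(16, 16)
vx, vy, mask = flow_from_pulse_frames(f0, f1, count_thresh=3.0)
print(f"vx={vx:.2f}, vy={vy:.2f}, kept {mask.mean():.0%} of pixels")
```

In this toy setup most pixels fall below the count threshold and are excluded before the flow solve, which is the essence of the redundant-point filtering the paper reports; the real method operates on asynchronous pulse streams rather than reconstructed frames.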
Author information
Contributions
Peiwen ZHANG and Huafeng NIE designed the research. Jiangtao XU pointed out the key research directions. Peiwen ZHANG drafted the paper. Zhiyuan GAO and Kaiming NIE helped organize the paper and ensure its quality. Peiwen ZHANG and Jiangtao XU revised and finalized the paper.
Additional information
Compliance with ethics guidelines
Peiwen ZHANG, Jiangtao XU, Huafeng NIE, Zhiyuan GAO, and Kaiming NIE declare that they have no conflict of interest.
Project supported by the National Key R&D Program of China (No. 2019YFB2204202)
Cite this article
Zhang, P., Xu, J., Nie, H. et al. Motion detection for high-speed high-brightness objects based on a pulse array image sensor. Front Inform Technol Electron Eng 23, 113–122 (2022). https://doi.org/10.1631/FITEE.2000407