
High-speed video generation with an event camera

  • Original Article

The Visual Computer

Abstract

An event camera is a visual sensor that mimics aspects of the human visual system: each pixel records an event only when the light intensity falling on it changes. This gives the event camera very high temporal resolution and makes it well suited to capturing fast motion. However, an event camera does not record full-frame intensity, and in particular provides no color information. In this paper, we aim to recover a typical scene in which the foreground undergoes high-speed motion that can be approximated as planar, while the background is static. We show how to use an event camera to generate high-speed videos of 2D motion, augmented with foreground and background images taken by a conventional camera. Based on curve saliency, we match an object extracted from a static image to frames formed from the event stream, and we fit a parametric affine motion model to generate the image sequence. Our method can restore scenes of very fast motion, such as falling or rotating objects and vibrating strings.
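The pipeline described above has two core steps: accumulate the asynchronous event stream into short-window binary frames, then fit a 2D affine motion model to point correspondences between the object and each event frame. A minimal sketch of both steps follows; the function names, the fixed-window framing, and the least-squares fit are illustrative assumptions, not the authors' implementation (which matches via curve saliency):

```python
import numpy as np

def events_to_frames(events, shape, window_us=1000):
    """Accumulate an event stream into binary frames.

    events: iterable of (timestamp_us, x, y, polarity) tuples.
    shape:  (height, width) of the sensor.
    Each frame collects all events within one fixed time window.
    """
    events = list(events)
    t0 = events[0][0]
    frames = {}
    for t, x, y, _p in events:
        idx = int((t - t0) // window_us)
        frames.setdefault(idx, np.zeros(shape, dtype=np.uint8))[y, x] = 1
    n = max(frames) + 1
    return [frames.get(i, np.zeros(shape, dtype=np.uint8)) for i in range(n)]

def fit_affine(src_pts, dst_pts):
    """Least-squares 2x3 affine transform mapping src points to dst points."""
    src = np.asarray(src_pts, dtype=float)
    dst = np.asarray(dst_pts, dtype=float)
    # Homogeneous design matrix [x, y, 1]; solve A @ X = dst for X (3x2).
    A = np.hstack([src, np.ones((len(src), 1))])
    X, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return X.T  # 2x3 matrix [[a, b, tx], [c, d, ty]]
```

Applying `fit_affine` to correspondences for each event frame yields the per-frame affine parameters used to warp the high-resolution foreground image into the video sequence.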



Acknowledgements

We thank all the anonymous reviewers for their helpful and constructive comments. This work was supported by the Natural Science Foundation of China (Project Number 61521002), the Joint NSFC-ISF Research Program (Project Number 61561146393), the Research Grant of Beijing Higher Institution Engineering Research Center, and the Tsinghua-Tencent Joint Laboratory for Internet Innovation Technology. Luping Shi was supported in part by the Beijing Municipal Science and Technology Commission (Z15111000090000), the Study of Brain-Inspired Computing of Tsinghua University (20141080934), and the Suzhou-Tsinghua Innovation Leading Program (2016SZ0102).

Author information


Corresponding author

Correspondence to Shi-Min Hu.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (mp4 19376 KB)


About this article


Cite this article

Liu, HC., Zhang, FL., Marshall, D. et al. High-speed video generation with an event camera. Vis Comput 33, 749–759 (2017). https://doi.org/10.1007/s00371-017-1372-y

