
High-Speed HDR Video Reconstruction from Hybrid Intensity Frames and Events

  • Conference paper
Computer Vision and Machine Intelligence

Part of the book series: Lecture Notes in Networks and Systems ((LNNS,volume 586))


Abstract

An effective way to generate high dynamic range (HDR) videos is to capture a sequence of low dynamic range (LDR) frames with alternating exposures and interpolate the intermediate frames. Video frame interpolation techniques can help reconstruct missing information from neighboring images of different exposures. Most conventional video frame interpolation techniques compute optical flow between successively captured frames and linearly interpolate them to obtain the intermediate frames; however, these techniques fail when the scene contains nonlinear motion or sudden brightness changes. Event sensors are a new class of sensors that asynchronously measure per-pixel brightness changes and offer high temporal resolution, high dynamic range, and low latency. For HDR video reconstruction, we propose a hybrid imaging system consisting of a conventional camera, which captures alternating-exposure LDR frames, and an event camera, which captures high-speed events. We interpolate the missing frames for each exposure with an event-based interpolation technique that takes the nearest image frames of that exposure together with the high-speed event data between them. Once all LDR frames for the different exposures have been interpolated at a given timestamp, we use a deep learning-based algorithm to obtain the HDR frame. We compare our results with those of non-event-based interpolation methods and find that event-based techniques perform better when a large number of frames must be interpolated.
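The two-stage pipeline the abstract describes, event-driven synthesis of the missing exposure frames followed by an LDR-to-HDR merge, can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's method: the paper uses a learned interpolation and merge network, whereas the code below stands in the standard event-integration model (a pixel's log intensity changes by the contrast threshold per event) and a classical Debevec-Malik-style weighted merge. The contrast threshold `C`, the function names, and the `(x, y, t, polarity)` event layout are all assumptions for illustration.

```python
import numpy as np

C = 0.2  # assumed contrast threshold of the event sensor

def integrate_events(log_frame, events, t_target):
    """Synthesize the log-intensity frame at t_target by accumulating
    events (x, y, t, polarity) fired since the key frame: each event
    adds +/- C to the log intensity at its pixel."""
    out = log_frame.copy()
    for x, y, t, p in events:
        if t <= t_target:
            out[y, x] += C * p
    return out

def merge_ldr_exposures(frames, exposures, eps=1e-6):
    """Classical weighted HDR merge: each LDR frame (intensities in [0, 1])
    votes for the radiance frame/exposure, weighted by a hat function that
    down-weights under- and over-exposed pixels."""
    num = np.zeros_like(frames[0], dtype=np.float64)
    den = np.zeros_like(frames[0], dtype=np.float64)
    for f, e in zip(frames, exposures):
        w = 1.0 - np.abs(2.0 * f - 1.0)  # peaks at mid-gray, 0 at the extremes
        num += w * f / e
        den += w
    return num / (den + eps)
```

At each timestamp, `integrate_events` would fill in the frame for each missing exposure from its nearest key frame and the in-between events, after which `merge_ldr_exposures` fuses the aligned exposure stack into one HDR radiance estimate.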



Author information

Correspondence to Rishabh Samra.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Samra, R., Mitra, K., Shedligeri, P. (2023). High-Speed HDR Video Reconstruction from Hybrid Intensity Frames and Events. In: Tistarelli, M., Dubey, S.R., Singh, S.K., Jiang, X. (eds) Computer Vision and Machine Intelligence. Lecture Notes in Networks and Systems, vol 586. Springer, Singapore. https://doi.org/10.1007/978-981-19-7867-8_15
