Deblurring Low-Light Images with Events

International Journal of Computer Vision

Abstract

Modern image-based deblurring methods usually show degraded performance in low-light conditions, since such images typically consist mostly of poorly visible dark regions and a few saturated bright regions, which limits the effective features that can be extracted for deblurring. In contrast, event cameras trigger events with very high dynamic range and low latency, so they hardly suffer from saturation and naturally encode dense temporal information about motion. However, existing event-based deblurring methods become less robust in low-light conditions, since the events triggered in dark regions are often severely contaminated by noise, leading to inaccurate reconstruction of the corresponding intensity values. Moreover, because they directly adopt the event-based double integral model to perform pixel-wise reconstruction, they can only handle the low-resolution grayscale active pixel sensor images provided by the DAVIS camera, which cannot meet the requirements of daily photography. In this paper, to apply events to low-light image deblurring robustly, we propose a unified two-stage framework along with a motion-aware neural network tailored to it, which reconstructs the sharp image under the guidance of high-fidelity motion clues extracted from events. We also build an RGB-DAVIS hybrid camera system to demonstrate that, thanks to the natural advantages of our two-stage framework, our method can deblur high-resolution RGB images. Experimental results show that our method achieves state-of-the-art performance on both synthetic and real-world images.
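To make the limitation above concrete, the event-based double integral (EDI) model (Pan et al., 2019) relates a blurry frame B to the latent sharp frame through the events fired during the exposure: B = (L(t_ref) / T) ∫ exp(c·E(t)) dt over [0, T], where E(t) is the signed count of events between the reference time t_ref and t, and c is the camera's contrast threshold. The following is a minimal single-pixel sketch of this pixel-wise reconstruction, written under assumed parameter values (e.g., c = 0.2) and a simple Riemann discretization; it illustrates the prior EDI approach that the abstract refers to, not the method proposed in this paper.

```python
import numpy as np

def edi_reconstruct_pixel(B, event_times, polarities, T, c=0.2, n=1000):
    """Recover the latent intensity at the exposure start (reference
    time t_ref = 0) from the blurry measurement B, by numerically
    integrating exp(c * E(t)) over the exposure [0, T]."""
    ts = np.linspace(0.0, T, n)
    # E(t): cumulative signed event count from the exposure start to t
    E = np.array([polarities[event_times <= t].sum() for t in ts])
    # Riemann estimate of the integral of exp(c * E(t)) over [0, T]
    integral = np.mean(np.exp(c * E)) * T
    return B * T / integral

# Toy example: two positive events (brightness rising) during a 10 ms
# exposure; the recovered intensity at the exposure start is darker
# than the blurry average B.
event_times = np.array([0.002, 0.006])
polarities = np.array([1.0, 1.0])
L0 = edi_reconstruct_pixel(B=0.5, event_times=event_times,
                           polarities=polarities, T=0.01)
print(f"latent intensity at exposure start: {L0:.3f}")
```

Because E(t) enters through an exponential, even a handful of spurious noise events at a dark pixel perturbs the reconstructed intensity multiplicatively, which is exactly the fragility that motivates extracting motion clues from events rather than reconstructing intensities from them directly.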

Notes

  1. In this paper, we use the term “low-light images” to refer to images in which a majority of pixels lie in dark regions (poorly visible, with low contrast) alongside a few bright (usually saturated) pixels (e.g., night scenes containing light sources (Hu et al., 2018b)), rather than images containing only dark regions (e.g., short-exposure images (Chen et al., 2018a; Zhang et al., 2020b)).

  2. Additional results can be found in the supplementary material.

  3. Additional results can be found in the supplementary material.

  4. Additional results can be found in the supplementary material.

References

  • Baldwin, R., Almatrafi, M., Asari, V., & Hirakawa, K. (2020). Event probability mask (EPM) and event denoising convolutional neural network (EDnCNN) for neuromorphic cameras. In: Proceedings of computer vision and pattern recognition, pp. 1701–1710.

  • Barrios-Avilés, J., Rosado-Muñoz, A., Medus, L. D., Bataller-Mompeán, M., & Guerrero-Martínez, J. F. (2018). Less data same information for event-based sensors: A bioinspired filtering and data reduction algorithm. Sensors, 18(12), 4122.

  • Boracchi, G., & Foi, A. (2012). Modeling the performance of image restoration from motion blur. IEEE Transactions on Image Processing, 21(8), 3502–3517.

  • Brandli, C., Berner, R., Yang, M., Liu, S. C., & Delbruck, T. (2014). A 240 × 180 130 dB 3 µs latency global shutter spatiotemporal vision sensor. IEEE Journal of Solid-State Circuits, 49(10), 2333–2341.

  • Chakrabarti, A. (2016). A neural approach to blind motion deblurring. In: Proceedings of European conference on computer vision, pp. 221–235.

  • Chan, T. F., & Wong, C. K. (1998). Total variation blind deconvolution. IEEE Transactions on Image Processing, 7(3), 370–375.

  • Chen, C., Chen, Q., Xu, J., & Koltun, V. (2018a). Learning to see in the dark. In: Proceedings of computer vision and pattern recognition, pp. 3291–3300.

  • Chen, H., Gu, J., Gallo, O., Liu, M. Y., Veeraraghavan, A., & Kautz, J. (2018b). Reblur2Deblur: Deblurring videos via self-supervised learning. In: Proceedings of international conference on computational photography, pp. 1–9.

  • Chen, H., Teng, M., Shi, B., Wang, Y., & Huang, T. (2020). Learning to deblur and generate high frame rate video with an event camera. arXiv preprint arXiv:2003.00847

  • Chen, L., Zhang, J., Lin, S., Fang, F., & Ren, J. S. (2021a). Blind deblurring for saturated images. In: Proceedings of computer vision and pattern recognition, pp. 6308–6316.

  • Chen, L., Zhang, J., Pan, J., Lin, S., Fang, F., & Ren, J. S. (2021b). Learning a non-blind deblurring network for night blurry images. In: Proceedings of computer vision and pattern recognition, pp. 10542–10550.

  • Chi, Z., Wang, Y., Yu, Y., & Tang, J. (2021). Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proceedings of computer vision and pattern recognition, pp. 9137–9146.

  • Cho, S., & Lee, S. (2009). Fast motion deblurring. In: Proceedings of ACM SIGGRAPH Asia, pp. 1–8.

  • Cho, S. J., Ji, S. W., Hong, J. P., Jung, S. W., & Ko, S. J. (2021). Rethinking coarse-to-fine approach in single image deblurring. In: Proceedings of international conference on computer vision, pp. 4641–4650.

  • Delbruck, T., Hu, Y., & He, Z. (2020). V2E: From video frames to realistic DVS event camera streams. arXiv preprint arXiv:2006.07722

  • Dong, J., Pan, J., Su, Z., & Yang, M. H. (2017). Blind image deblurring with outlier handling. In: Proceedings of international conference on computer vision, pp. 2478–2486.

  • Dong, J., Roth, S., & Schiele, B. (2021). Learning spatially-variant MAP models for non-blind image deblurring. In: Proceedings of computer vision and pattern recognition, pp. 4886–4895.

  • Duan, P., Wang, Z. W., Zhou, X., Ma, Y., & Shi, B. (2021). EventZoom: Learning to denoise and super resolve neuromorphic events. In: Proceedings of computer vision and pattern recognition, pp. 12824–12833.

  • Fergus, R., Singh, B., Hertzmann, A., Roweis, S. T., & Freeman, W. T. (2006). Removing camera shake from a single photograph. In: Proceedings of ACM SIGGRAPH, pp. 787–794.

  • Fu, Z., Zheng, Y., Ma, T., Ye, H., Yang, J., & He, L. (2022). Edge-aware deep image deblurring. Neurocomputing, 502, 37–47.

  • Gallego, G., Rebecq, H., & Scaramuzza, D. (2018). A unifying contrast maximization framework for event cameras, with applications to motion, depth, and optical flow estimation. In: Proceedings of computer vision and pattern recognition, pp. 3867–3876.

  • Gao, H., Tao, X., Shen, X., & Jia, J. (2019). Dynamic scene deblurring with parameter selective sharing and nested skip connections. In: Proceedings of computer vision and pattern recognition, pp. 3848–3856.

  • Glorot, X., & Bengio, Y. (2010). Understanding the difficulty of training deep feedforward neural networks. In: Proceedings of international conference on artificial intelligence and statistics, pp. 249–256.

  • Gong, D., Yang, J., Liu, L., Zhang, Y., Reid, I., Shen, C., Van Den Hengel, A., & Shi, Q. (2017). From motion blur to motion flow: A deep learning solution for removing heterogeneous motion blur. In: Proceedings of computer vision and pattern recognition, pp. 2319–2328.

  • Gu, S., Li, Y., Gool, L. V., & Timofte, R. (2019). Self-guided network for fast image denoising. In: Proceedings of international conference on computer vision, pp. 2511–2520.

  • Han, J., Zhou, C., Duan, P., Tang, Y., Xu, C., Xu, C., Huang, T., & Shi, B. (2020). Neuromorphic camera guided high dynamic range imaging. In: Proceedings of computer vision and pattern recognition, pp. 1730–1739.

  • He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In: Proceedings of computer vision and pattern recognition, pp. 770–778.

  • Hinton, G. E., & Salakhutdinov, R. R. (2006). Reducing the dimensionality of data with neural networks. Science, 313(5786), 504–507.

  • Hu, J., Shen, L., & Sun, G. (2018a). Squeeze-and-excitation networks. In: Proceedings of computer vision and pattern recognition, pp. 7132–7141.

  • Hu, Z., Cho, S., Wang, J., & Yang, M. H. (2018b). Deblurring low-light images with light streaks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(10), 2329–2341.

  • Huang, G., Liu, Z., Van Der Maaten, L., & Weinberger, K. Q. (2017). Densely connected convolutional networks. In: Proceedings of computer vision and pattern recognition.

  • Hyun Kim, T., & Mu Lee, K. (2015). Generalized video deblurring for dynamic scenes. In: Proceedings of computer vision and pattern recognition, pp. 5426–5434.

  • Jiang, Z., Zhang, Y., Zou, D., Ren, J., Lv, J., & Liu, Y. (2020). Learning event-based motion deblurring. In: Proceedings of computer vision and pattern recognition, pp. 3320–3329.

  • Joshi, N., Szeliski, R., & Kriegman, D. J. (2008). PSF estimation using sharp edge prediction. In: Proceedings of computer vision and pattern recognition, pp. 1–8.

  • Kaufman, A., & Fattal, R. (2020). Deblurring using analysis-synthesis networks pair. In: Proceedings of computer vision and pattern recognition, pp. 5811–5820.

  • Khodamoradi, A., & Kastner, R. (2018). O(N)-space spatiotemporal filter for reducing noise in neuromorphic vision sensors. IEEE Transactions on Emerging Topics in Computing, 9(1), 15–23.

  • Kingma, D. P., & Ba, J. (2014). ADAM: A method for stochastic optimization. arXiv preprint arXiv:1412.6980

  • Krishnan, D., Tay, T., & Fergus, R. (2011). Blind deconvolution using a normalized sparsity measure. In: Proceedings of computer vision and pattern recognition, pp. 233–240.

  • Kupyn, O., Budzan, V., Mykhailych, M., Mishkin, D., & Matas, J. (2018). DeblurGAN: Blind motion deblurring using conditional adversarial networks. In: Proceedings of computer vision and pattern recognition, pp. 8183–8192.

  • Kupyn, O., Martyniuk, T., Wu, J., & Wang, Z. (2019). DeblurGAN-v2: Deblurring (orders-of-magnitude) faster and better. In: Proceedings of international conference on computer vision, pp. 8878–8887.

  • Li, C., Guo, C., Han, L. H., Jiang, J., Cheng, M. M., Gu, J., & Loy, C. C. (2021). Low-light image and video enhancement using deep learning: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence.

  • Li, G., He, X., Zhang, W., Chang, H., Dong, L., & Lin, L. (2018). Non-locally enhanced encoder-decoder network for single image de-raining. In: Proceedings of ACM MM, pp. 1056–1064.

  • Lichtsteiner, P., Posch, C., & Delbruck, T. (2008). A 128 × 128 120 dB 15 µs latency asynchronous temporal contrast vision sensor. IEEE Journal of Solid-State Circuits, 43(2), 566–576.

  • Lin, S., Zhang, J., Pan, J., Jiang, Z., Zou, D., Wang, Y., Chen, J., & Ren, J. (2020). Learning event-driven video deblurring and interpolation. In: Proceedings of European conference on computer vision.

  • Liu, H., Brandli, C., Li, C., Liu, S. C., & Delbruck, T. (2015). Design of a spatiotemporal correlation filter for event-based sensors. In: International symposium on circuits and systems, pp. 722–725.

  • Liu, J., Xu, D., Yang, W., Fan, M., & Huang, H. (2021). Benchmarking low-light image enhancement and beyond. International Journal of Computer Vision, 129(4), 1153–1184.

  • Liu, Y., Cheng, M. M., Hu, X., Bian, J. W., Zhang, L., Bai, X., & Tang, J. (2019). Richer convolutional features for edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 41(08), 1939–1946.

  • Lv, F., Li, Y., & Lu, F. (2021). Attention guided low-light image enhancement with a large scale low-light simulation dataset. International Journal of Computer Vision, 129(7), 2175–2193.

  • Maharjan, P., Li, L., Li, Z., Xu, N., Ma, C., & Li, Y. (2019). Improving extreme low-light image denoising via residual learning. In: Proceedings of international conference on multimedia and expo.

  • Michaeli, T., & Irani, M. (2014). Blind deblurring using internal patch recurrence. In: Proceedings of European conference on computer vision, pp. 783–798.

  • Mitrokhin, A., Fermüller, C., Parameshwara, C., & Aloimonos, Y. (2018). Event-based moving object detection and tracking. In: Proceedings of international conference on intelligent robots and systems, pp. 1–9.

  • Moseley, B., Bickel, V., López-Francos, I. G., & Rana, L. (2021). Extreme low-light environment-driven image denoising over permanently shadowed lunar regions with a physical noise model. In: Proceedings of computer vision and pattern recognition, pp. 6317–6327.

  • Nah, S., Hyun Kim, T., & Mu Lee, K. (2017). Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proceedings of computer vision and pattern recognition, pp. 3883–3891.

  • Oktay, O., Schlemper, J., Folgoc, L. L., Lee, M., Heinrich, M., Misawa, K., Mori, K., McDonagh, S., Hammerla, N. Y., Kainz, B., Glocker, B., & Rueckert, D. (2018). Attention U-Net: Learning where to look for the pancreas. arXiv preprint arXiv:1804.03999

  • Pan, J., Hu, Z., Su, Z., & Yang, M. H. (2016a). L₀-regularized intensity and gradient prior for deblurring text images and beyond. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(2), 342–355.

  • Pan, J., Sun, D., Pfister, H., & Yang, M. H. (2016b). Blind image deblurring using dark channel prior. In: Proceedings of computer vision and pattern recognition, pp. 1628–1636.

  • Pan, L., Scheerlinck, C., Yu, X., Hartley, R., Liu, M., & Dai, Y. (2019). Bringing a blurry frame alive at high frame-rate with an event camera. In: Proceedings of computer vision and pattern recognition, pp. 6820–6829.

  • Pan, L., Liu, M., & Hartley, R. (2020). Single image optical flow estimation with an event camera. In: Proceedings of computer vision and pattern recognition, pp. 1669–1678.

  • Ren, D., Zhang, K., Wang, Q., Hu, Q., & Zuo, W. (2020). Neural blind deconvolution using deep priors. In: Proceedings of computer vision and pattern recognition, pp. 3341–3350.

  • Rim, J., Lee, H., Won, J., & Cho, S. (2020). Real-world blur dataset for learning and benchmarking deblurring algorithms. In: Proceedings of European conference on computer vision.

  • Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., Berg, A. C., & Fei-Fei, L. (2015). ImageNet large scale visual recognition challenge. International Journal of Computer Vision, 115(3), 211–252.

  • Shan, Q., Jia, J., & Agarwala, A. (2008). High-quality motion deblurring from a single image. ACM Transactions on Graphics (Proc of ACM SIGGRAPH), 27(3), 1–10.

  • Shang, W., Ren, D., Zou, D., Ren, J. S., Luo, P., & Zuo, W. (2021). Bringing events into video deblurring with non-consecutively blurry frames. In: Proceedings of international conference on computer vision, pp. 4531–4540.

  • Simonyan, K., & Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556

  • Stoffregen, T., Gallego, G., Drummond, T., Kleeman, L., & Scaramuzza, D. (2019). Event-based motion segmentation by motion compensation. In: Proceedings of computer vision and pattern recognition, pp. 7244–7253.

  • Suin, M., Purohit, K., & Rajagopalan, A. (2020). Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proceedings of computer vision and pattern recognition, pp. 3606–3615.

  • Sun, J., Cao, W., Xu, Z., & Ponce, J. (2015). Learning a convolutional neural network for non-uniform motion blur removal. In: Proceedings of computer vision and pattern recognition, pp. 769–777.

  • Tao, X., Gao, H., Shen, X., Wang, J., & Jia, J. (2018). Scale-recurrent network for deep image deblurring. In: Proceedings of computer vision and pattern recognition, pp. 8174–8182.

  • Tran, P., Tran, A. T., Phung, Q., & Hoai, M. (2021). Explore image deblurring via encoded blur kernel space. In: Proceedings of computer vision and pattern recognition, pp. 11956–11965.

  • Ulyanov, D., Vedaldi, A., & Lempitsky, V. (2016). Instance normalization: The missing ingredient for fast stylization. arXiv preprint arXiv:1607.08022

  • Wang, B., He, J., Yu, L., Xia, G. S., & Yang, W. (2020a). Event enhanced high-quality image recovery. In: Proceedings of European conference on computer vision, pp. 155–171.

  • Wang, X., Girshick, R., Gupta, A., & He, K. (2018). Non-local neural networks. In: Proceedings of computer vision and pattern recognition, pp. 7794–7803.

  • Wang, Y., Du, B., Shen, Y., Wu, K., Zhao, G., Sun, J., & Wen, H. (2019). EV-Gait: Event-based robust gait recognition using dynamic vision sensors. In: Proceedings of computer vision and pattern recognition, pp. 6358–6367.

  • Wang, Z., Duan, P., Cossairt, O., Katsaggelos, A., Huang, T., & Shi, B. (2020b). Joint filtering of intensity images and neuromorphic events for high-resolution noise-robust imaging. In: Proceedings of computer vision and pattern recognition, pp. 1609–1619.

  • Wei, K., Fu, Y., Yang, J., & Huang, H. (2020). A physics-based noise formation model for extreme low-light raw denoising. In: Proceedings of computer vision and pattern recognition, pp. 2758–2767.

  • Whyte, O., Sivic, J., Zisserman, A., & Ponce, J. (2012). Non-uniform deblurring for shaken images. International Journal of Computer Vision, 98(2), 168–186.

  • Xu, F., Yu, L., Wang, B., Yang, W., Xia, G. S., Jia, X., Qiao, Z., & Liu, J. (2021). Motion deblurring with real events. In: Proceedings of international conference on computer vision, pp. 2583–2592.

  • Xu, L., Zheng, S., & Jia, J. (2013). Unnatural L₀ sparse representation for natural image deblurring. In: Proceedings of computer vision and pattern recognition, pp. 1107–1114.

  • Yan, Y., Ren, W., Guo, Y., Wang, R., & Cao, X. (2017). Image deblurring via extreme channels prior. In: Proceedings of computer vision and pattern recognition, pp. 4003–4011.

  • Yu, Z., Feng, C., Liu, M. Y., & Ramalingam, S. (2017). CASENet: Deep category-aware semantic edge detection. In: Proceedings of computer vision and pattern recognition, pp. 5964–5973.

  • Yuan, Y., Su, W., & Ma, D. (2020). Efficient dynamic scene deblurring using spatially variant deconvolution network with optical flow guided training. In: Proceedings of computer vision and pattern recognition, pp. 3555–3564.

  • Zhang, H., Dai, Y., Li, H., & Koniusz, P. (2019). Deep stacked hierarchical multi-patch network for image deblurring. In: Proceedings of computer vision and pattern recognition, pp. 5978–5986.

  • Zhang, J., Pan, J., Ren, J., Song, Y., Bao, L., Lau, R. W., & Yang, M. H. (2018a). Dynamic scene deblurring using spatially variant recurrent neural networks. In: Proceedings of computer vision and pattern recognition, pp. 2521–2529.

  • Zhang, K., Luo, W., Zhong, Y., Ma, L., Stenger, B., Liu, W., & Li, H. (2020a). Deblurring by realistic blurring. In: Proceedings of computer vision and pattern recognition, pp. 2737–2746.

  • Zhang, R., Isola, P., Efros, A. A., Shechtman, E., & Wang, O. (2018b). The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of computer vision and pattern recognition.

  • Zhang, S., Zhang, Y., Jiang, Z., Zou, D., Ren, J., & Zhou, B. (2020b). Learning to see in the dark with events. In: Proceedings of European conference on computer vision, pp. 666–682.

  • Zhong, L., Cho, S., Metaxas, D., Paris, S., & Wang, J. (2013). Handling noise in single image deblurring using directional filters. In: Proceedings of computer vision and pattern recognition, pp. 612–619.

  • Zhou, C., Zhao, H., Han, J., Xu, C., Xu, C., Huang, T., & Shi, B. (2020). UnModNet: Learning to unwrap a modulo image for high dynamic range imaging. In: Proceedings of advances in neural information processing systems.

  • Zhou, C., Teng, M., Han, J., Xu, C., & Shi, B. (2021a). DeLiEve-Net: Deblurring low-light images with light streaks and local events. In: Proceedings of international conference on computer vision workshops, pp. 1155–1164.

  • Zhou, C., Teng, M., Han, Y., Xu, C., & Shi, B. (2021b). Learning to dehaze with polarization. In: Proceedings of advances in neural information processing systems.

  • Zhu, A. Z., Yuan, L., Chaney, K., & Daniilidis, K. (2019). Unsupervised event-based learning of optical flow, depth, and egomotion. In: Proceedings of computer vision and pattern recognition, pp. 989–997.

Acknowledgements

This work is supported by the National Key R&D Program of China (2020AAA0105200) and the National Natural Science Foundation of China under Grant Nos. 62136001, 62088102, 62276007, and 61876007.

Author information

Corresponding author

Correspondence to Boxin Shi.

Additional information

Communicated by Ying Fu.

About this article

Cite this article

Zhou, C., Teng, M., Han, J. et al. Deblurring Low-Light Images with Events. Int J Comput Vis 131, 1284–1298 (2023). https://doi.org/10.1007/s11263-023-01754-5
