
Residual and Dense UNet for Under-Display Camera Restoration

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 12539)

Abstract

With the rapid development of electronic products, the growing demand for full-screen devices has become a new trend, which motivates the investigation of Under-Display Cameras (UDC). A UDC not only enables a larger display-to-body ratio but also improves the interactive experience. However, when an imaging sensor is mounted behind a display, existing screen materials cause severe image degradation due to their low light transmittance and diffraction effects. To promote research in this field, RLQ-TOD 2020 held the Image Restoration Challenge for Under-Display Cameras. The challenge comprised two tracks: 4K Transparent OLED (T-OLED) and phone Pentile OLED (P-OLED). In this paper, we propose a UNet-like structure with two different basic building blocks to tackle this problem. We find that T-OLED and P-OLED favor different model structures and input patch sizes during training. With the proposed model, our team won third place in the challenge on both the T-OLED and P-OLED tracks.
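The abstract names the two building blocks only at a high level. As an illustrative sketch (our assumption, not the authors' implementation), the core ideas of a residual block and a dense block can be shown with plain NumPy, using matrix multiplies in place of convolutions and a toy 2-D feature array in place of real feature maps:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, weight):
    """Residual block: the input is added back onto the transformed
    features, so the layer only has to learn a residual mapping."""
    return x + relu(x @ weight)

def dense_block(x, weights):
    """Dense block: each layer receives the concatenation of all
    preceding feature maps along the channel axis."""
    features = [x]
    for w in weights:
        inp = np.concatenate(features, axis=-1)
        features.append(relu(inp @ w))
    return np.concatenate(features, axis=-1)

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))          # 4 "pixels", 8 channels
res_out = residual_block(x, rng.standard_normal((8, 8)))
dense_out = dense_block(x, [rng.standard_normal((8, 4)),
                            rng.standard_normal((12, 4))])
print(res_out.shape)    # (4, 8): the residual path keeps the channel count
print(dense_out.shape)  # (4, 16): channels grow by 4 per dense layer
```

The shape behavior is the practical difference: residual blocks preserve channel width (well suited to deep stacking), while dense blocks grow it by a fixed rate per layer, which is why dense designs typically end with a compression layer.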

Q. Yang and Y. Liu contributed equally as co-first authors.



Acknowledgement

This work was partially supported by the National Key R&D Program of China (NO. 2019YFB17050003, NO. 2018YFB1308801, NO. 2017YFB0306401) and the Consulting Research Project of the Chinese Academy of Engineering (Grant No. 2019-XZ-7).

Author information


Corresponding author

Correspondence to Qirui Yang.


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Yang, Q., Liu, Y., Tang, J., Ku, T. (2020). Residual and Dense UNet for Under-Display Camera Restoration. In: Bartoli, A., Fusiello, A. (eds) Computer Vision – ECCV 2020 Workshops. ECCV 2020. Lecture Notes in Computer Science, vol 12539. Springer, Cham. https://doi.org/10.1007/978-3-030-68238-5_30


  • DOI: https://doi.org/10.1007/978-3-030-68238-5_30


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-68237-8

  • Online ISBN: 978-3-030-68238-5

  • eBook Packages: Computer Science (R0)
