
Zero-Shot Image Enhancement with Renovated Laplacian Pyramid

  • Conference paper
Computer Vision – ECCV 2022 Workshops (ECCV 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13804)

Abstract

In this research, we tackle the image enhancement task in both the traditional and the Zero-Shot learning setting with a renovated Laplacian pyramid. The image enhancement field has recently experienced the power of Zero-Shot learning, which estimates the output from the information of the input image itself without additional ground-truth data, thereby avoiding the collection of training datasets and the associated domain shift. Because "zero" training data are available, introducing an effective visual prior is particularly important in Zero-Shot image enhancement. Previous studies mainly focus on designing task-specific loss functions that capture the internal physical process of the task. On the other hand, although incorporating signal processing methods into the enhancement model has proven effective in supervised learning, it remains less common in Zero-Shot learning. Aiming for further improvement and a promising leap in Zero-Shot learning, this research proposes to incorporate the Laplacian pyramid into the network process. First, Multiscale Laplacian Enhancement (MLE) is formulated, which simply enhances an input image in the hierarchical Laplacian pyramid representation and yields detail enhancement, image sharpening, and contrast improvement depending on its hyperparameters. By combining MLE with a visual prior specific to underwater images, a Zero-Shot underwater image enhancement model with only seven convolutional layers is proposed. Without prior training and without any training data, the proposed model attains performance comparable to previous state-of-the-art models.
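To make the pyramid-based enhancement concrete, below is a minimal sketch of multiscale enhancement in a Laplacian pyramid, assuming OpenCV (cv2) and NumPy are available. It illustrates the general idea only, not the paper's exact MLE formulation: the function names, the number of levels, and the per-level gains are hypothetical choices for the example.

```python
# Minimal sketch of multiscale enhancement in a Laplacian pyramid.
# NOTE: this is an illustrative example, not the paper's exact MLE; the
# level count and gains below are hypothetical hyperparameters.
import cv2
import numpy as np

def build_laplacian_pyramid(img, levels):
    """Decompose `img` into `levels` band-pass layers plus a low-pass residual."""
    pyramid, current = [], img.astype(np.float32)
    for _ in range(levels):
        down = cv2.pyrDown(current)
        up = cv2.pyrUp(down, dstsize=(current.shape[1], current.shape[0]))
        pyramid.append(current - up)   # band-pass (detail) layer
        current = down
    pyramid.append(current)            # low-frequency residual
    return pyramid

def collapse_with_gains(pyramid, gains):
    """Rebuild the image, scaling each detail layer by its gain (fine-to-coarse order)."""
    out = pyramid[-1]
    for lap, gain in zip(reversed(pyramid[:-1]), reversed(gains)):
        out = cv2.pyrUp(out, dstsize=(lap.shape[1], lap.shape[0])) + gain * lap
    return np.clip(out, 0, 255).astype(np.uint8)

if __name__ == "__main__":
    img = cv2.imread("input.png")                      # any 8-bit test image
    pyr = build_laplacian_pyramid(img, levels=3)
    # Hypothetical gains: >1 boosts that frequency band, 1.0 leaves it unchanged.
    out = collapse_with_gains(pyr, gains=[2.0, 1.5, 1.2])
    cv2.imwrite("enhanced.png", out)
```

Gains larger than 1 on the fine levels amplify high-frequency detail (sharpening), while gains on the coarser levels mainly raise local contrast, matching the hyperparameter-dependent behaviour the abstract attributes to MLE.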


Author information

Corresponding author

Correspondence to Shunsuke Takao.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 1090 KB)

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Takao, S. (2023). Zero-Shot Image Enhancement with Renovated Laplacian Pyramid. In: Karlinsky, L., Michaeli, T., Nishino, K. (eds) Computer Vision – ECCV 2022 Workshops. ECCV 2022. Lecture Notes in Computer Science, vol 13804. Springer, Cham. https://doi.org/10.1007/978-3-031-25069-9_46

  • DOI: https://doi.org/10.1007/978-3-031-25069-9_46

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-25068-2

  • Online ISBN: 978-3-031-25069-9

  • eBook Packages: Computer Science, Computer Science (R0)
