
Ambient Illumination Disentangled Based Weakly-Supervised Image Restoration Using Adaptive Pixel Retention Factor

  • Conference paper
Pattern Recognition and Computer Vision (PRCV 2024)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 15038)


Abstract

Existing image restoration algorithms are typically designed for a specific domain, making it extremely challenging to enhance both low-light and underwater images with a single model. Moreover, because annotated underwater images are extremely scarce, such models generalize poorly. In this paper, we present a weakly-supervised image restoration (WSIR) approach based on an ambient illumination disentanglement network, which exploits incompletely labeled images to restore various kinds of low-quality images. On the one hand, we design an illumination disentanglement network (Idnet) to learn the mapping rules of Retinex theory, and establish a data-driven camera response function (DdCRF) for illumination adjustment. On the other hand, we design an Adaptive Pixel Retention Factor Network (APRFNet) to generate the parameter maps in the DdCRF, which improves robustness and flexibility in complex and changeable environments, promoting the authenticity and visual aesthetics of the reconstructed results. Extensive experiments on public datasets and self-collected images demonstrate that the proposed scheme outperforms state-of-the-art methods in both qualitative and quantitative metrics.
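The abstract combines two standard ingredients: a Retinex-style decomposition of an image into reflectance and illumination, and a parametric response curve (the DdCRF) whose per-pixel parameters come from APRFNet. The sketch below illustrates that data flow on a single pixel; the channel-maximum illumination estimate, the quadratic curve form, and the value of `alpha` are illustrative assumptions, not the paper's actual learned networks.

```python
# A minimal, illustrative sketch of the two ideas named in the abstract:
# Retinex-style decomposition (I = R * L) and a per-pixel curve adjustment
# standing in for the DdCRF. Operates on a single RGB pixel for clarity.

def retinex_split(pixel_rgb, eps=1e-6):
    """Estimate illumination L as the channel maximum (a common heuristic)
    and recover reflectance R so that R * L reconstructs the input."""
    L = max(pixel_rgb)
    R = [c / max(L, eps) for c in pixel_rgb]
    return R, L

def adjust_illumination(L, alpha):
    """Brighten via a quadratic curve L' = L + alpha * L * (1 - L); alpha
    plays the role of one entry of APRFNet's per-pixel parameter map."""
    return min(max(L + alpha * L * (1.0 - L), 0.0), 1.0)

pixel = [0.10, 0.15, 0.05]                    # a dark pixel, values in [0, 1]
R, L = retinex_split(pixel)
L_new = adjust_illumination(L, alpha=0.8)     # hypothetical parameter value
enhanced = [min(r * L_new, 1.0) for r in R]   # recombine reflectance, new light
```

In the paper, both the decomposition and the parameter map are learned; here they are replaced by closed-form stand-ins so the overall pipeline is visible.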

This work was supported in part by the National Natural Science Foundation of China (NSFC) under Grant U22A2066, Grant 61733014 and Grant U21B2047.



Notes

  1. https://sites.google.com/site/vonikakis/datasets.

  2. https://aistudio.baidu.com/datasetdetail/228251.


Author information

Correspondence to Rongxin Cui.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 9817 KB)


Copyright information

© 2025 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Mao, R., Cui, R. (2025). Ambient Illumination Disentangled Based Weakly-Supervised Image Restoration Using Adaptive Pixel Retention Factor. In: Lin, Z., et al. Pattern Recognition and Computer Vision. PRCV 2024. Lecture Notes in Computer Science, vol 15038. Springer, Singapore. https://doi.org/10.1007/978-981-97-8685-5_13


  • DOI: https://doi.org/10.1007/978-981-97-8685-5_13


  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-97-8684-8

  • Online ISBN: 978-981-97-8685-5

  • eBook Packages: Computer Science (R0)
