Abstract
Existing image restoration algorithms are typically designed for specific domains, making it extremely challenging to enhance both low-light and underwater images with a single model. Moreover, because annotated underwater images are extremely scarce, such models generalize poorly. In this paper, we present a weakly-supervised image restoration (WSIR) approach based on an ambient illumination disentangled network, which exploits incompletely labeled images to restore various kinds of low-quality images. On the one hand, we design an illumination disentanglement network (Idnet) to learn the mapping rules of Retinex theory, and establish a data-driven camera response function (DdCRF) for illumination adjustment. On the other hand, we design an Adaptive Pixel Retention Factor Network (APRFNet) to generate the parameter maps in the DdCRF, improving its robustness and flexibility in complex, changing environments and promoting the authenticity and visual aesthetics of the reconstructed results. Extensive experiments on public datasets and self-collected images demonstrate that the proposed scheme outperforms state-of-the-art methods in both qualitative and quantitative metrics.
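To make the Retinex-based pipeline sketched in the abstract concrete, the following is a minimal, hypothetical NumPy sketch of the general idea: decompose an image into reflectance and illumination per Retinex theory (I = R ⊙ L), then re-light it through an adjusted illumination map. The `gamma` curve below is only a stand-in for the paper's learned data-driven camera response function (DdCRF), and the per-pixel channel-maximum illumination estimate is a common heuristic, not the authors' Idnet; both are assumptions for illustration.

```python
import numpy as np

def retinex_enhance(image: np.ndarray, gamma: float = 0.45,
                    eps: float = 1e-6) -> np.ndarray:
    """Illustrative Retinex-style enhancement (not the paper's method).

    Decomposes I = R * L, then re-lights with an adjusted
    illumination map. A fixed gamma curve stands in for a
    learned camera response function.
    """
    img = image.astype(np.float64) / 255.0
    # Coarse illumination estimate: per-pixel maximum over RGB channels.
    illumination = img.max(axis=2, keepdims=True)
    # Retinex decomposition: I = R * L  =>  R = I / L.
    reflectance = img / (illumination + eps)
    # Surrogate for a camera response function: gamma correction
    # brightens dark regions while compressing already-bright ones.
    adjusted = illumination ** gamma
    enhanced = np.clip(reflectance * adjusted, 0.0, 1.0)
    return (enhanced * 255.0).astype(np.uint8)

# Usage: a uniformly dark image is brightened by the adjusted illumination.
dark = np.full((4, 4, 3), 30, dtype=np.uint8)
bright = retinex_enhance(dark)
```

In the paper's framework, the fixed `gamma` would be replaced by per-pixel parameter maps predicted by APRFNet, which is what lets the adjustment vary spatially across complex scenes.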
This work was supported in part by the National Natural Science Foundation of China (NSFC) under Grant U22A2066, Grant 61733014 and Grant U21B2047.
Copyright information
© 2025 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper
Mao, R., Cui, R. (2025). Ambient Illumination Disentangled Based Weakly-Supervised Image Restoration Using Adaptive Pixel Retention Factor. In: Lin, Z., et al. Pattern Recognition and Computer Vision. PRCV 2024. Lecture Notes in Computer Science, vol 15038. Springer, Singapore. https://doi.org/10.1007/978-981-97-8685-5_13
DOI: https://doi.org/10.1007/978-981-97-8685-5_13
Publisher Name: Springer, Singapore
Print ISBN: 978-981-97-8684-8
Online ISBN: 978-981-97-8685-5
eBook Packages: Computer Science (R0)