Unveiling Advanced Frequency Disentanglement Paradigm for Low-Light Image Enhancement

  • Conference paper
Computer Vision – ECCV 2024 (ECCV 2024)

Abstract

Previous low-light image enhancement (LLIE) approaches, while employing frequency decomposition techniques to address the intertwined challenges of low frequency (e.g., illumination recovery) and high frequency (e.g., noise reduction), primarily focused on the development of dedicated and complex networks to achieve improved performance. In contrast, we reveal that an advanced disentanglement paradigm is sufficient to consistently enhance state-of-the-art methods with minimal computational overhead. Leveraging the image Laplace decomposition scheme, we propose a novel low-frequency consistency method, facilitating improved frequency disentanglement optimization. Our method, seamlessly integrating with various models such as CNNs, Transformers, and flow-based and diffusion models, demonstrates remarkable adaptability. Noteworthy improvements are showcased across five popular benchmarks, with up to 7.68dB gains on PSNR achieved for six state-of-the-art models. Impressively, our approach maintains efficiency with only 88K extra parameters, setting a new standard in the challenging realm of low-light image enhancement. https://github.com/redrock303/ADF-LLIE.
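The abstract builds on the image Laplace decomposition scheme. As a point of reference only (this is not the authors' implementation, and the box-blur kernel and level count are illustrative assumptions), a minimal NumPy sketch of a Laplacian pyramid shows how an image is split into a low-frequency residual plus per-level high-frequency bands, and why the split is exactly invertible:

```python
import numpy as np

def blur(x):
    # Simple 3x3 box blur with edge padding (an assumption; Burt-Adelson
    # pyramids typically use a 5x5 Gaussian-like kernel).
    p = np.pad(x, 1, mode="edge")
    h, w = x.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def down(x):
    # Blur then take every second sample in each dimension.
    return blur(x)[::2, ::2]

def up(x, shape):
    # Nearest-neighbour upsample, crop to the target shape, then blur.
    y = np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)[:shape[0], :shape[1]]
    return blur(y)

def laplace_decompose(img, levels=3):
    """Split img into `levels` high-frequency bands and a low-frequency residual."""
    highs, cur = [], img
    for _ in range(levels):
        low = down(cur)
        highs.append(cur - up(low, cur.shape))  # high band = detail lost by downsampling
        cur = low
    return highs, cur

def laplace_reconstruct(highs, low):
    # Invert the decomposition: upsample the residual and add back each band.
    cur = low
    for h in reversed(highs):
        cur = up(cur, h.shape) + h
    return cur
```

Because each high band is defined as `cur - up(down(cur))`, adding it back to the upsampled low band recovers the input exactly; this lossless split is what lets low-frequency terms (illumination) and high-frequency terms (noise, texture) be optimized separately.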

X. Lin—Equal contribution.

Notes

  1. The structure of our global branch is illustrated in our supplementary material.

  2. \(M_h = H/s\), \(M_w = W/s\), where \(H\times W\) is the feature resolution.

Acknowledgments

This work is partially supported by Shenzhen Science and Technology Program KQTD20210811090149095 and by the Pearl River Talent Recruitment Program 2019QN01X226. The work was supported in part by the Basic Research Project No. HZQB-KCZYZ-2021067 of Hetao Shenzhen-HK S&T Cooperation Zone, Guangdong Provincial Outstanding Youth Fund (No. 2023B1515020055), the National Key R&D Program of China (Grant No. 2018YFB1800800), by Shenzhen Outstanding Talents Training Fund 202002, by Guangdong Research Projects No. 2017ZT07X152 and No. 2019CX01X104, by the Key Area R&D Program of Guangdong Province (Grant No. 2018B030338001), by the Guangdong Provincial Key Laboratory of Future Networks of Intelligence (Grant No. 2022B1212010001), and by Shenzhen Key Laboratory of Big Data and Artificial Intelligence (Grant No. ZDSYS201707251409055). It is also partly supported by NSFC-61931024, NSFC-62172348, and Shenzhen Science and Technology Program No. JCYJ20220530143604010.

Author information

Corresponding author

Correspondence to Kun Zhou.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 1724 KB)

Copyright information

© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Zhou, K. et al. (2025). Unveiling Advanced Frequency Disentanglement Paradigm for Low-Light Image Enhancement. In: Leonardis, A., Ricci, E., Roth, S., Russakovsky, O., Sattler, T., Varol, G. (eds) Computer Vision – ECCV 2024. ECCV 2024. Lecture Notes in Computer Science, vol 15065. Springer, Cham. https://doi.org/10.1007/978-3-031-72667-5_12

Download citation

  • DOI: https://doi.org/10.1007/978-3-031-72667-5_12

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-72666-8

  • Online ISBN: 978-3-031-72667-5

  • eBook Packages: Computer Science; Computer Science (R0)
