RGB-Net: transformer-based lightweight low-light image enhancement network via RGB channel separation

  • Regular Paper
  • Published in Multimedia Systems

Abstract

In real-life scenarios, captured images often suffer from insufficient brightness, significant noise, and color distortion due to varying lighting conditions. We therefore propose a novel lightweight network for low-light image enhancement named RGB-Net. Firstly, unlike traditional Retinex-based models, our approach leverages the separation of the RGB color channels to enhance the input image. Each RGB channel is independently enhanced for brightness and color information by a U-shaped channel optimization module (UCOM). Additionally, we utilize the transformer to capture long-range dependencies by incorporating a multi-head self-attention module within the UCOM, thereby improving feature extraction capabilities. Secondly, we design a multi-channel fusion module (MCFM) that integrates mixed dense convolution and fully connected layers, employing a residual network to fuse the enhancement results from the different color channels to improve image reconstruction. Thirdly, we construct a new hybrid loss function by exploring various loss terms, which significantly improves the representational ability of our network. Extensive experiments on five publicly available real-world datasets show that our method significantly enhances image details with only 0.71M parameters and 5.81G floating-point operations, outperforming existing low-light image enhancement algorithms in both quantitative and qualitative evaluations.
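The channel-separation pipeline described in the abstract can be sketched as follows. This is an illustrative stand-in only, not the authors' implementation: the simple gamma correction in `enhance_channel` substitutes for the learned UCOM (which uses multi-head self-attention), and the fixed weighted residual blend substitutes for the learned MCFM; all function names and constants here are hypothetical.

```python
import numpy as np

def enhance_channel(ch: np.ndarray, gamma: float = 0.6) -> np.ndarray:
    """Stand-in for the UCOM: brighten a single channel via gamma correction.

    Values in [0, 1]; gamma < 1 lifts dark regions more than bright ones.
    """
    return np.clip(ch, 0.0, 1.0) ** gamma

def rgb_separated_enhance(img: np.ndarray) -> np.ndarray:
    """Enhance each RGB channel independently, then fuse with a residual blend.

    img: H x W x 3 array with values in [0, 1].
    """
    assert img.ndim == 3 and img.shape[2] == 3
    # per-channel enhancement (one UCOM per channel in the paper)
    channels = [enhance_channel(img[..., c]) for c in range(3)]
    enhanced = np.stack(channels, axis=-1)
    # residual fusion of enhanced output with the input (stand-in for the MCFM)
    fused = 0.8 * enhanced + 0.2 * img
    return np.clip(fused, 0.0, 1.0)

low_light = np.full((4, 4, 3), 0.1)  # a uniformly dark toy image
out = rgb_separated_enhance(low_light)
print(out.shape)  # (4, 4, 3)
```

Every pixel of the toy image is brightened (0.1 ** 0.6 ≈ 0.25 before blending), illustrating how per-channel processing followed by residual fusion preserves the input's structure while lifting its brightness.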




Data availability

Data will be made available on request.


Funding

This work was supported in part by the Open Fund of Key Laboratory of Safety Control of Bridge Engineering, Ministry of Education (Changsha University of Science and Technology) under Grant 21KB06.

Author information


Contributions

Jianming Zhang: Conceptualization, Formal analysis, Writing - Review & Editing, Supervision, Funding acquisition. Zhijian Feng: Methodology, Formal analysis, Software, Writing - Original Draft. Jia Jiang: Methodology, Formal analysis, Validation. Xiangnan Shi: Data Curation, Visualization. Jin Zhang: Project administration, Investigation.

Corresponding author

Correspondence to Jianming Zhang.

Ethics declarations

Conflict of interest

The authors declare no conflict of interest.

Additional information

Communicated by Bing-kun Bao.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Zhang, J., Feng, Z., Jiang, J. et al. RGB-Net: transformer-based lightweight low-light image enhancement network via RGB channel separation. Multimedia Systems 31, 162 (2025). https://doi.org/10.1007/s00530-025-01750-4

