Abstract
This paper presents PFLLTNet, a novel low-light image enhancement model designed to address detail loss and global structure distortion under low-light conditions. The model exploits the separation of luminance and chrominance in the YUV color space and combines multi-head self-attention (MHSA), feature fusion paths, residual connections, and a PixelShuffle upsampling strategy, significantly improving detail restoration and the fidelity of global information. We further optimize the combination and weighting of the loss functions, notably introducing a light consistency loss, and refine the learning rate schedule with cosine annealing with warm restarts, ensuring the model's stability and robustness over extended training. Experimental results demonstrate that PFLLTNet achieves state-of-the-art (SOTA) performance on key metrics such as PSNR and SSIM while maintaining relatively low computational complexity. Owing to its computational efficiency and low resource demands, PFLLTNet is well suited to deployment on mobile devices, in real-time video processing, and in intelligent surveillance systems, particularly where rapid processing is required and computational resources are constrained. The source code and pre-trained models are available at https://github.com/Huang408746862/PFLLTNet.
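Two of the mechanisms named above, PixelShuffle upsampling and cosine annealing with warm restarts, can be sketched in plain Python. This is an illustrative reconstruction, not the authors' released code: the function names and the hyperparameters `t0` and `mult` are placeholders, and the upsampling follows the standard PixelShuffle channel-to-space semantics.

```python
import math

def pixel_shuffle(x, r):
    """Rearrange a (C*r*r, H, W) nested list into (C, H*r, W*r).

    Standard PixelShuffle semantics:
    out[c][h*r + i][w*r + j] == x[c*r*r + i*r + j][h][w]
    """
    c2, h, w = len(x), len(x[0]), len(x[0][0])
    c = c2 // (r * r)
    out = [[[0] * (w * r) for _ in range(h * r)] for _ in range(c)]
    for ci in range(c):
        for i in range(r):
            for j in range(r):
                for hh in range(h):
                    for ww in range(w):
                        out[ci][hh * r + i][ww * r + j] = \
                            x[ci * r * r + i * r + j][hh][ww]
    return out

def cosine_restart_lr(step, eta_max, eta_min=0.0, t0=10, mult=2):
    """Learning rate at `step` under cosine annealing with warm restarts:
    each cycle decays from eta_max toward eta_min, then restarts with
    the cycle length multiplied by `mult` (SGDR-style schedule)."""
    t_i, t_cur = t0, step
    while t_cur >= t_i:  # locate the cycle containing `step`
        t_cur -= t_i
        t_i *= mult
    return eta_min + 0.5 * (eta_max - eta_min) * (1 + math.cos(math.pi * t_cur / t_i))
```

For example, a 4-channel 1x1 input with `r=2` becomes a single 2x2 channel, and the learning rate resets to `eta_max` at every cycle boundary; restarting from the peak rate periodically is what helps the schedule escape shallow minima during long training runs.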

Data availability
No datasets were generated or analysed during the current study.
Author information
Contributions
B.H. contributed to the main manuscript text and conceptual framework. H.M., L.H., C.Z., and N.Y. reviewed and revised the manuscript. All authors approved the final version.
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Huang, B., Meng, H., Huang, L. et al. PFLLTNet: enhancing low-light images with PixelShuffle upsampling and feature fusion. SIViP 19, 300 (2025). https://doi.org/10.1007/s11760-025-03885-3