Abstract
In industrial settings such as process monitoring and production-line optimization, high-quality images play a critical role in managing operational details, correcting procedural deviations, and improving production outcomes. Images captured in low-light conditions frequently exhibit color distortion, low contrast, and loss of detail, which degrade the accuracy and reliability of downstream analysis. To address these challenges, this study introduces an unsupervised learning method for low-light image enhancement built on the U-Net++ architecture. Replacing the traditional U-Net backbone, U-Net++ strengthens deep feature extraction through its nested, densely connected skip pathways, making the model effective on unaligned (unpaired) training data. The approach substantially improves the brightness, contrast, and color fidelity of low-light images, providing a solid foundation for more reliable and accurate analytical assessments across applications. Experimental evidence shows that the method not only surpasses existing techniques in visual quality but also excels on key objective metrics, including Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index Measure (SSIM), contrast difference, and image entropy ratio. By improving both visual clarity and informational content in low-light conditions, the method promises to boost efficiency in industrial automation, quality inspection, and production management, thereby advancing technological development.
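The abstract evaluates enhancement quality with PSNR and an image entropy ratio. As a minimal illustrative sketch (not the authors' evaluation code), these two metrics can be computed from 8-bit grayscale arrays as follows; the function names and the 256-bin histogram are assumptions for illustration:

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    """Peak Signal-to-Noise Ratio: 10*log10(MAX^2 / MSE)."""
    ref = np.asarray(ref, dtype=np.float64)
    test = np.asarray(test, dtype=np.float64)
    mse = np.mean((ref - test) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

def entropy(img, bins=256):
    """Shannon entropy (bits) of the intensity histogram."""
    hist, _ = np.histogram(np.asarray(img).ravel(), bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins so log2 is defined
    return float(-np.sum(p * np.log2(p)))
```

The entropy ratio reported in the paper would then be `entropy(enhanced) / entropy(original)`, where a ratio above 1 indicates the enhanced image carries more histogram-level information.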










Data availability
No datasets were generated or analysed during the current study.
Author information
Contributions
Xinghao Wang took the lead in writing the main manuscript text and preparing the figures. The manuscript benefited from the contributions and insights of all authors. Each author has reviewed and approved the final version of the manuscript.
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Wang, X., Wang, Y., Zhou, J. et al.: An unsupervised learning method based on U-Net++ for low-light image enhancement. SIViP 19, 282 (2025). https://doi.org/10.1007/s11760-024-03621-3