Abstract
The dark color of slub denim and the irregular length, thickness, and inter-slub spacing of its slubs make it challenging to analyze samples and extract slub parameters. To address this problem, a method for extracting and measuring slubs in slub denim based on conditional image-to-image translation was proposed. The generator commonly used in the Pix2Pix model was modified by adding three additional down-sampling and up-sampling layers and by expanding the receptive field of the convolution kernels, improving the network's ability to extract deep features from the source images. The image generated by the model was then processed with morphological operations, and small-area connected components were removed to obtain the final result. Peak signal-to-noise ratio (PSNR), mean square error (MSE), structural similarity (SSIM), and Fréchet Inception Distance (FID) were used as evaluation metrics, and slub length measurements from the generated images were compared with the ground truth. The resulting values of 17.44 (PSNR), 0.0176 (MSE), 0.95 (SSIM), and 10.75 (FID) indicate that the proposed method can generate the distribution of slubs on the denim fabric surface under the given constraints. Moreover, the method can measure slub lengths in the fabric even without the corresponding yarns, with most measurements deviating by less than 8 mm. The proposed method is effective and superior for extracting and measuring slubs on denim fabric surfaces, offering a valuable reference for enterprises analyzing slub denims and saving labor and material in the proofing process.
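To illustrate the architectural change described above, the following is a minimal sketch of a Pix2Pix-style U-Net generator whose depth and kernel size are configurable, so that extra down-/up-sampling stages and a wider convolution kernel can be tried. The layer widths, kernel size, normalization, and channel cap used here are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

def down(c_in, c_out, k=4):
    # Stride-2 convolution block of the encoder; a larger even kernel k widens the receptive field
    return nn.Sequential(nn.Conv2d(c_in, c_out, k, 2, padding=k // 2 - 1),
                         nn.BatchNorm2d(c_out), nn.LeakyReLU(0.2, inplace=True))

def up(c_in, c_out, k=4):
    # Stride-2 transposed-convolution block of the decoder
    return nn.Sequential(nn.ConvTranspose2d(c_in, c_out, k, 2, padding=k // 2 - 1),
                         nn.BatchNorm2d(c_out), nn.ReLU(inplace=True))

class UNetGenerator(nn.Module):
    """U-Net generator with skip connections; `depth` sets the number of
    down/up-sampling stages, so additional stages can be added on top of a baseline."""
    def __init__(self, in_ch=3, out_ch=3, base=64, depth=6, kernel=4):
        super().__init__()
        chans = [in_ch] + [min(base * 2 ** i, 512) for i in range(depth)]
        self.downs = nn.ModuleList(down(chans[i], chans[i + 1], kernel) for i in range(depth))
        ups = []
        for i in reversed(range(depth)):
            # concatenated skip connections double the input channels everywhere except the bottleneck
            c_in = chans[i + 1] * (1 if i == depth - 1 else 2)
            ups.append(up(c_in, chans[i] if i > 0 else base, kernel))
        self.ups = nn.ModuleList(ups)
        self.head = nn.Conv2d(base, out_ch, 3, 1, 1)

    def forward(self, x):
        feats = []
        for d in self.downs:
            x = d(x)
            feats.append(x)
        for i, u in enumerate(self.ups):
            x = u(x if i == 0 else torch.cat([x, feats[-(i + 1)]], dim=1))
        return torch.tanh(self.head(x))

if __name__ == "__main__":
    net = UNetGenerator(depth=9)          # three stages deeper than a depth-6 baseline
    y = net(torch.randn(2, 3, 512, 512))  # 512-pixel tiles keep the bottleneck non-degenerate
    print(y.shape)                        # torch.Size([2, 3, 512, 512])
```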
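The post-processing and evaluation stage (morphological cleaning, removal of small connected components, slub-length measurement, and the PSNR/MSE/SSIM metrics) can likewise be sketched with scikit-image. The threshold, structuring-element sizes, minimum area, and bounding-box-based length estimate below are assumptions for illustration only; FID is not reproduced here, as it requires a separate Inception-based tool such as the pytorch-fid package.

```python
import numpy as np
from skimage import morphology, measure
from skimage.metrics import (peak_signal_noise_ratio,
                             mean_squared_error,
                             structural_similarity)

def clean_slub_mask(prob_map, threshold=0.5, min_area=50):
    """Binarize a generated slub map, smooth it with morphological opening/closing,
    and drop connected components smaller than `min_area` pixels."""
    mask = prob_map >= threshold
    mask = morphology.binary_opening(mask, morphology.disk(1))   # remove isolated noise pixels
    mask = morphology.binary_closing(mask, morphology.disk(2))   # bridge small gaps inside slubs
    return morphology.remove_small_objects(mask, min_size=min_area)

def slub_lengths(mask, mm_per_pixel):
    """Approximate each slub's length in mm from its connected component's bounding box
    (an illustrative estimate, not necessarily the measurement used in the paper)."""
    labels = measure.label(mask)
    return [max(r.bbox[2] - r.bbox[0], r.bbox[3] - r.bbox[1]) * mm_per_pixel
            for r in measure.regionprops(labels)]

def evaluate(generated, ground_truth):
    """Full-reference metrics on images scaled to [0, 1]."""
    return {
        "PSNR": peak_signal_noise_ratio(ground_truth, generated, data_range=1.0),
        "MSE": mean_squared_error(ground_truth, generated),
        "SSIM": structural_similarity(ground_truth, generated, data_range=1.0),
    }

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fake = rng.random((256, 256)).astype(np.float32)  # stand-in for a generated slub map
    real = rng.random((256, 256)).astype(np.float32)  # stand-in for the ground truth
    mask = clean_slub_mask(fake)
    print(evaluate(fake, real), slub_lengths(mask, mm_per_pixel=0.1)[:5])
```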
Data availability
No datasets were generated or analysed during the current study.
Funding
Not applicable.
Author information
Authors and Affiliations
Contributions
All authors have read and agreed to the final manuscript. Bingpeng Song contributed the original draft, conceptualization, validation, and data curation. Wentao He performed the investigation. Ning Zhang contributed to manuscript review and editing. Jun Xiang contributed methodology and visualization. Ruru Pan performed manuscript review and editing, formal analysis, and supervision.
Corresponding author
Ethics declarations
Ethical approval
Not applicable.
Competing interests
The authors declare no competing interests.
Additional information
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Song, B., He, W., Zhang, N. et al. Slub extraction and measurement in denim fabric images based on conditional image-to-image translation. SIViP 19, 301 (2025). https://doi.org/10.1007/s11760-025-03905-2
Received:
Revised:
Accepted:
Published:
DOI: https://doi.org/10.1007/s11760-025-03905-2