
Slub extraction and measurement in denim fabric images based on conditional image-to-image translation

  • Original Paper
  • Published in: Signal, Image and Video Processing

Abstract

The dark color of slub denim and the irregular length, thickness, and inter-slub spacing of its slubs make it challenging to analyze samples and extract slub parameters. To solve this problem, a method for extracting and measuring slubs in slub denim based on conditional image-to-image translation was proposed. The generator commonly used in the Pix2Pix model was modified by adding three additional down-sampling and up-sampling layers and by expanding the receptive field of the convolution kernel, improving the network's ability to extract deep features from the source images. The image generated by the model was then processed with image morphology, and small-area connected components were eliminated to obtain the final result. The peak signal-to-noise ratio (PSNR), mean square error (MSE), structural similarity (SSIM), and Fréchet Inception Distance (FID) were used as evaluation metrics, and the slub lengths measured in the generated images were compared with the ground truth. The metric values of 17.44, 0.0176, 0.95, and 10.75, respectively, indicate that the proposed method can generate the distribution of slubs on the denim fabric surface under constraints. Moreover, the method can measure the length of slubs in the fabric even without the corresponding yarns, with most measured slub lengths having errors of less than 8 mm. The proposed method is effective and superior in extracting and measuring slubs on denim fabric surfaces, offering a valuable reference for enterprises analyzing slub denim and saving labor and material resources in the proofing process.
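As a minimal illustration of the post-processing and measurement steps summarized above, the sketch below binarizes a generated slub map, applies a morphological closing, removes small connected components, reports slub lengths, and computes the PSNR, MSE, and SSIM metrics. The Otsu threshold, 5×5 kernel, area threshold, and pixel-to-millimetre scale (MM_PER_PIXEL) are illustrative assumptions rather than values taken from the paper, and the FID computation (which requires an Inception network over whole image sets) is omitted.

```python
import cv2
import numpy as np
from skimage.metrics import (mean_squared_error, peak_signal_noise_ratio,
                             structural_similarity)

# Assumed values for illustration only; the paper does not report them here.
MM_PER_PIXEL = 0.1      # hypothetical image scale (mm per pixel)
MIN_AREA_PX = 50        # hypothetical minimum connected-component area

def postprocess_and_measure(generated, min_area=MIN_AREA_PX):
    """Clean a generated single-channel slub map and measure slub lengths."""
    # Binarize the generated image; Otsu's method is a reasonable default.
    _, binary = cv2.threshold(generated, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Morphological closing merges fragmented slub pixels into solid regions.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
    closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)

    # Connected-component analysis; components below the area threshold
    # are treated as noise and discarded.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(closed, connectivity=8)
    cleaned = np.zeros_like(closed)
    lengths_mm = []
    for i in range(1, n):                      # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] < min_area:
            continue
        cleaned[labels == i] = 255
        # Take the longer bounding-box side as the slub length.
        length_px = max(stats[i, cv2.CC_STAT_WIDTH],
                        stats[i, cv2.CC_STAT_HEIGHT])
        lengths_mm.append(length_px * MM_PER_PIXEL)
    return cleaned, lengths_mm

def evaluate_pair(ground_truth, generated):
    """PSNR, MSE, and SSIM between ground-truth and generated slub maps."""
    gt = ground_truth.astype(np.float64) / 255.0
    gen = generated.astype(np.float64) / 255.0
    return {
        "psnr": peak_signal_noise_ratio(gt, gen, data_range=1.0),
        "mse": mean_squared_error(gt, gen),
        "ssim": structural_similarity(gt, gen, data_range=1.0),
    }
```

In practice the generated and ground-truth images would come from paired fabric samples, and the measured lengths could then be compared against manual measurements, as in the error analysis reported above (errors under 8 mm for most slubs).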


Data availability

No datasets were generated or analysed during the current study.


Funding

Not applicable.

Author information


Contributions

All authors have read and agreed to the final manuscript. Bingpeng Song contributed to the original draft, conceptualization, validation, and data curation. Wentao He performed the investigation. Ning Zhang contributed to manuscript review and editing. Jun Xiang contributed to the methodology and visualization. Ruru Pan performed manuscript review and editing, formal analysis, and supervision.

Corresponding author

Correspondence to Ruru Pan.

Ethics declarations

Ethical approval

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

Reprints and permissions

About this article


Cite this article

Song, B., He, W., Zhang, N. et al. Slub extraction and measurement in denim fabric images based on conditional image-to-image translation. SIViP 19, 301 (2025). https://doi.org/10.1007/s11760-025-03905-2


Keywords