Visual Saliency Guided Deep Fabric Defect Classification

  • Conference paper
Intelligence Science and Big Data Engineering. Visual Data Engineering (IScIDE 2019)

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 11935)

Abstract

Fabric defects strongly affect the quality of fabric products, so automatic defect detection is a crucial part of quality control in the textile industry. The main challenge of fabric defect identification is not only to find existing defects but also to classify them into different types. In this paper, we propose a novel fabric defect detection and classification method consisting of three main steps. First, the fabric image is cropped into a set of image patches, and each patch is labeled with a specific defect type. Second, a visual saliency map is generated from each patch to localize defects through visual attention. The saliency map is then combined with the raw image and fed into a convolutional neural network for robust feature representation, which finally outputs the predicted defect type. At test time, defect inspection runs in a sliding-window scheme using the trained model, so both the type and the position of each defect are obtained simultaneously. Our method investigates the combination of visual saliency with a one-stage object detector equipped with a feature pyramid, making full use of multi-resolution information guided by visual attention. In addition, a soft-cutoff loss is employed to further improve performance, and the network can be learned in an end-to-end manner. In experiments on our fabric defect image dataset, the proposed method achieves a classification accuracy of 98.52%. It is comparable to the usual two-stage detectors while using a more compact set of model parameters, which makes it valuable for industrial applications.
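
To make the pipeline concrete, the sketch below is a minimal, hypothetical illustration of the saliency-guided input step: it computes a spectral-residual saliency map for a fabric patch, stacks it with the raw grayscale patch as a two-channel tensor, and passes the result to a small CNN classifier. The spectral-residual saliency model, the 128x128 patch size, the channel stacking, the toy network, and the six defect classes are all assumptions made for illustration; they are not the paper's actual backbone, feature pyramid, or soft-cutoff loss.

```python
# Illustrative sketch only (not the authors' code): saliency-guided patch
# classification. Assumed ingredients: spectral-residual saliency, a
# 128x128 grayscale patch, channel stacking, and a toy CNN.
import numpy as np
import torch
import torch.nn as nn
from scipy.ndimage import uniform_filter, gaussian_filter

def spectral_residual_saliency(gray: np.ndarray) -> np.ndarray:
    """Spectral-residual saliency map normalized to [0, 1]."""
    fft = np.fft.fft2(gray.astype(np.float64))
    log_amp = np.log(np.abs(fft) + 1e-8)
    phase = np.angle(fft)
    residual = log_amp - uniform_filter(log_amp, size=3)   # spectral residual
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    sal = gaussian_filter(sal, sigma=2.5)                   # smooth the map
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-8)

class PatchDefectClassifier(nn.Module):
    """Toy CNN taking a (raw, saliency) 2-channel patch; a stand-in for the
    paper's saliency-guided network, not its architecture."""
    def __init__(self, num_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# Usage: classify one patch (patch size and class count are assumptions).
patch = np.random.rand(128, 128).astype(np.float32)         # stand-in fabric patch
sal = spectral_residual_saliency(patch).astype(np.float32)
x = torch.from_numpy(np.stack([patch, sal]))[None]           # shape (1, 2, 128, 128)
model = PatchDefectClassifier(num_classes=6)
defect_type = model(x).argmax(dim=1)
```

At test time, the same two-channel construction would be applied to every window of a sliding-window scan over the full fabric image, which is how the abstract describes obtaining both the type and the position of each defect.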

Author information

Corresponding author

Correspondence to Jifeng Shen.

Copyright information

© 2019 Springer Nature Switzerland AG

About this paper

Cite this paper

He, Y., Song, Y., Shen, J., Yang, W. (2019). Visual Saliency Guided Deep Fabric Defect Classification. In: Cui, Z., Pan, J., Zhang, S., Xiao, L., Yang, J. (eds.) Intelligence Science and Big Data Engineering. Visual Data Engineering. IScIDE 2019. Lecture Notes in Computer Science, vol. 11935. Springer, Cham. https://doi.org/10.1007/978-3-030-36189-1_36

  • DOI: https://doi.org/10.1007/978-3-030-36189-1_36

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-36188-4

  • Online ISBN: 978-3-030-36189-1

  • eBook Packages: Computer Science, Computer Science (R0)
