
A textile fabric classification framework through small motions in videos

Published in: Multimedia Tools and Applications

Abstract

In the field of computer vision, determining fabric categories from appearance changes and multi-frame motion information in a video is a demanding and challenging task. Surveying recent results on textile fabric classification techniques, we observed that motion-based video analytics were overlooked in prior studies. To address this gap, we propose a framework called Two-Stream+, which employs deep neural networks to classify textile fabrics through small motions in videos. At the heart of the Two-Stream+ framework, the motion information of textiles (e.g., flow trajectories and dense trajectories) is used to expose material properties. More specifically, we advocate fusing the spatial and temporal Convolutional Neural Network (ConvNet) towers at the first fully connected layer. In addition, deformable convolution is used in Residual Networks (ResNet) to enhance the transformation modeling capability of the ConvNets. Experiments on a publicly available database illustrate that the Two-Stream+ architecture has distinct advantages over state-of-the-art architectures for classifying textile fabrics.
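The fusion step described in the abstract can be sketched in plain Python: a spatial tower and a temporal tower each produce a feature vector, the two vectors are concatenated, and the concatenation feeds the first fully connected layer. This is a minimal illustrative sketch, not the authors' implementation; the function names, dimensions, and toy weights are all assumptions introduced here.

```python
# Illustrative sketch of two-stream fusion at the first fully connected
# layer (plain Python, no deep-learning framework). All names and values
# below are hypothetical examples, not the paper's actual model.

def fuse_towers(spatial_features, temporal_features):
    """Fuse the two ConvNet towers by concatenating their feature vectors."""
    return spatial_features + temporal_features  # list concatenation

def fully_connected(features, weights, bias):
    """One fully connected layer: y_j = sum_i w[j][i] * x[i] + b[j]."""
    return [sum(w * x for w, x in zip(row, features)) + b
            for row, b in zip(weights, bias)]

# Toy example: 3-dim spatial and 3-dim temporal feature vectors are fused
# into a 6-dim input for a 2-unit fully connected layer.
spatial = [0.25, 0.5, 0.125]
temporal = [0.75, 0.25, 0.5]
fused = fuse_towers(spatial, temporal)  # 6-dimensional fused vector
weights = [[1, 0, 0, 1, 0, 0],
           [0, 1, 0, 0, 1, 0]]
bias = [0.0, 0.0]
print(fully_connected(fused, weights, bias))  # prints [1.0, 0.75]
```

In a real network the concatenation would happen on learned feature maps and the layer weights would be trained end to end; the point of the sketch is only where the two streams merge.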



Acknowledgments

This work is supported in part by the Science Foundation of Hubei under Grant No. 2014CFB764, the Department of Education of the Hubei Province of China under Grant No. Q20131608, and the Engineering Research Center of Hubei Province for Clothing Information. Xiao Qin's work is supported by the U.S. National Science Foundation under Grants IIS-1618669, OAC-1642133, and CCF-0845257 (CAREER).

Author information


Corresponding author

Correspondence to Peng Tao.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Peng, T., Zhou, X., Liu, J. et al. A textile fabric classification framework through small motions in videos. Multimed Tools Appl 80, 7567–7580 (2021). https://doi.org/10.1007/s11042-020-10085-3

