Abstract
In recent years, e-commerce and online shopping have grown tremendously, and image-based clothing retrieval has become a major research topic, since it allows desired garments to be recovered effectively and accurately from large databases. In this paper, a piecewise feature extraction (PWF) framework is proposed for cross-domain clothing retrieval. Its feature selection strategy extracts optimal features to improve the retrieval rate while simplifying the retrieval computation. First, user-provided query images are collected from online sources or snapshots. Cluttered backgrounds and exposed skin regions are then removed via modified GrabCut segmentation. Next, the PWF model retrieves the required garment from the shop domain: clothing and sleeve-size attributes are identified in a separate block using the Hough transform and fuzzy rules, and high-level features are extracted with a novel RetrieveNet. The bottleneck attention module (BAM) in RetrieveNet drives the network to focus on the domain-relevant features of the input garment. Finally, the L2 norm is used to measure the similarity between shop and query garments, and the most similar clothes are retrieved. The proposed model is validated on the public DARN and DeepFashion2 datasets, achieving top-50 retrieval accuracies of 69.54% and 47.29%, respectively. These experimental results demonstrate the efficiency of the proposed clothing retrieval approach.
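To make the retrieval pipeline concrete, the following minimal Python sketch illustrates two of the stages described above: foreground extraction with OpenCV's standard GrabCut (the paper's modified GrabCut additionally suppresses skin regions, which is not reproduced here) and top-50 ranking of shop images by L2 distance between feature vectors. The feature extractor itself (RetrieveNet with BAM), the Hough-transform attribute block, and the fuzzy rules are not shown; `rect`, `query_feat`, and `shop_feats` are hypothetical inputs used only for illustration.

```python
# Illustrative sketch only; not the authors' implementation.
import cv2
import numpy as np

def segment_garment(image_bgr, rect):
    """Rough foreground extraction with OpenCV's standard GrabCut.

    `rect` is a hypothetical (x, y, w, h) box around the garment; the paper's
    modified GrabCut additionally removes skin textures, which is not shown here.
    """
    mask = np.zeros(image_bgr.shape[:2], dtype=np.uint8)
    bgd_model = np.zeros((1, 65), dtype=np.float64)
    fgd_model = np.zeros((1, 65), dtype=np.float64)
    cv2.grabCut(image_bgr, mask, rect, bgd_model, fgd_model, 5,
                cv2.GC_INIT_WITH_RECT)
    # Keep definite and probable foreground pixels; zero out the background.
    fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0)
    return image_bgr * fg.astype(np.uint8)[:, :, None]

def retrieve_top_k(query_feat, shop_feats, k=50):
    """Rank shop images by L2 distance to the query feature vector.

    `query_feat` is a 1-D feature vector (e.g. from a CNN backbone such as
    RetrieveNet); `shop_feats` is an (N, D) matrix of gallery features.
    Returns the indices of the k most similar shop images.
    """
    dists = np.linalg.norm(shop_feats - query_feat[None, :], axis=1)
    return np.argsort(dists)[:k]
```

In this sketch, similarity is simply the inverse ordering of L2 distances, matching the abstract's description of ranking shop garments by their distance to the query features.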












References
Liu, Z., Luo, P., Qiu, S., Wang, X., Tang, X.: Deepfashion: Powering robust clothes recognition and retrieval with rich annotations. In Proceedings of the IEEE conference on computer vision and pattern recognition, (pp. 1096–1104) (2016)
Ak, K.E., Lim, J.H., Tham, J.Y., Kassim, A.A.: Which shirt for my first date? towards a flexible attribute-based fashion query system. Pattern Recogn. Lett. 112, 212–218 (2018)
Kuang, Z., Gao, Y., Li, G., Luo, P., Chen, Y., Lin, L., Zhang, W.: Fashion retrieval via graph reasoning networks on a similarity pyramid. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 3066–3075) (2019)
Chen, Z., Xu, Z., Zhang, Y., Gu, X.: Query-free clothing retrieval via implicit relevance feedback. IEEE Trans. Multimedia 20(8), 2126–2137 (2017)
Pradhan, J., Ajad, A., Pal, A.K., Banka, H.: Multi-level colored directional motif histograms for content-based image retrieval. Vis. Comput. 36(9), 1847–1868 (2020)
Karthik, K., Kamath, S.S.: A deep neural network model for content-based medical image retrieval with multi-view classification. Vis. Comput. 37(7), 1837–1850 (2021)
Sharma, V., Murray, N., Larlus, D., Sarfraz, S., Stiefelhagen, R., Csurka, G.: Unsupervised meta-domain adaptation for fashion retrieval. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (pp. 1348–1357) (2021)
Li, Z., Li, Y., Tian, W., Pang, Y., Liu, Y.: Cross-scenario clothing retrieval and fine-grained style recognition. In 2016 23rd International Conference on Pattern Recognition (ICPR) (pp. 2912–2917). IEEE (2016)
Fu, J., Wang, J., Li, Z., Xu, M., Lu, H.: Efficient clothing retrieval with semantic-preserving visual phrases. In Asian conference on computer vision (pp. 420–431). Springer, Berlin, Heidelberg (2012)
Liu, S., Song, Z., Liu, G., Xu, C., Lu, H., Yan, S.: Street-to-shop: Cross-scenario clothing retrieval via parts alignment and auxiliary set. In 2012 IEEE conference on computer vision and pattern recognition (pp. 3330–3337). IEEE (2012)
Deng, L.L.: Pre-detection technology of clothing image segmentation based on GrabCut algorithm. Wireless Pers. Commun. 102(2), 599–610 (2018)
Wang, Z., Gu, Y., Zhang, Y., Zhou, J., Gu, X.: Clothing retrieval with visual attention model. In 2017 IEEE Visual Communications and Image Processing (VCIP) (pp. 1–4). IEEE (2017)
Valle, D., Ziviani, N., Veloso, A.: Effective fashion retrieval based on semantic compositional networks. In 2018 International Joint Conference on Neural Networks (IJCNN) (pp. 1–8). IEEE (2018)
Bhatnagar, A., Aggarwal, S.: Fine-grained apparel classification and retrieval without rich annotations. arXiv preprint arXiv:1811.02385 (2018)
Liu, X., Li, J., Wang, J., Liu, Z.: Mmfashion: An open-source toolbox for visual fashion analysis. In Proceedings of the 29th ACM International Conference on Multimedia (pp. 3755–3758) (2021)
Zhao, H., Yu, J., Li, Y., Wang, D., Liu, J., Yang, H., Wu, F.: Dress like an internet celebrity: fashion retrieval in videos. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence (pp. 1054–1060) (2021)
Lee, S., Oh, S., Jung, C., Kim, C.: A global-local embedding module for fashion landmark detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops (pp. 0–0) (2019)
Stephen, O., Maduh, U.J., Ibrokhimov, S., Hui, K.L., Al-Absi, A.A., Sain, M.: A multiple-loss dual-output convolutional neural network for fashion class classification. In 2019 21st International Conference on Advanced Communication Technology (ICACT) (pp. 408–412). IEEE (2019)
Li, J., Yang, B., Yang, W., Sun, C., Xu, J.: Subspace-based multi-view fusion for instance-level image retrieval. Vis. Comput. 37(3), 619–633 (2021)
Cheng, S., Lai, H., Wang, L., Qin, J.: A novel deep hashing method for fast image retrieval. Vis. Comput. 35(9), 1255–1266 (2019)
Su, H., Wang, P., Liu, L., Li, H., Li, Z., Zhang, Y.: Where to look and how to describe: fashion image retrieval with an attentional heterogeneous bilinear network. IEEE Trans. Circuits Syst. Video Technol. 31(8), 3254–3265 (2020)
Lang, Y., He, Y., Yang, F., Dong, J., Xue, H.: Which is plagiarism: Fashion image retrieval based on regional representation for design protection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 2595–2604) (2020)
Gu, X., Wong, Y., Shou, L., Peng, P., Chen, G., Kankanhalli, M.S.: Multi-modal and multi-domain embedding learning for fashion retrieval and analysis. IEEE Trans. Multimedia 21(6), 1524–1537 (2018)
Hou, Y., Vig, E., Donoser, M., Bazzani, L.: Learning attribute-driven disentangled representations for interactive fashion retrieval. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 12147–12157) (2021)
Miao, Y., Li, G., Bao, C., Zhang, J., Wang, J.: ClothingNet: Cross-domain clothing retrieval with feature fusion and quadruplet loss. IEEE Access 8, 142669–142679 (2020)
Zhang, H., Sun, Y., Liu, L., Wang, X., Li, L., Liu, W.: ClothingOut: a category-supervised GAN model for clothing segmentation and retrieval. Neural Comput. Appl. 32(9), 4519–4530 (2020)
Xia, Y., Chen, B., Lu, W., Coenen, F., Zhang, B.: Attributes-oriented clothing description and retrieval with multi-task convolutional neural network. In 2017 13th International Conference on Natural Computation, Fuzzy Systems and Knowledge Discovery (ICNC-FSKD) (pp. 804–808). IEEE (2017)
Kinli, F., Ozcan, B., Kirac, F.: Fashion image retrieval with capsule networks. In Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops (pp. 0–0) (2019)
Zhang, H., Ji, Y., Huang, W., Liu, L.: Sitcom-star-based clothing retrieval for video advertising: a deep learning framework. Neural Comput. Appl. 31(11), 7361–7380 (2019)
Huang, J., Feris, R. S., Chen, Q., Yan, S.: Cross-domain image retrieval with a dual attribute-aware ranking network. In Proceedings of the IEEE international conference on computer vision (pp. 1062–1070) (2015)
Acknowledgements
The authors express their deep gratitude to the supervisor for his guidance and constant support throughout this research.
Funding
The authors received no specific funding for this study.
Author information
Authors and Affiliations
Corresponding author
Ethics declarations
Conflict of interest
The authors declare that they have no conflicts of interest to report regarding the present study.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
About this article
Cite this article
Saranya, M.S., Geetha, P. A deep learning-based feature extraction of cloth data using modified grab cut segmentation. Vis Comput 39, 4195–4211 (2023). https://doi.org/10.1007/s00371-022-02584-1
DOI: https://doi.org/10.1007/s00371-022-02584-1