Abstract
The rapid growth of the online fashion market has increased the demand for fashion technologies such as clothing attribute tagging. However, fashion image data is challenging to handle, since fashion images often contain irrelevant backgrounds and exhibit various deformations. In this paper, we introduce SisterNetwork, a deep learning model for the multi-label classification task of fashion attribute tagging. The proposed model consists of two different CNNs that leverage both the original image and its semantic segmentation. We evaluate our model on the DCSA dataset of tagged fashion images and achieve state-of-the-art performance on the multi-label classification task.
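The abstract describes a two-branch design: one CNN processes the original RGB image, a second processes the semantic segmentation, and the branches are combined for multi-label prediction. The following is a minimal, framework-free sketch of that late-fusion idea; the branch functions, weight shapes, and attribute count are illustrative assumptions (simple pooled linear features stand in for the actual CNN branches), not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def branch_features(image, weights):
    # Stand-in for a CNN branch: global-average-pool the input over its
    # spatial dimensions, then apply a linear projection with ReLU.
    pooled = image.mean(axis=(0, 1))          # shape: (channels,)
    return np.maximum(weights @ pooled, 0.0)  # 16-dim feature vector

# Hypothetical inputs: a 64x64 RGB image and a 1-channel segmentation mask.
image = rng.random((64, 64, 3))
seg_mask = (rng.random((64, 64, 1)) > 0.5).astype(float)

W_rgb = rng.standard_normal((16, 3))  # RGB-branch projection
W_seg = rng.standard_normal((16, 1))  # segmentation-branch projection

# Late fusion: concatenate the two branches' features into one vector.
fused = np.concatenate([branch_features(image, W_rgb),
                        branch_features(seg_mask, W_seg)])  # shape: (32,)

# Multi-label head: score each of (say) 10 attributes independently
# with a sigmoid, rather than a softmax over mutually exclusive classes.
W_head = rng.standard_normal((10, 32))
probs = sigmoid(W_head @ fused)

# Each attribute is tagged independently by thresholding its probability.
tags = probs > 0.5
print(probs.shape, int(tags.sum()))
```

The key point of the sketch is the multi-label treatment: per-attribute sigmoid scores thresholded independently, so an image can carry any subset of attribute tags at once.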
H. Lim and J. Han: These authors contributed equally to this work.
Acknowledgments
This work was supported by the Technology development Program (S2646078) funded by the Ministry of SMEs and Startups (MSS, Korea).
Copyright information
© 2019 Springer Nature Switzerland AG
Cite this paper
Lim, H., Han, J., Lee, Sg. (2019). SisterNetwork: Enhancing Robustness of Multi-label Classification with Semantically Segmented Images. In: Lee, S., Ismail, R., Choo, H. (eds) Proceedings of the 13th International Conference on Ubiquitous Information Management and Communication (IMCOM) 2019. IMCOM 2019. Advances in Intelligent Systems and Computing, vol 935. Springer, Cham. https://doi.org/10.1007/978-3-030-19063-7_86
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-19062-0
Online ISBN: 978-3-030-19063-7
eBook Packages: Intelligent Technologies and Robotics (R0)