Paper: PG-3DVTON: Pose-Guided 3D Virtual Try-on Network

Authors: Sanaz Sabzevari (1); Ali Ghadirzadeh (2); Mårten Björkman (1) and Danica Kragic (1)

Affiliations: (1) Division of Robotics, Perception and Learning, KTH Royal Institute of Technology, Stockholm, Sweden; (2) Department of Computer Science, Stanford University, California, U.S.A.

Keyword(s): 3D Virtual Try-on, Multi-Pose, Spatial Alignment, Fine-Grained Details.

Abstract: Virtual try-on (VTON) eliminates the need to try on garments in store by enabling shoppers to wear clothes digitally. For VTON to succeed, shoppers must be offered a try-on experience on par with trying clothes in store. The VTON experience can be improved by providing a complete picture of the garment through a 3D visual presentation in a variety of body postures. Prior VTON solutions show promising results in generating such 3D presentations but have never been evaluated in multi-pose settings. Multi-pose 3D VTON is particularly challenging as it often involves tedious 3D data collection to cover a wide variety of body postures. In this paper, we aim to develop a multi-pose 3D VTON that can be trained without the need to construct such a dataset. Our framework aligns in-shop clothes to the desired garment on the target pose by optimizing a consistency loss. We address the problem of generating fine details of clothes in different postures by incorporating multi-scale feature maps. In addition, we propose a coarse-to-fine architecture to remove artifacts inherent in 3D visual presentation. Our empirical results show that the proposed method is capable of generating 3D presentations in different body postures while outperforming existing methods in fitting fine details of the garment.
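The abstract describes the method only at a high level. As a rough, illustrative sketch (not the authors' implementation), the snippet below shows one plausible form of a masked, multi-scale photometric consistency loss between a garment warped to the target pose and the garment as rendered on the target body. All names (consistency_loss, multiscale_consistency, warped_garment, rendered_garment, mask), the L1 penalty, and the set of scales are assumptions for illustration only.

    # Illustrative sketch only; not the authors' code. Inputs are assumed to be
    # RGB tensors of shape (batch, 3, H, W) and a garment mask of shape
    # (batch, 1, H, W) with values in {0, 1}.
    import torch
    import torch.nn.functional as F

    def consistency_loss(warped_garment: torch.Tensor,
                         rendered_garment: torch.Tensor,
                         mask: torch.Tensor) -> torch.Tensor:
        """L1 consistency restricted to the garment region."""
        diff = torch.abs(warped_garment - rendered_garment) * mask
        # Normalize by the number of masked pixels per channel.
        denom = (mask.sum() * diff.shape[1]).clamp(min=1.0)
        return diff.sum() / denom

    def multiscale_consistency(warped, rendered, mask, scales=(1.0, 0.5, 0.25)):
        """Average the loss over several resolutions so coarse structure and
        fine details both contribute, loosely mirroring the multi-scale
        feature maps mentioned in the abstract."""
        total = 0.0
        for s in scales:
            if s == 1.0:
                w, r, m = warped, rendered, mask
            else:
                w = F.interpolate(warped, scale_factor=s, mode="bilinear", align_corners=False)
                r = F.interpolate(rendered, scale_factor=s, mode="bilinear", align_corners=False)
                m = F.interpolate(mask, scale_factor=s, mode="nearest")
            total = total + consistency_loss(w, r, m)
        return total / len(scales)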

CC BY-NC-ND 4.0


Paper citation in several formats:
Sabzevari, S.; Ghadirzadeh, A.; Björkman, M. and Kragic, D. (2023). PG-3DVTON: Pose-Guided 3D Virtual Try-on Network. In Proceedings of the 18th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2023) - Volume 4: VISAPP; ISBN 978-989-758-634-7; ISSN 2184-4321, SciTePress, pages 819-829. DOI: 10.5220/0011658100003417

@conference{visapp23,
author={Sanaz Sabzevari and Ali Ghadirzadeh and Mårten Björkman and Danica Kragic},
title={PG-3DVTON: Pose-Guided 3D Virtual Try-on Network},
booktitle={Proceedings of the 18th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2023) - Volume 4: VISAPP},
year={2023},
pages={819-829},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0011658100003417},
isbn={978-989-758-634-7},
issn={2184-4321},
}

TY - CONF
JO - Proceedings of the 18th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2023) - Volume 4: VISAPP
TI - PG-3DVTON: Pose-Guided 3D Virtual Try-on Network
SN - 978-989-758-634-7
IS - 2184-4321
AU - Sabzevari, S.
AU - Ghadirzadeh, A.
AU - Björkman, M.
AU - Kragic, D.
PY - 2023
SP - 819
EP - 829
DO - 10.5220/0011658100003417
PB - SciTePress
ER -