Abstract:
We generate synthetic annotated data for learning 3D human pose estimation with an egocentric fisheye camera. Synthetic humans are rendered from a virtual fisheye camera with random backgrounds, random clothing, and random lighting parameters. In addition to RGB images, we generate ground truth 2D/3D poses and joint-location heat-maps. This removes the need to capture and manually label a large, varied set of real images for training. The approach is intended for challenging settings, such as capturing training data in sports.
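The abstract does not specify the fisheye camera model or the heat-map formulation, so the following is only a minimal sketch of the kind of ground truth such a pipeline produces: 2D joint locations obtained by projecting 3D joints through an assumed ideal equidistant fisheye model, plus one Gaussian location heat-map per joint. All function names and parameter values here are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def project_equidistant_fisheye(joints_3d, focal, cx, cy):
    """Project 3D joints (N, 3) in camera coordinates to 2D pixels (N, 2)
    using an ideal equidistant fisheye model: r = focal * theta."""
    x, y, z = joints_3d[:, 0], joints_3d[:, 1], joints_3d[:, 2]
    theta = np.arctan2(np.sqrt(x**2 + y**2), z)   # angle from the optical axis
    phi = np.arctan2(y, x)                        # azimuth around the axis
    r = focal * theta                             # equidistant mapping
    return np.stack([cx + r * np.cos(phi), cy + r * np.sin(phi)], axis=1)

def joint_heatmaps(joints_2d, height, width, sigma=3.0):
    """Render one 2D Gaussian location heat-map per joint, shape (N, H, W)."""
    ys, xs = np.mgrid[0:height, 0:width]
    maps = [np.exp(-((xs - u)**2 + (ys - v)**2) / (2 * sigma**2))
            for u, v in joints_2d]
    return np.stack(maps)

# Example: a few synthetic joints in front of the virtual fisheye camera.
joints_3d = np.array([[0.0, -0.3, 0.8], [0.1, 0.2, 0.6], [-0.2, 0.4, 0.5]])
joints_2d = project_equidistant_fisheye(joints_3d, focal=120.0, cx=128.0, cy=128.0)
heatmaps = joint_heatmaps(joints_2d, height=256, width=256)
print(joints_2d.shape, heatmaps.shape)  # (3, 2) (3, 256, 256)
```

Because every sample is rendered, the 2D projections and heat-maps above come for free with the image, which is what makes manual annotation unnecessary in this setup.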
Date of Conference: 23-27 March 2019
Date Added to IEEE Xplore: 15 August 2019