
NEMA: 6-DoF Pose Estimation Dataset for Deep Learning

Topics: Assistive Computer Vision; Deep Learning for Visual Understanding; Egocentric Vision for Interaction Understanding; Image Formation, Acquisition Devices and Sensors; Machine Learning Technologies for Vision; Mobile and Egocentric Localization; Mobile and Egocentric Object Detection and Recognition; Object Detection and Localization

Authors: Philippe Pérez de San Roman 1,2; Pascal Desbarats 2; Jean-Philippe Domenger 2 and Axel Buendia 3,4

Affiliations: 1 ITECA, 264 Rue Fontchaudiere, 16000 Angoulême, France; 2 Univ. Bordeaux, CNRS, Bordeaux INP, LaBRI, UMR 5800, F-33400 Talence, France; 3 CNAM-CEDRIC Paris, 292 Rue Saint Martin, 75003 Paris, France; 4 SpirOps, 8 Passage de la Bonne Graine, 75011 Paris, France

Keyword(s): Deep Learning, 6-DOF Pose Estimation, 3D Detection, Dataset, RGB-D.

Abstract: Maintenance is inevitable, time-consuming, expensive, and risky both to production and to maintenance operators. Porting maintenance support applications to mixed reality (MR) headsets would ease operations. To function, such an application needs to anchor 3D graphics onto real objects, i.e., locate and track real-world objects in three dimensions. This task is known in the computer vision community as Six Degrees of Freedom (6-DoF) pose estimation and is best solved using Convolutional Neural Networks (CNNs). Training them requires numerous examples, but acquiring real labeled images for 6-DoF pose estimation is a challenge in its own right. In this article, we first propose a thorough review of existing non-synthetic datasets for 6-DoF pose estimation. This allows us to identify several reasons why synthetic training data has been favored over real training data. Nothing can replace real images. We next show that it is possible to overcome the limitations faced by previous datasets by presenting a new methodology for labeled image acquisition. Finally, we present a new dataset named NEMA that allows deep learning methods to be trained without the need for synthetic data.

CC BY-NC-ND 4.0


Paper citation in several formats:
Pérez de San Roman, P.; Desbarats, P.; Domenger, J. and Buendia, A. (2022). NEMA: 6-DoF Pose Estimation Dataset for Deep Learning. In Proceedings of the 17th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2022) - Volume 4: VISAPP; ISBN 978-989-758-555-5; ISSN 2184-4321, SciTePress, pages 682-690. DOI: 10.5220/0010913200003124

@conference{visapp22,
author={Philippe {Pérez de San Roman} and Pascal Desbarats and Jean{-}Philippe Domenger and Axel Buendia},
title={NEMA: 6-DoF Pose Estimation Dataset for Deep Learning},
booktitle={Proceedings of the 17th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2022) - Volume 4: VISAPP},
year={2022},
pages={682-690},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0010913200003124},
isbn={978-989-758-555-5},
issn={2184-4321},
}

TY - CONF

JO - Proceedings of the 17th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2022) - Volume 4: VISAPP
TI - NEMA: 6-DoF Pose Estimation Dataset for Deep Learning
SN - 978-989-758-555-5
IS - 2184-4321
AU - Pérez de San Roman, P.
AU - Desbarats, P.
AU - Domenger, J.
AU - Buendia, A.
PY - 2022
SP - 682
EP - 690
DO - 10.5220/0010913200003124
PB - SciTePress