Abstract:
Due to recent deep learning techniques, person detection seems to be solved in the computer vision domain; however, it remains an issue in mobile robotics, where only limited computing capacity is available on board. The challenge becomes even more difficult when operating in an environment with people in poses different from the standard upright one. In this work, the environment of a supermarket is considered. Unlike in most scenarios targeted by the community, people do not only appear in standing postures, but also reach into shelves or squat in front of them. Furthermore, people are heavily occluded, e.g., by shopping carts. In such a challenging environment, it is important to perceive people early and in real time in order to enable socially aware navigation. Classical person detectors often struggle with the high posture variance or do not achieve acceptable real-time detection rates. For this reason, different components from the 3D object detection domain have been used to create a new robust person detector for mobile applications. Operating on 3D point clouds allows fast real-time detection up to and beyond our target distance of ten meters using the Kinect2 depth sensor. The detector can even differentiate between typical postures of customers who stand or squat in front of shelves.
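The abstract does not give implementation details. As a rough illustration of the kind of preprocessing such a detector relies on, the sketch below back-projects a Kinect2 depth frame into a 3D point cloud with a pinhole camera model and keeps only points within the ten-meter target range. The intrinsic parameters (FX, FY, CX, CY) and the depth_to_point_cloud helper are placeholder assumptions for illustration, not values or code from the paper.

```python
import numpy as np

# Approximate intrinsics for the Kinect2 depth sensor (512x424 pixels).
# Real values come from calibration; these are placeholder assumptions.
FX, FY = 365.0, 365.0
CX, CY = 256.0, 212.0

def depth_to_point_cloud(depth_m: np.ndarray, max_range: float = 10.0) -> np.ndarray:
    """Back-project a depth image (in meters) into an (N, 3) point cloud,
    keeping only valid points up to the given detection range."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth_m
    valid = (z > 0.0) & (z <= max_range)            # drop invalid / far points
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return np.stack([x[valid], y[valid], z[valid]], axis=-1)

# Example with a synthetic frame standing in for a real Kinect2 capture.
depth = np.random.uniform(0.5, 12.0, size=(424, 512)).astype(np.float32)
cloud = depth_to_point_cloud(depth)
print(cloud.shape)  # (N, 3) points within the ten-meter range
```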
Date of Conference: 20-24 May 2019
Date Added to IEEE Xplore: 12 August 2019