Abstract:
In surveillance videos, images of the same person often show significant variation, which makes person re-identification difficult. Although the global appearances may differ greatly, some local patches remain very similar, and the human eye can distinguish the identity of a person from these local patches. Inspired by this, patch matching has been introduced into person re-identification and has proven to be an effective way to address the problems caused by different viewpoints, poses, camera settings, illumination, and occlusion. However, there is no established guideline for choosing the patch size, and most existing studies obtain patches either by dense sampling or by coarse partition; in both cases, the structure information of the person is lost. To improve re-identification accuracy, we propose a method based on person partition with a Deformable Parts Model (DPM). First, both compared appearances are partitioned into several body parts by a pre-trained person DPM, and these parts are grouped according to their positions on the body. Second, part matching is performed between the parts of the two appearances within each group, based on deep learning features. Finally, the similarities of the groups are fused to decide whether the two appearances belong to the same person. Experiments on the VIPeR dataset show that, without supervised training, the proposed method achieves good re-identification performance compared with state-of-the-art methods.
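The abstract outlines a three-step pipeline: DPM-based part partition, group-wise part matching on deep learning features, and fusion of the group similarities. The Python sketch below illustrates the last two steps only, assuming DPM part detection and deep feature extraction have already been performed elsewhere; the function names, the cosine-similarity measure, the max-based matching within a group, and the uniform fusion weights are illustrative assumptions, not the authors' exact formulation.

# Hypothetical sketch of group-wise part matching and similarity fusion.
# DPM part detection and deep feature extraction are assumed to be done
# elsewhere; only the matching/fusion logic described in the abstract is shown.
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two deep-feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def group_similarity(parts_a, parts_b):
    """Best-match similarity between the parts of two appearances
    that fall into the same body-position group."""
    if not parts_a or not parts_b:
        return 0.0  # group missing in one appearance (e.g., occlusion)
    return max(cosine_similarity(fa, fb) for fa in parts_a for fb in parts_b)

def fuse_similarities(groups_a, groups_b, weights=None):
    """Fuse per-group similarities into a single matching score.

    groups_a, groups_b: dicts mapping a group name (e.g., 'head', 'torso',
    'legs') to a list of 1-D deep-feature vectors for the parts in that group.
    """
    names = sorted(set(groups_a) | set(groups_b))
    sims = np.array([group_similarity(groups_a.get(n, []), groups_b.get(n, []))
                     for n in names])
    if weights is None:
        weights = np.ones(len(names)) / len(names)  # plain average as a stand-in
    return float(np.dot(weights, sims))

# Toy usage with random 256-D vectors standing in for real deep features.
rng = np.random.default_rng(0)
appearance_1 = {"head": [rng.normal(size=256)], "torso": [rng.normal(size=256)]}
appearance_2 = {"head": [rng.normal(size=256)], "torso": [rng.normal(size=256)]}
score = fuse_similarities(appearance_1, appearance_2)
print(f"matching score: {score:.3f}")  # higher score -> more likely same person

In practice the fusion weights and the within-group matching rule would be chosen to reflect how reliable each body region is, which the abstract does not specify; the uniform average above is only a placeholder.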
Date of Conference: 04-08 December 2016
Date Added to IEEE Xplore: 24 April 2017
ISBN Information: