Multilevel deep representation fusion for person re-identification
Yu Zhao, Keren Fu, Qiaoyuan Shu, Pengcheng Wei, Xi Shi
Abstract

Person re-identification (re-ID) aims to match images of the same pedestrian captured by different cameras. A typical deep-learning re-ID pipeline comprises two phases: feature extraction and metric calculation. We focus on extracting more discriminative image features for re-ID. To this end, we propose a multilevel deep representation fusion (MDRF) model based on a convolutional neural network. Specifically, the MDRF model extracts image features at different network levels in a single forward pass. These multilevel features are then combined by a fusion layer to produce the final image representation, which is fed into a combined softmax and triplet loss to optimize the model. The proposed method exploits both the abstract information of high-level features and the appearance information of low-level features. Extensive experiments on the public datasets Market-1501, DukeMTMC-reID, and CUHK03 demonstrate the effectiveness of the proposed method for person re-ID.
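The pipeline the abstract describes (features taken from several network levels in one forward pass, fused into a single representation, then trained with a combined softmax + triplet objective) can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the fusion-by-concatenation after global average pooling, the channel dimensions, the toy linear classifier, and the triplet margin of 0.3 are all assumptions for the sake of the example.

```python
import numpy as np

def global_avg_pool(fmap):
    # Pool a (C, H, W) feature map to a (C,) vector.
    return fmap.mean(axis=(1, 2))

def fuse_multilevel(feature_maps):
    # Assumed fusion scheme: pool each level's feature map, then concatenate.
    return np.concatenate([global_avg_pool(f) for f in feature_maps])

def softmax_ce(logits, label):
    # Softmax cross-entropy (the "softmax loss") over identity classes.
    z = logits - logits.max()
    p = np.exp(z) / np.exp(z).sum()
    return -np.log(p[label])

def triplet_loss(anchor, pos, neg, margin=0.3):
    # Standard margin-based triplet loss on Euclidean distances.
    d_ap = np.linalg.norm(anchor - pos)
    d_an = np.linalg.norm(anchor - neg)
    return max(0.0, d_ap - d_an + margin)

rng = np.random.default_rng(0)
# Simulated feature maps from three network levels (low, mid, high);
# channel counts are illustrative, not the paper's configuration.
levels = [rng.standard_normal((c, 8, 4)) for c in (64, 128, 256)]

feat = fuse_multilevel(levels)                    # fused representation, dim 64+128+256 = 448
W = rng.standard_normal((10, feat.size)) * 0.01   # toy classifier over 10 identities
loss_id = softmax_ce(W @ feat, label=3)           # identity (softmax) term
loss_tri = triplet_loss(feat, feat + 0.1, -feat)  # metric (triplet) term on toy triplet
total = loss_id + loss_tri                        # combined training objective
```

In a real model the three levels would be intermediate outputs of the CNN backbone, and the combined loss would be backpropagated through both the fusion layer and the backbone.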

© 2020 SPIE and IS&T 1017-9909/2020/$28.00
Yu Zhao, Keren Fu, Qiaoyuan Shu, Pengcheng Wei, and Xi Shi "Multilevel deep representation fusion for person re-identification," Journal of Electronic Imaging 29(2), 023005 (11 March 2020). https://doi.org/10.1117/1.JEI.29.2.023005
Received: 30 November 2019; Accepted: 19 February 2020; Published: 11 March 2020
KEYWORDS
Image fusion, Performance modeling, Cameras, Data modeling, Feature extraction, Space based lasers, Convolutional neural networks
