Channel Attention Module and Weighted Local Feature Person Re-ID Network

Published: 18 June 2021

Abstract

Person re-identification, a technology that has emerged in the field of intelligent detection and analysis in recent years, has received growing attention from researchers. Matching identities across cameras is a defining characteristic of person re-identification and makes the task very challenging. Most current methods rely on either global features or local features alone, overlooking the performance gains available from combining the two. We therefore propose a network that integrates global and local features. A channel attention module directs the network toward semantically similar information across channels and strengthens its discriminative ability, while local features carrying discriminative information are assigned larger weights. Extensive experiments verify the effectiveness of our method, which achieves state-of-the-art results on three mainstream datasets: Market-1501, DukeMTMC-reID, and CUHK03.
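The abstract does not specify the channel attention module's internal design. As a rough illustration of the general idea, a squeeze-and-excitation-style channel attention (a common design, assumed here rather than taken from the paper) can be sketched in NumPy: each channel is pooled to a single descriptor, a small bottleneck MLP models cross-channel dependencies, and the resulting sigmoid gates rescale the channels.

```python
import numpy as np

def channel_attention(x, w1, w2):
    """SE-style channel attention sketch (assumed design, not the paper's exact module).

    x  : feature map of shape (C, H, W)
    w1 : (C, C // r) squeeze weights for a reduction ratio r
    w2 : (C // r, C) excitation weights
    Returns the feature map with each channel rescaled by a gate in (0, 1).
    """
    # Squeeze: global average pooling collapses each channel to one descriptor.
    z = x.mean(axis=(1, 2))                      # shape (C,)
    # Excitation: bottleneck MLP captures dependencies between channels.
    h = np.maximum(z @ w1, 0.0)                  # ReLU, shape (C // r,)
    s = 1.0 / (1.0 + np.exp(-(h @ w2)))          # sigmoid gates, shape (C,)
    # Rescale: emphasize informative channels, suppress the rest.
    return x * s[:, None, None]

# Toy usage with random weights (hypothetical shapes, reduction ratio r = 4).
rng = np.random.default_rng(0)
C, H, W = 8, 4, 4
x = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C, C // 4)) * 0.1
w2 = rng.standard_normal((C // 4, C)) * 0.1
y = channel_attention(x, w1, w2)
```

Because the gates are sigmoid outputs, each channel is scaled by a factor strictly between 0 and 1, so attention here reweights rather than amplifies channels; the weighted local-feature branch described in the abstract would apply an analogous reweighting across body-part features.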


Published In

ICMLSC '21: Proceedings of the 2021 5th International Conference on Machine Learning and Soft Computing
January 2021
178 pages
ISBN:9781450387613
DOI:10.1145/3453800

Publisher

Association for Computing Machinery

New York, NY, United States

Author Tags

  1. Channel Attention Module
  2. Global-Local feature fusion
  3. Person re-identification
  4. Weighted local feature

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Funding Sources

  • National Natural Science Foundation of China
  • Shandong Graduate Education Innovation Project
