Crossing-Scene Pedestrian Identification Method Based on Twice FAS

  • Conference paper

Data Science (ICPCSEE 2017)

Part of the book series: Communications in Computer and Information Science (CCIS, volume 728)

Abstract

In the field of crossing-scene pedestrian identification, recognition accuracy is low due to the large local variation of the samples. A method based on twice Feature-Aggregation-Separation (FAS) is proposed in this paper. Firstly, a novel network structure is proposed that aggregates features of the same class and separates features of different classes twice. Secondly, a cross-input neighborhood-difference method is applied to the features produced by the first aggregation-separation, and the results are taken as the input of the second aggregation-separation. Finally, the features produced by the two FAS stages are spliced together, and the result is fed into a Softmax classifier. Compared with the MCPB-TC [8] method, which is based on feature aggregation-separation, the proposed scheme provides directional aggregation-separation of positive and negative samples. Compared with AIDLA [4], which is based on cross-input neighborhood differences, it offers better inter-class discrimination and intra-class aggregation. It also outperforms those methods in tests on the CUHK01 and VIPeR data sets.
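The abstract only outlines the pipeline, so the following is a minimal illustrative sketch (in Python/PyTorch) of the cross-input neighborhood-difference step that sits between the two FAS stages. The function name, tensor shapes, and the 5 x 5 neighborhood size are assumptions drawn from the general formulation in AIDLA [4], not from this paper's actual implementation.

import torch
import torch.nn.functional as F

def cross_input_neighborhood_diff(f, g, k=5):
    """Cross-input neighborhood differences (hedged sketch after AIDLA [4]).

    f, g : feature maps of shape (B, C, H, W) from the two compared images,
           e.g. the outputs of the first feature-aggregation-separation stage.
    Returns a tensor of shape (B, C, k*k, H, W): each value f(x, y) is
    compared against the k x k neighborhood of g centered at (x, y).
    """
    B, C, H, W = f.shape
    pad = k // 2
    # Pad g and gather its k x k neighborhood around every spatial location.
    g_pad = F.pad(g, (pad, pad, pad, pad))
    g_neigh = g_pad.unfold(2, k, 1).unfold(3, k, 1)            # (B, C, H, W, k, k)
    g_neigh = g_neigh.reshape(B, C, H, W, k * k).permute(0, 1, 4, 2, 3)
    # Broadcast f(x, y) against every element of g's neighborhood and subtract.
    return f.unsqueeze(2) - g_neigh                            # (B, C, k*k, H, W)

# Hypothetical usage: the difference maps from the first FAS stage would feed the
# second stage, and features from both stages are spliced (concatenated) before
# the final Softmax classifier described in the abstract.
f = torch.randn(4, 25, 37, 12)   # assumed shapes, for illustration only
g = torch.randn(4, 25, 37, 12)
diff = cross_input_neighborhood_diff(f, g)   # -> (4, 25, 25, 37, 12)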


References

  1. Yi, D., Lei, Z., Liao, S., Li, S.Z.: Deep metric learning for person re-identification. In: ICPR, pp. 2666–2672, Stockholm (2014)


  2. Yi, D., Lei, Z., Li, S.Z.: Deep metric learning for practical person re-identification. In: ICPR, pp. 3908–3916, Stockholm (2014)


  3. Li, W., Zhao, R., Xiao, T., Wang, X.: DeepReID: deep filter pairing neural network for person re-identification. In: CVPR, pp. 152–159, Columbus (2014)


  4. Ahmed, E., Jones, M., Marks, T.K.: An improved deep learning architecture for person re-identification. In: CVPR, pp. 3908–3916, Boston (2015)


  5. Schroff, F., Kalenichenko, D., Philbin, J.: FaceNet: a unified embedding for face recognition and clustering. arXiv preprint arXiv:1503.03832 (2015)

  6. Zhao, R., Ouyang, W., Wang, X.: Learning mid-level filters for person re-identification. In: CVPR, pp. 144–151, Columbus (2014)


  7. Paisitkriangkrai, S., Shen, C., van den Hengel, A.: Learning to rank in person re-identification with metric ensembles. arXiv preprint arXiv:1503.01543 (2015)

  8. Cheng, D., Gong, Y., Zhou, S., Wang, J., Zheng, N.: Person re-identification by multi-channel parts-based CNN with improved triplet loss function. In: CVPR, pp. 1335–1344, Las Vegas (2016)


  9. Li, W., Zhao, R., Wang, X.: Human reidentification with transferred metric learning. In: ACCV, pp. 31–44, Daejeon (2012)


  10. Gray, D., Brennan, S., Tao, H.: Evaluating appearance models for recognition, reacquisition, and tracking. In: Proceedings of the IEEE International Workshop on Performance Evaluation for Tracking and Surveillance (PETS), vol. 3 (2007)



Acknowledgment

This work was supported by the 2016 Guangxi Science and Technology support program under Grant No. AB16380264 and 2016 Key Laboratory of Cognitive Radio and Information Processing (Guilin University of Electronic Technology), Ministry of Education Fund Project, Project No. CRKL160102.

Author information

Corresponding author

Correspondence to Xiaodong Cai.

Copyright information

© 2017 Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Chen, Y., Cai, X., Zeng, Y., Wang, M. (2017). Crossing-Scene Pedestrian Identification Method Based on Twice FAS. In: Zou, B., Han, Q., Sun, G., Jing, W., Peng, X., Lu, Z. (eds) Data Science. ICPCSEE 2017. Communications in Computer and Information Science, vol 728. Springer, Singapore. https://doi.org/10.1007/978-981-10-6388-6_41

  • DOI: https://doi.org/10.1007/978-981-10-6388-6_41

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-10-6387-9

  • Online ISBN: 978-981-10-6388-6

  • eBook Packages: Computer Science, Computer Science (R0)
