DOI: 10.1145/3627341.3630373

A 2D Human Pose Estimation Method Based On Visual Transformer

Published: 15 December 2023

Abstract

Two-dimensional human pose estimation is the basis of human behavior understanding, but predicting a plausible human pose from an image remains a challenging problem. To address this, a pose estimation model named DEFormer, based on the Vision Transformer (ViT), is proposed. DEFormer adopts a distribution-aware coordinate representation of keypoints to reduce quantization error, and combines the original encoder module with an efficient encoder module to build a lighter two-stage model. Experiments on the CrowdPose dataset and a self-constructed campus-scene human motion dataset show that the lightweight DEFormer achieves a maximum average accuracy of 85.9%, demonstrating more accurate pose estimation performance.
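The paper itself ships no code, but the quantization-error reduction the abstract describes follows the distribution-aware coordinate representation idea: instead of reading a keypoint off the heatmap at its integer argmax, the peak is refined to sub-pixel precision under a Gaussian assumption. The following is a minimal NumPy sketch of such a decoder; the function name `decode_keypoint` and its parameters are illustrative, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def decode_keypoint(heatmap, blur_sigma=1.0, eps=1e-10):
    """Sub-pixel keypoint decoding from a single 2-D heatmap.

    Refines the integer argmax with a second-order Taylor expansion
    of the log-heatmap, assuming the response is roughly Gaussian
    (the distribution-aware decoding idea the abstract refers to).
    """
    # Smooth the heatmap so it better matches the Gaussian assumption.
    h = gaussian_filter(heatmap.astype(np.float64), blur_sigma)
    h = np.maximum(h, eps)          # avoid log(0)
    logh = np.log(h)

    y, x = np.unravel_index(np.argmax(h), h.shape)
    rows, cols = h.shape
    # Need one pixel of margin for central finite differences.
    if not (1 <= x < cols - 1 and 1 <= y < rows - 1):
        return float(x), float(y)

    # Gradient of the log-heatmap at the peak (central differences).
    dx = 0.5 * (logh[y, x + 1] - logh[y, x - 1])
    dy = 0.5 * (logh[y + 1, x] - logh[y - 1, x])
    # Hessian entries.
    dxx = logh[y, x + 1] - 2.0 * logh[y, x] + logh[y, x - 1]
    dyy = logh[y + 1, x] - 2.0 * logh[y, x] + logh[y - 1, x]
    dxy = 0.25 * (logh[y + 1, x + 1] - logh[y + 1, x - 1]
                  - logh[y - 1, x + 1] + logh[y - 1, x - 1])

    det = dxx * dyy - dxy * dxy
    if abs(det) < eps:
        return float(x), float(y)   # degenerate Hessian: keep argmax

    # One Newton step: offset = -H^{-1} * grad (2x2 inverse written out).
    off_x = -(dyy * dx - dxy * dy) / det
    off_y = -(dxx * dy - dxy * dx) / det
    return x + off_x, y + off_y
```

A quick check under the stated assumptions: a synthetic Gaussian bump centered at a fractional location decodes to roughly that location rather than the nearest integer pixel.

```python
yy, xx = np.mgrid[0:48, 0:64]
hm = np.exp(-((xx - 12.3) ** 2 + (yy - 7.6) ** 2) / (2 * 2.0 ** 2))
print(decode_keypoint(hm))  # approximately (12.3, 7.6)
```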

Index Terms

  1. A 2D Human Pose Estimation Method Based On Visual Transformer
        Index terms have been assigned to the content through auto-classification.

        Recommendations

        Comments

        Information & Contributors

        Information

        Published In

        ICCVIT '23: Proceedings of the 2023 International Conference on Computer, Vision and Intelligent Technology
        August 2023
        378 pages
ISBN: 9798400708701
DOI: 10.1145/3627341

        Publisher

        Association for Computing Machinery

        New York, NY, United States

        Qualifiers

        • Research-article
        • Research
        • Refereed limited

        Conference

        ICCVIT 2023

        Acceptance Rates

ICCVIT '23 paper acceptance rate (and overall acceptance rate): 54 of 142 submissions, 38%.
