
GAT-POSE: Graph Autoencoder-Transformer Fusion for Future Pose Prediction

  • Conference paper
Robotics, Computer Vision and Intelligent Systems (ROBOVIS 2024)

Abstract

Human pose prediction, also known as human pose forecasting, is a challenging problem in computer vision. Because it underpins applications such as smart surveillance, autonomous vehicles, and healthcare, pose prediction models must be both accurate and efficient to limit error propagation, especially in real-world settings. In this paper, we present GAT-POSE, a fusion framework that combines graph autoencoders and transformers for deterministic future pose prediction. Our method first compresses and tokenizes pose sequences with a graph autoencoder, then feeds the resulting tokens to a transformer that predicts future poses, yielding a new formulation for precise pose prediction. We evaluate GAT-POSE in three distinct training and testing environments and on multiple datasets. Under this experimental protocol, GAT-POSE outperforms contemporary human pose prediction methods, showing promise for real-world applications and providing a foundation for further computer vision research.
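
To make the two-stage idea in the abstract concrete, the following is a minimal PyTorch sketch of a pipeline in that spirit: a graph autoencoder that tokenizes each pose, and a transformer that predicts future tokens which are then decoded back to poses. It is an illustration only, not the authors' implementation; the module names (GraphAutoencoder, TokenTransformer), joint count, token size, layer counts, and the simple learnable-adjacency message passing are all assumptions.

```python
# Illustrative sketch of a graph-autoencoder + transformer pose-prediction pipeline.
# All sizes and design details below are assumptions, not the GAT-POSE reference code.
import torch
import torch.nn as nn


class GraphAutoencoder(nn.Module):
    """Compresses one pose (J joints x 3 coordinates) into a latent token and back."""

    def __init__(self, num_joints=17, coord_dim=3, token_dim=64):
        super().__init__()
        # Learnable joint adjacency for one round of graph message passing (assumption).
        self.adj = nn.Parameter(torch.eye(num_joints) + 0.01 * torch.randn(num_joints, num_joints))
        self.node_proj = nn.Linear(coord_dim, token_dim)
        self.encode_head = nn.Linear(num_joints * token_dim, token_dim)
        self.decode_head = nn.Linear(token_dim, num_joints * coord_dim)
        self.num_joints, self.coord_dim = num_joints, coord_dim

    def encode(self, pose):                         # pose: (B, J, 3)
        h = torch.relu(self.node_proj(pose))        # per-joint features
        h = torch.softmax(self.adj, dim=-1) @ h     # propagate features over the joint graph
        return self.encode_head(h.flatten(1))       # (B, token_dim)

    def decode(self, token):                        # token: (B, token_dim)
        return self.decode_head(token).view(-1, self.num_joints, self.coord_dim)


class TokenTransformer(nn.Module):
    """Predicts future pose tokens from the observed token sequence."""

    def __init__(self, token_dim=64, num_future=10, nhead=4, num_layers=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=token_dim, nhead=nhead, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.queries = nn.Parameter(torch.randn(num_future, token_dim))  # learned future queries

    def forward(self, tokens):                      # tokens: (B, T_obs, token_dim)
        b = tokens.size(0)
        seq = torch.cat([tokens, self.queries.expand(b, -1, -1)], dim=1)
        out = self.backbone(seq)
        return out[:, tokens.size(1):]              # only the predicted future positions


if __name__ == "__main__":
    gae, transformer = GraphAutoencoder(), TokenTransformer()
    obs = torch.randn(8, 25, 17, 3)                 # batch of 25 observed frames
    tokens = torch.stack([gae.encode(obs[:, t]) for t in range(obs.size(1))], dim=1)
    future_tokens = transformer(tokens)             # (8, 10, 64)
    future_poses = torch.stack([gae.decode(future_tokens[:, t]) for t in range(10)], dim=1)
    print(future_poses.shape)                       # torch.Size([8, 10, 17, 3])
```

In this sketch the autoencoder is the only component that touches raw joint coordinates, so the transformer operates purely on compact tokens; that separation is the fusion idea the abstract describes, though the paper's actual tokenization and prediction heads may differ.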



Author information


Corresponding author

Correspondence to Hamed Tabkhi.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Pazho, A.D., Maldonado, G., Tabkhi, H. (2024). GAT-POSE: Graph Autoencoder-Transformer Fusion for Future Pose Prediction. In: Filipe, J., Röning, J. (eds) Robotics, Computer Vision and Intelligent Systems. ROBOVIS 2024. Communications in Computer and Information Science, vol 2077. Springer, Cham. https://doi.org/10.1007/978-3-031-59057-3_11


  • DOI: https://doi.org/10.1007/978-3-031-59057-3_11


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-59056-6

  • Online ISBN: 978-3-031-59057-3

  • eBook Packages: Computer Science, Computer Science (R0)
