
MHAD: Multi-Human Action Dataset

  • Conference paper
Fourth International Congress on Information and Communication Technology

Part of the book series: Advances in Intelligent Systems and Computing ((AISC,volume 1041))

Abstract

This paper presents a framework for multi-action recognition. Within this framework, we introduce a new approach to detecting and recognizing the actions of several persons in a single scene. Considering the scarcity of related data, we also provide a new dataset containing many persons performing different actions in the same video. Our multi-action recognition method is based on a three-dimensional convolutional neural network (3DCNN) and includes a preprocessing phase that prepares the data for recognition by the 3DCNN model. The new data representation consists of extracting each person's sequence for the duration of their presence in the scene; each sequence is then analyzed to detect the actions it contains. Experimental results show that the method is accurate, efficient, and robust for real-time multi-human action recognition.
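The preprocessing step described in the abstract, cutting one fixed-size clip per tracked person covering the frames in which that person appears, can be sketched roughly as follows. This is a minimal illustration only: the data layout, function name, and fixed clip size are assumptions for the sketch, not the authors' implementation.

```python
import numpy as np

def extract_person_clips(frames, tracks, clip_size=(32, 64, 64)):
    """Cut one fixed-size clip per tracked person from a video.

    frames: array of shape (T, H, W) holding T grayscale frames.
    tracks: dict mapping person_id -> list of (t, x, y, w, h) boxes,
            one per frame in which that person is present.
    Returns a dict person_id -> array of shape clip_size, ready to be
    batched and fed to a 3D CNN.
    """
    depth, out_h, out_w = clip_size
    clips = {}
    for pid, boxes in tracks.items():
        crops = []
        for t, x, y, w, h in boxes[:depth]:
            patch = frames[t, y:y + h, x:x + w]
            # Nearest-neighbour resize to the fixed spatial size.
            ys = np.arange(out_h) * patch.shape[0] // out_h
            xs = np.arange(out_w) * patch.shape[1] // out_w
            crops.append(patch[np.ix_(ys, xs)])
        clip = np.stack(crops)
        # Pad short sequences with their last frame to the fixed depth.
        if clip.shape[0] < depth:
            pad = np.repeat(clip[-1:], depth - clip.shape[0], axis=0)
            clip = np.concatenate([clip, pad])
        clips[pid] = clip
    return clips
```

Each resulting clip has a fixed depth, height, and width, so clips from different persons and videos can be stacked into one batch for the 3DCNN classifier.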


Notes

  1. 1.

    https://drive.google.com/open?id=1pfnnansy4VAejLRKNhCA8fn9IABarwwz.


Acknowledgements

This publication was made possible by NPRP Grant # NPRP8-140-2-065 from the Qatar National Research Fund (a member of the Qatar Foundation). The statements made herein are solely the responsibility of the authors.


Corresponding author

Correspondence to Omar Elharrouss.



Copyright information

© 2020 Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Elharrouss, O., Almaadeed, N., Al-Maadeed, S. (2020). MHAD: Multi-Human Action Dataset. In: Yang, XS., Sherratt, S., Dey, N., Joshi, A. (eds) Fourth International Congress on Information and Communication Technology. Advances in Intelligent Systems and Computing, vol 1041. Springer, Singapore. https://doi.org/10.1007/978-981-15-0637-6_28


  • DOI: https://doi.org/10.1007/978-981-15-0637-6_28


  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-15-0636-9

  • Online ISBN: 978-981-15-0637-6

  • eBook Packages: Engineering (R0)
