Action Recognition by Learning Deep Multi-Granular Spatio-Temporal Video Representation

Published: 06 June 2016

Abstract

Recognizing actions in videos is challenging because video is an information-intensive medium with complex variations. Most existing methods treat video as a flat data sequence, ignoring the intrinsic hierarchical structure of its content. In particular, an action may span multiple granularities of this hierarchy, including, from small to large, a single frame, consecutive frames (motion), a short clip, and the entire video. In this paper, we present a novel framework that boosts action recognition by learning a deep spatio-temporal video representation at multiple hierarchical granularities. Specifically, we model each granularity as a separate stream using 2D (for the frame and motion streams) or 3D (for the clip and video streams) convolutional neural networks (CNNs). The framework thus consists of multi-stream 2D and 3D CNNs that learn both spatial and temporal representations. Furthermore, we employ Long Short-Term Memory (LSTM) networks on the frame, motion, and clip streams to exploit long-term temporal dynamics. With a softmax layer on top of each stream, classification scores are predicted from every stream and then combined by a novel fusion scheme based on the multi-granular score distribution. Our networks are learned end-to-end. On two video action benchmarks, UCF101 and HMDB51, our framework achieves promising performance compared with the state of the art.
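
To make the architecture concrete, the following is a minimal PyTorch sketch of this layout, not the authors' code: all names and dimensions are illustrative assumptions, the per-stream 2D/3D CNN backbones are stood in for by pre-extracted features, the learnable-weight fusion is only a placeholder for the paper's score-distribution fusion scheme, and, for simplicity, an LSTM is placed on every stream even though the paper applies LSTMs only to the frame, motion, and clip streams.

import torch
import torch.nn as nn

class GranularityStream(nn.Module):
    """One granularity stream: an LSTM over pre-extracted CNN features,
    topped by a softmax classifier (hypothetical layer sizes)."""
    def __init__(self, feat_dim, hidden_dim, num_classes):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, feats):                    # feats: (batch, time, feat_dim)
        out, _ = self.lstm(feats)                # model long-term temporal dynamics
        return torch.softmax(self.classifier(out[:, -1]), dim=1)  # per-stream scores

class MultiGranularFusion(nn.Module):
    """Learnable weighted combination of per-stream score distributions;
    a stand-in for the paper's multi-granular score-distribution fusion."""
    def __init__(self, num_streams):
        super().__init__()
        self.w = nn.Parameter(torch.ones(num_streams))

    def forward(self, scores):                   # scores: list of (batch, num_classes)
        w = torch.softmax(self.w, dim=0)         # normalized stream weights
        return sum(wi * s for wi, s in zip(w, scores))

# Usage with dummy features standing in for the 2D/3D CNN outputs
# (4 streams, 101 classes as in UCF101):
streams = nn.ModuleList(GranularityStream(512, 256, 101) for _ in range(4))
fusion = MultiGranularFusion(num_streams=4)
feats = [torch.randn(8, 16, 512) for _ in range(4)]      # one tensor per stream
fused = fusion([s(f) for s, f in zip(streams, feats)])   # (8, 101) class scores

Softmax-normalizing the fusion weights keeps the combined output a valid score distribution however the weights evolve during end-to-end training.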


Published In

ICMR '16: Proceedings of the 2016 ACM on International Conference on Multimedia Retrieval
June 2016
452 pages
ISBN: 9781450343596
DOI: 10.1145/2911996

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 06 June 2016
DOI: 10.1145/2911996.2912001

Author Tags

  1. action recognition
  2. deep learning
  3. video analysis

Qualifiers

  • Research-article

Conference

ICMR'16: International Conference on Multimedia Retrieval
June 6 - 9, 2016
New York, New York, USA

Acceptance Rates

ICMR '16 paper acceptance rate: 20 of 120 submissions (17%)
Overall acceptance rate: 254 of 830 submissions (31%)

