
Short-Term Action Recognition by 3D Convolutional Neural Network with Pixel-Wise Evidences

  • Conference paper

Part of the book series: Communications in Computer and Information Science (CCIS, volume 1212)

Abstract

Action recognition in videos has attracted increasing attention in recent years. The main difficulty lies in extracting the temporal information that characterizes the target actions. In this paper, we propose a conceptually simple network for short-term action recognition. The proposed architecture extends a standard neural network to an autoencoder that estimates pixel-wise evidences in the frames, which are then integrated by a simple classifier to recognize the action. In the proposed architecture, the standard 2D convolutional layers used for image classification are extended to 3D convolutional layers in the autoencoder to capture the temporal information of the target actions. In the training phase, auxiliary classifiers are introduced at intermediate layers so that the features of the middle layers become well discriminated, and additional classifiers are introduced at the final layers to improve the performance of the standard classifier. We performed experiments on the UCF101 dataset to evaluate the effectiveness of the proposed architecture. The results show that our method achieves effective performance in short-term action recognition.
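
The abstract describes a 3D convolutional autoencoder that outputs pixel-wise evidence maps, a simple classifier that integrates those evidences into an action prediction, and auxiliary classifiers at intermediate layers. The PyTorch sketch below is one plausible reading of that design, not the authors' actual network: the class name, layer widths, kernel sizes, and the pooling-based classifier head are all illustrative assumptions.

```python
# Minimal sketch (assumed configuration, not the paper's exact model):
# a 3D-convolutional autoencoder produces per-class pixel-wise evidence maps,
# a simple classifier integrates them by averaging over space and time, and
# an auxiliary classifier on the bottleneck keeps mid-level features discriminative.

import torch
import torch.nn as nn


class Evidence3DAutoencoder(nn.Module):
    def __init__(self, num_classes: int = 101, in_channels: int = 3):
        super().__init__()
        # Encoder: 3D convolutions extract spatio-temporal features from the clip.
        self.encoder = nn.Sequential(
            nn.Conv3d(in_channels, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        # Auxiliary classifier on the bottleneck ("middle of the layers"),
        # trained with the same action labels during the training phase.
        self.aux_classifier = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),
            nn.Flatten(),
            nn.Linear(64, num_classes),
        )
        # Decoder: transposed 3D convolutions restore the input resolution and
        # emit one pixel-wise evidence map per action class.
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(64, 32, kernel_size=4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose3d(32, num_classes, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, clip: torch.Tensor):
        # clip: (batch, channels, frames, height, width)
        features = self.encoder(clip)
        aux_logits = self.aux_classifier(features)
        evidence = self.decoder(features)        # (batch, classes, frames, H, W)
        # Simple classifier: integrate the pixel-wise evidences by averaging
        # over time and space to obtain per-class logits.
        logits = evidence.mean(dim=(2, 3, 4))
        return logits, aux_logits, evidence


if __name__ == "__main__":
    model = Evidence3DAutoencoder(num_classes=101)
    clip = torch.randn(2, 3, 16, 112, 112)       # two 16-frame RGB clips
    logits, aux_logits, evidence = model(clip)
    print(logits.shape, aux_logits.shape, evidence.shape)
```

In such a setup, the main classification loss, the auxiliary loss on the bottleneck, and any losses on the final-layer classifiers would be summed during training; only the main logits would be used at test time.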

Author information


Corresponding author

Correspondence to XiaoHan Wang.



Copyright information

© 2020 Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Wang, X., Miyao, J., Kurita, T. (2020). Short-Term Action Recognition by 3D Convolutional Neural Network with Pixel-Wise Evidences. In: Ohyama, W., Jung, S. (eds) Frontiers of Computer Vision. IW-FCV 2020. Communications in Computer and Information Science, vol 1212. Springer, Singapore. https://doi.org/10.1007/978-981-15-4818-5_6

  • DOI: https://doi.org/10.1007/978-981-15-4818-5_6

  • Published:

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-15-4817-8

  • Online ISBN: 978-981-15-4818-5

  • eBook Packages: Computer Science, Computer Science (R0)
