Auxiliary criterion conversion via spatiotemporal semantic encoding and feature entropy for action recognition

  • Original article
  • Published in The Visual Computer

Abstract

Video-based action recognition in realistic scenes is a core technology for human–computer interaction and smart surveillance. Although trajectory features combined with the bag-of-visual-words model have achieved promising performance, they cannot effectively encode the spatiotemporal interaction information that is valuable for classification. To address this issue, we propose a spatiotemporal semantic feature (ST-SF) and convert it into an auxiliary criterion based on information entropy theory. First, we present a text-based relevance analysis method to estimate the textual labels of the objects most relevant to each action, which are then used to train more targeted detectors based on a deep network. False detections are refined through inter-frame cooperativity and dynamic programming to construct valid object tubes. Next, we design the ST-SF to encode the interaction information and define the concept and calculation of feature entropy based on the spatial distribution of ST-SFs over the training set. Finally, we apply a two-stage classification strategy that exploits the resulting decision gains. Experimental results on three publicly available datasets demonstrate that our method is robust and improves upon state-of-the-art algorithms.
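
As a rough illustration of the feature-entropy idea described in the abstract, the sketch below computes the Shannon entropy of the spatial distribution of ST-SF centers on a discretized grid and maps per-class entropies to normalized decision gains for a second classification stage. This is a minimal sketch based only on the abstract, not the authors' implementation: the grid discretization, the `feature_entropy` and `decision_gains` functions, and the gain normalization are illustrative assumptions.

```python
import numpy as np


def feature_entropy(st_sf_positions, grid_size=(8, 8)):
    """Shannon entropy (bits) of the spatial distribution of ST-SF centers.

    st_sf_positions: (N, 2) array of (x, y) centers normalized to [0, 1],
    collected for one action class over the training set (assumed layout).
    """
    # Discretize the normalized positions into a coarse spatial grid.
    hist, _, _ = np.histogram2d(
        st_sf_positions[:, 0], st_sf_positions[:, 1],
        bins=grid_size, range=[[0.0, 1.0], [0.0, 1.0]],
    )
    p = hist.flatten() / max(hist.sum(), 1.0)  # empirical cell probabilities
    p = p[p > 0]                               # ignore empty cells
    return float(-(p * np.log2(p)).sum())


def decision_gains(class_entropies, grid_size=(8, 8)):
    """Map per-class feature entropies to gains in [0, 1]: the more
    spatially concentrated a class's ST-SFs (lower entropy), the larger
    the gain contributed to its score in the second stage (illustrative)."""
    e = np.asarray(class_entropies, dtype=float)
    e_max = np.log2(grid_size[0] * grid_size[1])  # uniform-distribution bound
    return 1.0 - e / e_max


# Example: two classes, one with tightly clustered ST-SFs, one spread out.
rng = np.random.default_rng(0)
clustered = 0.5 + 0.05 * rng.standard_normal((500, 2))
spread = rng.random((500, 2))
gains = decision_gains([feature_entropy(clustered), feature_entropy(spread)])
print(gains)  # the spatially concentrated class receives the larger gain
```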



Acknowledgements

This research was financially supported by the 2017 BJUT United Grand Scientific Research Program on Intelligent Manufacturing (No. 040000546317552) and the National Natural Science Foundation of China (Nos. 61175087, 61703012).

Author information

Corresponding author

Correspondence to Guoliang Zhang.

Ethics declarations

Funding

This study was funded by the 2017 BJUT United Grand Scientific Research Program on Intelligent Manufacturing (Grant number 040000546317552) and the National Natural Science Foundation of China (Grant number 61175087, Grant number 61703012).

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Meng, X., Zhang, G., Jia, S. et al. Auxiliary criterion conversion via spatiotemporal semantic encoding and feature entropy for action recognition. Vis Comput 37, 1673–1690 (2021). https://doi.org/10.1007/s00371-020-01931-4
