Spatial-Temporal Feature-Based Sports Video Classification

Zengkai Wang
International Journal of Ambient Computing and Intelligence (IJACI)
Copyright: © 2021 | Volume: 12 | Issue: 4 | Pages: 79-97
ISSN: 1941-6237 | EISSN: 1941-6245 | EISBN13: 9781799860297 | DOI: 10.4018/IJACI.2021100105

Abstract

Video classification has been an active research field in computer vision in recent years. Its goal is to assign a relevant label to a video given its frames. Unlike image classification, which takes a still picture as input, video classification operates on a sequence of images. The complex spatial and temporal structure of a video sequence makes it difficult to understand and costly to compute, and must be modeled explicitly to improve classification performance. This work focuses on sports video classification but can be extended to other applications. In this paper, the authors propose a novel sports video classification method that processes video data with a convolutional neural network (CNN) equipped with a spatial attention mechanism and a deep bidirectional long short-term memory (BiLSTM) network equipped with a temporal attention mechanism. The method first extracts 28 frames from each input video and uses a classical pre-trained CNN to extract deep features; the spatial attention mechanism is applied to the CNN features to decide 'where' to look. The BiLSTM then models the long-term temporal dependencies between video frames, and the temporal attention mechanism decides 'when' to look. Finally, a classification network outputs the label of the input video. To evaluate the feasibility and effectiveness of the proposed method, an extensive experimental investigation was conducted on the challenging open sports video datasets Sports8 and Olympic16. The results show that the proposed CNN-BiLSTM network with a spatial-temporal attention mechanism can effectively model the spatial-temporal characteristics of video sequences. The average classification accuracy on Sports8 is 98.8%, which is 6.8% higher than existing methods, and the average classification accuracy on Olympic16 is 90.46%, about 18% higher than existing methods. The proposed approach outperforms state-of-the-art methods, and the experimental results demonstrate its effectiveness.
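To make the pipeline concrete, the sketch below assembles a CNN-BiLSTM architecture of the kind the abstract describes: per-frame CNN features, spatial attention over each feature map, a BiLSTM over the frame sequence, temporal attention over its outputs, and a final classifier. This is a minimal PyTorch illustration under stated assumptions, not the paper's implementation: the ResNet-18 backbone, the hidden size of 256, the single-layer attention scorers, and all module names are introduced here for clarity only.

import torch
import torch.nn as nn
import torchvision.models as models


class SpatialAttention(nn.Module):
    """Weights each spatial location of a CNN feature map ('where' to look)."""
    def __init__(self, channels):
        super().__init__()
        self.score = nn.Conv2d(channels, 1, kernel_size=1)  # per-location score

    def forward(self, feat):                         # feat: (N, C, H, W)
        n, c, h, w = feat.shape
        attn = torch.softmax(self.score(feat).view(n, 1, h * w), dim=-1)
        attn = attn.view(n, 1, h, w)
        return (feat * attn).sum(dim=(2, 3))         # (N, C) attended frame vector


class TemporalAttention(nn.Module):
    """Weights each frame's BiLSTM output ('when' to look)."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, seq):                          # seq: (B, T, D)
        attn = torch.softmax(self.score(seq), dim=1) # (B, T, 1), softmax over time
        return (seq * attn).sum(dim=1)               # (B, D) video descriptor


class SportsVideoClassifier(nn.Module):
    def __init__(self, num_classes, hidden=256):
        super().__init__()
        # Assumed backbone; set weights=None to run without downloading weights.
        backbone = models.resnet18(weights="IMAGENET1K_V1")
        self.cnn = nn.Sequential(*list(backbone.children())[:-2])  # keep spatial map
        self.spatial_attn = SpatialAttention(512)
        self.bilstm = nn.LSTM(512, hidden, batch_first=True, bidirectional=True)
        self.temporal_attn = TemporalAttention(2 * hidden)
        self.classifier = nn.Linear(2 * hidden, num_classes)

    def forward(self, clip):                         # clip: (B, T, 3, H, W)
        b, t = clip.shape[:2]
        feats = self.cnn(clip.flatten(0, 1))         # (B*T, 512, h, w)
        frame_vecs = self.spatial_attn(feats).view(b, t, -1)  # (B, T, 512)
        seq, _ = self.bilstm(frame_vecs)             # (B, T, 2*hidden)
        video_vec = self.temporal_attn(seq)          # (B, 2*hidden)
        return self.classifier(video_vec)            # class logits


# Example: a batch of two 28-frame clips, matching the paper's frame-sampling step.
model = SportsVideoClassifier(num_classes=8)
logits = model(torch.randn(2, 28, 3, 224, 224))      # -> shape (2, 8)

In this sketch both attention modules follow the same pattern, scoring each position (spatial location or time step) and taking a softmax-weighted sum; the actual scoring functions, backbone, and training details in the paper may differ.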
