
Action Recognition in Australian Rules Football Through Deep Learning

  • Conference paper
  • Published in: Computational Science – ICCS 2022 (ICCS 2022)
  • Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13352)

Abstract

Understanding players’ actions and activities in sports is crucial to analysing player and team performance. Within Australian Rules football, such data is typically captured manually by multiple (paid) spectators working for sports data analytics companies, and is augmented with data from GPS tracking devices in player clothing. This paper explores the feasibility of action recognition in Australian Rules football through deep learning, specifically three-dimensional Convolutional Neural Networks (3D CNNs). We identify several key actions that players perform: kick, pass, mark and contested mark, as well as non-action events such as images of the crowd or players running with the ball. We explore various state-of-the-art deep learning architectures and develop a custom data set containing over 500 video clips targeted specifically at Australian Rules football. We fine-tune a variety of models and achieve a top-1 accuracy of 77.45% using an R(2+1)D ResNet-152. We also consider team and player identification and tracking using the You Only Look Once (YOLO) and Simple Online and Realtime Tracking with a deep association metric (DeepSORT) algorithms. To the best of our knowledge, this is the first paper to address action recognition in Australian Rules football.
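
The 3D CNNs mentioned in the abstract extend ordinary 2D convolutions with a temporal dimension: a t × h × w kernel slides over a stack of video frames, so each output value aggregates information across both space and time. As a minimal illustration of that core operation only (not the paper's actual pipeline, which fine-tunes pretrained networks such as an R(2+1)D ResNet-152), here is a hedged, single-channel 3D convolution sketched in pure Python:

```python
def conv3d(video, kernel):
    """Naive single-channel 3D convolution (valid padding, stride 1).

    video  : nested lists of shape T x H x W (frames, height, width)
    kernel : nested lists of shape t x h x w
    Returns nested lists of shape (T-t+1) x (H-h+1) x (W-w+1).
    """
    T, H, W = len(video), len(video[0]), len(video[0][0])
    t, h, w = len(kernel), len(kernel[0]), len(kernel[0][0])
    out = []
    for i in range(T - t + 1):          # slide over time
        frame = []
        for j in range(H - h + 1):      # slide over height
            row = []
            for k in range(W - w + 1):  # slide over width
                s = 0.0
                for di in range(t):
                    for dj in range(h):
                        for dk in range(w):
                            s += video[i + di][j + dj][k + dk] * kernel[di][dj][dk]
                row.append(s)
            frame.append(row)
        out.append(frame)
    return out


# Demo: a 4-frame 4x4 clip of ones convolved with a 2x2x2 all-ones kernel.
video = [[[1.0] * 4 for _ in range(4)] for _ in range(4)]
kernel = [[[1.0] * 2 for _ in range(2)] for _ in range(2)]
result = conv3d(video, kernel)
print(len(result), len(result[0]), len(result[0][0]))  # 3 3 3
print(result[0][0][0])  # 8.0 (sum of the eight covered voxels)
```

The R(2+1)D architecture used in the paper factorises each such t × h × w kernel into a 2D spatial convolution followed by a 1D temporal one, which is cheaper to train than the full 3D kernel shown here.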




Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Luan, S.K., Yin, H., Sinnott, R. (2022). Action Recognition in Australian Rules Football Through Deep Learning. In: Groen, D., de Mulatier, C., Paszynski, M., Krzhizhanovskaya, V.V., Dongarra, J.J., Sloot, P.M.A. (eds) Computational Science – ICCS 2022. ICCS 2022. Lecture Notes in Computer Science, vol 13352. Springer, Cham. https://doi.org/10.1007/978-3-031-08757-8_47


  • DOI: https://doi.org/10.1007/978-3-031-08757-8_47

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-08756-1

  • Online ISBN: 978-3-031-08757-8

  • eBook Packages: Computer Science, Computer Science (R0)
