Abstract
In recent years, the computer vision community has been witnessing a gradual shift from CNN-based to attention-based architectures such as Transformers, which have attained top accuracy on the major recognition tasks. Historically, most research has concentrated on image-level recognition as the basic task, but there is still considerable room for improvement on video-level tasks. Video models are generally built on top of image-level architectures, with attention mechanisms or 3D-CNN layers providing global connections across the spatial and temporal dimensions. While recent Transformer studies have advocated mainly architectural changes, CNN-based approaches have focused on best practices for training neural networks, which has led to a better speed-accuracy trade-off. Such best practices are yet to be developed for video recognition tasks. The major contribution of this paper is a novel approach to video-model training, based on unsupervised mining of hard samples so that training focuses on them rather than on the whole dataset. By incorporating this approach into the training pipeline of the Video Swin Transformer, we achieved state-of-the-art accuracy on the Kinetics-400 dataset (84.4% and 96.5% Top-1 and Top-5 accuracy, respectively).
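To make the idea concrete, below is a minimal PyTorch sketch of one way such unsupervised hard-sample mining could look: every clip is embedded by a frozen backbone, the embeddings are clustered with K-means, and the clips farthest from their assigned centroid are kept as the "hard" subset for focused training. The function name `mine_hard_samples`, the centroid-distance hardness criterion, and all parameter values are illustrative assumptions, not the authors' exact procedure.

```python
import torch
from torch.utils.data import DataLoader, Subset
from sklearn.cluster import KMeans

def mine_hard_samples(model, dataset, fraction=0.2, n_clusters=50,
                      batch_size=8, device="cuda"):
    """Unsupervised hard-sample mining (illustrative sketch).

    Embeds every clip, clusters the embeddings with K-means, and treats
    the clips farthest from their assigned centroid as "hard", i.e.
    poorly represented by the current model. Assumes `dataset` yields
    (clip, label) pairs and that `model` returns feature embeddings.
    """
    loader = DataLoader(dataset, batch_size=batch_size, shuffle=False)
    feats = []
    model.eval()
    with torch.no_grad():
        for clips, _ in loader:                     # labels are never used
            feats.append(model(clips.to(device)).cpu())
    feats = torch.cat(feats).numpy()

    # Cluster the embedding space and measure each sample's distance
    # to its assigned centroid.
    kmeans = KMeans(n_clusters=n_clusters, n_init=10).fit(feats)
    dists = ((feats - kmeans.cluster_centers_[kmeans.labels_]) ** 2).sum(axis=1)

    # Keep the `fraction` of samples farthest from their centroids.
    k = int(fraction * len(dataset))
    hard_indices = dists.argsort()[-k:].tolist()
    return Subset(dataset, hard_indices)
```

The returned `Subset` can then be wrapped in a fresh `DataLoader` for additional training epochs that concentrate on the mined hard samples, either alone or interleaved with the full dataset.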
Cite this paper
Zarichkovyi, A., Stetsenko, I.V. (2023). Hard Samples Make Difference: An Improved Training Procedure for Video Action Recognition Tasks. In: Arai, K. (ed.) Intelligent Systems and Applications. IntelliSys 2022. Lecture Notes in Networks and Systems, vol. 544. Springer, Cham. https://doi.org/10.1007/978-3-031-16075-2_36