Abstract
This paper introduces a fast and efficient network architecture, NeXtVLAD, to aggregate frame-level features into a compact feature vector for large-scale video classification. Briefly speaking, the basic idea is to decompose a high-dimensional feature into a group of relatively low-dimensional vectors with attention before applying NetVLAD aggregation over time. This NeXtVLAD approach turns out to be both effective and parameter efficient in aggregating temporal information. In the 2nd Youtube-8M video understanding challenge, a single NeXtVLAD model with fewer than 80M parameters achieves a GAP score of 0.87846 on the private leaderboard. A mixture of 3 NeXtVLAD models achieves 0.88722, which is ranked 3rd among 394 teams. The code is publicly available at https://github.com/linrongc/youtube-8m.
1 Introduction
The prevalence of digital cameras and smart phones has exponentially increased the number of videos that are uploaded, watched and shared through the internet. Automatic video content classification has therefore become a critical and challenging problem in many real-world applications, including video-based search, recommendation and intelligent robots. To accelerate the pace of research in video content analysis, Google AI launched the second Youtube-8M video understanding challenge, aiming to learn more compact video representations under limited budget constraints. Because of the unprecedented scale and diversity of the Youtube-8M dataset [1], the organizers also provided frame-level visual and audio features extracted by pre-trained convolutional neural networks (CNNs). The main challenge is how to aggregate such pre-extracted features into a compact video-level representation effectively and efficiently.
NetVLAD, which was developed to aggregate spatial representations for the task of place recognition [2], was found to be more effective and faster than common temporal models, such as LSTM [3] and GRU [4], for the temporal aggregation of visual and audio features [5]. One of the main drawbacks of NetVLAD is that the encoded features are high-dimensional. A non-trivial classification model based on those features would need hundreds of millions of parameters. For instance, a NetVLAD network with 128 clusters will encode a 2048-dimensional feature as a vector of 262,144 dimensions, and a subsequent fully-connected layer with 2048-dimensional outputs will then contain about 537M parameters. Such parameter inefficiency makes the model harder to optimize and more prone to overfitting.
To handle this parameter inefficiency, and inspired by the work of ResNeXt [6], we developed a novel neural network architecture, NeXtVLAD. Different from NetVLAD, the input features are decomposed into a group of relatively lower-dimensional vectors with attention before they are encoded and aggregated over time. The underlying assumption is that one video frame may contain multiple objects, so decomposing the frame-level features before encoding helps the model produce a more concise video representation. Experimental results on the Youtube-8M dataset demonstrate that our proposed model is more effective and parameter-efficient than the original NetVLAD model. Moreover, the NeXtVLAD model converges faster and is more resistant to overfitting.
2 Related Works
In this section, we provide a brief review of the research most relevant to feature aggregation and video classification.
2.1 Feature Aggregation for Compact Video Representation
Before the era of deep neural networks, researchers proposed many encoding methods, including BoW (Bag of visual Words) [7], FV (Fisher Vector) [8] and VLAD (Vector of Locally Aggregated Descriptors) [9], to aggregate local image descriptors into a global compact vector, aiming to achieve more compact image representations and improve the performance of large-scale visual recognition. Such aggregation methods were also applied to large-scale video classification in some early works [10, 11]. Recently, [2] proposed a differentiable module, NetVLAD, to integrate VLAD into neural networks and achieved significant improvements for the task of place recognition. The architecture was then shown to be very effective in aggregating spatial and temporal information for compact video representation [5, 12].
2.2 Deep Neural Networks for Large-Scale Video Classification
Recently, with the availability of large-scale video datasets [1, 13, 14] and the massive computation power of GPUs, deep neural networks have achieved remarkable advances in the field of large-scale video classification [15,16,17,18]. These approaches can be roughly divided into four categories: (a) Spatiotemporal Convolutional Networks [13, 17, 18], which mainly rely on convolution and pooling to aggregate temporal information along with spatial information. (b) Two-Stream Networks [16, 19,20,21], which utilize stacked optical flow to recognize human motions in addition to the context frame images. (c) Recurrent Spatial Networks [15, 22], which apply recurrent neural networks, such as LSTM or GRU, to model temporal information in videos. (d) Other approaches [23,24,25,26], which use alternative solutions to generate compact features for video representation and classification.
3 Network Architecture for NeXtVLAD
We will first review the NetVLAD aggregation model before we dive into the details of our proposed NeXtVLAD model for feature aggregation and video classification.
3.1 NetVLAD Aggregation Network for Video Classification
Consider a video with M frames, from which N-dimensional frame-level descriptors x are extracted frame by frame using a pre-trained CNN. In NetVLAD aggregation with K clusters, each frame-level descriptor is first encoded into a feature vector of \(N \times K\) dimensions using the following equation:
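(written here in the standard NetVLAD form)

$$v_{ijk} = \alpha _k(x_i)\,\big (x_{ij} - c_{kj}\big ), \qquad i \in \{1,\ldots ,M\},\; j \in \{1,\ldots ,N\},\; k \in \{1,\ldots ,K\},$$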
where \(c_{k}\) is the N-dimensional anchor point of cluster k and \(\alpha _k(x_i)\) is a soft assignment function of \(x_i\) to cluster k, which measures the proximity of \(x_i\) and cluster k. The proximity function is modeled using a single fully-connected layer with softmax activation,
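i.e., with learnable weights \(w_k\) and biases \(b_k\),

$$\alpha _k(x_i) = \frac{e^{w_k^{\top } x_i + b_k}}{\sum _{s=1}^{K} e^{w_s^{\top } x_i + b_s}}.$$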
Secondly, a video-level descriptor y can be obtained by aggregating all the frame-level features,
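(summing the encoded features over all frames)

$$y_{jk} = \sum _{i=1}^{M} v_{ijk},$$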
and intra-normalization is applied to suppress bursts [27]. Finally, the constructed video-level descriptor y is reduced to an H-dimensional hidden vector via a fully-connected layer before being fed into the final video-level classifier.
As shown in Fig. 1, the number of parameters of the NetVLAD model before video-level classification is approximately
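(counting the soft-assignment layer, the cluster anchors and the dimension reduction layer, and ignoring bias terms)

$$N \times K + N \times K + N \times K \times H = NK(H + 2),$$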
where the dimension reduction layer (the second fully-connected layer) accounts for the majority of the total parameters. For instance, a NetVLAD model with \(N=1024\), \(K=128\) and \(H=2048\) contains more than 268M parameters.
3.2 NeXtVLAD Aggregation Network
In our NeXtVLAD aggregation network, the input vector \(x_i\) is first expanded into \(\dot{x}_i\) with a dimension of \(\lambda N\) via a linear fully-connected layer, where \(\lambda \) is a width multiplier set to 2 in all of our experiments. Then a reshape operation transforms \(\dot{x}\) with a shape of \((M, \lambda N)\) into \(\tilde{x}\) with a shape of \((M, G, \lambda N/G)\), in which G is the number of groups. This process is equivalent to splitting \(\dot{x}_i\) into G lower-dimensional feature vectors \(\Big \{\tilde{x}^g_i \Big | g \in \{1,\ldots ,G\}\Big \}\), each of which is subsequently represented as a mixture of residuals from cluster anchor points \(c_k\) in the same lower-dimensional space:
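(written analogously to the NetVLAD encoding above)

$$v^g_{ijk} = \alpha _{gk}(\dot{x}_i)\,\alpha _{g}(\dot{x}_i)\,\big (\tilde{x}^g_{ij} - c_{kj}\big ), \qquad j \in \{1,\ldots ,\lambda N/G\},\; k \in \{1,\ldots ,K\},$$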
where the proximity measurement of the decomposed vector \(\tilde{x}_i^g\) consists of two parts for the cluster k:
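(a natural parameterization, with a softmax over the clusters within each group and a sigmoid gate per group, using learnable parameters \(w_{gk}\), \(b_{gk}\), \(w_g\) and \(b_g\))

$$\alpha _{gk}(\dot{x}_i) = \frac{e^{w_{gk}^{\top }\dot{x}_i + b_{gk}}}{\sum _{s=1}^{K} e^{w_{gs}^{\top }\dot{x}_i + b_{gs}}}, \qquad \alpha _{g}(\dot{x}_i) = \sigma \big (w_{g}^{\top }\dot{x}_i + b_{g}\big ),$$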
in which \(\sigma (.)\) is a sigmoid function with outputs between 0 and 1. The first part \(\alpha _{gk}(\dot{x}_i)\) measures the soft assignment of \(\tilde{x}^g_i\) to cluster k, while the second part \(\alpha _{g}(\dot{x}_i)\) can be regarded as an attention function over groups.
Then, a video-level descriptor is obtained by aggregating the encoded vectors over time and groups:
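(summing the grouped encodings over both frames and groups)

$$y_{jk} = \sum _{i=1}^{M}\sum _{g=1}^{G} v^g_{ijk},$$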
after which we apply an intra-normalization operation, a dimension reduction fully-connected layer and a video-level classifier, the same as in the NetVLAD aggregation network.
As noted in Fig. 2, because the dimension of the video-level descriptor \(y_{jk}\) is reduced by a factor of G compared to NetVLAD, the number of parameters shrinks accordingly. Specifically, the total number of parameters is:
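(accounting for the expansion layer, the assignment and attention layers, the cluster anchors and the dimension reduction layer, and ignoring biases)

$$\lambda N \Big (N + G(K+1) + \frac{K(H+1)}{G}\Big ).$$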
Since G is much smaller than H and N, roughly speaking, the number of parameters of NeXtVLAD is about \(\frac{G}{\lambda }\) times smaller than that of NetVLAD. For instance, a NeXtVLAD network with \(\lambda =2\), \(G=8\), \(N=1024\), \(K=128\) and \(H=2048\) contains only 71M+ parameters, about 4 times fewer than the 268M+ of NetVLAD (Fig. 3).
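To make the aggregation step concrete, the following is a minimal NumPy sketch of the NeXtVLAD forward pass for a single video. The weight names in the params dictionary are illustrative only; biases, batch normalization and the subsequent dimension reduction layer are omitted, and the official TensorFlow implementation is available in the repository linked in the abstract.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def nextvlad_forward(x, params, groups=8, expansion=2):
    """NeXtVLAD aggregation of frame-level features x of shape (M, N).

    params holds hypothetical weight matrices:
      W_expand: (N, expansion * N)              -- linear expansion layer
      W_assign: (expansion * N, groups * K)     -- soft cluster assignment
      W_group:  (expansion * N, groups)         -- attention over groups
      clusters: (expansion * N // groups, K)    -- anchor points c_k
    Returns a flattened video-level descriptor of length K * expansion * N / groups.
    """
    M, N = x.shape
    K = params["clusters"].shape[1]
    D = expansion * N                                    # lambda * N

    x_dot = x @ params["W_expand"]                       # (M, D) expanded features
    attn = sigmoid(x_dot @ params["W_group"])            # (M, G) attention over groups
    assign = softmax((x_dot @ params["W_assign"]).reshape(M, groups, K), axis=-1)
    assign = assign * attn[:, :, None]                   # combine assignment and attention

    x_tilde = x_dot.reshape(M, groups, D // groups)      # (M, G, D/G) grouped features
    # aggregate residuals over frames and groups: y_jk = sum_{i,g} a_igk * (x_igj - c_jk)
    vlad = np.einsum("mgk,mgj->jk", assign, x_tilde)     # weighted feature sum
    vlad -= params["clusters"] * assign.sum(axis=(0, 1))  # subtract weighted anchors
    vlad /= np.linalg.norm(vlad, axis=0, keepdims=True) + 1e-6  # intra-normalization
    return vlad.reshape(-1)
```

With randomly initialized weights of the shapes listed above, the returned descriptor has length \(K \times \lambda N / G\), i.e., 32,768 for the example configuration in the previous paragraph.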
3.3 NeXtVLAD Model and SE Context Gating
The basic model we used for the 2nd Youtube-8M challenge has a similar architecture to the winning solution [5] of the first Youtube-8M challenge. Video and audio features are encoded and aggregated separately with a two-stream architecture. The aggregated representation is enhanced by an SE Context Gating module, aiming to model the dependencies among labels. Finally, a logistic classifier with sigmoid activation is adopted for video-level multi-label classification.
Inspired by the work of Squeeze-and-Excitation networks [28], as shown in Fig. 4, the SE Context Gating consists of 2 fully-connected layers with fewer parameters than the original Context Gating introduced in [5]. The total number of parameters is:
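(assuming a gated activation of dimension \(F\), a symbol used here only for illustration, and ignoring bias and batch-normalization terms, the two fully-connected layers of shapes \(F \times \frac{F}{r}\) and \(\frac{F}{r} \times F\) contribute roughly)

$$\frac{2F^2}{r},$$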
where r denotes the reduction ratio, which is set to 8 or 16 in our experiments. During the competition, we found that reversing the whitening process, which is applied after the PCA dimensionality reduction of the frame-level features, is beneficial for the generalization performance of the NeXtVLAD model. A possible reason is that whitening after PCA distorts the feature space by eliminating the different contributions of feature dimensions to distance measurements, which could be critical for the encoder to find better anchor points and soft assignments for each input feature. Since the eigenvalues \(\big \{e_j\big | j \in \{1,\ldots,N\}\big \}\) of the PCA transformation are released by the Google team, we are able to reverse the whitening process by:
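(rescaling each dimension with the square root of the corresponding eigenvalue)

$$\hat{x}_j = x_j \cdot \sqrt{e_j}, \qquad j \in \{1,\ldots ,N\},$$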
where x and \(\hat{x}\) are the input and the reversed vector, respectively.
3.4 Knowledge Distillation with On-the-Fly Naive Ensemble
Knowledge distillation [29,30,31] was designed to transfer the generalization ability of a cumbersome teacher model to a relatively simpler student network by using the predictions of the teacher model as additional “soft targets” during training. During the competition, we tried the network architecture introduced in [32] to distill knowledge from an on-the-fly mixture prediction to each sub-model.
As shown in Fig. 5, the logits of the mixture prediction \(z^e\) are a weighted sum of the logits \(\big \{z^m \big | m \in \{1,2,3\} \big \}\) from the 3 corresponding sub-models:
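(written explicitly)

$$z^e = \sum _{m=1}^{3} a_m(\bar{x})\, z^m,$$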
where \(a_m(.)\) represents the gating network,
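modeled here as a single fully-connected layer with softmax activation over the three sub-models (with parameters \(w_m\) and \(b_m\)):

$$a_m(\bar{x}) = \frac{e^{w_m^{\top }\bar{x} + b_m}}{\sum _{n=1}^{3} e^{w_n^{\top }\bar{x} + b_n}},$$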
and \(\bar{x}\) represents the frame mean of input features x. The knowledge of the mixture prediction is distilled to each sub-model through minimizing the KL divergence written as:
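(for sub-model m)

$$\mathcal {L}_{kl}^{m} = \sum _{c=1}^{C} p_c(z^e, T)\,\log \frac{p_c(z^e, T)}{p_c(z^m, T)},$$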
where C is the total number of class labels and p(.) represents the rank soft prediction:
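(a temperature-scaled softmax over the class logits)

$$p_c(z, T) = \frac{e^{z_c/T}}{\sum _{j=1}^{C} e^{z_j/T}},$$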
where T is a temperature that adjusts the relative importance of the logits. As suggested in [29], a larger T increases the importance of logits with smaller values and encourages the models to share more knowledge about the learned similarity measurements of the task space. The final loss of the model is:
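(one standard form, with the distillation terms scaled by \(T^2\) following the convention of [29])

$$\mathcal {L} = \mathcal {L}_{bce}^{e} + \sum _{m=1}^{3} \mathcal {L}_{bce}^{m} + T^2 \sum _{m=1}^{3} \mathcal {L}_{kl}^{m},$$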
where \(\mathcal {L}_{bce}^m\) (\(\mathcal {L}_{bce}^e\)) denotes the binary cross entropy between the ground truth labels and the prediction of sub-model m (the mixture prediction).
4 Experimental Results
This section provides the implementation details and presents our experimental results on the Youtube-8M dataset [1].
4.1 Youtube-8M Dataset
The Youtube-8M dataset (2018) consists of about 6.1M videos from Youtube.com, each of which has at least 1000 views, a duration between 120 and 300 s, and one or multiple tags (labels) from a vocabulary of 3862 visual entities. These videos are further split into 3 partitions: train (70%), validate (20%) and test (10%). Along with the video ids and labels, visual and audio features are provided for every second of each video, which are referred to as frame-level features. The visual features consist of the hidden representations immediately prior to the classification layer in Inception [33], which is pre-trained on Imagenet [34]. The audio features are extracted from an audio classification CNN [35]. PCA and whitening are then applied to reduce the dimensions of the visual and audio features to 1024 and 128, respectively.
In the 2nd Youtube-8M video understanding challenge, submissions are evaluated using Global Average Precision (GAP) at 20. For each video, the predictions are sorted by confidence and the GAP score is calculated as:
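(with P denoting the total number of predictions pooled over all videos, at most 20 per video)

$$GAP = \sum _{i=1}^{P} p(i)\,\big (r(i) - r(i-1)\big ),$$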
in which p(i) is the precision and r(i) is the recall given the top i predictions.
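As an illustration, a minimal Python sketch of this metric follows; the function name and array layout are our own, and the official evaluation uses the organizers' code.

```python
import numpy as np

def gap_at_k(scores, labels, k=20):
    """Global Average Precision with at most k predictions kept per video.

    scores: (num_videos, num_classes) array of prediction confidences.
    labels: (num_videos, num_classes) binary ground-truth matrix.
    """
    confidences, hits = [], []
    for s, y in zip(scores, labels):
        top = np.argsort(s)[::-1][:k]           # top-k classes for this video
        confidences.extend(s[top])
        hits.extend(y[top])
    order = np.argsort(confidences)[::-1]        # sort the pooled predictions by confidence
    hits = np.asarray(hits, dtype=np.float64)[order]
    precisions = np.cumsum(hits) / (np.arange(len(hits)) + 1.0)
    total_positives = max(labels.sum(), 1.0)     # recall denominator
    return float(np.sum(precisions * hits) / total_positives)
```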
4.2 Implementation Details
Our implementation is based on the TensorFlow [36] starter code. All of the models are trained using the Adam optimizer [37] with an initial learning rate of 0.0002 on two Nvidia 1080 TI GPUs. The batch size is set to 160 (80 on each GPU). We apply an \(l_2\) regularizer (with weight 1e-5) to the parameters of the video-level classifier and use a dropout ratio of 0.5 to avoid overfitting. No data augmentation is used in training NeXtVLAD models, and the padding frames are masked out during the aggregation process via:
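(multiplying the per-frame soft assignments with a binary frame indicator \(m_i\); the notation \(m_i\) is introduced here)

$$\alpha _{gk}(\dot{x}_i) \leftarrow \alpha _{gk}(\dot{x}_i) \cdot m_i,$$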
where \(m_i = 1\) if frame i is an actual frame of the video and \(m_i = 0\) if it is a padding frame.
In all the local experiments, models are trained for 5 epochs (about 120k steps) using only the training partition, and the learning rate is exponentially decayed by a factor of 0.8 every 2M samples. The model is then evaluated on only about \(\frac{1}{10}\) of the validation partition; the resulting score is consistently about 0.002 lower than the public leaderboard score for the same models. The final submission model is trained for 15 epochs (about 460k steps) using both the training and validation partitions, with the learning rate exponentially decayed by a factor of 0.9 every 2.5M samples. More details can be found at https://github.com/linrongc/youtube-8m.
4.3 Model Evaluation
We evaluate the performance and parameter efficiency of individual aggregation models in Table 1. For a fair comparison, we apply a reverse whitening layer for the video features, a dropout layer after the concatenation of video and audio features, and a logistic model as the video-level classifier in all the presented models. Except for NetVLAD_random, which samples 300 random frames for each video, none of the models uses any data augmentation. NetVLAD_small uses a linear fully-connected layer to reduce the input dimension to \(\frac{1}{4}\) of the original size for the visual and audio features, so that its number of parameters is comparable to that of the NeXtVLAD models.
From Table 1, one can observe that our proposed NeXtVLAD network is more effective and parameter-efficient than the original NetVLAD model by a large margin. With only about 30% of the size of the NetVLAD_random model [5], NeXtVLAD increases the GAP score by about 0.02, which is a significant improvement considering the large size of the Youtube-8M dataset. Furthermore, as shown in Fig. 6, the NeXtVLAD model converges faster, reaching a training GAP score of about 0.85 in just one epoch.
Surprisingly, the NetVLAD model performs even worse than the NetVLAD_small model, which indicates that NetVLAD models tend to overfit the training dataset. Another interesting observation in Fig. 6 is that most of the GAP gains for the NetVLAD model happen around the beginning of each new epoch. This implies that the NetVLAD model is more prone to memorizing the data than to finding feature patterns that are useful for generalization.
To meet the competition requirements, we use an ensemble of 3 NeXtVLAD models with parameters (0.5drop, 112K, 2048H), whose total size is about 944 MB. As shown in Table 2, training longer consistently leads to better performance of NeXtVLAD models. Our best submission is trained for about 15 epochs, which takes about 3 days on two 1080 TI GPUs. If we retain only one branch of the mixture model, a single NeXtVLAD model with only 79M parameters achieves a GAP score of 0.87846, which would rank 15th out of 394 on the final leaderboard.
Due to time and resource limits, we set the temperature of the on-the-fly knowledge distillation to \(T=3\), as suggested in [32]. An A/B test we ran after the competition, shown in Table 3, indicates that \(T=3\) is not optimal, and further tuning of this parameter could result in a better GAP score.
5 Conclusion
In this paper, a novel NeXtVLAD model is developed to support large-scale video classification under budget constraints. The NeXtVLAD model provides a fast and efficient network architecture to aggregate frame-level features into a compact feature vector for video classification. The experimental results on the Youtube-8M dataset demonstrate that our proposed NeXtVLAD model is more effective and parameter-efficient than the previous NetVLAD model, which was the basis of the winning solution of the first Youtube-8M video understanding challenge.
References
Abu-El-Haija, S., et al.: Youtube-8m: a large-scale video classification benchmark. arXiv:1609.08675 (2016)
Arandjelović, R., Gronat, P., Torii, A., Pajdla, T., Sivic, J.: NetVLAD: CNN architecture for weakly supervised place recognition. In: IEEE Conference on Computer Vision and Pattern Recognition (2016)
Hochreiter, S., Schmidhuber, J.: Long short-term memory. Neural Comput. 9, 1735–1780 (1997)
Cho, K., et al.: Learning phrase representations using RNN encoder-decoder for statistical machine translation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) EMNLP, ACL, pp. 1724–1734 (2014)
Miech, A., Laptev, I., Sivic, J.: Learnable pooling with context gating for video classification. CoRR (2017)
Xie, S., Girshick, R.B., Dollár, P., Tu, Z., He, K.: Aggregated residual transformations for deep neural networks. CoRR (2016)
Sivic, J., Zisserman, A.: Video Google: a text retrieval approach to object matching in videos. In: IEEE International Conference on Computer Vision, vol. 2, pp. 1470–1477 (2003)
Perronnin, F., Dance, C.R.: Fisher kernels on visual vocabularies for image categorization. In: CVPR IEEE Computer Society (2007)
Jegou, H., Douze, M., Schmid, C., Pérez, P.: Aggregating local descriptors into a compact image representation. In: CVPR IEEE Computer Society, pp. 3304–3311 (2010)
Laptev, I., Marszałek, M., Schmid, C., Rozenfeld, B.: Learning realistic human actions from movies. In: CVPR (2008)
Schuldt, C., Laptev, I., Caputo, B.: Recognizing human actions: a local SVM approach. In: 17th International Conference on Proceedings of the Pattern Recognition, (ICPR 2004) Volume 3 - Volume 03. ICPR 2004, IEEE Computer Society, pp. 32–36 (2004)
Girdhar, R., Ramanan, D., Gupta, A., Sivic, J., Russell, B.: ActionVLAD: learning spatio-temporal aggregation for action classification. In: CVPR (2017)
Karpathy, A., Toderici, G., Shetty, S., Leung, T., Sukthankar, R., Fei-Fei, L.: Large-scale video classification with convolutional neural networks. In: CVPR (2014)
Caba Heilbron, F., Escorcia, V., Ghanem, B., Carlos Niebles, J.: ActivityNet: a large-scale video benchmark for human activity understanding. In: The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2015
Baccouche, M., Mamalet, F., Wolf, C., Garcia, C., Baskurt, A.: Sequential deep learning for human action recognition. In: Salah, A.A., Lepri, B. (eds.) HBU 2011. LNCS, vol. 7065, pp. 29–39. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-25446-8_4
Simonyan, K., Zisserman, A.: Two-stream convolutional networks for action recognition in videos. In: Proceedings of the 27th International Conference on Neural Information Processing Systems - Volume 1. NIPS 2014, pp. 568–576. MIT Press (2014)
Ji, S., Xu, W., Yang, M., Yu, K.: 3D convolutional neural networks for human action recognition. IEEE Trans. Pattern Anal. Mach. Intell. 35, 221–231 (2013)
Tran, D., Bourdev, L., Fergus, R., Torresani, L., Paluri, M.: Learning spatiotemporal features with 3D convolutional networks. In: Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV). ICCV 2015, IEEE Computer Society, pp. 4489–4497 (2015)
Feichtenhofer, C., Pinz, A., Zisserman, A.: Convolutional two-stream network fusion for video action recognition. CoRR (2016)
Wu, Z., Jiang, Y., Wang, X., Ye, H., Xue, X., Wang, J.: Fusing multi-stream deep networks for video classification. CoRR (2015)
Ng, J.Y.H., Hausknecht, M., Vijayanarasimhan, S., Vinyals, O., Monga, R., Toderici, G.: Beyond short snippets: deep networks for video classification. In: Computer Vision and Pattern Recognition (2015)
Ballas, N., Yao, L., Pal, C., Courville, A.C.: Delving deeper into convolutional networks for learning video representations. CoRR (2015)
Fernando, B., Gavves, E., Oramas, J.M., Ghodrati, A., Tuytelaars, T.: Modeling video evolution for action recognition. In: The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2015
Wang, X., Farhadi, A., Gupta, A.: Actions transformations. In: CVPR (2016)
Bilen, H., Fernando, B., Gavves, E., Vedaldi, A.: Action recognition with dynamic image networks. CoRR (2016)
Wang, L., Li, W., Li, W., Gool, L.V.: Appearance-and-relation networks for video classification. Technical report, arXiv (2017)
Arandjelovic, R., Zisserman, A.: All about VLAD. In: Proceedings of the 2013 IEEE Conference on Computer Vision and Pattern Recognition, IEEE Computer Society, pp. 1578–1585 (2013)
Hu, J., Shen, L., Sun, G.: Squeeze-and-excitation networks. In: CVPR (2018)
Hinton, G., Vinyals, O., Dean, J.: Distilling the knowledge in a neural network. In: NIPS Deep Learning and Representation Learning Workshop (2015)
Zhang, Y., Xiang, T., Hospedales, T.M., Lu, H.: Deep mutual learning. CoRR (2017)
Li, Z., Hoiem, D.: Learning without forgetting. CoRR (2016)
Lan, X., Zhu, X., Gong, S.: Knowledge distillation by on-the-fly native ensemble. arXiv:1806.04606 (2018)
Ioffe, S., Szegedy, C.: Batch normalization: accelerating deep network training by reducing internal covariate shift. In: Proceedings of the 32nd International Conference on International Conference on Machine Learning - Volume 37. ICML 2015, pp. 448–456. JMLR.org (2015)
Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: ImageNet: a large-scale hierarchical image database. In: CVPR (2009)
Hershey, S., et al.: CNN architectures for large-scale audio classification. CoRR (2016)
Abadi, M., et al.: TensorFlow: a system for large-scale machine learning. In: Proceedings of the 12th USENIX Conference on Operating Systems Design and Implementation. OSDI 2016, USENIX Association, pp. 265–283 (2016)
Kingma, D.P., Ba, J.: Adam: a method for stochastic optimization. CoRR (2014)
Acknowledgement
The authors would like to thank Kaggle and the Google team for hosting the Youtube-8M video understanding challenge and providing the Youtube-8M Tensorflow Starter Code.