
1 Introduction

The prevalence of digital cameras and smartphones has exponentially increased the number of videos uploaded, watched and shared over the internet. Automatic video content classification has therefore become a critical and challenging problem in many real-world applications, including video-based search, recommendation and intelligent robots. To accelerate the pace of research in video content analysis, Google AI launched the second Youtube-8M video understanding challenge, aiming at learning more compact video representations under limited budget constraints. Because of the unprecedented scale and diversity of the Youtube-8M dataset [1], the organizers also provided frame-level visual and audio features extracted by pre-trained convolutional neural networks (CNNs). The main challenge is how to aggregate such pre-extracted features into a compact video-level representation effectively and efficiently.

NetVLAD, which was developed to aggregate spatial representations for the task of place recognition [2], was found to be more effective and faster than common temporal models, such as LSTM [3] and GRU [4], for the task of temporal aggregation of visual and audio features [5]. One of the main drawbacks of NetVLAD is that the encoded features are high-dimensional, so a non-trivial classification model built on them needs hundreds of millions of parameters. For instance, a NetVLAD network with 128 clusters encodes a 2048-dimensional feature as a vector of 262,144 dimensions, and a subsequent fully-connected layer with 2048-dimensional outputs results in about 537M parameters. This parameter inefficiency makes the model harder to optimize and more prone to overfitting.
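The sizes quoted above follow from a few lines of arithmetic; a back-of-the-envelope sketch (not part of any model code):

```python
# Back-of-the-envelope check of the NetVLAD sizes quoted above.
N = 2048   # input feature dimension
K = 128    # number of NetVLAD clusters
H = 2048   # output size of the subsequent fully-connected layer

encoded_dim = N * K          # dimension of the NetVLAD-encoded feature
fc_params = encoded_dim * H  # weights of the following fully-connected layer

print(encoded_dim)             # 262144
print(round(fc_params / 1e6))  # 537 (million parameters)
```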

To handle the parameter-inefficiency problem, inspired by the work of ResNeXt [6], we developed a novel neural network architecture, NeXtVLAD. Different from NetVLAD, the input features are decomposed into a group of relatively lower-dimensional vectors with attention before they are encoded and aggregated over time. The underlying assumption is that one video frame may contain multiple objects, so decomposing the frame-level features before encoding helps the model produce a more concise video representation. Experimental results on the Youtube-8M dataset demonstrate that our proposed model is more effective and parameter-efficient than the original NetVLAD model. Moreover, the NeXtVLAD model converges faster and is more resistant to overfitting.

2 Related Works

In this section, we provide a brief review of the research most relevant to feature aggregation and video classification.

2.1 Feature Aggregation for Compact Video Representation

Before the era of deep neural networks, researchers proposed many encoding methods, including BoW (Bag of visual Words) [7], FV (Fisher Vector) [8] and VLAD (Vector of Locally Aggregated Descriptors) [9], to aggregate local image descriptors into a global compact vector, aiming to achieve a more compact image representation and improve the performance of large-scale visual recognition. Such aggregation methods were also applied to large-scale video classification in some early works [10, 11]. Recently, [2] proposed a differentiable module, NetVLAD, to integrate VLAD into current neural networks and achieved significant improvement on the task of place recognition. The architecture was then shown to be very effective in aggregating spatial and temporal information for compact video representation [5, 12].

2.2 Deep Neural Networks for Large-Scale Video Classification

Recently, with the availability of large-scale video datasets [1, 13, 14] and the massive computation power of GPUs, deep neural networks have achieved remarkable advances in the field of large-scale video classification [15,16,17,18]. These approaches can be roughly divided into four categories: (a) Spatiotemporal Convolutional Networks [13, 17, 18], which mainly rely on convolution and pooling to aggregate temporal information along with spatial information; (b) Two-Stream Networks [16, 19,20,21], which utilize stacked optical flow to recognize human motion in addition to the context frame images; (c) Recurrent Spatial Networks [15, 22], which apply recurrent neural networks, such as LSTM or GRU, to model temporal information in videos; (d) other approaches [23,24,25,26], which use other solutions to generate compact features for video representation and classification.

3 Network Architecture for NeXtVLAD

We will first review the NetVLAD aggregation model before we dive into the details of our proposed NeXtVLAD model for feature aggregation and video classification.

Fig. 1. Schema of the NetVLAD model for video classification. Formulas in red denote the number of parameters (ignoring biases and batch normalization). FC means fully-connected layer. (Color figure online)

3.1 NetVLAD Aggregation Network for Video Classification

Consider a video with M frames, from which N-dimensional frame-level descriptors x are extracted frame by frame by a pre-trained CNN. In NetVLAD aggregation with K clusters, each frame-level descriptor is first encoded as a feature vector of \(N \times K\) dimensions using the following equation:

$$\begin{aligned} \begin{gathered} v_{ijk} = \alpha _k(x_i)(x_{ij} - c_{kj}) \\ i \in \{1,\ldots , M\}, j \in \{1,\ldots , N\}, k \in \{1,\ldots , K\} \end{gathered} \end{aligned}$$
(1)

where \(c_{k}\) is the N-dimensional anchor point of cluster k and \(\alpha _k(x_i)\) is a soft assignment function of \(x_i\) to cluster k, which measures the proximity of \(x_i\) to cluster k. The proximity function is modeled by a single fully-connected layer with softmax activation,

$$\begin{aligned} \alpha _k(x_i) = \frac{e^{w_k ^T x_i + b_k}}{\sum _{s=1}^K e^{w_s^T x_i + b_s}}. \end{aligned}$$
(2)

Secondly, a video-level descriptor y is obtained by aggregating all the frame-level features,

$$\begin{aligned} y_{jk} = \sum _{i=1}^M v_{ijk} \end{aligned}$$
(3)

and intra-normalization is applied to suppress bursts [27]. Finally, the constructed video-level descriptor y is reduced to an H-dimensional hidden vector via a fully-connected layer before being fed into the final video-level classifier.
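Equations (1)-(3) together with intra-normalization can be sketched in a few lines of NumPy. This is a minimal illustration with hypothetical variable names, not the actual TensorFlow implementation:

```python
import numpy as np

def netvlad_aggregate(x, w, b, c, eps=1e-6):
    """Minimal NumPy sketch of NetVLAD aggregation (Eqs. 1-3).

    x: (M, N) frame-level descriptors
    w: (N, K) and b: (K,) soft-assignment FC parameters (Eq. 2)
    c: (K, N) cluster anchor points
    Returns the intra-normalized (K, N) video-level descriptor y.
    """
    logits = x @ w + b                           # (M, K)
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    alpha = np.exp(logits)
    alpha /= alpha.sum(axis=1, keepdims=True)    # softmax over clusters

    # Eq. 1: v[i, k, j] = alpha[i, k] * (x[i, j] - c[k, j])
    residuals = x[:, None, :] - c[None, :, :]    # (M, K, N)
    v = alpha[:, :, None] * residuals

    y = v.sum(axis=0)                            # Eq. 3: aggregate over frames
    # intra-normalization: l2-normalize each cluster's residual sum
    y /= np.linalg.norm(y, axis=1, keepdims=True) + eps
    return y
```

The flattened y (of dimension \(N \times K\)) is what the subsequent dimension-reduction layer consumes.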

As shown in Fig. 1, the number of parameters in the NetVLAD model before video-level classification is about

$$\begin{aligned} N \times K \times (H + 2), \end{aligned}$$
(4)

where the dimension reduction layer (second fully-connected layer) accounts for the majority of total parameters. For instance, a NetVLAD model with \(N=1024\), \(K=128\) and \(H=2048\) contains more than 268M parameters.

Fig. 2. Schema of our NeXtVLAD network for video classification. Formulas in red denote the number of parameters (ignoring biases and batch normalization). FC represents a fully-connected layer. The wave operation denotes a reshape transformation. (Color figure online)

3.2 NeXtVLAD Aggregation Network

In our NeXtVLAD aggregation network, the input vector \(x_i\) is first expanded into \(\dot{x}_i\) with a dimension of \(\lambda N\) via a linear fully-connected layer, where \(\lambda \) is a width multiplier, set to 2 in all of our experiments. Then a reshape operation transforms \(\dot{x}\), of shape \((M, \lambda N)\), into \(\tilde{x}\), of shape \((M, G, \lambda N/G)\), where G is the number of groups. The process is equivalent to splitting \(\dot{x}_i\) into G lower-dimensional feature vectors \(\Big \{\tilde{x}^g_i \Big | g \in \{1,\ldots ,G\}\Big \}\), each of which is subsequently represented as a mixture of residuals from cluster anchor points \(c_k\) in the same lower-dimensional space:

$$\begin{aligned} \begin{gathered} v_{ijk}^g = \alpha _g(\dot{x}_i)\alpha _{gk}(\dot{x}_i)(\tilde{x}_{ij}^g - c_{kj})\\ g \in \{1,\ldots ,G\}, i \in \{1,\ldots ,M\}, j \in \{1,\ldots ,\frac{\lambda N}{G}\}, k \in \{1,\ldots , K\}, \end{gathered} \end{aligned}$$
(5)

where the proximity measurement of the decomposed vector \(\tilde{x}_i^g\) consists of two parts for the cluster k:

$$\begin{aligned} \alpha _{gk}(\dot{x}_i) = \frac{e^{w_{gk}^T\dot{x}_i + b_{gk}}}{\sum _{s=1}^K e^{w_{gs}^T\dot{x}_i + b_{gs}}}, \end{aligned}$$
(6)
$$\begin{aligned} \alpha _{g}(\dot{x}_i) = \sigma (w_g^T\dot{x}_i + b_g), \end{aligned}$$
(7)

in which \(\sigma (\cdot )\) is the sigmoid function, whose outputs lie between 0 and 1. The first part, \(\alpha _{gk}(\dot{x}_i)\), measures the soft assignment of \(\tilde{x}^g_i\) to cluster k, while the second part, \(\alpha _{g}(\dot{x}_i)\), can be regarded as an attention function over the groups.

Then, a video-level descriptor is obtained by aggregating the encoded vectors over time and groups:

$$\begin{aligned} y_{jk} = \sum _{i, g} v^g_{ijk}, \end{aligned}$$
(8)

after which we apply an intra-normalization operation, a dimension-reduction fully-connected layer and a video-level classifier, the same as in the NetVLAD aggregation network.
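The full forward pass of Eqs. (5)-(8) can be sketched in NumPy as follows. This is a minimal illustration under the paper's definitions; weight names and shapes are hypothetical, not the actual TensorFlow implementation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def nextvlad_aggregate(x, W_exp, W_gk, b_gk, W_g, b_g, c, G, eps=1e-6):
    """Minimal NumPy sketch of NeXtVLAD aggregation (Eqs. 5-8).

    x:     (M, N) frame-level descriptors
    W_exp: (N, lam*N) expansion FC
    W_gk:  (lam*N, G, K), b_gk: (G, K) per-group cluster assignment (Eq. 6)
    W_g:   (lam*N, G), b_g: (G,) group attention (Eq. 7)
    c:     (K, lam*N/G) cluster anchors in the lower-dimensional space
    Returns the intra-normalized (K, lam*N/G) video-level descriptor y.
    """
    M, N = x.shape
    x_dot = x @ W_exp                          # (M, lam*N) expansion
    lamN = x_dot.shape[1]
    x_tilde = x_dot.reshape(M, G, lamN // G)   # split into G groups

    # Eq. 6: softmax over clusters, computed from the expanded feature
    a_gk = softmax(np.einsum('mn,ngk->mgk', x_dot, W_gk) + b_gk)  # (M, G, K)
    # Eq. 7: sigmoid attention over groups
    a_g = sigmoid(x_dot @ W_g + b_g)                              # (M, G)

    # Eq. 5: attended residuals from the cluster anchors
    residuals = x_tilde[:, :, None, :] - c[None, None, :, :]      # (M, G, K, D)
    v = a_g[:, :, None, None] * a_gk[:, :, :, None] * residuals

    y = v.sum(axis=(0, 1))            # Eq. 8: aggregate over frames and groups
    y /= np.linalg.norm(y, axis=1, keepdims=True) + eps  # intra-normalization
    return y
```

Note that the output dimension is \(\lambda N K / G\), a factor of G smaller than NetVLAD's \(NK\) for the same K.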

As noted in Fig. 2, because the dimension of the video-level descriptor \(y_{jk}\) is G times smaller than in NetVLAD, the number of parameters shrinks accordingly. Specifically, the total number of parameters is:

$$\begin{aligned} \lambda N \left( N + G + K\left( G + \frac{H + 1}{G}\right) \right) . \end{aligned}$$
(9)

Since G is much smaller than H and N, the number of parameters of NeXtVLAD is roughly \(\frac{G}{\lambda }\) times smaller than that of NetVLAD. For instance, a NeXtVLAD network with \(\lambda =2\), \(G=8\), \(N=1024\), \(K=128\) and \(H=2048\) contains only about 71M parameters, roughly 4 times fewer than the 268M of NetVLAD (Fig. 3).
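The counts in Eqs. (4) and (9) can be reproduced directly (a sketch; biases and batch normalization are ignored, as in the figures):

```python
def netvlad_params(N, K, H):
    """Eq. (4): soft-assignment FC (N*K) + anchors (N*K)
    + dimension-reduction FC (N*K*H)."""
    return N * K * (H + 2)

def nextvlad_params(N, K, H, G, lam=2):
    """Eq. (9): expansion FC + group attention + per-group cluster
    assignment + anchors + dimension-reduction FC."""
    return lam * N * (N + G + K * (G + (H + 1) / G))

print(netvlad_params(1024, 128, 2048) / 1e6)        # ~268.7 million
print(nextvlad_params(1024, 128, 2048, G=8) / 1e6)  # ~71.4 million
```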

Fig. 3. Overview of our NeXtVLAD model designed for Youtube-8M video classification.

3.3 NeXtVLAD Model and SE Context Gating

The basic model we used for the 2nd Youtube-8M challenge has a similar architecture to the winning solution [5] of the first Youtube-8M challenge. Video and audio features are encoded and aggregated separately in a two-stream architecture. The aggregated representation is then enhanced by an SE Context Gating module, aiming to model the dependencies among labels. Finally, a logistic classifier with sigmoid activation is adopted for video-level multi-label classification.

Inspired by the work on Squeeze-and-Excitation networks [28], as shown in Fig. 4, SE Context Gating consists of 2 fully-connected layers with fewer parameters than the original Context Gating introduced in [5]. The total number of parameters is:

$$\begin{aligned} \frac{2F^2}{r} \end{aligned}$$
(10)

where r denotes the reduction ratio, set to 8 or 16 in our experiments. During the competition, we found that reversing the whitening process, which was applied after PCA dimensionality reduction of the frame-level features, is beneficial for the generalization performance of the NeXtVLAD model. The likely reason is that whitening after PCA distorts the feature space by eliminating the different contributions of the feature dimensions to distance measurements, which could be critical for the encoder to find good anchor points and soft assignments for each input feature. Since the eigenvalues \(\big \{e_j\big | j \in \{1,\ldots ,N\}\big \}\) of the PCA transformation were released by the Google team, we are able to reverse the whitening process via:

$$\begin{aligned} \hat{x}_j = x_j * \sqrt{e_j} \end{aligned}$$
(11)

where x and \(\hat{x}\) are the input and reversed vectors, respectively.
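Equation (11) is a one-line rescaling; a minimal sketch (the eigenvalues below are made-up numbers for illustration only):

```python
import numpy as np

def reverse_whitening(x, eigenvalues):
    """Undo PCA whitening (Eq. 11): rescale each dimension j of the
    whitened feature x by the square root of its eigenvalue e_j."""
    return x * np.sqrt(eigenvalues)

# toy example with made-up eigenvalues
x = np.array([1.0, 1.0, 1.0])
e = np.array([4.0, 9.0, 16.0])
print(reverse_whitening(x, e))  # [2. 3. 4.]
```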

Fig. 4. The schema of SE Context Gating. FC denotes a fully-connected layer and BN denotes batch normalization. B represents the batch size and F the feature size of x.
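The SE Context Gating of Fig. 4 can be sketched as follows. This is a minimal NumPy illustration: the batch-normalization layers of Fig. 4 are omitted for brevity, the ReLU between the two layers is an assumption, and the weight names are hypothetical:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def se_context_gating(x, W1, W2):
    """Sketch of SE Context Gating on (B, F) features.

    W1: (F, F // r) bottleneck FC, W2: (F // r, F) expansion FC,
    which together give the 2*F^2/r parameter count of Eq. (10).
    The sigmoid gates re-weight each feature dimension of x.
    """
    gates = sigmoid(np.maximum(x @ W1, 0.0) @ W2)  # values in (0, 1)
    return x * gates

# toy shapes: batch B=2, feature size F=8, reduction ratio r=4
rng = np.random.default_rng(0)
x = rng.normal(size=(2, 8))
W1 = rng.normal(size=(8, 2))
W2 = rng.normal(size=(2, 8))
out = se_context_gating(x, W1, W2)
```

Because every gate lies in (0, 1), the module can only attenuate feature dimensions, which is what lets it encode dependencies among labels cheaply.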

3.4 Knowledge Distillation with On-the-Fly Naive Ensemble

Knowledge distillation [29,30,31] was designed to transfer the generalization ability of a cumbersome teacher model to a relatively simpler student network by using the teacher's prediction as an additional "soft target" during training. During the competition, we tried the network architecture introduced in [32] to distill knowledge from an on-the-fly mixture prediction to each sub-model.

Fig. 5. Overview of a mixture of 3 NeXtVLAD models with on-the-fly knowledge distillation. The orange arrows indicate the distillation of knowledge from the mixture predictions to the sub-models. (Color figure online)

As shown in Fig. 5, the logits of the mixture prediction \(z^e\) are a weighted sum of the logits \(\big \{z^m \big | m \in \{1,2,3\} \big \}\) from the 3 corresponding sub-models:

$$\begin{aligned} z^e = \sum _{m=1}^3 a_m(\bar{x}) * z^m \end{aligned}$$
(12)

where \(a_m(.)\) represents the gating network,

$$\begin{aligned} a_m(\bar{x}) = \frac{e^{w_m^T\bar{x} + b_m}}{\sum _{s=1}^3 e^{w_s^T \bar{x} + b_s}} \end{aligned}$$
(13)

and \(\bar{x}\) represents the frame mean of input features x. The knowledge of the mixture prediction is distilled to each sub-model through minimizing the KL divergence written as:

$$\begin{aligned} \mathcal {L}_{kl}^{m, e} = \sum _{c=1}^C p^e(c) \log \frac{p^e(c)}{p^m(c)}, \end{aligned}$$
(14)

where C is the total number of class labels and p(.) represents the rank soft prediction:

$$\begin{aligned} p^m(c) = \frac{e^{z^m_c/T}}{\sum _{s=1}^C e^{z^m_s/T}}, \quad p^e(c) = \frac{e^{z^e_c/T}}{\sum _{s=1}^C e^{z^e_s/T}}, \end{aligned}$$
(15)

where T is a temperature that adjusts the relative importance of the logits. As suggested in [29], a larger T increases the relative importance of logits with smaller values and encourages the models to share more knowledge about the learned similarity structure of the task space. The final loss of the model is:

$$\begin{aligned} \mathcal {L} = \sum _{m=1}^3 \mathcal {L}_{bce}^{m} + \mathcal {L}_{bce}^e + T^2 * \sum _{m=1}^3\mathcal {L}_{kl}^{m, e} \end{aligned}$$
(16)

where \(\mathcal {L}_{bce}^m\) (\(\mathcal {L}_{bce}^e\)) denotes the binary cross entropy between the ground-truth labels and the prediction of sub-model m (the mixture prediction).
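The distillation term of Eqs. (14)-(16) can be illustrated with a small NumPy sketch (toy logits; only the KL term is shown, the binary cross entropy terms are omitted):

```python
import numpy as np

def soft_prediction(z, T):
    """Eq. (15): temperature-softened softmax over C class logits."""
    z = z / T
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def kl_divergence(p_e, p_m, eps=1e-12):
    """Eq. (14): KL(p^e || p^m) from the mixture ('teacher') prediction
    p^e to a sub-model's prediction p^m."""
    return float(np.sum(p_e * np.log((p_e + eps) / (p_m + eps))))

# toy logits for a 4-class problem, temperature T = 3 as in the paper
T = 3.0
z_e = np.array([2.0, 0.5, -1.0, 0.0])   # mixture ("teacher") logits
z_m = np.array([1.5, 0.7, -0.5, 0.1])   # one sub-model's logits

# the T^2 factor of Eq. (16) keeps gradient magnitudes comparable across T
distill_loss = T**2 * kl_divergence(soft_prediction(z_e, T),
                                    soft_prediction(z_m, T))
```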

4 Experimental Results

This section provides the implementation details and presents our experimental results on the Youtube-8M dataset [1].

4.1 Youtube-8M Dataset

The Youtube-8M dataset (2018) consists of about 6.1M videos from Youtube.com, each of which has at least 1000 views, a duration between 120 and 300 s, and one or more tags (labels) from a vocabulary of 3862 visual entities. These videos are further split into 3 partitions: train (70%), validate (20%) and test (10%). Along with the video ids and labels, visual and audio features are provided for every second of each video; these are referred to as frame-level features. The visual features consist of the hidden representations immediately prior to the classification layer in an Inception network [33] pre-trained on Imagenet [34]. The audio features are extracted from an audio classification CNN [35]. PCA and whitening are then applied to reduce the dimensions of the visual and audio features to 1024 and 128, respectively.

In the 2nd Youtube-8M video understanding challenge, submissions are evaluated using Global Average Precision (GAP) at 20. For each video, the top 20 predictions, sorted by confidence, are taken, and the GAP score is calculated as:

$$\begin{aligned} GAP = \sum _{i=1}^{20} p(i)\, \Delta r(i) \end{aligned}$$
(17)

in which p(i) is the precision given the top i predictions and \(\Delta r(i)\) is the increase in recall from the top \(i-1\) to the top i predictions.
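Under our reading of the metric, the top-20 predictions of every video are pooled into one global ranked list before accumulating precision times recall-increment; a hypothetical sketch (names and the pooling detail are our assumptions, not the official evaluation code):

```python
import numpy as np

def gap_at_k(predictions, labels, k=20):
    """Sketch of GAP@k: pool the top-k predictions of every video,
    sort by confidence, and accumulate precision * recall-increment.

    predictions: list of per-video score arrays over all classes
    labels:      list of per-video sets of ground-truth class indices
    """
    pooled = []
    total_pos = 0
    for scores, truth in zip(predictions, labels):
        total_pos += len(truth)
        top = np.argsort(scores)[::-1][:k]               # top-k classes
        pooled.extend((scores[c], c in truth) for c in top)
    pooled.sort(key=lambda t: -t[0])                     # global ranking

    gap, hits = 0.0, 0
    for i, (_, correct) in enumerate(pooled, start=1):
        if correct:
            hits += 1
            gap += (hits / i) * (1.0 / total_pos)        # p(i) * delta_r(i)
    return gap
```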

4.2 Implementation Details

Our implementation is based on the TensorFlow [36] starter code. All of the models are trained using the Adam optimizer [37] with an initial learning rate of 0.0002 on two Nvidia 1080 Ti GPUs. The batch size is set to 160 (80 on each GPU). We apply an \(l_2\) regularizer (1e-5) to the parameters of the video-level classifier and use a dropout ratio of 0.5 to avoid overfitting. No data augmentation is used when training NeXtVLAD models, and the padding frames are masked out during the aggregation process via:

$$\begin{aligned} v_{ijk}^g = mask(i)\alpha _g(\dot{x}_i)\alpha _{gk}(\dot{x}_i)(\tilde{x}_{ij}^g - c_{kj}) \end{aligned}$$
(18)

where

$$\begin{aligned} mask(i) = {\left\{ \begin{array}{ll} 1 &{} \quad \text {if} \ i \le M\\ 0 &{} \quad \text {else} \end{array}\right. } \end{aligned}$$
(19)

In all the local experiments, models are trained for 5 epochs (about 120k steps) using only the training partition, with the learning rate decayed exponentially by a factor of 0.8 every 2M samples. Each model is then evaluated on about \(\frac{1}{10}\) of the validation partition, whose score is consistently about 0.002 lower than the public leaderboard score for the same model. The final submission model is trained for 15 epochs (about 460k steps) using both the training and validation partitions, with the learning rate decayed exponentially by a factor of 0.9 every 2.5M samples. More details can be found at https://github.com/linrongc/youtube-8m.

4.3 Model Evaluation

We evaluate the performance and parameter efficiency of individual aggregation models in Table 1. For a fair comparison, all presented models use a reverse-whitening layer for the video features, a dropout layer after the concatenation of video and audio features, and a logistic model as the video-level classifier. Except for NetVLAD_random, which samples 300 random frames from each video, none of the models use any data augmentation. NetVLAD_small uses a linear fully-connected layer to reduce the input dimension to \(\frac{1}{4}\) of the original size for both visual and audio features, so that its number of parameters is comparable to the NeXtVLAD models.

Table 1. Performance comparison (on the local validation partition) for single aggregation models. The parameters inside the parentheses represent (group number G, dropout ratio, cluster number K, hidden size H).
Fig. 6. Training GAP on the Youtube-8M dataset. The ticks on the x axis are near the end of each epoch.

From Table 1, one can observe that our proposed NeXtVLAD network is more effective and parameter-efficient than the original NetVLAD model by a significantly large margin. With only about 30% of the size of the NetVLAD_random model [5], NeXtVLAD increases the GAP score by about 0.02, a significant improvement considering the large size of the Youtube-8M dataset. Furthermore, as shown in Fig. 6, the NeXtVLAD model converges faster, reaching a training GAP score of about 0.85 in just 1 epoch.

Surprisingly, the NetVLAD model performs even worse than the NetVLAD_small model, which indicates that NetVLAD models tend to overfit the training dataset. Another interesting observation in Fig. 6 is that most of the GAP gains of the NetVLAD model happen around the beginning of each new epoch. This implies that the NetVLAD model is more prone to memorizing the data than to finding feature patterns useful for generalization.

To meet the competition requirements, we use an ensemble of 3 NeXtVLAD models with parameters (0.5drop, 112K, 2048H), whose size is about 944 MB. As shown in Table 2, training longer consistently leads to better performance of NeXtVLAD models. Our best submission was trained for about 15 epochs, which takes about 3 days on two 1080 Ti GPUs. If we retain only one branch of the mixture model, a single NeXtVLAD model with only 79M parameters achieves a GAP score of 0.87846, which would rank 15/394 on the final leaderboard.

Due to time and resource limits, we set the on-the-fly knowledge-distillation temperature to \(T=3\), as suggested in [32]. An AB test we ran after the competition, shown in Table 3, indicates that \(T=3\) is not optimal, so further tuning of this parameter could yield a better GAP score.

Table 2. The GAP scores of submissions during the competition. All the other parameters are (0.5drop, 112K, 2048H). The final submissions are tagged with *.
Table 3. The results (on the local validation set) of an AB test experiment for tuning T.

5 Conclusion

In this paper, a novel NeXtVLAD model is developed to support large-scale video classification under budget constraints. The NeXtVLAD model provides a fast and efficient network architecture for aggregating frame-level features into a compact feature vector for video classification. The experimental results on the Youtube-8M dataset demonstrate that our proposed NeXtVLAD model is more effective and parameter-efficient than the NetVLAD model, which formed the core of the winning solution to the first Youtube-8M video understanding challenge.