Abstract
The representation method is critical to visual tracking: a robust representation describes the target accurately and leads to good tracking performance. In this work, a novel representation is proposed that is simultaneously low-rank and jointly sparse over the local patches within a target region. The low-rank constraint exploits the subspace structure to reflect the global information shared by all the patches, while the joint sparsity constraint captures the locally intimate relationship between neighboring patches. Importantly, to make the representation computationally practical for visual tracking, a novel fast algorithm based on a greedy strategy is proposed, along with an analysis of its performance. Tracking is thus formulated as a locally low-rank and jointly sparse matching problem within the particle filtering framework. Extensive experimental results show that the proposed representation effectively alleviates the tracking drift problem in various challenging situations. Both qualitative and quantitative evaluations demonstrate that the proposed tracker performs favorably against many other state-of-the-art trackers. Benefiting from the good adaptive capability of the representation, all parameters of the proposed tracking algorithm are fixed across all experiments.
References
Adam, A., Rivlin, E., & Shimshoni, I. (2006). Robust fragments-based tracking using the integral histogram. IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), 1, 798–805.
Arulampalam, M., Maskell, S., Gordon, N., & Clapp, T. (2002). A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking. IEEE Transactions on Signal Processing (TSP), 50(2), 174–188.
Babenko, B., Member, S., Yang, M. H., & Member, S. (2011). Robust object tracking with online multiple instance learning. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 33(8), 1619–1632.
Bach, F., Jenatton, R., Mairal, J., & Obozinski, G. (2011). Convex optimization with sparsity-inducing norms. Optimization for Machine Learning, 19–53.
Beck, A., & Teboulle, M. (2009). A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1), 183–202.
Cai, J., Candès, E., & Shen, Z. (2010). A singular value thresholding algorithm for matrix completion. SIAM Journal on Optimization, 20(4), 1956–1982.
Candes, E., & Plan, Y. (2010). Matrix completion with noise. Proceedings of the IEEE, 98(6), 925–936.
Candes, E., Li, X., Ma, Y., & Wright, J. (2009). Robust principal component analysis?. arXiv:0912.3599v1.
Coates, A., & Ng, A. (2011). The importance of encoding versus training with sparse coding and vector quantization. In International Conference on Machine Learning (ICML).
Dinh, T. B., Vo, N., & Medioni, G. (2011). Context tracker: Exploring supporters and distracters in unconstrained environments. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), IEEE (pp. 1177–1184).
Grabner, H., & Bischof, H. (2006). On-line boosting and vision. IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), 1, 260–267.
Grabner, H., Leistner, C., & Bischof, H. (2008). Semi-supervised on-line boosting for robust tracking. In European Conference on Computer Vision (ECCV) (pp. 234–247).
Hager, G. D., & Belhumeur, P. N. (1996). Real-time tracking of image regions with changes in geometry and illumination. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 403–410).
Hare, S., Saffari, A., & Torr, P. (2011). Struck: Structured output tracking with kernels. In IEEE International Conference on Computer Vision (ICCV) (pp. 263–270).
Henriques, F., Caseiro, R., Martins, P., & Batista, J. (2012). Exploiting the circulant structure of tracking-by-detection with kernels. In European Conference on Computer Vision (ECCV) (pp. 702–715).
Henriques, J. F., Caseiro, R., Martins, P., & Batista, J. (2015). High-speed tracking with kernelized correlation filters. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 37(3), 583–596.
Hong, Z., Mei, X., Prokhorov, D., & Tao, D. (2013). Tracking via robust multi-task multi-view joint sparse representation. In IEEE International Conference on Computer Vision (ICCV), IEEE (pp. 649–656).
Pérez, P., Hue, C., Vermaak, J., & Gangnet, M. (2002). Color-based probabilistic tracking. In European Conference on Computer Vision (ECCV) (pp. 661–675).
Isard, M., & Blake, A. (1998). CONDENSATION—Conditional density propagation for visual tracking. International Journal of Computer Vision (IJCV), 29(1), 5–28.
Jia, X., Lu, H., & Yang, M. H. (2012). Visual tracking via adaptive structural local sparse appearance model. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 1822–1829).
Kalal, Z., Matas, J., & Mikolajczyk, K. (2010). P-N learning: Bootstrapping binary classifiers by structural constraints. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), IEEE (pp. 49–56).
Kalal, Z., Mikolajczyk, K., & Matas, J. (2012). Tracking-learning-detection. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 34(7), 1409–1422.
Belhumeur, P. N., & Kriegman, D. J. (1996). What is the set of images of an object under all possible lighting conditions? In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 270–277).
Kwon, J., & Lee, K. (2010). Visual tracking decomposition. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 1269–1276).
Kwon, J., & Lee, K. M. (2011). Tracking by Sampling Trackers. In IEEE International Conference on Computer Vision (ICCV) (pp. 1195–1202).
Li, X., Dick, A., Shen, C., van den Hengel, A., & Wang, H. (2013). Incremental learning of 3d-dct compact representations for robust visual tracking. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 35(4), 863–881.
Lin, Z., Chen, M., & Ma, Y. (2010). The augmented lagrange multiplier method for exact recovery of corrupted low-rank matrices. UIUC Technical Report arXiv:1009.5055v2.
Liu, B., Yang, L., Huang, J., & Meer, P. (2010). Robust and fast collaborative tracking with two stage sparse optimization. In European Conference on Computer Vision (ECCV) (pp. 1–14).
Liu, B., Huang, J., Yang, L., & Kulikowsk, C. (2011). Robust tracking using local sparse appearance model and K-selection. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), IEEE (pp. 1313–1320).
Liu, B., Huang, J., Kulikowski, C., & Yang, L. (2013). Robust visual tracking using local sparse appearance model and k-selection. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 35(12), 2968–2981.
Mei, X., & Ling, H. (2009). Robust visual tracking using L1 minimization. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 1436–1443).
Mei, X., & Ling, H. (2011). Robust visual tracking and vehicle classification via sparse representation. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 33(11), 2259–2272.
Mei, X., Ling, H., & Wu, Y. (2011). Minimum error bounded efficient l1 tracker with occlusion detection. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 1257–1264).
Oron, S., Bar-Hillel, A., Levi, D., & Avidan, S. (2012). Locally orderless tracking. IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), 1, 1940–1947.
Ross, D. A., Lim, J., Lin, R. S., & Yang, M. H. (2007). Incremental learning for robust visual tracking. International Journal of Computer Vision (IJCV), 77(1–3), 125–141.
Sevilla-Lara, L., & Learned-Miller, E. (2012). Distribution fields for tracking. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), IEEE (pp. 1910–1917).
Smeulders, A. W. M., Chu, D. M., Cucchiara, R., Calderara, S., Dehghan, A., & Shah, M. (2014). Visual tracking: An experimental survey. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 36(7), 1442–1468.
Sui, Y., & Zhang, L. (2015). Visual tracking via locally structured gaussian process regression. IEEE Signal Processing Letters (SPL), 22(9), 1331–1335.
Sui, Y., Tang, Y., & Zhang, L. (2015a). Discriminative low-rank tracking. In IEEE International Conference on Computer Vision (ICCV) (pp. 3002–3010).
Sui, Y., Zhang, S., & Zhang, L. (2015b). Robust visual tracking via sparsity-induced subspace learning. IEEE Transactions on Image Processing (TIP), 24(12), 4686–4700.
Sui, Y., Zhao, X., Zhang, S., Yu, X., Zhao, S., & Zhang, L. (2015c). Self-expressive tracking. Pattern Recognition (PR), 48(9), 2872–2884.
Wang, D., & Lu, H. (2012). Object tracking via 2DPCA and L1-regularization. IEEE Signal Processing Letters, 19(11), 711–714.
Wang, D., & Lu, H. (2014). Visual tracking via probability continuous outlier model. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR).
Wang, D., Lu, H., & Yang, M. H. (2013a). Least soft-threshold squares tracking. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 2371–2378).
Wang, D., Lu, H., & Yang, M. H. (2013b). Online object tracking with sparse prototypes. IEEE Transactions on Image Processing (TIP), 22(1), 314–325.
Wright, J., Ma, Y., Mairal, J., & Sapiro, G. (2010). Sparse representation for computer vision and pattern recognition. Proceedings of the IEEE, 98(6), 1031–1044.
Wu, Y., Lim, J., & Yang, M. H. (2013). Online object tracking: A benchmark. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 2411–2418).
Lan, X., Ma, A. J., & Yuen, P. C. (2014). Multi-cue visual tracking using robust feature-level fusion based on joint sparse representation. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR).
Yilmaz, A., Javed, O., & Shah, M. (2006). Object tracking: A survey. ACM Computing Surveys, 38(4), 13–57.
Yuan, X., & Yan, S. (2010). Visual classification with multi-task joint sparse representation. IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), 21, 4349–4360.
Zhang, K., Zhang, L., Yang, M. H. (2012a) Real-time compressive tracking. In European Conference on Computer Vision (ECCV) (pp. 866–879).
Zhang, T., Ghanem, B., & Liu, S. (2012b) Robust visual tracking via multi-task sparse learning. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 2042–2049).
Zhang, T., Ghanem, B., Liu, S., & Ahuja, N. (2012c) Low-rank sparse learning for robust visual tracking. In European Conference on Computer Vision (ECCV) (pp. 470–484).
Zhong, W., Lu, H., & Yang, M. H. (2012). Robust object tracking via sparsity-based collaborative model. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 1838–1845).
Acknowledgments
This work was supported by the National Natural Science Foundation of China (NSFC) under Grant 61172125 and Grant 61132007.
Communicated by Patrick Perez.
Appendices
Appendix 1: Proof of the Theorems
Theorem 1
(orthogonality) Algorithm 1 ensures that the active columns of the dictionary \({\mathbf {D}}\) will not be chosen again for the support in any subsequent iteration.
Proof
At the i-th iteration, the term \(\left\| {\mathbf {D}}{\mathbf {Z}}-{\mathbf {X}}\right\| _F^2\) is minimized with respect to \({\mathbf {Z}}\) in step 5, such that the support of each column of \({\mathbf {Z}}\) lies in \(\mathcal {S}^i\). Let \({\mathbf {D}}_{\mathcal {S}^i}\) denote the \(d\times |\mathcal {S}^i|\) matrix containing the columns of \({\mathbf {D}}\) belonging to this support. Thus, minimizing \(\left\| {\mathbf {D}}{\mathbf {Z}}-{\mathbf {X}}\right\| _F^2\) is equivalent to minimizing \(\left\| {\mathbf {D}}_{\mathcal {S}^i}{\mathbf {Z}}_{\mathcal {S}^i}-{\mathbf {X}}\right\| _F^2\), where \({\mathbf {Z}}_{\mathcal {S}^i}\) consists of the non-zero rows of \({\mathbf {Z}}\). The solution is given by zeroing the derivative of this quadratic form, i.e., \({\mathbf {D}}_{\mathcal {S}^i}^T\left( {\mathbf {D}}_{\mathcal {S}^i}{\mathbf {Z}}_{\mathcal {S}^i}-{\mathbf {X}}\right) =\mathbf {0}\), which gives \({\mathbf {Z}}_{\mathcal {S}^i}=\left( {\mathbf {D}}_{\mathcal {S}^i}^T{\mathbf {D}}_{\mathcal {S}^i}\right) ^{-1}{\mathbf {D}}_{\mathcal {S}^i}^T{\mathbf {X}}\).
Here the following formula for the residual is used: \({\mathbf {R}}^{(i)}={\mathbf {X}}-{\mathbf {D}}{\mathbf {Z}}^{(i)}={\mathbf {X}}-{\mathbf {D}}_{\mathcal {S}^i}{\mathbf {Z}}_{\mathcal {S}^i}\). The above relation thus indicates that \({\mathbf {D}}_{\mathcal {S}^i}^T{\mathbf {R}}^{(i)}=\mathbf {0}\), i.e., the columns of \({\mathbf {D}}\) that are part of the support \(\mathcal {S}^i\) are necessarily orthogonal to the residual \({\mathbf {R}}^{(i)}\). This implies that, in the next iteration and afterwards, these columns will not be chosen again as active columns. \(\square \)
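The orthogonality argument above can be illustrated numerically. The following sketch (with an arbitrary random dictionary, data matrix, and support, not the paper's tracking setup) performs the least-squares refit of step 5 and checks that the residual is orthogonal to the active columns:

```python
import numpy as np

# Illustrative sketch of Theorem 1: after refitting over a support S by
# least squares, the residual is orthogonal to every active column of D,
# so a correlation-based criterion cannot re-select those columns.
rng = np.random.default_rng(0)
d, m, n = 20, 30, 5
D = rng.standard_normal((d, m))        # arbitrary dictionary
X = rng.standard_normal((d, n))        # arbitrary data matrix

S = [3, 7, 11]                         # hypothetical active support
D_S = D[:, S]
Z_S, *_ = np.linalg.lstsq(D_S, X, rcond=None)   # argmin ||D_S Z_S - X||_F^2
R = X - D_S @ Z_S                      # residual after the refit

# D_S^T R vanishes up to numerical precision.
print(np.abs(D_S.T @ R).max())
```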
Theorem 2
(normalization invariance) Algorithm 1 produces the same solution when using either the original dictionary \({\mathbf {D}}\) or its normalized version \(\widetilde{{\mathbf {D}}}\).
Proof
At the i-th iteration, the errors in step 3 are computed from
Denoting \(\widetilde{{\mathbf {d}}}_j={\mathbf {d}}_j/\left\| {\mathbf {d}}_j\right\| _2\), we clearly have \(\left\| \widetilde{{\mathbf {d}}}_j\right\| _2=1\). Thus, the above formula is equivalent to
Thus, it can be seen that using the normalized columns leads to the same choice for the index \(j_0\) in step 4.
Then, let \(\widetilde{{\mathbf {D}}}={\mathbf {D}}{\mathbf {W}}\) denote the normalized version of \({\mathbf {D}}\), where \({\mathbf {W}}\) is a diagonal matrix containing \(\frac{1}{\left\| {\mathbf {d}}_j\right\| _2}\) on the main diagonal. In step 5, least squares is used to minimize \(\left\| {\mathbf {D}}{\mathbf {Z}}-{\mathbf {X}}\right\| _F^2\), subject to \(support\left( {\mathbf {z}}_k\right) \subseteq \mathcal {S}^{(i)}\) for each column. Let \({\mathbf {D}}_{\mathcal {S}}\) denote the sub-matrix of \({\mathbf {D}}\) that contains the columns in \(\mathcal {S}\). The solution of this minimization is \({\mathbf {Z}}_{\mathcal {S}}=\left( {\mathbf {D}}_{\mathcal {S}}^T{\mathbf {D}}_{\mathcal {S}}\right) ^{-1}{\mathbf {D}}_{\mathcal {S}}^T{\mathbf {X}}\),
and the residual is \({\mathbf {R}}^{(i)}={\mathbf {X}}-{\mathbf {D}}_{\mathcal {S}}\left( {\mathbf {D}}_{\mathcal {S}}^T{\mathbf {D}}_{\mathcal {S}}\right) ^{-1}{\mathbf {D}}_{\mathcal {S}}^T{\mathbf {X}}\).
Let \(\widetilde{{\mathbf {D}}}_{\mathcal {S}}={\mathbf {D}}_{\mathcal {S}}{\mathbf {W}}_{\mathcal {S}}\) denote the corresponding sub-matrix of the normalized matrix \(\widetilde{{\mathbf {D}}}\), where \({\mathbf {W}}_{\mathcal {S}}\) denotes the sub-matrix of \({\mathbf {W}}\) whose columns are contained in \(\mathcal {S}\). Since \({\mathbf {W}}_{\mathcal {S}}\) is an invertible diagonal matrix, the residual at the i-th iteration obtained by Algorithm 1 with the normalized matrix is \(\widetilde{{\mathbf {R}}}^{(i)}={\mathbf {X}}-\widetilde{{\mathbf {D}}}_{\mathcal {S}}\left( \widetilde{{\mathbf {D}}}_{\mathcal {S}}^T\widetilde{{\mathbf {D}}}_{\mathcal {S}}\right) ^{-1}\widetilde{{\mathbf {D}}}_{\mathcal {S}}^T{\mathbf {X}}={\mathbf {X}}-{\mathbf {D}}_{\mathcal {S}}\left( {\mathbf {D}}_{\mathcal {S}}^T{\mathbf {D}}_{\mathcal {S}}\right) ^{-1}{\mathbf {D}}_{\mathcal {S}}^T{\mathbf {X}}={\mathbf {R}}^{(i)}\).
It can be seen that the residual is the same for either the original or the normalized matrix. Thus, since the residual drives step 3 of the next iteration, Algorithm 1 is indifferent to the normalization. \(\square \)
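This normalization invariance is easy to check numerically. In the sketch below (arbitrary random data and support, illustrative only), the residuals obtained from the original and column-normalized dictionaries coincide:

```python
import numpy as np

# Illustrative sketch of Theorem 2: refitting over the same support with D
# or with its column-normalized version D @ W yields the same residual,
# because the diagonal matrix W restricted to the support is invertible and
# only rescales the coefficients.
rng = np.random.default_rng(1)
d, m, n = 15, 25, 4
D = rng.standard_normal((d, m))
X = rng.standard_normal((d, n))
W = np.diag(1.0 / np.linalg.norm(D, axis=0))   # normalizing diagonal matrix
D_tilde = D @ W

S = [0, 5, 9]                                  # hypothetical support
R = X - D[:, S] @ np.linalg.lstsq(D[:, S], X, rcond=None)[0]
R_tilde = X - D_tilde[:, S] @ np.linalg.lstsq(D_tilde[:, S], X, rcond=None)[0]
print(np.allclose(R, R_tilde))
```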
Theorem 3
(performance in the worst case) For a system of linear equations \({\mathbf {D}}{\mathbf {Z}}={\mathbf {X}}\), if a solution \({\mathbf {Z}}\) exists obeying
where \(\mu _{{\mathbf {D}}}=\max _{1\le i,j\le m, i\ne j}\frac{\left| {\mathbf {d}}_i^T{\mathbf {d}}_j\right| }{\left\| {\mathbf {d}}_i\right\| _2\left\| {\mathbf {d}}_j\right\| _2}\), and \(\left| z_{min}\right| \) and \(\left| z_{max}\right| \) are respectively the minimal and maximal non-zero absolute values of \({\mathbf {Z}}\), Algorithm 2 is guaranteed to find it exactly.
Proof
Success of Algorithm 2 is guaranteed by the following requirement
where \({\mathbf {S}}\) denotes the set containing the indices of the \(\left| {\mathbf {S}}\right| \) non-zero rows of \({\mathbf {Z}}\). Without loss of generality, each column of \({\mathbf {D}}\) is assumed to have unit \(\ell _2\)-norm, i.e., \(\left\| {\mathbf {d}}_j\right\| _2=1\). Plugging in \({\mathbf {x}}_k=\sum _{t\in {\mathbf {S}}}z_{t,k}{\mathbf {d}}_t\) for the k-th column of \({\mathbf {X}}\), the term on the left-hand side of Eq. (22) becomes
where n denotes the number of the columns of \({\mathbf {X}}\), \(\left| z_{min}\right| \) and \(\left| z_{max}\right| \) are the minimum and maximum non-zero absolute values of \({\mathbf {Z}}\), and \(\mu _{{\mathbf {D}}}=\max _{1\le i,j\le m, i\ne j}\frac{\left| {\mathbf {d}}_i^T{\mathbf {d}}_j\right| }{\left\| {\mathbf {d}}_i\right\| _2\left\| {\mathbf {d}}_j\right\| _2}\), for \({\mathbf {D}}\in \mathbb {R}^{d\times m}\).
Then, the term on the right-hand side of Eq. (22) is
Thus, requiring
necessarily leads to the satisfaction that guarantees the success of Algorithm 2, i.e.,
The condition on the sparsity of \({\mathbf {Z}}\) can be found by following similar steps, i.e.,
Hence, the conclusion is obtained as shown in Theorem 3. \(\square \)
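The mutual coherence \(\mu _{{\mathbf {D}}}\) that governs this worst-case guarantee can be computed directly from its definition; the following is a minimal numpy sketch (the dictionary is an arbitrary random example):

```python
import numpy as np

# Mutual coherence of a dictionary D: the largest absolute normalized inner
# product between two distinct columns, as defined in Theorems 3 and 4.
def mutual_coherence(D):
    G = D / np.linalg.norm(D, axis=0)   # normalize each column to unit l2-norm
    gram = np.abs(G.T @ G)              # |d_i^T d_j| / (||d_i|| ||d_j||)
    np.fill_diagonal(gram, 0.0)         # exclude the i == j terms
    return gram.max()

rng = np.random.default_rng(2)
D = rng.standard_normal((64, 128))
mu = mutual_coherence(D)
print(mu)   # strictly between 0 and 1 for a generic random dictionary
```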
Theorem 4
(performance in average) For a system of linear equations \({\mathbf {D}}{\mathbf {Z}}={\mathbf {X}}\) (\({\mathbf {D}}\in \mathbb {R}^{d\times m}\)), if a solution \({\mathbf {Z}}\) exists obeying
where \(\mu _{{\mathbf {D}}}=\max _{1\le i,j\le m, i\ne j}\frac{\left| {\mathbf {d}}_i^T{\mathbf {d}}_j\right| }{\left\| {\mathbf {d}}_i\right\| _2\left\| {\mathbf {d}}_j\right\| _2}\), and \(\left| z_{min}\right| \) and \(\left| z_{max}\right| \) are respectively the minimal and maximal non-zero absolute values of \({\mathbf {Z}}\), Algorithm 2 is likely to find it exactly with probability of failure vanishing to zero for \(d\rightarrow \infty \).
Proof
It is assumed that \({\mathbf {X}}\) is constructed by \({\mathbf {x}}_k={\mathbf {D}}{\mathbf {z}}_k=\sum _{t\in {\mathbf {S}}}z_{t,k}{\mathbf {d}}_t\), for the k-th column of \({\mathbf {X}}\), where the non-zeros of \({\mathbf {z}}_k\) are chosen as arbitrary positive values from \(\left[ \left| z_{min}\right| ,\left| z_{max}\right| \right] \) multiplied by i.i.d. Rademacher random variables \({\epsilon }_t\) assuming values \(\pm 1\) with equal probabilities. \({\mathbf {S}}\) contains the indices of the non-zero rows of \({\mathbf {Z}}\), such that \(rank\left( {\mathbf {Z}}\right) \le \left| {\mathbf {S}}\right| \).
First, the probability of failure of Algorithm 2 is found by
where T is an arbitrary threshold value. Without loss of generality, each column of \({\mathbf {D}}\) is assumed to have unit \(\ell _2\)-norm, i.e., \(\left\| {\mathbf {d}}_j\right\| _2=1\). Plugging in \({\mathbf {x}}_k={\mathbf {D}}{\mathbf {z}}_k=\sum _{t\in {\mathbf {S}}}z_{t,k}{\mathbf {d}}_t\), we get
where n denotes the number of the columns of \({\mathbf {X}}\), \(\left| z_{min}\right| \) is the minimum non-zero absolute value of \({\mathbf {Z}}\). Thus, the probability is found by
where \(\mu _k=\max _{\left| {\mathbf {S}}\right| =k}\max _{i\notin {\mathbf {S}}}\sum _{t\in {\mathbf {S}}}\left( {\mathbf {d}}_i^T{\mathbf {d}}_t\right) ^2\), and \(\left| z_{max}\right| \) is the maximum non-zero absolute value of \({\mathbf {Z}}\). The above steps rely on the following rationales:
Then, the second probability term in Eq. (23) is similarly found by
where m denotes the number of the columns of \({\mathbf {D}}\). Thus, returning to Eq. (23), the probability of a failure is bounded by
The above bound is a function of T, and it should be minimized with respect to T to obtain the tightest upper bound. However, the optimal value of T is hard to evaluate in closed form. Thus, we simply set \(T=\frac{1}{2}n\left| z_{min}\right| \), leading to
Furthermore, it is obvious that \(\mu _{\left| {\mathbf {S}}\right| }\le \left| {\mathbf {S}}\right| \mu _{\mathbf {D}}^2\). Thus, for \({\mathbf {D}}\in \mathbb {R}^{d\times m}\), denoting \(\rho =\frac{z_{min}}{z_{max}}\) and \(m=\lambda d\) (\(\lambda >1\)), the probability of failure of Algorithm 2 in recovering a solution of \(rank\left( {\mathbf {Z}}\right) \le \left| {\mathbf {S}}\right| \) is found by
If it is further assumed that \(\left| {\mathbf {S}}\right| \propto \frac{c}{\mu _{\mathbf {D}}^2\log d}\), where c is an arbitrary constant, this probability becomes
When choosing \(c<\frac{1}{128}\rho ^2\), this probability becomes arbitrarily small as \(d\rightarrow \infty \). Thus, we obtain
The condition on the sparsity of \({\mathbf {Z}}\) can be found by following similar steps, i.e.,
Hence, the conclusion is obtained as shown in Theorem 4. \(\square \)
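Although Algorithm 2 itself is not reproduced in this appendix, the greedy scheme analyzed in Theorems 3 and 4 follows the pattern of simultaneous orthogonal matching pursuit. The sketch below is an assumed generic variant (the selection rule, the fixed number of steps k, and all names are illustrative, not the paper's exact algorithm):

```python
import numpy as np

def somp(D, X, k):
    """Generic simultaneous OMP sketch (an assumption, not the paper's
    Algorithm 2): greedily pick the column of D most correlated with the
    current residual aggregated over all columns of X, then refit the
    coefficients over the chosen support by least squares."""
    S = []
    R = X.copy()
    Z_S = None
    for _ in range(k):
        scores = np.linalg.norm(D.T @ R, axis=1)   # aggregate correlations
        scores[S] = -np.inf                        # Theorem 1: never re-pick
        S.append(int(np.argmax(scores)))
        Z_S, *_ = np.linalg.lstsq(D[:, S], X, rcond=None)
        R = X - D[:, S] @ Z_S                      # refit residual
    Z = np.zeros((D.shape[1], X.shape[1]))
    Z[S] = Z_S
    return Z, S

# A synthetic jointly row-sparse instance:
rng = np.random.default_rng(3)
D = rng.standard_normal((50, 80))
Z_true = np.zeros((80, 6))
Z_true[[2, 10, 40]] = rng.standard_normal((3, 6))
X = D @ Z_true
Z_hat, S = somp(D, X, 3)
print(sorted(S))   # the selected support; exact recovery on easy instances
```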
Appendix 2: Convex Relaxation Algorithm
The convex counterpart of the proposed greedy algorithm is addressed in this section, which relaxes the simultaneously low-rank and sparse problem to a convex one. Minimization of the trace norm and the \(\ell _1\)-norm, both of which are convex functions, is used to approximate the rank- and \(\ell _0\)-minimizations:
where \(\left\| \cdot \right\| _*\) denotes the trace norm of a matrix, computed as the sum of its singular values. To the best of our knowledge, there is no closed-form solution to such a convex problem. Thus, an iterative algorithm also needs to be developed, and two additional slack variables are introduced:
where \({\mathbf {Z}}\) is renamed \({\mathbf {Z}}_1\), and \({\mathbf {Z}}_2\) and \({\mathbf {Z}}_3\) denote the newly introduced slack variables. The subproblems over the four variables are each convex. Thus, the iterative algorithm alternately minimizes the subproblem over each variable, and an approximate solution is guaranteed to be found. The inexact augmented Lagrange multiplier method (Lin et al. 2010) can be used to solve this problem:
where \(\mathbf {Y}_1\), \(\mathbf {Y}_2\), and \(\mathbf {Y}_3\) are the Lagrange multipliers, \(\tau >0\) is a penalty parameter, and \(\langle \mathbf {A},\mathbf {B} \rangle =trace\left( \mathbf {A}^T\mathbf {B} \right) \) is the inner product of the matrices \(\mathbf {A}\) and \(\mathbf {B}\).
Before deriving the iterative algorithm, the shrinkage (soft-thresholding) function is first defined as
\(\mathcal {T}_{\sigma }\left( x\right) =\mathrm {sign}\left( x\right) \max \left( \left| x\right| -\sigma ,\,0\right) \),
where \(\sigma >0\) denotes the shrinkage threshold; if x is a vector or matrix, the operator is applied to each element independently.
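The elementwise shrinkage described here is the standard soft-thresholding operator; a minimal sketch:

```python
import numpy as np

# Elementwise soft-thresholding: each entry is shrunk toward zero by sigma
# and clipped at zero, matching the shrinkage function defined above.
def soft_threshold(x, sigma):
    return np.sign(x) * np.maximum(np.abs(x) - sigma, 0.0)

y = soft_threshold(np.array([-2.0, -0.3, 0.0, 0.5, 1.5]), 0.5)
print(y)   # values: -1.5, 0, 0, 0, 1.0
```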
(1) The subproblem over \({\mathbf {Z}}_1\) can be re-arranged as
and solved by using the singular value thresholding algorithm (Cai et al. 2010):
\({\mathbf {Z}}_1=\mathbf {U}\max \left( \mathbf {S}-\tfrac{1}{\tau }\mathbf {I},\,\mathbf {0}\right) \mathbf {V}^T,\)
where \(\left[ \mathbf {U},\mathbf {S},\mathbf {V}^T\right] =svd\left( {\mathbf {Z}}_3+\mathbf {Y}_2/\tau \right) \).
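Assuming the standard form of singular value thresholding with threshold \(1/\tau \), the \({\mathbf {Z}}_1\) update can be sketched as follows (names are illustrative, and M stands for \({\mathbf {Z}}_3+\mathbf {Y}_2/\tau \)):

```python
import numpy as np

# Singular value thresholding: soft-threshold the singular values of M at
# level 1/tau and rebuild the matrix, which shrinks its trace norm and
# typically reduces its rank.
def svt(M, tau):
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.maximum(s - 1.0 / tau, 0.0)   # shrink singular values
    return U @ np.diag(s_shrunk) @ Vt

rng = np.random.default_rng(4)
M = rng.standard_normal((8, 6))
Z1 = svt(M, tau=2.0)
# The singular values of Z1 are those of M reduced by 1/tau and clipped at 0.
```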
(2) The subproblem over \({\mathbf {Z}}_2\) can be re-arranged as
and solved by using iterative shrinkage-thresholding algorithm (Beck and Teboulle 2009):
(3) The subproblem over \({\mathbf {Z}}_3\) is simply a quadratic form and is solved by least squares:
(4) The subproblem over \({\mathbf {E}}\) can be re-arranged as
and solved by using accelerated proximal gradient method (Bach et al. 2011):
where \(\mathbf {e}_i\) is the i-th column vector of matrix \(\mathbf {E}\), and \(\mathbf {w}_i\) is the i-th column vector of the matrix \(\mathbf {W}={\mathbf {X}}-\mathbf {D}{\mathbf {Z}}_3+\mathbf {Y}_1/\tau \).
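The column-wise update for \(\mathbf {E}\) has the familiar form of a group (column-wise \(\ell _2\)) shrinkage of \(\mathbf {W}\). The sketch below assumes a regularization weight lam, which is a placeholder rather than the paper's exact parameter:

```python
import numpy as np

# Column-wise l2 shrinkage (proximal operator of a column-wise l2 penalty):
# each column w_i is scaled toward zero, and columns with norm below
# lam / tau are zeroed entirely. lam is an assumed placeholder weight.
def columnwise_shrink(W, lam, tau):
    norms = np.linalg.norm(W, axis=0)
    scale = np.maximum(0.0, 1.0 - lam / (tau * np.maximum(norms, 1e-12)))
    return W * scale   # broadcast the per-column scale over the rows

rng = np.random.default_rng(5)
W = rng.standard_normal((10, 7))
E = columnwise_shrink(W, lam=1.0, tau=2.0)
```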
Algorithm 5 depicts the formal description of the convex relaxation method.
Sui, Y., Zhang, L. Robust Tracking via Locally Structured Representation. Int J Comput Vis 119, 110–144 (2016). https://doi.org/10.1007/s11263-016-0881-x