Abstract
A video tracking method based on superpixels with inter-frame constrained coding is proposed in this paper. A 3-D CIE Lab color feature is extracted from each superpixel to characterize local image information. Based on this color feature, a superpixel-based coding model is constructed between adjacent frames to track the object correctly. The proposed tracking method accounts for the interaction of corresponding superpixels between adjacent frames of complex scenes, which enhances the stability of the encoding. By updating the codebook and classifier parameters, the proposed method remains robust for long-term object tracking. We test the proposed method on 15 challenging sequences involving drastic illumination change, partial or full occlusion, and large pose variation. The proposed method shows excellent performance in comparison with eight previously proposed trackers.
Acknowledgments
This work is supported by the Natural Science Basic Research Plan in Shaanxi Province of China (Program No. 2014JM8301); the Fundamental Research Funds for the Central Universities; the National Natural Science Foundation of China under Grants No. 60972148, 61072106, 61173092, 61271302, 61272282, 61001206, 61202176, and 61271298; the Fund for Foreign Scholars in University Research and Teaching Programs (the 111 Project), No. B07048; and the Program for Cheung Kong Scholars and Innovative Research Team in University, IRT1170.
Appendix A
In this appendix, we provide the derivation of Eq. (1).
Given the sample feature \(\mathbf{x}_i^t\), we first estimate \(\mathbf{c}_i^{t-1}\) according to Eq. (2). In the objective function, both \(\mathbf{B}^{t-1}\) and \(\mathbf{c}_i^{t-1}\) are constant within each frame, so \(\mathbf{c}_i^t\) can be obtained by solving a convex optimization problem.

Let
\[
J(\mathbf{C}) = \sum_{i=1}^{N} \left( \left\| \mathbf{x}_i^t - \mathbf{B}^{t-1}\mathbf{c}_i^t \right\|^2 + \lambda \left\| \mathbf{c}_i^{t-1} - \mathbf{c}_i^t \right\|^2 \right), \quad \text{s.t.}\ \mathbf{1}^T \mathbf{c}_i^t = 1.
\]

Introducing a Lagrange multiplier \(\eta\) for the constraint gives
\[
J(\mathbf{C},\eta) = \sum_{i=1}^{N} \left( \left\| \mathbf{x}_i^t - \mathbf{B}^{t-1}\mathbf{c}_i^t \right\|^2 + \lambda \left\| \mathbf{c}_i^{t-1} - \mathbf{c}_i^t \right\|^2 \right) + \eta \left( \mathbf{1}^T \mathbf{c}_i^t - 1 \right).
\]

Setting \(\partial J(\mathbf{C},\eta)/\partial \mathbf{c}_i^t = 0\) yields
\[
-2\left(\mathbf{B}^{t-1}\right)^T \left( \mathbf{x}_i^t - \mathbf{B}^{t-1}\mathbf{c}_i^t \right) - 2\lambda \left( \mathbf{c}_i^{t-1} - \mathbf{c}_i^t \right) + \eta \mathbf{1} = 0,
\]
so that
\[
\widehat{\mathbf{c}_i^t} = \left( \left(\mathbf{B}^{t-1}\right)^T \mathbf{B}^{t-1} + \lambda \mathbf{E} \right)^{-1} \left( \left(\mathbf{B}^{t-1}\right)^T \mathbf{x}_i^t + \lambda \mathbf{c}_i^{t-1} - \frac{\eta}{2}\mathbf{1} \right),
\]
where \(\mathbf{E}\) is the identity matrix.

Because \(\mathbf{1}^T \mathbf{c}_i^t = 1\), we normalize \(\widehat{\mathbf{c}_i^t}\):
\[
\mathbf{c}_i^t = \widehat{\mathbf{c}_i^t} \,\big/\, \left( \mathbf{1}^T \widehat{\mathbf{c}_i^t} \right).
\]

Furthermore, for the normalized result \(\mathbf{c}_i^t\) to satisfy the constraint exactly, \(\eta\) should be
\[
\eta = \frac{2\left( \mathbf{1}^T \mathbf{M}^{-1} \left( \left(\mathbf{B}^{t-1}\right)^T \mathbf{x}_i^t + \lambda \mathbf{c}_i^{t-1} \right) - 1 \right)}{\mathbf{1}^T \mathbf{M}^{-1} \mathbf{1}}, \quad \text{where}\ \mathbf{M} = \left(\mathbf{B}^{t-1}\right)^T \mathbf{B}^{t-1} + \lambda \mathbf{E}.
\]
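The closed-form solution above can be checked numerically. The following is a minimal NumPy sketch, not the paper's implementation; the function name, shapes, and variable names (`B` for the codebook \(\mathbf{B}^{t-1}\), `x` for the feature \(\mathbf{x}_i^t\), `c_prev` for \(\mathbf{c}_i^{t-1}\)) are illustrative assumptions.

```python
import numpy as np

def constrained_code(B, x, c_prev, lam):
    """Minimize ||x - B c||^2 + lam * ||c_prev - c||^2  s.t.  1^T c = 1.

    B      : (d, K) codebook from frame t-1
    x      : (d,)   superpixel feature at frame t
    c_prev : (K,)   code of the corresponding superpixel at frame t-1
    lam    : inter-frame constraint weight
    """
    K = B.shape[1]
    one = np.ones(K)
    M = B.T @ B + lam * np.eye(K)          # (B^T B + lam * E)
    Minv = np.linalg.inv(M)
    rhs = B.T @ x + lam * c_prev
    # Choose the Lagrange multiplier eta so the code sums to one.
    eta = 2.0 * (one @ Minv @ rhs - 1.0) / (one @ Minv @ one)
    c_hat = Minv @ (rhs - 0.5 * eta * one)
    return c_hat / (one @ c_hat)           # normalization: 1^T c = 1
```

Because the problem is a convex quadratic over an affine constraint set, the stationary point of the Lagrangian is the global minimizer, which the sketch computes directly without iteration.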
Cite this article
Tian, X., Jiao, L., Zheng, X. et al. Inter-frame constrained coding based on superpixel for tracking. Vis Comput 31, 701–715 (2015). https://doi.org/10.1007/s00371-014-0996-4