
Inter-frame constrained coding based on superpixel for tracking

  • Original Article
  • Published in: The Visual Computer

Abstract

This paper proposes a video tracking method based on superpixels with inter-frame constrained coding. A 3-D CIE Lab color feature is extracted from each superpixel to characterize local image information. Based on this color feature, a superpixel-based coding model is built between adjacent frames for correct object tracking. The proposed tracker accounts for the interaction of corresponding superpixels between adjacent frames of complex scenes, which enhances the stability of the encoding. Because the codebook and classifier parameters are updated online, the proposed method remains robust for long-term object tracking. We test the proposed method on 15 challenging sequences involving drastic illumination change, partial or full occlusion, and large pose variation, where it shows excellent performance in comparison with eight previously proposed trackers.
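The paper does not give implementation details for the feature extraction step here; as a minimal sketch (the function name `superpixel_lab_features` is hypothetical, and a Lab-converted image plus a superpixel label map, e.g. from SLIC or TurboPixels, are assumed as inputs), the per-superpixel 3-D CIE Lab feature could be computed as:

```python
import numpy as np

def superpixel_lab_features(lab_image, labels):
    """Mean 3-D CIE Lab feature per superpixel.

    lab_image : (H, W, 3) float array, already converted to CIE Lab.
    labels    : (H, W) int array of superpixel ids in 0..K-1
                (e.g. produced by SLIC or TurboPixels).
    Returns a (K, 3) array whose i-th row is the mean Lab color
    of superpixel i.
    """
    k = labels.max() + 1
    flat = labels.ravel()
    counts = np.bincount(flat, minlength=k).astype(float)
    feats = np.empty((k, 3))
    for c in range(3):
        # sum each Lab channel within every superpixel, then average
        feats[:, c] = np.bincount(
            flat, weights=lab_image[..., c].ravel(), minlength=k
        ) / counts
    return feats

# toy example: a 2x2 "image" split into two superpixels
lab = np.array([[[10., 0., 0.], [10., 0., 0.]],
                [[50., 5., -5.], [30., 5., -5.]]])
labels = np.array([[0, 0],
                   [1, 1]])
print(superpixel_lab_features(lab, labels))  # one (L, a, b) row per superpixel
```

Averaging within superpixels rather than over fixed patches is what lets the coding model respect local image structure.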



Acknowledgments

This work was supported by the Natural Science Basic Research Plan in Shaanxi Province of China (Program No. 2014JM8301); the Fundamental Research Funds for the Central Universities; the National Natural Science Foundation of China under Grants No. 60972148, 61072106, 61173092, 61271302, 61272282, 61001206, 61202176, and 61271298; the Fund for Foreign Scholars in University Research and Teaching Programs (the 111 Project, No. B07048); and the Program for Cheung Kong Scholars and Innovative Research Team in University (IRT1170).

Author information


Corresponding author

Correspondence to Xiaolin Tian.

Appendix A

In this appendix, we provide the derivation of the solution of Eq. (1):

$$\begin{aligned} \begin{array}{l} \mathop {\min }\limits _{\mathbf {c}_i^t \in \mathbf {C}} \displaystyle \sum \limits _{i=1}^N \left( \left\| \mathbf {x}_i^t -\mathbf {B}^{t-1}\mathbf {c}_i^t \right\| ^2+\lambda \left\| \mathbf {c}_i^{t-1} -\mathbf {c}_i^t \right\| ^2 \right) \\ \mathrm{s.t.}\;\;\mathbf {1}^T\mathbf {c}_i^t =1,\;\forall i \\ \end{array} \end{aligned}$$

Given the sample feature \(\mathbf {x}_i^t \), according to Eq. (2), we can estimate \(\mathbf {c}_i^{t-1} \). Furthermore, in the objective function, the two terms \(\mathbf {B}^{t-1}\) and \(\mathbf {c}_i^{t-1} \) are constant for each frame, so \(\mathbf {c}_i^t \) can be obtained by solving a convex optimization problem.

Let \(J( \mathbf {C})=\sum \nolimits _{i=1}^N \left( \left\| \mathbf {x}_i^t -\mathbf {B}^{t-1}\mathbf {c}_i^t \right\| ^2+\lambda \left\| \mathbf {c}_i^{t-1} -\mathbf {c}_i^t \right\| ^2 \right) \).

Introducing a Lagrange multiplier \(\eta \), and substituting \(\mathbf {x}_i^t = \mathbf {x}_i^t \mathbf {1}^T\mathbf {c}_i^t \) and \(\mathbf {c}_i^{t-1} = \mathbf {c}_i^{t-1}\mathbf {1}^T\mathbf {c}_i^t \) (valid under the constraint \(\mathbf {1}^T\mathbf {c}_i^t =1\)), we obtain:

$$\begin{aligned} J( {\mathrm{{\mathbf {C}}},\eta })&= \sum \limits _{i=1}^N {\left\| {\mathrm{{\mathbf {x}}}_i^t \mathrm{{\mathbf {1}}}^T\mathrm{{\mathbf {c}}}_i^t -\mathrm{{\mathbf {B}}}^{t-1}\mathrm{{\mathbf {c}}}_i^t } \right\| } ^2\\&+\lambda \left\| {\mathrm{{\mathbf {c}}}_i^{t-1} \mathrm{{\mathbf {1}}}^T\mathrm{{\mathbf {c}}}_i^t -\mathrm{{\mathbf {c}}}_i^t } \right\| ^2+2\eta ( {1-\mathrm{{\mathbf {1}}}^T\mathrm{{\mathbf {c}}}_i^t }) \end{aligned}$$

Setting \( \partial J\left( {{\mathbf {C}},\eta } \right) /\partial {\mathbf {c}}_{i}^{t} = 0 \) gives:

$$\begin{aligned}&\left[ \left( {{\mathbf {B}}^{{t - 1^{T} }} - {\mathbf {1x}}_{i}^{{t^{T} }} } \right) \left( {{\mathbf {B}}^{{t - 1^{T} }} - {\mathbf {1x}}_{i}^{{t^{T} }} } \right) ^{T}\right. \\&\quad \left. +\, \lambda \left( {{\mathbf {c}}_{i}^{{t - 1}} {\mathbf {1}}^{T} - {\mathbf {E}}} \right) ^{T} \left( {{\mathbf {c}}_{i}^{{t - 1}} {\mathbf {1}}^{T} - {\mathbf {E}}} \right) \right] {\mathbf {c}}_{i}^{t} = \eta {\mathbf {1}} \end{aligned}$$

where \(\mathbf {E}\) is the identity matrix. Hence,

$$\begin{aligned}&\widehat{{{\mathbf {c}}_{i}^{t} }} = \eta \left[ \left( {{\mathbf {B}}^{{t - 1^{T} }} - {\mathbf {1x}}_{i}^{{t^{T} }} } \right) \left( {{\mathbf {B}}^{{t - 1^{T} }} - {\mathbf {1x}}_{i}^{{t^{T} }} } \right) ^{T}\right. \nonumber \\&\qquad \qquad \quad \left. +\, \lambda \left( {{\mathbf {c}}_{i}^{{t - 1}} {\mathbf {1}}^{T} - {\mathbf {E}}} \right) ^{T} \left( {{\mathbf {c}}_{i}^{{t - 1}} {\mathbf {1}}^{T} - {\mathbf {E}}} \right) \right] ^{-1} {\mathbf {1}} \end{aligned}$$
(9)

Because \(\mathbf {1}^T\mathbf {c}_i^t =1\), we normalize \(\widehat{\mathbf {c}_i^t }\):

$$\begin{aligned} {\mathbf {c}}_{i}^{t} = \widehat{{{\mathbf {c}}_{i}^{t} }}/\left( {{\mathbf {1}}^{T} \widehat{{{\mathbf {c}}_{i}^{t} }}} \right) \end{aligned}$$
(10)

Equivalently, for \(\widehat{\mathbf {c}_i^t }\) to satisfy the normalization directly, \(\eta \) should be:

$$\begin{aligned} \eta = \frac{1}{{{\mathbf {1}}^{T} \left[ {\left( {{\mathbf {B}}^{{t - 1^{T} }} - {\mathbf {1x}}_{i}^{{t^{T} }} } \right) \left( {{\mathbf {B}}^{{t - 1^{T} }} - {\mathbf {1x}}_{i}^{{t^{T} }} } \right) ^{T} + \lambda \left( {{\mathbf {c}}_{i}^{{t - 1}} {\mathbf {1}}^{T} - {\mathbf {E}}} \right) ^{T} \left( {{\mathbf {c}}_{i}^{{t - 1}} {\mathbf {1}}^{T} - {\mathbf {E}}} \right) } \right] ^{{ - 1}} {\mathbf {1}}}} \end{aligned}$$
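The closed-form solution above can be checked numerically. Below is a minimal NumPy sketch for a single superpixel; the function name `inter_frame_coding` and the toy values for the codebook \(\mathbf {B}^{t-1}\), previous code \(\mathbf {c}_i^{t-1}\), and \(\lambda \) are illustrative, not from the paper.

```python
import numpy as np

def inter_frame_coding(x, B_prev, c_prev, lam=0.1):
    """Closed-form solve of the inter-frame constrained coding problem
    for one superpixel, following Eqs. (9)-(10).

    x      : (d,)   current superpixel feature (e.g. 3-D CIE Lab)
    B_prev : (d, K) codebook B^{t-1}
    c_prev : (K,)   code c_i^{t-1} from the previous frame
    lam    : regularization weight lambda
    """
    d, K = B_prev.shape
    ones = np.ones(K)
    A = B_prev.T - np.outer(ones, x)        # B^{t-1,T} - 1 x_i^{t,T}
    D = np.outer(c_prev, ones) - np.eye(K)  # c_i^{t-1} 1^T - E
    M = A @ A.T + lam * D.T @ D
    c_hat = np.linalg.solve(M, ones)        # proportional to M^{-1} 1, Eq. (9)
    return c_hat / (ones @ c_hat)           # normalize so 1^T c = 1, Eq. (10)

rng = np.random.default_rng(0)
B = rng.standard_normal((3, 5))             # toy codebook with K = 5 atoms
x = rng.standard_normal(3)
c_prev = np.full(5, 0.2)
c = inter_frame_coding(x, B, c_prev)
print(c.sum())                              # sums to 1 (affine constraint)
```

Since \(\mathbf {M}\) is a sum of two Gram matrices it is positive semidefinite, so the stationary point \(\mathbf {M}\mathbf {c}_i^t = \eta \mathbf {1}\) is indeed a minimum; the normalization in the last step makes the explicit value of \(\eta \) unnecessary in practice.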


Cite this article

Tian, X., Jiao, L., Zheng, X. et al. Inter-frame constrained coding based on superpixel for tracking. Vis Comput 31, 701–715 (2015). https://doi.org/10.1007/s00371-014-0996-4
