
Robust object tracking via online Principal Component–Canonical Correlation Analysis (P3CA)

  • Original Paper
  • Signal, Image and Video Processing

Abstract

Effective object representation plays a significant role in object tracking. To meet the requirements of robustness and efficiency, in this paper we propose an adaptive appearance model called Principal Component–Canonical Correlation Analysis (P3CA). P3CA is a compact combination of principal component analysis (PCA) and canonical correlation analysis (CCA), which yields robust tracking at low computational cost. CCA is incorporated into the P3CA appearance model for its effectiveness in handling occlusion, since it evaluates candidate quality with a canonical correlation score rather than holistic appearance information. However, CCA is time-consuming and often suffers from the Small Sample Size (3S) problem. To address these issues, we incorporate PCA and obtain the P3CA subspace by performing CCA on the low-dimensional data produced by projecting the high-dimensional observations onto PCA subspaces. In addition, to account for appearance variations, we propose a novel online updating algorithm for the P3CA subspace that updates the PCA and CCA subspaces cooperatively and synchronously. Finally, we embed the dynamic P3CA appearance model into a particle filter framework in a probabilistic manner and select the candidate with the largest weight as the tracking result. Comparative results on several challenging sequences demonstrate that our tracker handles partial occlusion and various appearance variations better than a number of recently proposed state-of-the-art methods.
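The following minimal Python sketch illustrates the core idea described above (it is not the authors' implementation; the pairing of the two "views," the regularization, and all dimensions are illustrative assumptions): reduce the observations with PCA, then compute canonical correlations on the low-dimensional projections and use the leading correlation as a match score.

```python
# Illustrative sketch of the PCA-then-CCA idea (not the paper's code).
import numpy as np

def pca_basis(data, k):
    """Top-k principal directions of a row-sample matrix."""
    centered = data - data.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:k].T                                  # (dim, k) basis

def canonical_correlations(x, y, reg=1e-6):
    """Canonical correlations between paired low-dimensional samples."""
    x = x - x.mean(axis=0)
    y = y - y.mean(axis=0)
    n = len(x)
    sxx = x.T @ x / n + reg * np.eye(x.shape[1])     # regularization guards
    syy = y.T @ y / n + reg * np.eye(y.shape[1])     # against small-sample
    sxy = x.T @ y / n                                # rank deficiency (3S)
    # Whiten both views; the singular values of the whitened
    # cross-covariance are the canonical correlations.
    wx = np.linalg.inv(np.linalg.cholesky(sxx))
    wy = np.linalg.inv(np.linalg.cholesky(syy))
    return np.linalg.svd(wx @ sxy @ wy.T, compute_uv=False)

# Toy usage: 100-dim observations reduced to 8 PCA coordinates.
rng = np.random.default_rng(0)
obs_x = rng.standard_normal((40, 100))
obs_y = obs_x @ rng.standard_normal((100, 100)) * 0.1 + obs_x  # correlated view
px, py = pca_basis(obs_x, 8), pca_basis(obs_y, 8)
score = canonical_correlations(obs_x @ px, obs_y @ py)[0]      # leading score
```

In the tracker itself the canonical correlation score evaluates a candidate region against the learned subspace; the batch pairing above is only a stand-in for that step.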



Acknowledgments

The authors would like to thank the editors and the anonymous reviewers for their constructive comments and suggestions. The authors would also like to thank Amit Adam, David Ross, Xue Mei, Boris Babenko, et al. for providing the video sequences used in our experiments and the source code used for comparisons. This work was supported by the National Natural Science Foundation of China (61175096).

Author information


Corresponding author

Correspondence to Yuxia Wang.

Appendix: Proofs of Eqs. (11)–(14)

According to the definition of the covariance matrix, Eq. (11) can be obtained as:

$$\begin{aligned} {\varvec{\Sigma }}_{xy}^\mathrm{new}&= \frac{1}{(t+m)}(\mathbf{x}-\mathbf{m}_x^\mathrm{new})(\mathbf{y}-\mathbf{m}_y^\mathrm{new})^T\nonumber \\&= \frac{1}{(t+m)}\sum _{i=1}^{t+m}(\mathbf{x}_i-\mathbf{m}_x^\mathrm{new})(\mathbf{y}_i-\mathbf{m}_y^\mathrm{new})^T\nonumber \\&= \frac{1}{(t+m)}\sum _{i=1}^{t}\left(\mathbf{x}_i-\mathbf{m}_x^{\prime }\right)\left(\mathbf{y}_i-\mathbf{m}_y^{\prime }\right)^T\nonumber \\&+\frac{1}{(t+m)}\sum _{i=t+1}^{t+m}\left(\mathbf{x}_i-\mathbf{m}_x^{\prime }\right)\left(\mathbf{y}_i-\mathbf{m}_y^{\prime }\right)^T\nonumber \\&+\frac{m}{(t+m)^2}\sum _{i=1}^{t+m}\left(\mathbf{x}_i-\mathbf{m}_x^{\prime }\right)(\mathbf{m}_y^{\prime }-\mathbf{m}_y^{\prime \prime })^T\nonumber \\&+\frac{m}{(t+m)^2}(\mathbf{m}_x^{\prime }-\mathbf{m}_x^{\prime \prime })\sum _{i=1}^{t+m}\left(\mathbf{y}_i-\mathbf{m}_y^{\prime }\right)^T\nonumber \\&+\frac{m^2}{(t+m)^3}\sum _{i=1}^{t+m}(\mathbf{m}_x^{\prime }-\mathbf{m}_x^{\prime \prime })(\mathbf{m}_y^{\prime }-\mathbf{m}_y^{\prime \prime })^T \end{aligned}$$
(15)

The second term in (15) can be evaluated as in (16):

$$\begin{aligned}&{\sum _{i=t+1}^{t+m}\left(\mathbf{x}_i-\mathbf{m}_x^{\prime }\right)\left(\mathbf{y}_i-\mathbf{m}_y^{\prime }\right)^T}\nonumber \\&\quad =m\left({\varvec{\Sigma }}_{xy}^{\prime \prime }+\mathbf{m}_x^{\prime \prime }\mathbf{m}_y^{^{\prime \prime }T}\right)-m\mathbf{m}_x^{\prime \prime }\mathbf{m}_y^{^{\prime }T}-m\mathbf{m}_x^{\prime }\mathbf{m}_y^{^{\prime \prime }T}\nonumber \\&\quad + m\mathbf{m}_x^{\prime }\mathbf{m}_y^{^{\prime }T}\nonumber \\&\quad =m\left({\varvec{\Sigma }}_{xy}^{\prime \prime }+(\mathbf{m}_x^{\prime }-\mathbf{m}_x^{\prime \prime })(\mathbf{m}_y^{\prime }-\mathbf{m}_y^{\prime \prime })^T\right) \end{aligned}$$
(16)

The third term in (15) can be evaluated as in (17):

$$\begin{aligned}&{\sum _{i=1}^{t+m}\left(\mathbf{x}_i-\mathbf{m}_x^{\prime } \right)(\mathbf{m}_y^{\prime }-\mathbf{m}_y^{\prime \prime })^T}\nonumber \\&\quad =\left(\sum _{i=1}^{t}\mathbf{x}_i+\sum _{i=t+1}^{t+m} \mathbf{x}_i-\sum _{i=1}^{t+m}\mathbf{m}_x^{\prime }\right) (\mathbf{m}_y^{\prime }-\mathbf{m}_y^{\prime \prime })^T\nonumber \\&\quad =-m(\mathbf{m}_x^{\prime }-\mathbf{m}_x^{\prime \prime })(\mathbf{m}_y^{\prime } -\mathbf{m}_y^{\prime \prime })^T \end{aligned}$$
(17)

Similarly to (17), the fourth term in (15) is given by (18):

$$\begin{aligned}&{(\mathbf{m}_x^{\prime }-\mathbf{m}_x^{\prime \prime })\sum _{i=1}^{t+m} \left(\mathbf{y}_i-\mathbf{m}_y^{\prime }\right)^T}\nonumber \\&\quad =-m(\mathbf{m}_x^{\prime }-\mathbf{m}_x^{\prime \prime })(\mathbf{m}_y^{\prime } -\mathbf{m}_y^{\prime \prime })^T \end{aligned}$$
(18)

Finally, (15) can be simplified using (16)–(18) to give (19):

$$\begin{aligned} {\varvec{\Sigma }}_{xy}^\mathrm{new}&= \frac{t}{(t+m)}{\varvec{\Sigma }}_{xy}^{\prime } +\frac{m}{(t+m)}{\varvec{\Sigma }}_{xy}^{\prime \prime }\nonumber \\&+\frac{tm}{(t+m)^2}(\mathbf{m}_x^{\prime }-\mathbf{m}_x^{\prime \prime }) (\mathbf{m}_y^{\prime }-\mathbf{m}_y^{\prime \prime })^T \end{aligned}$$
(19)
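As a quick sanity check, the following snippet (illustrative only, with synthetic data; not part of the paper) verifies numerically that the merged cross-covariance in (19) matches the covariance computed directly over all \(t+m\) samples.

```python
# Numerical check of the incremental cross-covariance merge in Eq. (19).
import numpy as np

def cross_cov(x, y):
    """Biased cross-covariance (1/n normalization), matching Eq. (11)."""
    return (x - x.mean(axis=0)).T @ (y - y.mean(axis=0)) / len(x)

rng = np.random.default_rng(1)
t, m, d = 50, 20, 6
x, y = rng.standard_normal((t + m, d)), rng.standard_normal((t + m, d))

# Block statistics: primes for the old t samples, double primes for the new m.
sxy_old, sxy_blk = cross_cov(x[:t], y[:t]), cross_cov(x[t:], y[t:])
dmx = x[:t].mean(axis=0) - x[t:].mean(axis=0)   # m_x' - m_x''
dmy = y[:t].mean(axis=0) - y[t:].mean(axis=0)   # m_y' - m_y''

merged = (t * sxy_old + m * sxy_blk) / (t + m) \
         + t * m / (t + m) ** 2 * np.outer(dmx, dmy)

assert np.allclose(merged, cross_cov(x, y))     # Eq. (19) holds exactly
```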

In order to obtain the updated inverse intra-class matrices \(({\varvec{\Sigma }}_{xx}^{-1})^\mathrm{new}\) and \(({\varvec{\Sigma }}_{yy}^{-1})^\mathrm{new}\) in Eq. (12), we first need \({\varvec{\Sigma }}_{xx}^\mathrm{new}\) and \({\varvec{\Sigma }}_{yy}^\mathrm{new}\), which can be obtained by the same derivation as (15)–(19), simply replacing \(\mathbf{y}\) with \(\mathbf{x}\) and \(\mathbf{x}\) with \(\mathbf{y}\), respectively. Therefore, \({\varvec{\Sigma }}_{xx}^\mathrm{new}\) and \({\varvec{\Sigma }}_{yy}^\mathrm{new}\) can be expressed as (20) and (21):

$$\begin{aligned} {\varvec{\Sigma }}_{xx}^\mathrm{new}&= \frac{t}{(t+m)}{\varvec{\Sigma }}_{xx}^{\prime } +\frac{m}{(t+m)}{\varvec{\Sigma }}_{xx}^{\prime \prime }\nonumber \\&+\frac{tm}{(t+m)^2}(\mathbf{m}_x^{\prime }-\mathbf{m}_x^{\prime \prime }) (\mathbf{m}_x^{\prime }-\mathbf{m}_x^{\prime \prime })^T \end{aligned}$$
(20)
$$\begin{aligned} {\varvec{\Sigma }}_{yy}^\mathrm{new}&= \frac{t}{(t+m)}{\varvec{\Sigma }}_{yy}^{\prime } +\frac{m}{(t+m)}{\varvec{\Sigma }}_{yy}^{\prime \prime }\nonumber \\&+\frac{tm}{(t+m)^2}(\mathbf{m}_y^{\prime }-\mathbf{m}_y^{\prime \prime }) (\mathbf{m}_y^{\prime }-\mathbf{m}_y^{\prime \prime })^T \end{aligned}$$
(21)

Having obtained (20), we can compute \(({\varvec{\Sigma }}_{xx}^{-1})^\mathrm{new}\) in Eq. (12) using the Sherman–Morrison formula, \((A+uv^T)^{-1}\) \(=A^{-1}-(A^{-1}uv^TA^{-1})/(1+v^TA^{-1}u)\), by letting \(A=\mathbf{D}_x=t/(t+m){\varvec{\Sigma }}_{xx}^{\prime }+m/(t+m){\varvec{\Sigma }}_{xx}^{\prime \prime }\) and \(u=v=a=\sqrt{mt}/(t+m)(\mathbf{m}_x^{\prime }-\mathbf{m}_x^{\prime \prime })\). \(({\varvec{\Sigma }}_{yy}^{-1})^\mathrm{new}\) in Eq. (12) can be obtained in the same way.
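A small numerical sketch of this rank-one inverse update follows (the symmetric positive-definite blocks and dimensions are synthetic assumptions for the demonstration, not the paper's data):

```python
# Sherman-Morrison update of the inverse intra-class matrix, as in Eq. (12).
import numpy as np

rng = np.random.default_rng(2)
t, m, d = 50, 20, 6

def spd(d):
    """Random symmetric positive-definite matrix for the demo."""
    a = rng.standard_normal((d, d))
    return a @ a.T + d * np.eye(d)

sxx_old, sxx_blk = spd(d), spd(d)                 # Sigma'_xx, Sigma''_xx
dm = rng.standard_normal(d)                       # m_x' - m_x''

dx = (t * sxx_old + m * sxx_blk) / (t + m)        # D_x in the text
a = np.sqrt(m * t) / (t + m) * dm                 # rank-one direction u = v

dx_inv = np.linalg.inv(dx)
# Sherman-Morrison: (D_x + a a^T)^{-1} from D_x^{-1} at O(d^2) cost,
# avoiding a full re-inversion at every update.
sxx_new_inv = dx_inv - np.outer(dx_inv @ a, a @ dx_inv) / (1 + a @ dx_inv @ a)

assert np.allclose(sxx_new_inv, np.linalg.inv(dx + np.outer(a, a)))
```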


About this article

Cite this article

Wang, Y., Zhao, Q. Robust object tracking via online Principal Component–Canonical Correlation Analysis (P3CA). SIViP 9, 159–174 (2015). https://doi.org/10.1007/s11760-013-0430-9

