Abstract
Visual tracking has attracted increasing attention in recent years. In this paper, we propose a novel tracker composed of a feature network, a dual classifier, a target location module, and a sample collecting and pooling module. The dual classifier comprises a long-term classifier, which maintains the long-term appearance of the target, and a short-term classifier, which responds promptly to sudden changes in the target. The training samples are divided into positive, negative, hard positive, and hard negative samples, which are used to train the two classifiers respectively. Furthermore, to overcome the unreliability of locating the target by the highest classifier score alone, a density clustering method is introduced into the target locating process. Experiments on two benchmark datasets demonstrate the effectiveness of the proposed tracking method.
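The density-clustering idea in the abstract can be illustrated with a minimal sketch. Everything below is an assumption for illustration only: the function `locate_target`, the greedy eps-ball grouping (a simplified stand-in for DBSCAN-style clustering), and the parameter values are not the authors' implementation. The sketch shows only the underlying intuition: a dense cluster of mutually consistent candidate boxes is more trustworthy than a lone top-scoring candidate.

```python
import numpy as np

def locate_target(centers, scores, eps=5.0, top_k=10):
    """Choose a target location by density clustering of candidates
    rather than trusting the single highest-scoring box.

    centers: (N, 2) array of candidate box centers (x, y)
    scores:  (N,) classifier scores for the candidates
    Returns the score-weighted mean center of the densest cluster.
    """
    # Keep only the top-k scoring candidates.
    order = np.argsort(scores)[::-1][:top_k]
    pts, sc = centers[order], scores[order]

    # Greedy eps-ball grouping: every unassigned candidate within
    # `eps` pixels of the current seed joins the seed's cluster.
    labels = -np.ones(len(pts), dtype=int)
    n_clusters = 0
    for i in range(len(pts)):
        if labels[i] != -1:
            continue
        near = np.linalg.norm(pts - pts[i], axis=1) <= eps
        near &= labels == -1           # do not steal assigned points
        labels[near] = n_clusters
        n_clusters += 1

    # Prefer the densest cluster; break ties by mean score.
    best = max(range(n_clusters),
               key=lambda c: ((labels == c).sum(), sc[labels == c].mean()))
    members = labels == best

    # Score-weighted average of the winning cluster's centers.
    w = sc[members] / sc[members].sum()
    return (pts[members] * w[:, None]).sum(axis=0)
```

With this design, an isolated high-scoring outlier loses to a tight group of slightly lower-scoring candidates, which is the kind of failure mode the abstract says the density clustering is meant to fix.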
Copyright information
© 2019 Springer Nature Switzerland AG
Cite this paper
Xu, B., Guo, C. (2019). Robust Object Tracking Based on Deep Feature and Dual Classifier Trained with Hard Samples. In: Lu, H., Tang, H., Wang, Z. (eds.) Advances in Neural Networks – ISNN 2019. Lecture Notes in Computer Science, vol. 11555. Springer, Cham. https://doi.org/10.1007/978-3-030-22808-8_47
Print ISBN: 978-3-030-22807-1
Online ISBN: 978-3-030-22808-8