Abstract:
In this paper, we propose a robust l1 tracking method based on a two-phase sparse representation, which consists of a patch-based tracker and a global appearance tracker. While recently proposed l1 trackers have shown impressive tracking accuracy, handling dynamic appearance changes remains difficult for them. To overcome dynamic appearance change and achieve robust visual tracking, we model the dynamic appearance of the object with a set of local rigid patches and enhance the distinctiveness of the global appearance tracker through positive/negative learning. The integration of the two approaches makes visual tracking robust to occlusion and illumination variation. We evaluate the method on five challenging video sequences and compare it with state-of-the-art trackers. We show that the proposed method successfully handles occlusion, noise, scale, illumination, and appearance changes of the object.
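To make the core idea concrete, the sketch below illustrates the standard l1 sparse-representation scoring step that trackers of this family build on: a candidate region is coded over object templates augmented with trivial (identity) templates that absorb occlusion, and the candidate with the smallest reconstruction error on the object templates is preferred. This is a minimal illustration under assumed names and parameters (e.g. `l1_score`, `lam`), not the authors' implementation of the two-phase method.

```python
# Minimal sketch of l1 sparse-representation scoring for a tracking candidate.
# Assumptions (not from the paper): function/variable names and the use of
# scikit-learn's Lasso as the l1 solver.
import numpy as np
from sklearn.linear_model import Lasso


def l1_score(candidate, templates, lam=0.01):
    """Reconstruction error of `candidate` (shape (d,)) over object
    `templates` (shape (d, n)) augmented with trivial templates."""
    d, n = templates.shape
    # Dictionary: object templates plus trivial templates (identity columns)
    # that can absorb occluded or corrupted pixels.
    D = np.hstack([templates, np.eye(d)])
    coder = Lasso(alpha=lam, positive=True, fit_intercept=False, max_iter=5000)
    coder.fit(D, candidate)
    c = coder.coef_
    # Score using only the object-template coefficients, so occlusion explained
    # by trivial templates does not penalize the candidate.
    recon = templates @ c[:n]
    return float(np.linalg.norm(candidate - recon) ** 2)


# Usage: among sampled candidate regions, keep the one with the lowest error.
# best = min(candidates, key=lambda y: l1_score(y, templates))
```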
Published in: 2013 10th International Conference on Ubiquitous Robots and Ambient Intelligence (URAI)
Date of Conference: 30 October 2013 - 02 November 2013
Date Added to IEEE Xplore: 02 December 2013