Abstract:
During the target tracking process, the state of the target is usually unpredictable, so in theory it is often beneficial to automatically assign suitable features to describe the specific target in each frame. Inspired by this, in this paper we propose a novel dynamic feature-adaptive tracking framework (DFAT) that automatically assigns appropriate features to consecutive frames during tracking to boost performance. To implement DFAT, we construct a large pool of trackers/experts based on correlation filtering (CF), called the candidate pool (CandPool). The diversity of these experts lies in their feature configurations, and we call them candidate experts (CandExp). In this way, different features can be assigned as the scenario and the target continuously change. To assign suitable experts, we design the dynamic tracking process for each frame in three steps: (1) several experts, called executive experts (ExeExp), are selected from the CandPool according to the CandExps' past performance; (2) the ExeExps generate tracking results, whose quality is assessed via a novel evaluation mechanism; and (3) the selection rate of each CandExp in the CandPool is updated according to this evaluation, and the final tracking result is selected. To better evaluate the CandExps, we propose two novel criteria: (1) content-similarity-weighted intra-evaluation and (2) response-confidence-based self-evaluation. Compared with traditional post-event ensemble trackers that use fixed experts, the proposed method learns to dynamically assign appropriate ExeExps selected from a large CandPool, which enables adaptation to different cases. Moreover, the overfitting caused by fixed experts can also be mitigated via dynamic tracking. Experiments on publicly available data sets of both general and satellite videos demonstrate the superiority of the proposed method.
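The three-step loop above (select, evaluate, update) can be made concrete with a minimal sketch. The names below (CandExp, dfat_step, evaluate, the top-k selection, and the exponential-moving-average rate update) are hypothetical illustrations of the abstract's description, not the paper's actual implementation; in particular, evaluate stands in for the two proposed criteria, whose formulas the abstract does not give.

```python
import random

# Illustrative sketch of the per-frame DFAT loop, under the assumptions
# named above. A real expert would run correlation filtering on `frame`.

class CandExp:
    """One candidate expert: a CF tracker with a fixed feature configuration."""
    def __init__(self, feature_config):
        self.feature_config = feature_config
        self.selection_rate = 1.0  # past-performance score, updated each frame

    def track(self, frame, prev_box):
        # Placeholder: a real expert returns a refined bounding box computed
        # with this expert's feature configuration.
        return prev_box

def evaluate(box, frame):
    """Stand-in for the paper's two criteria (content-similarity-weighted
    intra-evaluation and response-confidence-based self-evaluation);
    here it simply returns a random score in [0, 1]."""
    return random.random()

def dfat_step(cand_pool, frame, prev_box, k=3):
    # (1) Select k executive experts (ExeExp) by past performance.
    exe_exps = sorted(cand_pool, key=lambda e: e.selection_rate, reverse=True)[:k]

    # (2) Each ExeExp produces a tracking result, which is then scored.
    results = [(e, e.track(frame, prev_box)) for e in exe_exps]
    scored = [(e, box, evaluate(box, frame)) for e, box in results]

    # (3) Update each selected expert's selection rate (assumed exponential
    #     moving average) and return the best-scoring result.
    for e, _, s in scored:
        e.selection_rate = 0.9 * e.selection_rate + 0.1 * s
    _, best_box, _ = max(scored, key=lambda t: t[2])
    return best_box
```

Because unselected experts keep their previous rates, the pool's selection distribution drifts as the scene changes, which is the adaptivity the abstract contrasts with fixed-expert ensembles.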
Published in: IEEE Transactions on Circuits and Systems for Video Technology (Volume 33, Issue 1, January 2023)