Abstract:
The large shape variability and partial occlusions challenge most object detection and tracking methods for non-rigid targets such as pedestrians. Single-camera tracking is limited in the scope of its applications because of the limited field of view (FOV) of a camera. This motivates the need for a multiple-camera system to fully monitor and track a target, especially in the presence of occlusion. When the object is viewed with multiple cameras, there is a fair chance that it is not occluded simultaneously in all the cameras. In this paper, we develop a method for the fusion of tracks obtained from two cameras placed at two different positions. First, the object to be tracked is identified on the basis of shape information measured by the MPEG-7 ART shape descriptor. Then, single-camera tracking is performed with the unscented Kalman filter, and finally the tracks from the two cameras are fused. A sensor network model is proposed to deal with situations in which the target moves out of the field of view of a camera and re-enters after some time. Experimental results demonstrate the effectiveness of the proposed scheme for tracking objects under occlusion.
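The abstract does not give the paper's exact fusion rule, but a common way to combine two track estimates of the same target is information-form (inverse-covariance) fusion, where each camera's estimate is weighted by its confidence. A minimal sketch, assuming each camera's tracker outputs a state mean and covariance (the function name and example values are illustrative, not from the paper):

```python
import numpy as np

def fuse_tracks(x1, P1, x2, P2):
    """Fuse two track estimates (state mean x, covariance P) from two
    cameras via information-form fusion: each estimate is weighted by
    its inverse covariance. Illustrative only; the paper's exact
    fusion rule may differ."""
    I1 = np.linalg.inv(P1)
    I2 = np.linalg.inv(P2)
    P = np.linalg.inv(I1 + I2)       # fused covariance (smaller than either input)
    x = P @ (I1 @ x1 + I2 @ x2)      # confidence-weighted fused state
    return x, P

# Example: two 2-D position estimates of the same pedestrian.
# Camera 1 is accurate in x, camera 2 in y, so the fused estimate
# leans on each camera where it is most reliable.
x1, P1 = np.array([10.0, 5.0]), np.diag([1.0, 4.0])  # camera 1 track
x2, P2 = np.array([12.0, 5.0]), np.diag([4.0, 1.0])  # camera 2 track
x, P = fuse_tracks(x1, P1, x2, P2)
```

When one camera's view is occluded, its covariance grows, so this weighting naturally shifts the fused track toward the unoccluded camera.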
Date of Conference: 03-05 December 2007
Date Added to IEEE Xplore: 07 January 2008
Print ISBN:0-7695-3067-2