
Extracting adaptive contextual cues from unlabeled regions


Abstract:

Existing approaches to contextual reasoning for enhanced object detection typically utilize other labeled categories in the images to provide contextual information. As a consequence, they inadvertently commit to the granularity of information implicit in the labels. Moreover, large portions of the images may not belong to any of the manually-chosen categories, and these unlabeled regions are typically neglected. In this paper, we overcome both these drawbacks and propose a contextual cue that exploits unlabeled regions in images. Our approach adaptively determines the granularity (scene, inter-object, intra-object, etc.) at which contextual information is captured. In order to extract the proposed contextual cue, we consider a scene to be a structured configuration of objects and regions; just as an object is a composition of parts. We thus learn our proposed “contextual meta-objects” using any off-the-shelf object detector, which makes our proposed cue widely accessible to the community. Our results show that incorporating our proposed cue provides a relative improvement of 12% over a state-of-the-art object detector on the challenging PASCAL dataset.
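The abstract reports a *relative* improvement of 12% over the baseline detector, which is not the same as a 12-point absolute gain. As a quick illustration of the distinction, using hypothetical average-precision values that are not taken from the paper:

```python
def relative_improvement(baseline: float, improved: float) -> float:
    """Relative improvement: the gain expressed as a fraction of the baseline score."""
    return (improved - baseline) / baseline

# Hypothetical AP values, for illustration only (not reported in the paper).
baseline_ap = 0.30
improved_ap = baseline_ap * 1.12  # a 12% relative gain over the baseline

print(round(relative_improvement(baseline_ap, improved_ap), 2))  # → 0.12
```

On a baseline AP of 0.30, a 12% relative improvement corresponds to an absolute gain of only about 3.6 AP points, which is why the two ways of reporting gains should not be conflated.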
Date of Conference: 06-13 November 2011
Date Added to IEEE Xplore: 12 January 2012
Conference Location: Barcelona

