
Contextual boost for pedestrian detection


Abstract:

Pedestrian detection from images is an important yet challenging task. Conventional methods usually identify human figures using image features inside local regions. In this paper we show that, besides the local features, context cues in the neighborhood provide important constraints that are not yet well utilized. We propose a framework to incorporate these context constraints for detection. First, we combine the local window with neighborhood windows to construct a multi-scale image context descriptor, designed to represent contextual cues in the spatial, scaling, and color spaces. Second, we develop an iterative classification algorithm called contextual boost. At each iteration, the classifier responses from the previous iteration across the neighborhood and multiple image scales, called the classification context, are incorporated as additional features to learn a new classifier. The number of iterations is determined during training, when the error rate converges. Since the classification context incorporates contextual cues from the neighborhood, through iterations it implicitly propagates to greater areas and thus provides more global constraints. We evaluate our method on the Caltech benchmark dataset [11]. The results confirm the advantages of the proposed framework. Compared with the state of the art, our method reduces the miss rate from 29% by [30] to 25% at 1 false positive per image (FPPI).
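
The iterative scheme described above can be sketched roughly as follows. This is a minimal illustration only, assuming a generic boosted classifier as a stand-in for the authors' detector and precomputed neighbor indices over space and scale; the helper names (gather_context, contextual_boost_train, neighbor_idx) are hypothetical and not from the paper.

    # Rough sketch of the contextual-boost training loop (assumptions noted above).
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier

    def gather_context(scores, neighbor_idx):
        """Classification context: previous-iteration scores of each window's
        spatial/scale neighbors, stacked as extra feature columns."""
        return scores[neighbor_idx]          # shape: (n_windows, n_neighbors)

    def contextual_boost_train(X_local, y, neighbor_idx,
                               X_val, y_val, val_neighbor_idx,
                               max_iters=10, tol=1e-3):
        classifiers, prev_err = [], np.inf
        scores = np.zeros(len(y))            # iteration 0: no context yet
        val_scores = np.zeros(len(y_val))
        for t in range(max_iters):
            # Augment local window features with the classification context.
            X_aug = np.hstack([X_local, gather_context(scores, neighbor_idx)])
            clf = GradientBoostingClassifier().fit(X_aug, y)
            classifiers.append(clf)
            # Updated responses feed the next iteration's context features.
            scores = clf.decision_function(X_aug)
            Xv_aug = np.hstack([X_val, gather_context(val_scores, val_neighbor_idx)])
            val_scores = clf.decision_function(Xv_aug)
            err = np.mean((val_scores > 0) != y_val)
            if abs(prev_err - err) < tol:    # stop when the error rate converges
                break
            prev_err = err
        return classifiers

Because each iteration reuses responses from neighboring windows and scales, contextual information spreads to progressively larger areas over the iterations, which is the propagation effect the abstract refers to.
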
Date of Conference: 16-21 June 2012
Date Added to IEEE Xplore: 26 July 2012
Conference Location: Providence, RI, USA

