Abstract.
In this paper, we discuss an appearance-matching approach to the difficult problem of interpreting color scenes containing occluded objects. We have explored the use of an iterative, coarse-to-fine sum-squared-error method that uses information from hypothesized occlusion events to perform run-time modification of scene-to-template similarity measures. These adjustments are performed by using a binary mask to adaptively exclude regions of the template image from the squared-error computation. At each iteration higher resolution scene data as well as information derived from the occluding interactions between multiple object hypotheses are used to adjust these masks. We present results which demonstrate that such a technique is reasonably robust over a large database of color test scenes containing objects at a variety of scales, and tolerates minor 3D object rotations and global illumination variations.
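The core idea of the abstract — excluding hypothesized-occluded template pixels from the squared-error score via a binary mask, then refining that mask — can be sketched as follows. This is a minimal illustration under assumed simplifications, not the authors' implementation: the function names (`masked_sse`, `update_mask`) are invented here, and the mask update uses a simple residual threshold as a crude stand-in for the paper's reasoning about occluding interactions between object hypotheses.

```python
import numpy as np

def masked_sse(scene_patch, template, mask):
    """Sum-squared error restricted to unmasked template pixels.

    mask == 1 keeps a pixel in the error sum; mask == 0 excludes it,
    e.g. where an occlusion event has been hypothesized.
    Normalized by the number of visible pixels so masked and
    unmasked scores remain comparable.
    """
    diff = (scene_patch - template) ** 2
    visible = mask.sum()
    return (diff * mask).sum() / max(visible, 1)

def update_mask(scene_patch, template, mask, thresh):
    """Exclude pixels whose residual exceeds a threshold.

    A hypothetical stand-in for the run-time mask adjustment the
    abstract describes; the paper derives its adjustments from
    occluding interactions between object hypotheses, which is
    richer than this per-pixel test.
    """
    residual = np.abs(scene_patch - template)
    return mask * (residual <= thresh)
```

At each coarse-to-fine iteration one would recompute `update_mask` on the higher-resolution data and re-score candidates with `masked_sse`, so an object half-hidden by an occluder is judged only on its visible pixels.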
Received: 21 November 1996 / Accepted: 14 October 1997
Edwards, J., Murase, H. Coarse-to-fine adaptive masks for appearance matching of occluded scenes. Machine Vision and Applications 10, 232–242 (1998). https://doi.org/10.1007/s001380050075