Abstract
A system is presented which uses an active camera to search for known objects, constrained to lie on a table, in an otherwise unknown office, using color images. Both camera actions and image processing methods are represented as concepts of a semantic network. The image processing methods comprise depth computation to find a table, generation of object hypotheses in an overview image, and object verification in a close-up view. The camera actions are pan, tilt, zoom, and motion on a linear sledge. System actions, whether image processing or camera actions, are initiated by a graph-search-based control algorithm which tries to compute the best-scoring instance of a goal concept. The sequence of actions for computing an instance is determined by precedences which are either adjusted manually or computed by reinforcement learning. Results comparing the two approaches are presented.
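The control scheme summarized above can be illustrated by a minimal sketch: a best-first graph search over analysis states, where each state scores a partial instantiation of the goal concept, and the applicable actions of a state are tried in order of their precedence values (set manually or learned, as in the chapter). All names here (`control`, `ACTIONS`, the `expand`/`score`/`precedence` callbacks) are hypothetical illustrations, not the authors' implementation.

```python
import heapq

# Hypothetical action inventory mirroring the abstract: image processing
# methods and camera actions are treated uniformly as system actions.
ACTIONS = ["compute_depth", "hypothesize_objects", "verify_object",
           "pan_tilt_zoom", "move_sledge"]

def control(initial_state, expand, score, precedence, max_steps=100):
    """Best-first search sketch: always expand the best-scoring state;
    among a state's actions, try them in order of decreasing precedence.
    `expand(state, action)` returns successor states, `score(state)` rates
    a partial instantiation, `precedence(action)` orders the actions."""
    # heapq is a min-heap, so scores are negated for best-first behavior.
    heap = [(-score(initial_state), 0, initial_state)]
    counter = 1  # tie-breaker so states are never compared directly
    for _ in range(max_steps):
        if not heap:
            return None  # search space exhausted without an instance
        _, _, state = heapq.heappop(heap)
        if state.get("goal_instantiated"):
            return state  # best-scoring instance of the goal concept
        for action in sorted(ACTIONS, key=precedence, reverse=True):
            for succ in expand(state, action):
                heapq.heappush(heap, (-score(succ), counter, succ))
                counter += 1
    return None
```

In this sketch, learning only changes the `precedence` function: a reinforcement learner would adjust the precedence values from the outcomes of past searches, while the manual variant fixes them by hand, which is exactly the comparison the abstract announces.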
This work was partially supported by the “Deutsche Forschungsgemeinschaft (DFG)” under grant NI 191/12–1. Only the authors are responsible for the content.
Copyright information
© 2006 Springer-Verlag Berlin Heidelberg
Cite this chapter
Niemann, H., Ahlrichs, U., Paulus, D. (2006). Learning an Analysis Strategy for Knowledge-Based Exploration of Scenes. In: Christensen, H.I., Nagel, HH. (eds) Cognitive Vision Systems. Lecture Notes in Computer Science, vol 3948. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11414353_11
DOI: https://doi.org/10.1007/11414353_11
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-33971-7
Online ISBN: 978-3-540-33972-4
eBook Packages: Computer Science; Computer Science (R0)