Mapping Human Understanding to Robotic Perception

https://doi.org/10.1016/j.procs.2015.07.244
Open access under a Creative Commons license

Abstract

Humans are excellent at adapting their knowledge to various situations and adjusting their communication accordingly. Thus, a person who knows a great deal about a subject can still talk about it to a child, albeit in a much simplified form. What is of interest here is whether a robot can do the reverse: can it relate the limited knowledge it receives from its sensors to a more complicated knowledge of the world that it does not sense but knows only abstractly? In other words, what kind of mapping can adapt sensory knowledge to a more expressive (or, in some cases, less expressive) knowledge of the world? When DARwin sees a red ball, does it really know that it is a ball? Can the fact that the object is moving in a certain manner be leveraged to understand that it is a ball? Similarly, when a robot or agent has access to a very specific domain, what has to happen to relate this knowledge to a more general domain? What kind of information has to be transferred, and what can be omitted? The paper reviews previous research in ontology mapping and alignment and, based on the existing research, proposes possible solutions.
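The red-ball example can be illustrated with a toy sketch (not the authors' method): a percept described only by sensed features is aligned against a richer, hand-written ontology fragment by scoring how well each abstract concept explains the evidence, including motion. All names, properties, and the scoring rule below are hypothetical and serve only to make the mapping question concrete.

```python
from dataclasses import dataclass, field

# Hypothetical percept: what the robot's sensors actually report.
@dataclass
class Percept:
    color: str          # e.g. "red"
    shape_2d: str       # 2-D silhouette, e.g. "circle" (not a 3-D shape)
    trajectory: list    # recent (x, y) positions in the image plane

# Hypothetical abstract concept drawn from a richer world ontology.
@dataclass
class Concept:
    name: str
    properties: dict = field(default_factory=dict)

def bounces(trajectory, min_reversals=2):
    """Heuristic: repeated vertical direction reversals suggest bouncing."""
    ys = [y for _, y in trajectory]
    reversals = sum(
        1 for a, b, c in zip(ys, ys[1:], ys[2:])
        if (b - a) * (c - b) < 0
    )
    return reversals >= min_reversals

def map_percept_to_concept(percept, ontology):
    """Toy alignment: score each abstract concept against sensed evidence."""
    best, best_score = None, 0
    for concept in ontology:
        score = 0
        # A circular silhouette is weak evidence for a spherical object.
        if concept.properties.get("shape_3d") == "sphere" and percept.shape_2d == "circle":
            score += 1
        # Bouncing motion is additional evidence in favour of "Ball".
        if concept.properties.get("bounces") and bounces(percept.trajectory):
            score += 1
        if score > best_score:
            best, best_score = concept, score
    return best  # None if nothing in the richer ontology fits the percept

# Minimal ontology fragment (hypothetical names and properties).
ontology = [
    Concept("Ball", {"shape_3d": "sphere", "bounces": True}),
    Concept("Plate", {"shape_3d": "disc", "bounces": False}),
]

seen = Percept(color="red", shape_2d="circle",
               trajectory=[(0, 5), (1, 2), (2, 4), (3, 1), (4, 3)])
print(map_percept_to_concept(seen, ontology))  # -> the "Ball" concept
```

The sketch deliberately keeps the sensory vocabulary (colors, silhouettes, trajectories) disjoint from the ontology's vocabulary (3-D shapes, behavioral properties), so the mapping itself has to supply the bridge, which is the crux of the granularity question the abstract raises.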

Keywords

robotic ontology
ontology matching
granulation manipulation


Peer-review under responsibility of the Conference Program Chairs.