Inferring robot goals from violations of semantic knowledge

https://doi.org/10.1016/j.robot.2012.12.007

Abstract

A growing body of literature shows that endowing a mobile robot with semantic knowledge and with the ability to reason from this knowledge can greatly increase its capabilities. In this paper, we present a novel use of semantic knowledge, to encode information about how things should be, i.e. norms, and to enable the robot to infer deviations from these norms in order to generate goals to correct these deviations. For instance, if a robot has semantic knowledge that perishable items must be kept in a refrigerator, and it observes a bottle of milk on a table, this robot will generate the goal to bring that bottle into a refrigerator. The key move is to properly encode norms in an ontology so that each norm violation results in a detectable inconsistency. A goal is then generated to bring the world back in a consistent state, and a planner is used to transform this goal into actions. Our approach provides a mobile robot with a limited form of goal autonomy: the ability to derive its own goals to pursue generic aims. We illustrate our approach in a full mobile robot system that integrates a semantic map, a knowledge representation and reasoning system, a task planner, and standard perception and navigation routines.

Highlights

► Represent normative constraints as part of the robot’s ontology.
► Automatically detect violations of these constraints.
► Automatically generate planning goals to recover from these violations.
► Multiple concurrent norm violations can be managed.
► Proof-of-concept experiments shown on the full robotic system.

Introduction

Mobile robots intended for service and personal use are being increasingly endowed with the ability to represent and use semantic knowledge about the environment where they operate [1], [2]. This knowledge encodes general information about the entities in the world and their relations, for instance, that a kitchen is a type of room which is used for cooking and which typically contains a refrigerator, a stove, and a sink; that milk is a perishable food; and that perishable food is stored in a refrigerator. Once this knowledge is available to a robot, it can be exploited to better understand the environment or plan actions [3], [4], [5], [6], [7], assuming of course that this knowledge is a faithful representation of the properties of the environment. There is, however, an interesting issue which has received less attention so far: what happens if this knowledge turns out to be in conflict with the robot’s observations?

Suppose for concreteness that the robot observes a bottle of milk lying on a table. This observation conflicts with the semantic knowledge that milk, a perishable item, should be stored in a refrigerator. The robot has three options to resolve this contradiction: (a) to verify its perceptions, e.g., by looking for clues which may indicate that the observed object is not a milk bottle; (b) to update its semantic knowledge base, e.g., by adding a subclass of milk that is not perishable; or (c) to modify the environment, e.g., by putting the bottle in the refrigerator. While some works have addressed the first two options [6], [8], [9], [10], the last one has not received much attention so far. Interestingly, the last option leverages the distinctive ability of robots to modify their physical environment. The goal of this paper is to investigate this option.

Our investigation proceeds in four steps. First, we address the problem of how to encode normative knowledge in a robot, that is, semantic knowledge on how things should be. For this we use a hybrid semantic map  [8], which combines traditional robot maps with description logics  [11], and enrich it with the notion of “normative” concepts. Second, we study how the robot can automatically detect violations of its normative knowledge, and isolate the causes of these violations. For this we rely on our encoding of norms to transform norm violations into logical inconsistencies. This allows us to use the mechanisms of description logics to detect an inconsistency and to identify the objects and relations which are involved in it. Third, we discuss how to go from the detection of a violation to a recovery strategy. We define a mechanism to automatically generate a goal, which represents the intention to achieve a specific state of the world that satisfies the violated norm. If this goal is fed to a standard task planner, it will result in a plan to execute the actions needed to bring the world back to a consistent state — provided of course that the robot has the right action repertoire.
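To make the encoding step concrete, here is a minimal sketch in Python using the owlready2 OWL library. It is not the implementation used in our system: the ontology IRI, the class names, the placedIn property and the use of the bundled HermiT reasoner are illustrative assumptions. The norm “perishable food may only be placed in a refrigerator” is written as an axiom, so that observing a milk bottle on a table makes the ontology inconsistent:

    # Illustrative sketch only (not the paper's implementation). Encode the
    # "perishable food belongs in a refrigerator" norm so that a violation
    # surfaces as a logical inconsistency. Requires owlready2 and Java
    # (owlready2 ships the HermiT reasoner as a Java program).
    from owlready2 import (Thing, ObjectProperty, AllDisjoint, get_ontology,
                           sync_reasoner, OwlReadyInconsistentOntologyError)

    onto = get_ontology("http://example.org/norms.owl")

    with onto:
        class Refrigerator(Thing): pass
        class Table(Thing): pass
        class PerishableFood(Thing): pass
        AllDisjoint([Refrigerator, Table])

        class placedIn(ObjectProperty): pass

        # Norm: perishable food may only be placed in a refrigerator.
        PerishableFood.is_a.append(placedIn.only(Refrigerator))

        # Observations: a milk bottle on a table.
        milk1 = PerishableFood("milk1")
        table1 = Table("table1")
        milk1.placedIn = [table1]

    try:
        sync_reasoner()
        print("World state is consistent with the norms.")
    except OwlReadyInconsistentOntologyError:
        print("Norm violated: generate the goal of placing milk1 in a refrigerator.")

The point of the sketch is that the norm is just another terminological axiom: the standard consistency check of a description logic reasoner detects the violation, and the individuals involved (here milk1 and table1) can then be extracted to build the recovery goal.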

Troubles rarely come alone, so our fourth and last step is to extend the above mechanism to the case of multiple violations of norms. This extension is not straightforward for several reasons: (i) standard inference systems based on tableau methods do not behave well with multiple, simultaneous inconsistencies; (ii) violations may be inter-dependent, and solving one violation may produce another one; and (iii) some violations may be more important than others. We propose an algorithm that alternates violation detection, goal generation, and simulated recovery until a feasible sequence of recovery plans is found, also taking into account user-defined priorities. This algorithm enables a mobile robot to generate a “to do list” in order to keep its workspace, as it perceives it, consistent with respect to a set of given norms.
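The following schematic Python loop renders the structure of this algorithm. The four helper functions are placeholders standing in for the description logic reasoner, the goal generator, the task planner and plan simulation; their names and signatures, as well as the max_rounds safeguard, are assumptions made for the sketch, not interfaces of our system:

    # Schematic sketch of the multi-violation loop described above.
    from typing import Callable, List, Optional

    def detect_violations(world) -> list:
        ...  # placeholder: query the reasoner for all current norm violations

    def generate_goal(violation):
        ...  # placeholder: build the goal state that satisfies the violated norm

    def plan_for(goal, world):
        ...  # placeholder: call the task planner; return a plan or None

    def simulate(plan, world):
        ...  # placeholder: apply the plan's effects to a copy of the world state

    def build_to_do_list(world, priority: Callable, max_rounds: int = 20) -> Optional[List]:
        """Alternate violation detection, goal generation and simulated recovery
        until the simulated world satisfies every norm, or give up."""
        plans: List = []
        for _ in range(max_rounds):
            violations = detect_violations(world)
            if not violations:
                return plans                            # consistent: the to-do list is complete
            violation = max(violations, key=priority)   # most important violation first
            plan = plan_for(generate_goal(violation), world)
            if plan is None:
                return None                             # no feasible recovery sequence found
            world = simulate(plan, world)               # may expose induced violations
            plans.append(plan)
        return None

Simulating each recovery plan before committing to it is what addresses the inter-dependence problem (ii): a plan that fixes one violation but introduces another is caught in the next round of detection.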

In the rest of this paper we describe the above steps in more detail. We complement the formal descriptions with algorithms and examples to allow other researchers to reproduce our results. We also report a proof-of-concept experiment that shows the concrete applicability of our approach to real robotic systems. It should be emphasized that in this work we focus on the detection of norm violations and on the goal inference mechanism: the development of perception and action capabilities and the possible use of semantic knowledge in that context are beyond the scope of this paper.

In the next section we review some related work. Section 3 introduces our semantic map. In Section 4 we present the first three steps above, while Section 5 deals with the extension to the case of multiple concurrent norm violations. Section 6 reports the proof-of-concept experiment. We then discuss our results in Section 7 and conclude.

Section snippets

Related work

The robotics community increasingly recognizes that future robots will have to be endowed with semantic knowledge [1], [13], [14]. Most current approaches rely on a shallow interpretation of semantic knowledge: the data used by the robot are simply augmented with labels, like “door” or “kitchen”, which carry a semantic meaning to humans, but this meaning is not explicitly represented in the robot. Often these semantic labels are used for human–robot interaction [15], [16]. Many proposals

A semantic map for mobile robot operation

The semantic map used in this work, derived from  [6], comprises two different but tightly interconnected parts: a spatial box, or S-Box, and a terminological box, or T-Box. Roughly speaking, the S-Box contains factual knowledge about the state of the environment and of the objects inside it, while the T-Box contains semantic knowledge about that domain, giving meaning to the entities in the spatial box in terms of concepts and relations. Recalling the example given in the previous section, the
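The following data structures give a purely illustrative picture of this split; the Python names below are ours, not those of the system described in the paper. The T-Box holds concept-level knowledge (a concept hierarchy and, here, disjointness), while the S-Box holds the observed instances and their spatial relations:

    # Illustrative sketch of the S-Box/T-Box split (names are ours, not the paper's).
    from dataclasses import dataclass, field
    from typing import Dict, Set, Tuple

    @dataclass
    class TBox:
        """Terminological knowledge: concept hierarchy and disjointness axioms."""
        parents: Dict[str, Set[str]] = field(default_factory=dict)    # e.g. Milk -> {PerishableFood}
        disjoint: Set[frozenset] = field(default_factory=set)         # e.g. {Table, Refrigerator}

        def subsumes(self, general: str, specific: str) -> bool:
            """True if 'specific' is (transitively) a kind of 'general'."""
            if general == specific:
                return True
            return any(self.subsumes(general, p) for p in self.parents.get(specific, ()))

    @dataclass
    class SBox:
        """Spatial/factual knowledge: observed instances and their relations."""
        instances: Dict[str, str] = field(default_factory=dict)            # e.g. milk1 -> Milk
        relations: Set[Tuple[str, str, str]] = field(default_factory=set)  # e.g. (milk1, placedIn, table1)

    # The milk-on-a-table example from the Introduction:
    tbox = TBox(parents={"Milk": {"PerishableFood"}},
                disjoint={frozenset({"Table", "Refrigerator"})})
    sbox = SBox(instances={"milk1": "Milk", "table1": "Table"},
                relations={("milk1", "placedIn", "table1")})
    print(tbox.subsumes("PerishableFood", "Milk"))   # True: milk1 is subject to the norm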

Inferring goals from norm violations

We now show how to use our semantic map to perform the first three steps mentioned in the Introduction: encode norms, detect violations of these norms, and recover from these violations.
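As a preview of the third step, the sketch below turns a detected violation into a goal for a standard task planner, here phrased as a PDDL-style problem. The domain name, predicates and types are illustrative assumptions, not the planner encoding used in our system:

    # Illustrative sketch: from a detected violation to a planner goal.
    # The PDDL domain, predicates and types are assumed for the example.

    def goal_from_violation(obj: str, required_concept: str) -> str:
        """Goal: the offending object ends up inside some instance of the
        concept required by the violated norm (e.g. a refrigerator)."""
        return f"(exists (?c - {required_concept.lower()}) (in {obj} ?c))"

    def pddl_problem(obj: str, required_concept: str, init_facts: list) -> str:
        """Wrap the generated goal into a minimal PDDL problem description."""
        facts = "\n    ".join(init_facts)
        return f"""(define (problem fix-norm-violation)
      (:domain household)
      (:objects {obj} - movable)
      (:init
        {facts})
      (:goal {goal_from_violation(obj, required_concept)}))"""

    print(pddl_problem("milk1", "refrigerator",
                       ["(in milk1 table_area)", "(at robot livingroom)"]))

Feeding a problem like this to any planner that supports existential goals yields a plan whose execution restores consistency, which is the role the task planner plays in our pipeline.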

Managing multiple norm violations

Up to this point we have tacitly assumed that only one norm has been violated. We now extend our approach to the case in which there are multiple violations of norms. Dealing with multiple norms can be tricky, owing to the possible interactions among them [52]. One of the difficulties is that solving one violation may lead to violating another one: similar issues arise when planning for multiple, interacting goals, which has long been known to be a complex problem [53], [54]. Another

An integrated experiment

In order to test the suitability of our approach in a real robotic application, we have run a few proof-of-concept experiments in which our system has been integrated into an existing robotic application. In this section, we describe one such experiment. The experiment only deals with a single norm violation, since the case of multiple norms does not add any complexity from the point of view of robotic execution.

Discussion

We believe that the work reported here constitutes a first step in a promising and novel direction, and it may be extended in a number of interesting ways. We hint at some of these in this section.

In our work we assume that the robot should always enforce consistency with the semantic knowledge. However, there are cases where norm violations might be allowed depending on the current context. For instance, the norm imposing that perishable food should be inside the fridge can be temporarily

Conclusions

One of the most exciting uses of semantic knowledge in a robotic system is perhaps the possibility to resolve situations of conflict or ambiguity by reasoning about the cause of the problem and its possible solutions. This paper has explored an often neglected aspect of this use: recognizing and correcting situations in the world that do not comply with the given semantic model, by generating appropriate goals for the robot. In this light, our framework also contributes to the robot’s goal

Acknowledgement

This work greatly benefited from discussions with Martin Günther, Joachim Hertzberg and Federico Pecora. Work by the first author was supported by the Spanish Government under Research Contract CICYT-DPI2011-25483. Work by the second author was supported by strategic funds from Örebro University.


References (58)

  • D. Chapman, Planning for conjunctive goals, Artificial Intelligence (1987)
  • R. Lundh et al., Autonomous functional configuration of a network robot system, Robotics and Autonomous Systems (2008)
  • J. Hertzberg, A. Saffiotti (Eds.), Special issue on semantic knowledge in robotics, Robotics and Autonomous Systems 56...
  • D. Holz, Z. Marton, A. Nuechter A. Pronobis, R. Rusu (Eds.), Workshop Series on Semantic Perception, Mapping and...
  • O. Mozos, P. Jensfelt, H. Zender, M. Kruijff, W. Burgard, From labels to semantics: an integrated system for conceptual...
  • A. Ranganathan, F. Dellaert, Semantic modeling of places using objects, in: Proc. of Robotics: Science and Systems,...
  • C. Galindo, A. Saffiotti, S. Coradeschi, P. Buschka, J., Multi-hierarchical semantic maps for mobile robotics, in:...
  • S. Rockel, et al. An ontology-based multi-level robot architecture for learning from experiences, in: AAAI Spring...
  • C. Galindo, J. González, J. Fernández-Madrigal, A. Saffiotti, Robots that change their world: inferring goals from...
  • D. Holz, D. Munoz, A. Nüchter, R. Bogdan-Rusu (Eds.), Workshop on Semantic Perception, Mapping and Exploration, 2011,...
  • M. Beetz, R. Alami, J. Hertzberg, A. Saffiotti, M. Tenorth (Eds.) Workshop on Knowledge Representation for Autonomous...
  • A. Swadzba, S. Wachsmuth, C. Vorwerg, G. Rickheit, A computational model for the alignment of hierarchical scene...
  • O. Mozos, C. Stachniss, W. Burgard, Supervised learning of places from range data using adaboost, in: Proc. of the Int....
  • A. Pronobis et al., Multimodal semantic place classification, International Journal of Robotics Research (2010)
  • J. Civera, D. Gálvez-López, L. Riazuelo, J. Tardós, J. Montiel, Towards semantic SLAM using a monocular camera, in:...
  • F. Dayoub, T. Duckett, G. Cielniak, Toward an object-based semantic memory for long-term operation of mobile service...
  • M. Waibel et al., RoboEarth — a world wide web for robots, IEEE Robotics and Automation Magazine (2011)
  • C. Martin, K. Barber, Agent autonomy: specification, measurement, and dynamic adjustment, in: Proc. of the Autonomy...

Cipriano Galindo was born in Málaga (Spain) in 1977. He received the M.S. and European Ph.D. degrees in Computer Science from the University of Málaga in 2001 and 2006, respectively. He is currently an assistant professor at the System Engineering and Automation Department of the University of Málaga. From Sept. 2004 to Feb. 2005, and again in Aug. 2008, he was at the Applied Autonomous Sensor Systems lab, Örebro University (Sweden), working on Anchoring and Semantic Maps. Since Dec. 2009 he has been an associate, full-time professor at the University of Málaga.

Alessandro Saffiotti (M.Sc., Ph.D.) is a full professor of Computer Science at Örebro University, Sweden, where he heads the AASS Cognitive Robotic Systems laboratory. His research interests encompass artificial intelligence, autonomous robotics, and technology for elderly people. He is the inventor of the notion of an “Ecology of physically embedded intelligent systems”, a new approach to including robotic technologies in everyday life. He has published more than 160 papers in international journals and conferences, and has organized many international events. In 2005 he was a program chair of IJCAI, the premier conference on Artificial Intelligence. He participates or has participated in a dozen EU projects and networks. He is a member of ECAI and AAAI, and a senior member of IEEE.
