
Information Sciences

Volume 245, 1 October 2013, Pages 218-239

Human-inspired model for norm compliance decision making

https://doi.org/10.1016/j.ins.2013.05.017

Abstract

One of the main goals of the agent community is to provide a trustworthy technology that allows humans to delegate specific tasks to software agents. Frequently, laws and social norms regulate these tasks. As a consequence, agents need mechanisms for reasoning about these norms in the same way as the user who has delegated the task to them would. Specifically, agents should be able to balance these norms against their internal motivations before taking action. In this paper, we propose a human-inspired model for making decisions about norm compliance based on three factors: self-interest, enforcement mechanisms and internalized emotions. Different agent personalities can be defined according to the importance given to each factor. These personalities have been compared experimentally, and the results are reported in this article.

Introduction

One of the main goals of the agent community is to provide a trustworthy technology that allows humans to delegate specific tasks to software agents. Achieving this goal requires that software agents take into account the legislation, social norms and other regulations that govern the task entrusted to them.

Humans do not always follow norms. Instead, deliberate and rational violation of norms is a behaviour that can be observed in all human societies [9]. Thus, software agents must be able to make decisions about norm compliance in the same way as the user who entrusted the task to them. Otherwise, the results obtained by the agent would not make sense to the user, who might then refuse to delegate further tasks to software agents. For these reasons, we consider it crucial to develop procedures that allow software agents to make decisions about norm compliance.

The existing literature has not proposed procedures that allow users to configure their agents to make decisions about norm compliance as the users themselves would. For example, in some works, such as [7], [3], the decisions about norm compliance are based on rigid procedures defined off-line by the agent designer and hard-wired into the agents. Such static procedures assume that the best decision can be determined off-line for all circumstances. Other works, such as [5], [22], propose mechanisms for making on-line decisions about norm compliance. Specifically, these mechanisms consider the effects of violating and obeying norms on the agent's goals. However, work in psychology [17] claims that norm compliance is not explained by rational motivations alone; i.e., by the impact of norms and their enforcement procedures (sanctions and rewards) on the agent's goals. In addition, there are emotional motivations, such as shame or pride, that sustain norm compliance in human societies. For this reason, we consider it necessary to endow software agents with mechanisms for making decisions about norm compliance by balancing rational and emotional criteria, just as humans do.

With the aim of contributing towards the resolution of this open issue, this article proposes a set of functions that allow agents to determine their willingness to comply with norms according to rational and emotional factors. The way in which agents take these motivations into account makes it possible to model different agent personalities. Moreover, we have carried out several experiments to illustrate the performance of these functions and the behaviour exhibited by the different agent personalities. This article is organized as follows: Section 2 describes how the existing literature has addressed the problem of making decisions about norm compliance in software agents; Section 3 introduces the running example used in this paper; Section 4 contains the basic definitions used in this paper; Section 5 describes the functions that allow agents to make decisions about norm compliance; Section 6 describes the experiments that have been carried out; and Section 7 contains conclusions and future work.

Section snippets

Related work

Conte et al. defined in [11] a norm-autonomous agent as an agent whose behaviour is influenced by norms that are explicitly represented inside its mind. The delegation of complex, dynamic and realistic tasks to software agents makes the explicit representation of norms in agent minds necessary. Moreover, agents with an explicit representation of norms are able to belong to different societies, to communicate norms and to reason about them [22]. Therefore, norm-autonomous agents should have

Running example

Throughout this paper we use an example to illustrate the human-inspired model for norm compliance decision making. The example consists of a software agent, called the assistant, that draws up traffic routes according to the preferences that a human user has specified. These preferences may include time constraints, consumption requirements, avoidance of toll roads, and so on. Therefore, the routes suggested by the assistant agent indicate not only the particular ways or directions
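As a concrete, purely illustrative sketch of the kind of preference profile the assistant could work with, consider the following; the field names, units and filtering rule are assumptions made here and are not part of the paper's model.

```python
from dataclasses import dataclass

@dataclass
class UserPreferences:
    """Hypothetical preference profile supplied by the human user."""
    max_travel_time_min: float      # time constraint, in minutes
    max_fuel_consumption_l: float   # consumption requirement, in litres
    avoid_toll_roads: bool          # whether toll roads should be avoided

@dataclass
class Route:
    """Hypothetical candidate route produced by the assistant."""
    travel_time_min: float
    fuel_consumption_l: float
    uses_toll_roads: bool

def satisfies_preferences(route: Route, prefs: UserPreferences) -> bool:
    """Check whether a candidate route respects the user's preferences."""
    if route.travel_time_min > prefs.max_travel_time_min:
        return False
    if route.fuel_consumption_l > prefs.max_fuel_consumption_l:
        return False
    if prefs.avoid_toll_roads and route.uses_toll_roads:
        return False
    return True
```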

Preliminaries

The purpose of this paper is not to propose, compare or improve existing norm or agent models, but to make use of these models and propose a set of functions that can be used for making decisions about norm compliance based on rational and emotional criteria. The aim of this section is to provide the reader with the basic notions of norm, instance and normative agent that are used in this paper.

Reasoning about norms

In the running example used in this article, the assistant agent must decide to what extent instances are observed or violated in the proposed routes (deliberation step). The mechanism responsible for this task should consider the salience of norms, the relevance of each concrete instance and the user preferences. This section proposes the rules and functions that allow Normative BDI agents to address this complex issue.
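Although the paper's actual rules and functions are not reproduced in this snippet, the following is a minimal sketch of how a deliberation step might weigh norm salience and instance relevance against the user's preference for violating the norm; the scoring scheme, weights and threshold are illustrative assumptions, not the paper's definitions.

```python
def compliance_score(norm_salience: float,
                     instance_relevance: float,
                     preference_for_violation: float,
                     weight_norm: float = 0.6,
                     weight_pref: float = 0.4) -> float:
    """Toy deliberation rule: weigh the normative pressure of an instance
    (salience of the norm times relevance of this concrete instance)
    against the user's preference for violating it.

    All inputs are assumed to lie in [0, 1]; the weights are hypothetical.
    Returns a value in [-1, 1]; positive values favour compliance."""
    normative_pressure = norm_salience * instance_relevance
    return weight_norm * normative_pressure - weight_pref * preference_for_violation

def decide_compliance(norm_salience: float,
                      instance_relevance: float,
                      preference_for_violation: float) -> bool:
    """Comply with the instance when the score favours compliance."""
    return compliance_score(norm_salience, instance_relevance,
                            preference_for_violation) >= 0.0
```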

The process by which agents extend their mental state according to their

Experimental results

This section compares the performance of the different agent types with respect to their decisions about norm compliance, which are modelled using the willingness function (f_w). As previously mentioned, the value of f_w is obtained by combining, as a weighted average, the values of the three factor-specific willingness functions (one each for self-interest, enforcement and internalized emotions). The weights that each agent gives to these factors characterize each agent type. The experiments contained in this section are aimed at illustrating the performance of
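A minimal sketch of such a weighted combination is given below; the personality names, weight vectors and factor value ranges are assumptions made for illustration, since the paper's exact definitions of the factor-specific functions are not reproduced in this snippet.

```python
from typing import Dict

# Hypothetical personality profiles: weights given to the three factors
# (self-interest, enforcement mechanisms, internalized emotions).
PERSONALITIES: Dict[str, Dict[str, float]] = {
    "self-interested": {"self_interest": 0.8, "enforcement": 0.1, "emotions": 0.1},
    "fearful":         {"self_interest": 0.1, "enforcement": 0.8, "emotions": 0.1},
    "emotional":       {"self_interest": 0.1, "enforcement": 0.1, "emotions": 0.8},
    "balanced":        {"self_interest": 1/3, "enforcement": 1/3, "emotions": 1/3},
}

def willingness(factor_values: Dict[str, float], weights: Dict[str, float]) -> float:
    """Overall willingness to comply, computed as a weighted average of the
    three factor-specific willingness values (each assumed to lie in [0, 1])."""
    total_weight = sum(weights.values())
    return sum(weights[f] * factor_values[f] for f in weights) / total_weight

# Example: the same situation judged by two different personalities.
situation = {"self_interest": 0.2, "enforcement": 0.9, "emotions": 0.6}
print(willingness(situation, PERSONALITIES["self-interested"]))  # 0.31, low willingness
print(willingness(situation, PERSONALITIES["fearful"]))          # 0.80, high willingness
```

Under this sketch, varying only the weight vector changes the compliance decision for identical inputs, which is the sense in which the weights characterize an agent's personality.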

Conclusions

This paper addresses a central question: whether it is possible to develop software agents that consider norms as humans would. In response to this issue, this paper proposes a human-inspired model that allows software agents to consider both their preferences and the repercussions of norms when they determine their willingness to comply with norms. The repercussion of norms is not only defined in terms of the utility of norms and the economic cost (vs. benefit) of the sanctions (vs. rewards),

Acknowledgments

This paper was partially funded by the Spanish government under Grants CONSOLIDER-INGENIO 2010 CSD2007-00022, TIN2009-13839-C03-01, TIN2008-06701-C03-03, TIN2008-04446 and by the FPU Grant AP-2007-01256 awarded to N. Criado. This research has also been partially funded by the Generalitat de Catalunya under the Grant 2009-SGR-1434 and Valencian Prometeo Project 2008/051.

References (28)

  • J. Broersen et al., The BOID architecture – conflicts between beliefs, obligations, intentions and desires.

  • C. Castelfranchi et al., Deliberative normative agents: principles and architecture, Intelligent Agents VI. Agent Theories, Architectures, and Languages (2000).

  • R. Conte et al., Autonomous norm acceptance, LNCS (1999).

  • N. Criado et al., Normative deliberation in graded BDI agents.