A method for team intention inference

https://doi.org/10.1016/S1071-5819(03)00011-9

Abstract

Recent advances in man–machine interaction include attempts to infer operator intentions from operator actions, to better anticipate and support system performance. This capability has been investigated in contexts such as intelligent interface design and operation support systems. While some progress has been demonstrated, efforts to date have focused on a single operator. In large and complex artefacts such as power plants or aircraft, however, a team generally operates the system, and team intention is not reducible to a mere summation of individual intentions. It is therefore necessary to develop a team intention inference method for sophisticated team–machine communication. In this paper a method is proposed for team intention inference in process domains. The method uses each member's expectations of the other members as clues to infer a team intention, and describes it as a set of individual intentions and beliefs about the other team members. We applied it to a plant simulator operated by a two-person team, and showed that, at least in this context, the method is effective for team intention inference.

Introduction

Along with the introduction of advanced automation technology, the reliability and safety of today's technological systems have been enhanced, and the workload of human operators has often been greatly reduced. On the other hand, these systems have become increasingly complicated, and their behaviours have become increasingly invisible to human operators. Even though new types of man–machine interfaces have been proposed and implemented in real systems, there are cases in which human operators cannot understand the behaviour of these automated systems, because the basic function of the man–machine interfaces is limited to information exchange, lacking the more conceptual and intentional aspects of communication that enable humans to manage cooperative work efficiently (Paris et al., 2000; Hutchins, 1995). The introduction of automation also raises a difficult problem in human–machine relations. As for the final authority of decision making, human-centred automation (Hollnagel, 1995) has been widely acknowledged, because it is difficult to anticipate every situation beforehand in system design and an automated system cannot take responsibility for accidents. However, humans do not always make optimal decisions because of the limitations of their cognitive capabilities, and thus the possibility of human error cannot be eradicated (Reason, 1990). There may be no straightforward answer to the question of whether humans or machines should hold final authority. In fact, invisible behaviours of automated systems and a poor relationship between humans and machines can cause serious problems, and sometimes lead to critical accidents (e.g. TMI, the airplane crash at Nagoya airport).

In order to enhance the reliability and safety of highly automated systems, it is important to develop a means of man–machine communication sophisticated enough that humans and machines can understand the processes behind each other's behaviours and decisions, and can establish cooperative relations that allow them to perform required tasks complementarily.

Research has already been carried out on intent inferencing or plan recognition methods that make an interactive system a more cooperative partner in the user's task through plan completion, error handling, information management, and so on (Goodman and Litman, 1992; Rubin et al., 1988; Hollnagel, 1992). This research has concerned only a single person's intention and does not address social or team situations in which the intentions of others bear on one's own; in other words, it has focused solely on man–machine communication. In large and complex artefacts such as power plants and aircraft, however, a team operates the system. We therefore have to deal with the team intention behind cooperative activities, which is not the same as a mere summation of individual intentions. The present study aims to develop a method of team intention inference for sophisticated team–machine communication in process domains.

In Section 2, team intention is explained in terms of individual intention and mutual belief, based on philosophical arguments regarding cooperative activities. In Section 3, the team intention inference method is explained in detail. In Section 4, the implementation of our method for a plant simulator, the dual reservoir system simulation (DURESS), is explicated. In Section 5, inference results of the proposed method, applied to log data from a plant simulator operated by a two-person team, are presented and discussed. Finally, conclusions are given in Section 6.

Section snippets

Team intention

In this section, we provide an explanation of team intention based on philosophical arguments regarding cooperative activities. Many discussions and analyses of various notions of intention can be found. Such analyses typically concern the intention of a single person and do not seriously address team situations, and most conventional intention inference methods are likewise based on a single-person model. It is commonplace, however, that one's group or its members affect one's intention,
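For a two-person team, the resulting notion of a we-intention can be written compactly. In the notation used for this formulation, X denotes the joint task, X_A agent A's part of X, and I_A and B_A agent A's intention and belief operators respectively:

```latex
% We-intention between A and B for a joint task X split into parts X_A and X_B:
% each member intends his or her own part, believes the other will perform the
% other's part, and believes that the other holds the symmetric belief.
\[
WI_A = I_A X_A \wedge B_A X_B \wedge B_A B_B X_A \quad \text{(in A's mind)}
\]
\[
WI_B = I_B X_B \wedge B_B X_A \wedge B_B B_A X_B \quad \text{(in B's mind)}
\]
```

A team intention thus consists not only of individual intentions but also of interlocking beliefs about the other members, which is what distinguishes it from a mere summation of individual intentions.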

Method for team intention inference

When an observer outside the team tries to understand team behaviour from an individual perspective, he or she infers the individual intentions and beliefs of the constituents in order to interpret the collective behaviour, and then identifies a team intention by checking the consistency among the inferred mental components of each constituent. Modelling this human technique of inferring a team intention from a bottom-up viewpoint, we propose a method that identifies a set of intentions and
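The bottom-up consistency check described above can be sketched as follows. This is an illustrative sketch only, not the authors' implementation: the class names, the string labels for subtasks, and the two-operator example are all assumptions introduced for this example.

```python
from dataclasses import dataclass, field

# Illustrative sketch: each team member's inferred mental state holds an
# individual intention plus beliefs about the other members' intentions.
# A candidate team intention is accepted only if these components are
# mutually consistent across all members.

@dataclass
class MentalState:
    member: str
    intention: str                               # inferred individual intention (e.g. a subtask)
    beliefs: dict = field(default_factory=dict)  # member name -> believed intention of that member

def consistent_team_intention(states):
    """Return True if every member's beliefs about the others match the
    others' inferred individual intentions (the bottom-up consistency check)."""
    by_member = {s.member: s for s in states}
    for s in states:
        for other, believed in s.beliefs.items():
            if other not in by_member or by_member[other].intention != believed:
                return False
    return True

# Hypothetical two-person example in the spirit of the DURESS study:
# operator A handles flow control while believing B handles heater control,
# and B holds the symmetric intention and belief.
a = MentalState("A", "control_flow", {"B": "control_heater"})
b = MentalState("B", "control_heater", {"A": "control_flow"})
print(consistent_team_intention([a, b]))  # consistent: a team intention is identified
```

When a member's belief about another diverges from that member's actual inferred intention, the check fails, which is exactly the situation where a team intention cannot be ascribed and the behaviours must be interpreted as merely individual.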

Implementation

We used a thermal-hydraulic process simulation (DURESS) to generate decision plans and test data. In this section, the system architecture applied to this simulation is explicated.

Validation

The proposed method was applied to the operation of a plant simulator (DURESS), in which two operators cooperatively controlled the system. The inference ability of the method was evaluated from two standpoints: we compared the results of the proposed method with (i) the actual team intention obtained in a post-scenario interview and (ii) a human observer's inference results. The performance of the proposed method was also compared with that of a mere summation of individual intention inferences.

Conclusion

The major contribution of this study is a method for team intention inference in process domains. The underlying assumption is that the mechanism of intention formation behind team cooperative activities differs from that of an individual, so that methods of individual intention inference are not directly applicable to team cooperative activities. The proposed method is based on the mechanism of team intention and uses expectations of the other members as clues to infer a

References (24)

  • E. Hollnagel, The design of fault tolerant systems: prevention is better than cure. Reliability Engineering and System Safety (1992)
  • K. Levi et al., Learning plans for an intelligent assistant by observing user behavior. International Journal of Man–Machine Studies (1990)
  • M.E. Bratman, Shared cooperative activity. The Philosophical Review (1992)
  • M.S. Carberry, Plan Recognition in Natural Language (1993)
  • M.S. Carberry, Techniques for plan recognition. User Modeling and User-Adapted Interaction (2001)
  • R. Conte, Institutions and intelligent systems
  • Devaney, M., Ram, A., 1998. Needles in a haystack: plan recognition in large spatial domains involving multiAgents....
  • Furuta, K., Sakai, T., Kondo, S., 1998. Heuristics for intention inferencing in plant operation. Proceedings of the 4th...
  • B.A. Goodman et al., On the interaction between plan recognition and intelligent interfaces. User Modeling and User-Adapted Interaction (1992)
  • Hatakeyama, N., Furuta, K., 2000. Bayesian network modeling of operator's state recognition process. Proceedings of the...
  • E. Hollnagel, Human Reliability Analysis (1993)
  • Hollnagel, E., 1995. Automation, Coping, and Control. Proceedings of Post HCI'95 Conference Seminar on Human–Machine...