The role of intervening variables in driver–ACC cooperation
Introduction
Adaptive Cruise Control (ACC) is a kind of Intelligent Driver Support System (IDSS). Using sensors mounted on the front of the vehicle, the ACC can detect preceding vehicles and determine their range and speed. If a preceding vehicle is detected, the speed of the ACC-equipped car is adjusted to maintain a pre-set headway time; if not, the ACC maintains a pre-set speed, just like traditional cruise control. Thus, both the driver and the ACC are able to perform longitudinal control tasks (speed and headway regulation) simultaneously, and this simultaneous control may lead to interference. For example, if a preceding vehicle brakes suddenly (Interference 1, normally managed by driver–ACC cooperation), the deceleration produced by the device may be insufficient to avoid a collision (Interference 2, related to the braking power of the device). This second type of interference requires the driver to reclaim control of the vehicle from the device and brake hard to avoid a collision.
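To make the two regulation regimes concrete, the sketch below outlines the longitudinal control logic described above. It is a minimal illustration in Python; the function, parameter names (e.g., set_speed_mps, headway_time_s) and the proportional gap rule are assumptions for illustration, not the controller of any particular ACC product.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class RadarTarget:
    """A preceding vehicle as reported by the front-mounted sensor."""
    range_m: float      # distance to the preceding vehicle (m)
    speed_mps: float    # absolute speed of the preceding vehicle (m/s)


def acc_target_speed(own_speed_mps: float,
                     set_speed_mps: float,
                     headway_time_s: float,
                     target: Optional[RadarTarget],
                     gain: float = 0.5) -> float:
    """Return the speed the ACC will try to reach on this control cycle.

    - No target detected: behave like conventional cruise control and
      hold the pre-set speed.
    - Target detected: adjust speed so that the gap converges towards
      the pre-set headway time (simple proportional rule, illustrative only).
    """
    if target is None:
        return set_speed_mps

    desired_gap_m = headway_time_s * own_speed_mps
    gap_error_m = target.range_m - desired_gap_m

    # Follow the preceding vehicle's speed, corrected by the gap error;
    # never exceed the driver's pre-set speed.
    command = target.speed_mps + gain * gap_error_m
    return min(command, set_speed_mps)


# Example: 30 m/s set speed, 1.5 s headway, preceding car at 40 m doing 25 m/s.
print(acc_target_speed(28.0, 30.0, 1.5, RadarTarget(range_m=40.0, speed_mps=25.0)))
```

A real device's deceleration authority is limited, which is precisely what produces Interference 2: when the required braking exceeds that authority, the driver must reclaim control and brake.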
Such driver–ACC interaction can be considered from the perspective of Human–Machine Cooperation (HMC). Cooperation is an interference management activity that supplements an agent's individual activities (Loiselet and Hoc, 2001). According to Castelfranchi (1998), interference occurs when “… the effects of the action of one agent are relevant for the goals of another …” (p. 161). Two agents are in a cooperative situation only if their individual activities may interfere with one another and only if at least one of the agents tries to manage that interference in order to facilitate either an individual activity or a common task (Hoc, 2001, Hoc, 2005).
Hoc and Blosseville (2003) have described several modes of cooperation between drivers and automation. Cooperation with ACC is in the control mode category, which is itself part of the function delegation mode. In this mode, drivers can delegate the longitudinal control to the ACC as long as they desire and can reclaim control from the ACC at any moment, based on their initial evaluation of the context and/or their subsequent evaluations concerning (i) the actions that must be taken to manage the real or potential interferences detected, and (ii) the agent that can best perform those actions. The adaptability of the human–machine system depends on the know-how-to-cooperate skills of the agents (Millot and Hoc, 1997). “Know-how-to-cooperate” requires both being able to detect and manage interference and being able to facilitate the achievement of other agents’ goals (Millot and Lemoine, 1998). Several variables can intervene in operator decision-making activities and thus must be taken into account, given that the “best” task allocation decisions result from the interaction between an agent's know-how-to-cooperate and these intervening variables.
Intervening variables have been described by Muir (1994) as hypothetical constructs that cannot be directly observed because they “reside in the human mind”, but that nonetheless “mediate the human's observable responses to environmental stimuli” (p. 1099). Muir named trust as one of the mediators in an operator's use of automation. Muir and Moray (1996) later showed that trust in automation could be positively correlated with the time operators spent using automation: in other words, operators tend to use automation when they trust it; when they do not trust the automation, they prefer to do the task manually.
Riley's initial theory of operator reliance on automation (see Riley, 1994, Riley, 1996) would appear to be relevant to the analysis of intervening variables in the use of automation and in operator cooperation. Riley hypothesized that operators decide to rely on automation based not only on their level of trust in automation, but also on their level of self-confidence. More precisely, Riley argued that operators will do the task manually if they have more confidence in their own ability to do the task than they have trust in the automated device; on the other hand, if their trust in automation is higher than their level of self-confidence, operators will rely on automation. Lee and Moray (1994) had already shown that automation use patterns are better explained by taking self-confidence and trust in automation into account; however, as Riley (1996) emphasized, the relationship between trust in automation, self-confidence, and use of automation is also mediated by other factors, including operator workload and the level of risk associated with the situation. Fig. 1 presents a schematic diagram of Riley's initial theory of operator reliance on automation.
As the figure shows, operator confidence is influenced, among other factors, by perceived workload and perceived risk, while reliance on automation is influenced by confidence and by trust in automation, which is itself influenced by machine accuracy.
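As a reading aid, the relationships in Fig. 1 can be caricatured as a simple decision rule. The sketch below is illustrative only: the threshold comparison, the linear weights, and the signs of the workload and risk influences are assumptions, not Riley's quantitative model.

```python
def predicted_reliance(machine_accuracy: float,
                       perceived_workload: float,
                       perceived_risk: float,
                       w_workload: float = 0.5,
                       w_risk: float = 0.5) -> bool:
    """Caricature of Riley's reliance framework (all inputs scaled to [0, 1]).

    Assumptions for illustration only:
    - trust tracks perceived machine accuracy;
    - self-confidence in manual control is eroded by perceived workload
      and perceived risk (linear weighting);
    - the operator relies on automation when trust exceeds self-confidence.
    """
    trust = machine_accuracy
    self_confidence = 1.0 - (w_workload * perceived_workload + w_risk * perceived_risk)
    return trust > self_confidence


# Example: accurate automation plus heavy workload -> reliance predicted.
print(predicted_reliance(machine_accuracy=0.9, perceived_workload=0.8, perceived_risk=0.4))
```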
In the following sections, the intervening variables mentioned above (trust, self-confidence, perceived workload, and perceived risks) are described in detail.
For several authors (e.g., Zuboff, 1988; Sheridan, 1992; Parasuraman and Riley, 1997), trust is particularly important in the domain of human supervisory control. Indeed, trust in automation has been shown to play a role in its use (Lee and Moray, 1992; Dassonville et al., 1996; Muir and Moray, 1996; Bisantz and Seong, 2001). Based on a review of the existing literature (e.g., Muir, 1994; Lee and See, 2004), the definition of trust proposed in this article is appropriate for analyzing the decision-making situations encountered during human–machine interactions in which human operators can choose whether or not to delegate function(s) to the automated part of the system. Trust is defined as a psychological state (e.g., Rousseau et al., 1998) resulting from knowledge, beliefs, and assessments (e.g., Castelfranchi and Falcone, 2000) related to the decision-making situation, which creates confident expectations (e.g., Corritore et al., 2003).
Operators have expectations of themselves, the automation, their cooperation with the automation, and the overall human–machine system performance. Their expectations of themselves relate to self-confidence, or in other words, to their “anticipated performance during manual control” (Lee and Moray, 1994, p. 154) or to the “perceived reliability of manual control” (Dzindolet et al., 2001, p. 8). Expectations of the automation are essentially related to its competence, the operator perceptions that the automated control will, or will not, perform its function properly (Muir and Moray, 1996). Rajaonah et al. (2006a) assumed that expectations of cooperation with automation include expectations about the cooperation itself (e.g., perceived quality of interaction) and expectations about the results of the cooperation (e.g., decreased stress). According to these authors, this last type of expectation may correspond to another kind of trust—trust in the cooperation—while expectations related to the system performance may correspond to overall trust in the human–machine system; trust in the cooperation and overall trust are assumed to play an important role in the operator's choice of automated or manual control.
Trust may be too high, too low or just right, depending on the situation: in the first two cases, the performance of the human–machine system will not be optimized (Muir, 1994; Dzindolet et al., 2003). Thus, learning to calibrate trust may be a part of acquiring “know-how-to-cooperate” in order to rely appropriately on automation. Calibration is the correspondence between operator trust in the automation and the automation's capabilities (Lee and Moray, 1994; Muir, 1994; Lee and See, 2004). Moreover, Castelfranchi and Falcone (2000) distinguished two kinds of trust attribution: (i) internal trust related to the evaluation of the other agent's “ability/competence”, and (ii) external trust resulting from the evaluation of the external conditions (i.e., whether the conditions are or are not propitious for “the performance and for its success”) (p. 6). Thus, if the tactical decision to use, or not to use, an automated device is based on trust, the level of trust for a given situation must be adjusted according to both the perception of anticipated performance (self performance, device performance, global performance of the joint system, and consequences of the cooperation) and the situation characteristics. However, as emphasized by Lee and See (2004), “trust guides—but does not completely determine—reliance” (p. 51). As mentioned earlier, the relationship between operator trust and the use of automation is mediated by other factors, such as perceived workload and perceived risks (Riley, 1996).
De Waard (1996) defined workload by linking it to task demands and effort. “Task demands are determined by goals that have to be reached by performance. […] Workload is the result of the reaction to demand; it is the capacity that is allocated to task performance. Effort is a voluntary mobilization process of resources” (De Waard, 1996, p. 17). The subjective aspect of workload is the mental effort that the operator is conscious of making, the attention deliberately allocated to performing a task (Kahneman, 1973).
According to De Waard, whose research focuses on the driving task, automation is one of the environmental factors that can affect driver workload. Stanton et al. (1997) noted that removing drivers from the control loop may lead to decreased attention to the overall driving task. Drivers are no longer in the control loop of a driving subtask once that subtask is delegated to the automated device, which carries it out autonomously. The result may be that some subtasks are less well controlled: for example, poor control over lane position (Ward et al., 1995), excessively hard braking (Hoedemaeker and Brookhuis, 1998), and delayed responses to emergency situations (Rudin-Brown and Parker, 2004). Paradoxically, the consequence of this decreased attention to the overall task is that drivers may be overloaded in emergency situations (Stanton et al., 1997), in terms of both lateral and longitudinal control.
One effect of increased workload that is often cited in the literature about human–machine systems is decreased vigilance. Oken et al. (2006) noted that the term vigilance has various definitions. Indeed, psychologists and cognitive neuroscientists define it as sustained attention; for animal behavior scientists and psychiatric clinicians, “vigilance” refers to attention to potential threats; and clinical neurophysiologists use the term “vigilance level” to refer to “arousal level on the sleep-wake spectrum without any mention of cognition or behavioral responsiveness” (Oken et al., 2006, p. 1885). Nevertheless, it seems that vigilance and arousal can be functionally distinguished: “vigilance is associated with attentional availability, whereas arousal is independent of attention and is based on neuronal activation” (Tassi et al., 2003, p. 83). Indeed, from a psychological perspective, the constructs of vigilance and attention overlap: more precisely, vigilance is the ability to sustain attention on a task over a period of time (Davies and Parasuraman, 1982), and the vigilance decrement marks the decline in the quality of sustained attention over time (Mackworth, 1948).
Clearly, vigilance is required for monitoring automation, but maintaining vigilance has a cost in terms of workload (Parasuraman et al., 1996). Thus, as Bainbridge (1983) ironically noted, implementing automation in an effort to reduce workload may paradoxically increase it, owing to the greater cognitive workload associated with monitoring the automated device.
Furthermore, vigilance and trust in automation may be closely linked (Lee and See, 2004). Muir and Moray (1996) observed that the more operators trust automation, the less often they monitor it. Parasuraman et al. (1993) showed that multitask situations requiring that attention be distributed over many sources may produce complacent operator behavior, with the result that operators fail to detect problems in the automated control of a system-monitoring task. Such results are often interpreted in terms of trust (see Lee and See, 2004). It seems that the higher the trust in automation, especially highly reliable automation, the more failures will pass undetected.
In the present paper, perceived workload is assumed to have several aspects from the operator's viewpoint: the mental effort and the attention required to do the task, as well as decreases in vigilance. If operators perceive that using automation degrades the human–machine system's performance (by increasing the effort required, increasing the attention required, or decreasing vigilance), they may prefer to use manual control. Clearly, if the operators’ perceived workload is too high (whatever the cause), the risk is that they will not choose to use the automated device, even when using it would improve the overall performance of the human–machine system.
Risk is another factor linked to trust, particularly with regard to the decision to use, or not to use, an automated device. Indeed, an operator deciding to use the automatic control becomes “a risk-acceptant agent” (Castelfranchi and Falcone, 2000, p. 8). “A situation is experienced as being full of risk when a person expects that, in the future, he/she might eventually experience negative results that he/she cannot control, as the result of this situation” (Numan, 1998, p. 48). According to Numan, perceived risk depends on both the expected probability of a negative situation (e.g., the perceived probability of a collision with the preceding car attributable to the use of an antilock brake system) and the seriousness of that situation.
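Numan's two components are often combined, in risk analysis more broadly, as an expected-severity product; the product form below is an illustrative assumption, since the text only states that both components matter:

perceived risk = P(negative outcome) × S(seriousness of that outcome)

Under this reading, a rare but severe failure (e.g., the device failing to detect a valid target) and a frequent but benign one can yield comparable levels of perceived risk.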
Concerning the use of automated control, the perceived risks are primarily related to the riskiness of the device, or in other words, the probability of unexpected outcomes when using the device. If the device is perceived to be risky, then operators will ask themselves what negative outcomes might occur—either as a result of possible failures or as a result of the actual limitations of automation (i.e., even the most sophisticated machine is not infallible)—and how serious the consequences of these outcomes might be. Above all, operators must decide if they will be able to react correctly should these negative outcomes occur. Clearly, any risk taken must remain within acceptable limits (Castelfranchi and Falcone, 2000; Luhmann, 2000). Therefore, logically, perceived risks are also closely linked to self-confidence, or the operators’ confidence in their own capabilities. This link is graphically illustrated by Riley's initial theory of reliance on automation (see Fig. 1).
The link between trust and risk lies in the function that trust serves. Trust presupposes a situation of risk (Shapiro, 1987; Brower et al., 2000) that arises when individuals are confronted with a choice and have incomplete knowledge concerning the possible outcomes of their choice (Luhmann, 2000). For Lewis and Weigert (1985, p. 969), trust is the mechanism that minimizes the feeling of risk by allowing people to live “as if certain rationally possible futures will not occur”; for Numan (1998), trust allows people to anticipate the future “assuming that [it] is certain” (p. 32). In other words, according to these authors, trust relies on anticipation, which consists of mentally reducing the possibility of negative outcomes resulting from an individual choice. Thus, to trust is to temporarily believe that problems will not occur, that the future will be all right. The higher the perceived risk, the greater the trust required (e.g., Brower et al., 2000).
Given the influence of perceived risk on confidence, trust, and reliance on automation, this factor must be taken into account when seeking to explain the task allocation decision-making process. Correctly evaluating the risks, as well as the means of managing them, requires abilities that may be acquired along with the know-how-to-cooperate skill.
This paper examines how the intervening variables mentioned above influence the way the driver interacts with an ACC device. Clearly, when negotiating the risks associated with the use of ACC—that the device will not detect valid targets, for example—drivers must ask themselves whether they will be able to deal with the negative outcomes should such outcomes occur. If the driver is not familiar with the ACC, the attentional demand may well be very high. Certainly, above and beyond the attention allocated to road monitoring, attention must be allocated to ACC use (monitoring the interface and manipulating the command buttons). Added to this already complicated mixture is the notion of trust. As Ashleigh and Stanton (2001) have shown, driver trust may influence use of ACC; a more recent experiment has found a positive correlation between overall driver trust and the time spent using the device (Rajaonah et al., 2006a). For these reasons, the interplay of the intervening variables is important and merits close examination.
Method
The main objective of this study was to investigate how trust in ACC, trust in the cooperation with ACC, self-confidence, perceived workload, and perceived risk could explain the way drivers use and cooperate with an ACC device, allowing the various driver behaviors to be differentiated. A driving simulator was chosen for the experiment because such equipment allows the same driving scenarios to be used with all the participants. A questionnaire completed after the experimental run was used to assess the effect of these intervening variables.
Results
The data were analyzed using XLSTAT 2006 (Addinsoft). All statistical tests were two-tailed, with an alpha level of .05. Because the data did not follow a normal distribution (as shown by the Shapiro–Wilk test), the non-parametric Mann–Whitney U-test was used to assess the significance of differences between the groups. For the same reason, the Spearman correlation coefficient, rather than the Pearson coefficient, was used to study the links between the variables.
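For readers who wish to reproduce this kind of analysis, the sketch below applies the same test choices with SciPy rather than XLSTAT; the arrays, group sizes, and scores are placeholders, not the study's data.

```python
import numpy as np
from scipy import stats

# Placeholder data: e.g., ACC use rate (proportion of the run with ACC engaged)
# for the two a posteriori groups.
high_use = np.array([0.82, 0.91, 0.77, 0.88, 0.95, 0.80])
low_use = np.array([0.35, 0.42, 0.28, 0.51, 0.39])

# 1. Normality check (Shapiro-Wilk); a significant result (p < .05) argues
#    against parametric tests such as Student's t.
print("Shapiro-Wilk:", stats.shapiro(high_use), stats.shapiro(low_use))

# 2. Two-tailed Mann-Whitney U test for the group comparison.
print("Mann-Whitney U:", stats.mannwhitneyu(high_use, low_use, alternative="two-sided"))

# 3. Spearman rank correlation between two measures, e.g., a trust-in-ACC
#    questionnaire score and the time spent using the device (placeholders).
trust_scores = np.array([4.1, 3.5, 4.8, 2.9, 3.8, 4.4, 2.2, 3.0, 4.9, 3.6])
use_rate = np.array([0.80, 0.55, 0.90, 0.40, 0.70, 0.85, 0.30, 0.50, 0.95, 0.60])
print("Spearman:", stats.spearmanr(trust_scores, use_rate))
```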
Discussion
This paper analyzed how intervening variables such as trust, perceived workload and perceived risk could explain driver reliance on ACC. The study was carried out on a driving simulator, and a questionnaire was used to try to assess the effect of the intervening variables.
The participants were divided a posteriori into two groups, depending on their ACC use rate during the experimental run: 26 participants were assigned to the high-use group, and 16 were assigned to the low-use group. High-use
Acknowledgments
This research was done as part of ARCOS (the French acronym for the Driving Safety Research Program) with the financial support of the Ministries of Research, Transportation, and Industry.
Special thanks to M.P. Pacaux-Lemoine, J. Floris, P. Simon, and J.C. Popieul.
References (50)
Ironies of automation. Automatica (1983).
Assessment of operator trust in and utilization of automated decision-aids under different framing conditions. International Journal of Industrial Ergonomics (2001).
A model of relational leadership: the integration of trust and leader–member exchange. Leadership Quarterly (2000).
Modelling social action for AI agents. Artificial Intelligence (1998).
On-line trust: concepts, evolving themes, a model. International Journal of Human-Computer Studies (2003).
Trust between man and machine in a teleoperation system. Reliability Engineering and System Safety (1996).
The role of trust in automation reliance. International Journal of Human-Computer Studies (2003).
Towards a cognitive approach to human–machine cooperation in dynamic situations. International Journal of Human-Computer Studies (2001).
Behavioural adaptation to driving with an adaptive cruise control (ACC). Transportation Research Part F (1998).
Trust, self-confidence, and operators’ adaptation to automation. International Journal of Human-Computer Studies (1994).
Vigilance, alertness, or sustained attention: physiological basis and measurement. Clinical Neurophysiology.
Behavioural adaptation to adaptive cruise control (ACC): implications for preventive strategies. Transportation Research Part F.
Making adaptive cruise control (ACC) limits visible. International Journal of Human-Computer Studies.
Drive-by-wire: the case of driver workload and reclaiming control with adaptive cruise control. Safety Science.
Trust: key elements in human supervisory control domains. Cognition, Technology and Work.
The Psychology of Vigilance.
Cooperation between human cognition and technology in dynamic situations.
Foundations for an empirically determined scale of trust in automated systems. International Journal of Cognitive Ergonomics.
Attention and Effort.
Trust, control strategies and allocation of function in human–machine systems. Ergonomics.