
Trust in Automated Vehicles

What We Can Learn From Dynamic Web Service Design

Alexander G. Mirnig, Sandra Trösterer, Alexander Meschtscherjakov, Magdalena Gärtner and Manfred Tscheligi

From the journal i-com

Abstract

Increasing degrees of automation in on-road vehicles bear great potential for heightened driver safety and traffic efficiency in both the near and far future. The more the driver delegates control to the vehicle, the more salient the issue of trust in automated technology becomes. Misaligned trust can lead to mishandling of automation controls in individual instances and decrease the general acceptance of on-road automation on a broader scale. In this paper, we apply insights from trust research for dynamic web service interaction to the novel automated driving domain, in order to scope the problem space regarding trust in automated vehicles. We conclude that the appropriate communication of trustworthiness, the necessity to calibrate trust, the importance of intervention capabilities by the driver, and the unambiguous transparency of the locus of control are all important aspects when it comes to understanding trust in automated vehicles.

1 Introduction

On-road vehicles of today are equipped with a wide variety of more or less advanced functionalities, which follow the concept of driver assistance. Such driver assistance systems (DAS) range from more passive supportive systems, such as navigation systems or parking assistants, to systems actively interfering with the driving tasks, such as adaptive cruise control or lane changing assistants. The latter are also referred to as advanced driver-assistance systems (ADAS) and each ADAS could be seen to represent an individual step towards more and more automation and cooperation of human and system in the vehicles of the future [26], [4], [25].

One reason for increased automation in vehicles is purely service-driven. More sophisticated ADAS increase interaction comfort and provide a selling point for the vehicles within their respective market. There is a broader perspective as well, aimed at continuously reducing manual control while increasing automation capabilities at the same time, with the end result being fully automated and connected (i. e., autonomous) vehicles. There are several reasons for this push towards full on-road automation. The two arguably most important ones are (a) increased safety due to reduction of accidents caused by human error and (b) increased efficiency, which leads to less time spent on the road and, thus, reduced emissions [7], [4].

Reducing the potential for human error by reducing human involvement in the driving task essentially transitions the human from an active driver role into a more passive, passenger-like role. If this transition is to be successful in achieving higher on-road safety, then it is not enough for driving automation to be as capable as humans. Automated vehicles must be the better drivers and, perhaps just as important, humans must trust their vehicles. Automated vehicles need to perform at such a high level that humans are comfortable handing over more and more control in safety-critical and potentially life-threatening environments.

While it is not hard to imagine a fully automated and connected vehicle scenario, where all systems perform flawlessly without any error-prone humans in the loop, the reality is a different one. Automation technology and on-road traffic are still transitioning from manual to automated vehicles, meaning there is a mix of vehicles with different automation capabilities on the road. Connectivity and vehicle-to-vehicle communication systems and the appropriate infrastructure are still being developed. Most vehicles still rely on their own sensors – sensors that can still be prone to, e. g., weather or signal interference. It is, therefore, difficult to speak of trust in “the” automation in such an environment, as there is a multitude of agents in current traffic contexts an individual may or may not put trust in.

Figure 1: The Tesla Model S features a big built-in tablet screen in place of the “traditional” center stack. Image © Tesla, Inc., taken from [33].

Trust is not only a gatekeeper that makes people use (or not use, in the case of distrust) something. It is also an important factor for correct and safe interaction. A tragic demonstration of what a mismatch between trust in a system and that system’s actual capabilities can cause was provided by the now infamous Tesla incident of 2016 [16]. The sensors of a Tesla Model S did not detect a tractor-trailer crossing over into the lane of the Model S. The vehicle was in autopilot mode, causing the Tesla to crash into the trailer at full speed, killing the driver in the process. The sky behind the trailer was very bright at that time and the trailer itself was white. This combination made it indistinguishable from the sky for the vehicle’s sensors. While it was certainly an uncommon and difficult-to-anticipate configuration of circumstances, the bottom line is that the driver placed (for this specific situation) an inappropriate amount of trust in the vehicle’s capabilities; one that had fatal consequences.

Thus, it is important to thoroughly explore trust in automated vehicles as an essential prerequisite to reaching increased on-road safety. In this article, we provide a peek into this problem space and outline some of the challenges and possible approaches to tackle them. There is already research available on trust in technology but the specific topic of trust in relation to automated vehicles has not yet received as much attention. Thus, we draw further insights from related work on trust in technology, technology acceptance, and trust in automation. We highlight one specific use case of a dynamic web service platform, which placed trust as one of the main guiding factors in its interaction design. This use case was chosen due to its dynamic nature, which translates well into the somewhat nebulous area of mixed traffic. Beyond that, there is also the broader aspect of in-vehicle interaction becoming more and more similar to standard web interaction; many manufacturers are opting to integrate tablets or smartphone-like user interfaces in their vehicles (e. g., see Figure 1).

2 Related Work

In the following sections, we will touch on related literature regarding trust and trust in technology, technology acceptance and trust, and finally automated vehicles and trust.

2.1 Trust and Trust in Technology

Trust is a complex concept, often understood as either a type of interpersonal relationship or personal disposition towards others or contexts inhabited by other individuals [31], [2], [27]. The disposition or attitude can be understood as an expectation that certain conditions are fulfilled or goals are reached [19]. Such a relation or disposition of trust does not, however, necessarily involve only human individuals. Trust occurs between two types of agents, the trustor – the one who trusts – and the trustee – the one who is trusted [6]. When one of these agents, likely the trustee, is a technological artifact or a system, then this is referred to as “user-system trust” or “trust in technology”, as opposed to interpersonal trust.

There are several definitions of ‘trust’ in computational and technology contexts. Hoff and Bashir [17] state that most explanations of trust consist of three different components:

  1. A set of trustor, trustee, and something that is at stake

  2. An incentive for the trustee to perform the task

  3. The possibility for the trustee to fail the task.

The latter requirement is also found in Corritore et al. [9] and Patrick et al. [24], who explicitly state that trust occurs in situations where risk is involved. One of the most commonly cited and best known definitions that concisely captures the three requirements mentioned above was put forward by Lee and See [19]. They defined trust as “the attitude that an agent will help achieve an individual’s goals in a situation characterized by uncertainty and vulnerability.” This expectation in the other agent (the trustee) can either match or not match the trustee’s actions or capabilities. When a mismatch occurs, it can be characterized as either overtrust or undertrust. Overtrust refers to the expectations placed in the trustee being higher than said trustee’s capabilities. Undertrust is the exact opposite: the trustee’s capabilities are higher than the expectations. When the expectations match the trustee’s actual capabilities, this is referred to as calibrated trust [19]. In line with this, trust calibration is the act of ensuring that neither over- nor undertrust occurs [23]. Finally, meta trust refers to the trust a person has that another person’s trust in the trustee is appropriate [19].
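The distinction between overtrust, undertrust, and calibrated trust can be made concrete with a small sketch. The following Python snippet is purely illustrative: the numeric expectation and capability scores and the tolerance threshold are our own assumptions and not part of Lee and See’s definition; the function merely labels the relation between expectation and actual capability.

```python
def classify_trust(expectation: float, capability: float, tolerance: float = 0.05) -> str:
    """Label a trust relation using Lee and See's terminology (illustrative only).

    expectation: how capable the trustor believes the trustee to be (0..1)
    capability:  how capable the trustee actually is (0..1)
    tolerance:   how much mismatch still counts as calibrated (assumed value)
    """
    if expectation > capability + tolerance:
        return "overtrust"        # expectations exceed the trustee's capabilities
    if expectation < capability - tolerance:
        return "undertrust"       # the trustee's capabilities exceed the expectations
    return "calibrated trust"     # expectations roughly match actual capabilities


# Example: a driver who expects near-perfect performance (0.95) from a system
# that handles only 70 % of situations reliably is overtrusting it.
print(classify_trust(expectation=0.95, capability=0.70))  # -> "overtrust"
```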

2.2 Technology Acceptance and Trust

The question of whether a technology or system is accepted and integrated into its users’ daily lives has to be asked anew with every new technology or system that is introduced into society. In research, the Technology Acceptance Model (TAM) by Davis [10] has become a widely used key model to predict the potential acceptance or rejection of a technology or system. The model has been further developed and extended in various ways over the years, and trust was added as one contributing factor to the technology acceptance model by various researchers [20]. Based on the TAM, Constantinides et al. [8] propose a new model of technology acceptance that also integrates the degree of autonomy to predict the acceptance of Internet of Things (IoT) retail services. Similar to ADAS in the automotive domain, IoT technologies are ubiquitous, intelligent, and autonomous. The results emphasize that customer acceptance of IoT services decreases when technological autonomy grows. Further, they found positive direct effects of technology trust on the intention to accept the IoT services. Technology trust even gained relevance in situations where technological autonomy was high. These findings highlight that user perceptions of trust are important, especially when technologies are highly autonomous. Thus, both trust and degree of autonomy can be considered constituting factors for technology acceptance of automated systems.

2.3 Automated Vehicles and Trust

Vehicles are not classified in a binary manner depending on whether they are automated or not, but according to their degree of automation, as there are many different ways in which a vehicle can have assistive or automated functionalities. The most well-known classification of automated vehicles is provided in the SAE J3016 standard [32]. This standard provides a descriptive categorization of vehicles on the basis of their technical capabilities. Generally speaking, the higher the vehicle’s capabilities are, i. e., the more tasks it is capable of performing, the higher its level of automation. These capabilities include the safe operation of the vehicle (i. e., lateral and longitudinal control), as well as monitoring the environment.

The SAE scale defines six levels, ranging from Level 0 (no driving automation) to Level 5 (full driving automation). See Table 1 for a brief description of the levels.

Table 1

SAE-Levels of Driving Automation.

SAE-Level | Name | Vehicle... | Human driver...
0 | No Automation | Assistance systems, where present, do not interfere with the driving task. | performs the entire driving task.
1 | Driver Assistance | performs either lateral or longitudinal control. | performs the rest of the driving task.
2 | Partial Automation | performs lateral and longitudinal control. | supervises the automation system and monitors the driving environment.
3 | Conditional Automation | performs the entire driving task and monitors the driving environment. | responds to requests to intervene and is receptive to relevant output by the system, but is not required to actively monitor the environment or system performance.
4 | High Automation | performs the entire driving task and monitors the driving environment. Limited to certain driving environments. | is not required to respond to requests to intervene.
5 | Full Automation | performs the entire driving task and monitors the driving environment. Not limited to specific driving environments. | is not required to respond to requests to intervene.
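As a compact restatement of Table 1, the sketch below encodes who is responsible for which part of the driving task at each level. It is our own simplified paraphrase for illustration only and not a normative encoding of SAE J3016; the field names and chosen granularity are assumptions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class SAELevel:
    level: int
    name: str
    vehicle_control: str              # "none", "lateral or longitudinal", "lateral and longitudinal", or "entire driving task"
    vehicle_monitors_environment: bool
    driver_remains_responsible: bool  # human must drive, supervise, or respond to intervention requests
    environment_limited: Optional[bool] = None  # only specified for levels 4 and 5 in Table 1

# Simplified paraphrase of Table 1 (illustrative, not a normative encoding of the standard).
SAE_LEVELS = [
    SAELevel(0, "No Automation",          "none",                     False, True),
    SAELevel(1, "Driver Assistance",      "lateral or longitudinal",  False, True),
    SAELevel(2, "Partial Automation",     "lateral and longitudinal", False, True),
    SAELevel(3, "Conditional Automation", "entire driving task",      True,  True),
    SAELevel(4, "High Automation",        "entire driving task",      True,  False, True),
    SAELevel(5, "Full Automation",        "entire driving task",      True,  False, False),
]

for lvl in SAE_LEVELS:
    responsible = "human driver" if lvl.driver_remains_responsible else "vehicle"
    print(f"Level {lvl.level} ({lvl.name}): remaining responsibility -> {responsible}")
```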

These levels indicate, on the one hand, that it will be essential for the automated vehicle to communicate its driving state and, thus, what is expected from the driver. On the other hand, they show that there is more than one task the driver or the system has to perform (e. g., steering, accelerating, braking, monitoring the environment, supervising the system), each of which translates into different requirements for either the driver or the system.

Numerous considerations will affect the development, success, and adoption of automated vehicles in the near future. Trust, in close relation to technology failure, is one of the most important ones to consider on the drivers’ side. In order to prepare the future drivers for the realities of highly automated driving, the first part of that process must be “building trust and proficiency in today’s ADAS, and charting a vision toward the policies, technologies, and human-centered interactions that will support tomorrow’s driverless vehicles.” [28].

Several researchers have already investigated trust in the context of automated vehicles, but results are inconclusive. While Gold et al. [15], for example, found that the experience of a drive with a highly automated driving system increased the drivers’ self-reported trust in automated vehicles, Feldhütter et al. [13] did not discover any such effects. Also, studies on trust in automated vehicles are almost always conducted in driving simulator environments. However, the level of trust experienced in a real automated vehicle may differ significantly from the level of trust experienced in the safe surroundings of a driving simulator, since simulations do not entail the risk necessary for “true” trust relations [9], [24].

In order to shed some light onto aspects that are relevant for understanding trust in the context of autonomous vehicles, we present results from trust research in the field of dynamic web services. In the following chapter, we first outline what the domains of dynamic web services and automated vehicles have in common together with a brief description of the project, and then describe a selection of results that can be considered relevant for both domains.

3 Dynamic Web Services and Automated Vehicles – What They Have in Common

We have researched trust in the EU FP7 project ANIKETOS (www.aniketos.eu), which concluded in 2014. The following short summary is adapted from Mirnig et al. [22]; more comprehensive descriptions can be found in Brucker et al. [5] and Meland et al. [21]. The project goal was to develop a dynamic web service platform. The idea behind it was not to provide individual web services, but a platform via which individual services can be offered and used on an on-demand basis. This would enable offering composite services, which are composed of several individual services, in one place. For example, when booking a flight, the platform would additionally be able to allow arranging transfer to and from the airport, accommodation, etc. The dynamic component would mean that these offers are not necessarily limited to only one provider and would even allow exchanging one service for a different but functionally similar one on the fly. Figure 2 shows a high level overview of the dynamic web service environment. The individual services are initially developed by service developers, integrated into the framework by service providers, and invoked by the end users. Adaptation or recomposition of provided services can be done by the framework itself during run-time, depending on end user needs and whether the services fulfill certain trust requirements within the framework. The service end user interacts only with the individual services via the platform, thus being in direct contact with only a fraction of the platform.

Figure 2: A high level overview of the dynamic web platform and involved actors during design and run times. Taken from [22], image © Per Håkon Meland.
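To make the idea of run-time recomposition more tangible, the following sketch models a composite service whose components can be swapped for functionally similar ones on the fly. It is our own illustrative abstraction; the class names and substitution logic are assumptions and do not reflect the actual ANIKETOS platform architecture or its modeling language.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Service:
    name: str
    provider: str
    capability: str                        # e.g. "flight-booking", "airport-transfer"
    meets_trust_requirements: bool = True  # simplified stand-in for the platform's trust model

@dataclass
class CompositeService:
    """A bundle of individual services offered as one, e.g. a complete trip."""
    components: Dict[str, Service] = field(default_factory=dict)

    def add(self, service: Service) -> None:
        self.components[service.capability] = service

    def recompose(self, capability: str, candidates: List[Service]) -> Service:
        """Replace a component that no longer fulfills its trust requirements
        with a functionally similar candidate (run-time adaptation, simplified)."""
        for candidate in candidates:
            if candidate.capability == capability and candidate.meets_trust_requirements:
                self.components[capability] = candidate
                return candidate
        raise LookupError(f"no trustworthy substitute found for '{capability}'")

# Example: the transfer service loses its trust status and is swapped out on the fly.
trip = CompositeService()
trip.add(Service("FlyNow", "Provider A", "flight-booking"))
trip.add(Service("QuickCab", "Provider B", "airport-transfer"))

trip.components["airport-transfer"].meets_trust_requirements = False
replacement = trip.recompose("airport-transfer",
                             [Service("CityShuttle", "Provider C", "airport-transfer")])
print(f"Swapped in {replacement.name} from {replacement.provider}")
```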

The project faced a number of challenges regarding trust, many of them related to the dynamic nature of the service provision and exchange. First, it was essential to clearly define and separate trust parameters between services and service components from parameters of user-system trust. Trust requirements between web services were specified on the system level in the platform’s own modeling language and, roughly stated, served the primary purpose of establishing connections between individual services. These were not necessarily connected to any relations of trust the end user might have with the platform, especially when they were interacting with the platform as a whole and not with the individual services.

This led to another challenge. At times, it was unclear who the trustee was in certain system-user relationships when the user interacted with the services via the platform framework. The framework was supposed to ensure security and integrity between the services, which assumed security of the individual services as well. However, a weakness in an individual service could reflect badly on the whole platform, just as weaknesses in the platform could reflect badly on services implemented in it (even when unaffected by the weakness). Thus, there was uncertainty regarding the trustees in any given situation, which is essential to know for the appropriate design and placement of trustworthiness cues.

The third challenge concerned the issue of efficiency. By stringing together many different web services and offering flexible reconfiguration of them, the platform provided added speed and convenience at the cost of transparency. Less transparency gave the user fewer means to intervene and make informed decisions and, while increasing efficiency when everything went well, had a stronger impact when things inevitably did go wrong.

Such a platform is essentially a complex environment with many different actors, some automated and some human, where changes can occur at any time that may or may not need to be communicated to a human agent. This is where we draw the connection to the automated driving context, for which one could make the exact same statement. There are parallels to all of the abovementioned issues in the contemporary and near-future context of automated vehicles. First, the communication between vehicle subsystems or between vehicles does not necessarily reflect the communication with the human driver by default. For example, an automated vehicle might yield to another automated vehicle without any visual indication. If there is no accompanying visual cue to inform an intervention-ready driver, the vehicle’s behaviour might be misinterpreted, causing an unnecessary control transition and, consequently, lowering trust in the vehicle’s capabilities.

Second, depending on the level of automation of a vehicle and whether the automation is active, the trustee is not always the human driver, as is the case in fully manual traffic. For example, when encountering a fully automated vehicle with a newspaper-reading “driver” behind the wheel, the trust relationship will most likely be directed at the vehicle (or related factors such as brand image) rather than the human.

Finally, as long as there are possible scenarios for vehicles to transition control to the driver, it can be assumed that not all situations can or should be handled by the vehicle. This raises the question of whether individuals can be expected to entrust their lives to an “imperfect” technology and how much of these imperfections should be shown, so that better usage of the technology can be ensured.

In the following sections, we highlight three issues among several that arose in the course of the ANIKETOS project and which are relevant to the automated driving domain. These concern (a) the communication of trustworthiness cues, (b) potential ambiguity of the trustee, and (c) the question of how much of the underlying system should be visible to the user in order to allow informed interaction. These insights are mostly taken from lessons learned and internal documentation, as well as study results reported in Mirnig et al. [22].

3.1 Trust vs. Trustworthiness

Trust and trustworthiness are two sides of the same coin, and this became more and more visible as the platform development progressed. The term ‘trustworthiness’ refers to characteristics of someone or something that is the object of trust [9]. When it comes to the perception of trustworthiness of websites, several factors play a role, such as ease of navigation or freedom from typographical errors (see [3], [14], [9], [11]). Also, internet users may have different attitudes regarding privacy, leading to a different perception of the trustworthiness of websites. According to Ackerman et al. [1], Internet users may be either privacy fundamentalists, pragmatists, or marginally concerned. Riegelsberger and Sasse [30], [29] point out that the assessment of the trustworthiness of a website is always a secondary task to the primary goal of the website user. This is an important point, since it emphasizes the main purpose of automated systems, i. e., to let users focus on their primary task. It also means that trustworthiness cues cannot occupy the center of the user’s attention most of the time, as that would distract them from their primary task.

Within the project, we found that as soon as website users had to put more effort into trust assessment than into their initial primary goal, the automated system lost one of its main benefits, i. e., the added convenience due to automation. Thus, establishing trust and providing adequate security is a challenging balancing act, as inadequate trustbuilding strategies and/or feedback solutions may result in a decreased user experience due to a perceived loss of the advantages of the automated system. Based on our findings, we concluded that “trust assessment is a secondary task of primary importance” [22]. As a rule of thumb, trustworthiness cues must always be second to the interface means necessary for executing the primary task, whichever that might be at the time. If trustbuilding is elevated to the primary task level, it runs the danger of serving nothing but its own purpose, decreasing interaction quality and being detrimental to trust in the long run.

3.2 Trustor and Trustee, or Who Trusts Whom?

In our studies conducted for the ANIKETOS project, we found that users generally had a low initial trust in dynamic web service platforms. One explanation was that such services are not yet common and, hence, users lacked experience with them. They did not know what the services could or could not do and had no interaction experience that could have given them any previous insight on how secure or trustworthy they might be. This may be similar for automated vehicles: the more novel and, therefore, less known the technology is to the average user, the lower the initial trust that can be expected.

Furthermore, in the context of dynamic web services, we had also found that it was important for users to know who they were interacting with in terms of ensuring security in the background. In other words, it was not always clear to them who they were in a trust relationship with. We called this phenomenon the “opaqueness” of the trustee [22]. The degree to which the trustee is opaque to the trustor also influences how much the trustee is trusted, with more information available usually leading to higher trust.

Opaqueness can occur on two levels: the identification level or the feature level. On the identification level, it is difficult for the trustor to identify a trustee at all or to single one out among several potential trustees. This can be postulated to mostly cause reduced dispositional trust, as the directed trust relation is unclear. On the feature level, the identity of the trustee may be clear, but it is unclear what their exact features are. Knowing these features helps the trustor decide what the trustee is capable of performing, which influences their trust in the trustee to successfully perform certain tasks. Opaqueness on the feature level does, therefore, also influence directed trust.

3.3 Opening the Black Box – When and Why

An automated system could actually remain a black box as long as there is no risk involved for the user. However, as soon as risk is involved, a black box will become problematic. If the user is unaware of what is happening or what the system is doing, it is likely that their uncertainty will increase and their trust in the system will decrease accordingly [12].

We found that the involved risk for the website users had a strong influence on their desire for feedback. That is, if there was the risk of financial loss due to an insecure web service, or the danger of losing personal data, users considered it highly important to be informed by all means. They cared less if, e. g., a weather service was exchanged. As outlined above, information about the underlying system handling the web service recomposition was also of importance. In our studies, participants suggested that information on its reputation or a “trust seal” could help to make this black box more transparent. Another issue we identified was that users might not always perceive a potential risk as such. Hence, the provided feedback should also include information on the type of risk involved.

Regarding feedback density, the challenge is to provide the users with all necessary information without annoying or scaring them. We found that particularly in situations with higher risk involved, users need to be informed and warned so that they have the chance to react immediately to potential security threats. It is important that the user clearly understands what the issue is and that the required information is presented in an appropriate manner. Of course, what is considered appropriate depends on the frequency of the feedback and on how much effort a response to it requires from the user.

In analogy to driving a highly automated vehicle, using a dynamic web platform means that the user leaves a lot of the control and choices to the system. In the ANIKETOS project, we found that it is important that users get a choice when an unexpected event that involves risk for them occurs and they are informed about it. For example, if a web service for online payment is insecure and users are notified, they also want a choice of how to proceed instead of the system automatically replacing the web service with a secure one. That is, if users have no control in such situations, they will not trust the platform either. We concluded that “the user needs to be able to make an active choice right as the feedback happens” [22].
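These findings can be read as a simple decision rule: the higher the risk of an unexpected event, the more prominently the user must be informed, and for high-risk events the system must not act before the user has made an active choice. The sketch below is a minimal, assumed policy illustrating that rule; the risk categories and concrete actions are ours, not the project’s actual feedback design.

```python
from enum import Enum

class Risk(Enum):
    NONE = 0   # e.g. a weather service is exchanged
    LOW = 1
    HIGH = 2   # e.g. an insecure payment service; possible loss of money or personal data

def feedback_policy(risk: Risk) -> dict:
    """Decide how an unexpected run-time event is communicated (illustrative assumption)."""
    if risk is Risk.HIGH:
        # Warn immediately, explain the type of risk, and block any automatic
        # replacement until the user has actively chosen how to proceed.
        return {"notify": "immediate warning", "explain_risk": True, "require_user_choice": True}
    if risk is Risk.LOW:
        return {"notify": "unobtrusive notice", "explain_risk": True, "require_user_choice": False}
    return {"notify": "log only", "explain_risk": False, "require_user_choice": False}

print(feedback_policy(Risk.HIGH))
```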

4 Trust in Automated Vehicles – The Way Forward

We identified three problem areas for automated systems, which concern the communication of trustworthiness cues, the identification of trustor and trustee, and the amount of information communicated to the end user. In the following sections, we discuss their implications for the automated driving domain. Each subsection contains a brief description of the problem area together with a forward perspective on possible solutions.

4.1 Communicating Trustworthiness and Trust Calibration

Since trustworthiness cues should come second to the primary task, it is important to first identify what the primary task in a given situation actually is. In the context of automated driving, this is not always clear-cut: the user’s primary task could either be the manual driving task itself or an entirely different task, depending on the vehicle’s automation capabilities and whether the automation systems are active at the time. This is only an issue, however, if we assume that trust cues are always unrelated to – and, therefore, potentially distracting from – the primary in-vehicle task(s).

However, this need not necessarily be the case. Trust calibration means providing trust cues that influence trust in both a positive (to avoid undertrust) and a negative way (to avoid overtrust). In the context of automated vehicles, such cues would relate to the vehicle’s capabilities and which tasks, maneuvers, or actions it can or cannot perform – either in general or in a particular situation [23]. These can be realized, e. g., within the regular user interface communicating the system status of the automation system. Instead of simply communicating whether automation is on or off, a display can contain further information on recommended, possible, and impossible actions via a simple color coding scheme, which is one of the more effective ways to communicate trustworthiness cues [22]. Such an indicator does not change the vehicle’s capabilities in any way but instead communicates them more clearly to the driver and helps them adjust their expectations appropriately. An example of such a trust cue communication interface would be a visual sensor status indicator with a color coded output that employs the traffic light metaphor (red, yellow, and green). The interface communicates the confidence in the reliability of its own sensor data at all times with the appropriate cue (green: full confidence – inaccuracies unlikely; yellow: medium confidence – inaccuracies possible; red: low confidence – inaccuracies highly likely). Each cue informs the driver and, in doing so, enables them to adjust their expectations of the vehicle’s performance and their own monitoring behaviour. Note that this is only an illustrative example, which targets only one trust-relevant factor (sensor reliability in this case).
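The traffic light indicator described above can be sketched as a simple mapping from the automation system’s self-assessed sensor confidence to a color coded cue. The numeric thresholds below are invented for illustration; in a real vehicle they would have to be derived from the actual sensor fusion confidence measures.

```python
from typing import Tuple

def sensor_confidence_cue(confidence: float) -> Tuple[str, str]:
    """Map self-assessed sensor confidence (0..1) to a traffic light trust cue.

    The 0.8 and 0.5 thresholds are assumptions for illustration only.
    """
    if confidence >= 0.8:
        return "green", "full confidence - inaccuracies unlikely"
    if confidence >= 0.5:
        return "yellow", "medium confidence - inaccuracies possible"
    return "red", "low confidence - inaccuracies highly likely"

# Example: heavy rain degrades camera confidence, so the display switches to yellow.
colour, message = sensor_confidence_cue(0.62)
print(colour, "-", message)
```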

Regarding the potential concern that situational distrust might decrease overall trust, there is a relation to the topic of system transparency and visibility of system status [18], which affects how much trust a person puts in a system. Furthermore, it can be assumed that repeated and successful interactions with the vehicle and its different functions would result in higher reliance. These successful interactions are more likely to occur the better the user interface is able to calibrate trust and guide the driver’s interaction. We concluded that “reliance is fundamental for trust and reliance-fostering measures should be treated as important as (or perhaps even more important than) traditional trust building strategies” [22].

4.2 The Value of Interventions

The strong push towards full automation is, as initially mentioned, well motivated. However, beyond technical developments taking time to mature, there might be an additional reason to extend the transition phase. That reason is the potential of intervention capabilities to positively influence trust in risky contexts. In the web context, it seems almost trivial to state that higher risk equals a higher need for information and intervention possibilities. This is because there is a wide variety of interactions on the web, ranging from practically zero to high risk (e. g., merely navigating to a website vs. performing a monetary transaction). In the automotive context, however, things are a bit different, as the act of driving a vehicle is inherently risky. An argument can be made that this is not entirely different for online interactions, as even visiting a website can be harmful if that website has had malicious code injected into it.

Operating a vehicle, however, always brings with it the risk of bodily harm or even death the moment one sits in the driver’s seat, making it an especially (and permanently) risky context. In combination with the findings described in section 3.3, this means that there is essentially no moment in which the automation should be a complete black box to the user. The question is then not if the black box should be opened or not, but how far it should be opened. As we explained in Sections 3.1 and 4.1, any such opening of the black box can interfere with the driver’s primary task – unless the information provided is directly relevant to said task.

One possible logical consequence from this combination of factors is to consider intervention capabilities by the driver an important trustbuilding measure. This is most easily realized in SAE level 3 vehicles, where intervention readiness is still an essential requirement on the driver’s part. Thus, extending instead of reducing the transition time to full automation might actually help rather than hinder trustbuilding in automated vehicles. This provides an interesting alternative perspective to the current “rush towards level 5”. Instead of trying to get there as quickly as possible, there might be additional trust-related advantages to giving both technology and infrastructure the time they need to mature. It should be noted, however, that level 3 is not the only automation level foreseeing interventions. A level 4 vehicle that reaches the end of its operational design domain (ODD) also requests an intervention and executes a minimal risk maneuver if no intervention occurs. The SAE J3016 standard includes intervention requests even for level 5 vehicles [32] but defines them as not mandatory to respond to. Thus, there is an argument for exploring the potential of not only critical but also non-critical interventions across driving levels.

4.3 Knowing Who to Trust Means Knowing Who is in Control

We found that in a dynamic environment, it can be difficult to clearly identify the trustee, which makes it difficult to properly adjust one’s expectations. This causes an unclear or even nonexistent trust relationship. In the automated driving context, the situation is rather similar, although for different reasons. What it comes down to are the different levels of automation and whether the human driver or the automation system is in charge of the vehicle’s controls. Depending on who is in control and whether or not this is visible or communicated to other road users, the trust relation can quickly become unclear even when only two actors are involved.

Figure 3: The trustee-relationship with a manually driving human trustor and vehicles of different automation levels.

Figure 3 shows three simple scenarios that illustrate the opaque trustee phenomenon for automated vehicles. Figure 3 A shows a level 0 vehicle approaching an intersection with another level 0 vehicle approaching from the right, indicating the intention to turn left. The driver going straight will need to trust the other vehicle to give them the right of way, which they can verify by looking at the other driver’s head and eye movements. The trustee, in this case, is clearly the human. In B, the same level 0 vehicle encounters a level 5 vehicle signaling to turn left. The trustee, in this case, is clearly the automated vehicle. However, the level 0 driver can only establish the proper relation to the system if they know that the vehicle is fully automated, either by the driver seat being empty or the person in the driver seat being visibly occupied with a non-driving task. But even that is not a safe indicator, as the individual in the other vehicle might simply be a careless driver of a vehicle of a lower automation level. Figure 3 C shows an even more difficult scenario, in which the level 0 driver encounters a level 3 vehicle. The trustee is not defined by the automation level in this case but by whether the automation is on (trustee: system) or off (trustee: human). Furthermore, level 3 requires the driver to respond to intervention requests (fallback performance), so even when the automation is active, a properly adjusted trust relation must assign some trust to the human to respond in case of an emergency. The intervention readiness requirement also means that the driver seat can never be empty, making it even more difficult to establish the proper trust relation(s).

This example is an oversimplification that assumes that there will be no other indicators apart from driver position and behaviour to communicate the automation level and who is in control of the vehicle. It is unlikely that this extreme will be the case, and there can be expected to be at least static indicators of automation capabilities (e. g., differently colored license plates). What the example is supposed to illustrate, however, is that even such static indicators are likely not enough to eliminate critical ambiguities. Control modes can change dynamically, making, e. g., the information that a vehicle is a level 3 vehicle alone not enough to establish the proper trust relations. This is not limited to level 3; control changes can occur in level 4 vehicles as well (upon reaching the limits of their ODD, as mentioned in Section 4.2) and, at least per definition, even in level 5 vehicles on a non-mandatory basis. It will, therefore, be important to design for proper communication of control modes not only between driver and vehicle but also towards the outside of the vehicle, either on a visual basis for manual drivers or on a vehicle-to-vehicle basis for connected vehicles.
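The ambiguity illustrated in Figure 3 can be summarized as a small decision function: given another vehicle’s automation level and whether its automation is currently engaged, who is the primary trustee? The sketch below encodes the three scenarios discussed above; it deliberately ignores all the external cues (empty driver seat, license plates, vehicle-to-vehicle messages) a road user would actually need in order to obtain these two inputs in the first place.

```python
def resolve_trustee(automation_level: int, automation_engaged: bool) -> str:
    """Return the primary trustee for an encountered vehicle (illustrative simplification)."""
    if automation_level <= 2 or not automation_engaged:
        # Scenario A, assistance-level vehicles, or a level 3+ vehicle driven manually:
        # the human driver performs or supervises the driving task.
        return "human driver"
    if automation_level == 3:
        # Scenario C: the system drives, but the driver remains the fallback,
        # so a calibrated trust relation must cover both agents.
        return "automation system (human driver as fallback)"
    # Scenario B: level 4/5 automation engaged.
    return "automation system"

for level, engaged in [(0, False), (5, True), (3, True), (3, False)]:
    state = "on" if engaged else "off"
    print(f"Level {level}, automation {state}: {resolve_trustee(level, engaged)}")
```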

5 Conclusion

To conclude, we want to reflect on what has been said previously. Based on the findings of a project on trust in dynamic web services, we derived several similarities between a dynamic web environment and automated vehicles. In many cases, there are similar challenges when it comes to trust. Trust is always related to trustworthiness, and we need to distinguish between users’ primary tasks and their secondary task of evaluating the trustworthiness of a system. With respect to automated driving, this means we need to know specifically what the task of the user is in any given situation. Is it driving? Is it monitoring the environment? Is it supervising the automated system? It also means that elements communicating trustworthiness need to be designed carefully so that they do not interfere too much with interaction elements important for the primary task.

When it comes to communicating trustworthiness, it will be essential to calibrate the right level of trust. Neither undertrust nor overtrust is desired. We argue for a clear communication of the capabilities and limitations the automated system has. This information has to be provided to the user in efficient and effective ways. It needs to include all necessary information while not overloading the user. Beyond that, we claim that not only traditional trust building strategies will need to be implemented, but also that reliance-fostering measures should be explored. Similar to trust in other domains, trust in automated vehicles will have to evolve and be observed over time.

We also showed that the opaqueness of the trustee has an influence on trust and that the black box sometimes needs to be made transparent. As we laid out, this might be dangerous when it interferes with the user’s primary task. Contrary to the current movement in the automotive industry [34], we argue that leapfrogging automation level 3 and aiming directly for fully automated vehicles may backfire on trust building in autonomous vehicles in general. Instead, we suggest also exploring non-critical interventions across all driving levels.

Finally, context awareness will be an important factor in trust building in automated vehicles. This is especially true given the dynamic nature of driving and the fact that, during the transition phase from manual to automated driving, we will encounter vehicles of all levels of automation on the street. This makes the necessary continuous evaluation of the trustworthiness of others extremely difficult. Building trust will not stop within the vehicle; it also needs to be built between the automated vehicle and the world outside.

Funding source: Austrian Science Fund

Award Identifier / Grant number: I 2126-N15

Funding statement: The financial support by the Austrian Science Fund (FWF): I 2126-N15 is gratefully acknowledged.

About the authors

Alexander G. Mirnig

Mag. phil. Alexander Mirnig, Research Fellow at the Center for Human-Computer Interaction, University of Salzburg. Areas of Expertise: Human-Computer Interaction in Semiautomated Vehicles, Design Patterns, Philosophy of Science for HCI.

Sandra Trösterer

Mag. rer. nat. Sandra Trösterer, HCI research fellow at the Center for Human-Computer Interaction, University of Salzburg. Area of work: Investigation of user interfaces, requirements, behavior, and experiences in dedicated research fields of the automotive domain, i. e., automated driving, in-car collaboration, and driver distraction.

Alexander Meschtscherjakov

Dipl. Ing. Dr. Alexander Meschtscherjakov, Assistant Professor at the Center for Human-Computer Interaction, Department of Computer Sciences, University of Salzburg. Areas of expertise: Persuasive Interaction Technologies, Automotive User Interfaces, Contextual User Experience (UX).

Magdalena Gärtner

Magdalena Gärtner, MA is a Research Fellow at the Center for Human Computer Interaction of the University of Salzburg. She holds a Master’s Degree in Communication Science with an emphasis on Human Computer Interaction (HCI). In her research, she focuses on different user groups (e. g., drivers) and their adoption of and interaction with new technologies (e. g., advanced driver assistance systems). Furthermore, she is engaged in the application, evaluation, and enhancement of user-centered research methods.

Manfred Tscheligi

Univ.-Prof. Dr. Manfred Tscheligi, Head of the Center for Human-Computer Interaction, Professor at the Department of Computer Sciences, University of Salzburg. Areas of Expertise: Human-Computer Interaction, User Experience Research, Contextual Interfaces, Advanced Interaction Techniques and Approaches, Research by Design and Materiality, UX Methods & Tools.

Acknowledgment

The authors thank Mandy Wilfinger for proofreading, Dorothé Smit and Jakub Sypniewski for their helpful comments on a draft version of this paper, Elke Beck for her contributions within the ANIKETOS project that this publication is based on, Per Håkon Meland for the service composition overview in Figure 2, and all the ANIKETOS consortium members for their efforts within the project.

References

[1] Mark S. Ackerman, Lorrie Faith Cranor, and Joseph Reagle. Privacy in e-commerce: Examining user scenarios and privacy preferences. In Proceedings of the 1st ACM Conference on Electronic Commerce, EC ’99, pages 1–8, New York, NY, USA, 1999. ACM.

[2] Barber. The logic and limits of trust. 1983.

[3] Daniel Belanche, Luis Casaló Ariño, and Miguel Guinalíu. How to make online public services trustworthy. 9:291–308, July 2012.

[4] Klaus Bengler, Klaus Dietmayer, Berthold Farber, Markus Maurer, Christoph Stiller, and Hermann Winner. Three decades of driver assistance systems: Review and future perspectives. IEEE Intelligent Transportation Systems Magazine, 6(4):6–22, 2014.

[5] Achim D. Brucker, Francesco Malmignati, Madjid Merabti, Qi Shi, and Bo Zhou. The Aniketos Service Composition Framework, pages 121–135. Springer International Publishing, Cham, 2014.

[6] C. Clases. Vertrauen [trust]. Dorsch – Lexikon der Psychologie [Dorsch – encyclopedia of psychology], 2016.

[7] ERTRAC Task Force “Connectivity and Automated Driving”. Automated driving roadmap version 7.0. Online article, June 2017. Retrieved December 2017 from http://www.ertrac.org/uploads/documentsearch/id48/ERTRAC_Automated_Driving_2017.pdf.

[8] Efthymios Constantinides, Marius Kahlert, and Sjoerd A. de Vries. The relevance of technological autonomy in the acceptance of IoT services in retail. 2017.

[9] Cynthia L. Corritore, Beverly Kracher, and Susan Wiedenbeck. On-line trust: Concepts, evolving themes, a model. Int. J. Hum.-Comput. Stud., 58(6):737–758, June 2003.

[10] Fred D. Davis. A technology acceptance model for empirically testing new end-user information systems: Theory and results. PhD thesis, Massachusetts Institute of Technology, 1985.

[11] Steve Diller, Lynn Lin, and Vania Tashjian. The evolving role of security, privacy, and trust in a digitized world. In The Human-Computer Interaction Handbook, pages 1213–1225. L. Erlbaum Associates Inc., Hillsdale, NJ, USA, 2003.

[12] Mary Dzindolet, Scott A. Peterson, Regina A. Pomranky, Linda Pierce, and Hall Beck. The role of trust in automation reliance. 58:697–718, June 2003.

[13] Anna Feldhütter, Christian Gold, Adrian Hüger, and Klaus Bengler. Trust in automation as a matter of media influence and experience of automated vehicles. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, volume 60, pages 2024–2028. SAGE Publications, Los Angeles, CA, 2016.

[14] Carlos Flavián, Miguel Guinalíu, and Raquel Gurrea. The role played by perceived usability, satisfaction and consumer trust on website loyalty. Inf. Manage., 43(1):1–14, January 2006.

[15] Christian Gold, Moritz Körber, Christoph Hohenberger, David Lechner, and Klaus Bengler. Trust in automation – before and after the experience of take-over scenarios in a highly automated vehicle. Procedia Manufacturing, 3:3025–3032, 2015.

[16] The Guardian. Tesla driver dies in first fatal crash while using autopilot mode. Online article, July 2016. Retrieved December 2017 from https://www.theguardian.com/technology/2016/jun/30/tesla-autopilot-death-self-driving-car-elon-musk.

[17] Kevin Anthony Hoff and Masooda Bashir. Trust in automation: Integrating empirical evidence on factors that influence trust. Human Factors, 57(3):407–434, 2015. PMID: 25875432.

[18] J. Johnston, J. H. P. Eloff, and Les Labuschagne. Security and human computer interfaces. 22:675–684, December 2003.

[19] John D. Lee and Katrina A. See. Trust in automation: Designing for appropriate reliance. Human Factors, 46(1):50–80, 2004. PMID: 15151155.

[20] Nikola Marangunić and Andrina Granić. Technology acceptance model: a literature review from 1986 to 2013. Universal Access in the Information Society, 14(1):81–95, 2015.

[21] Per Håkon Meland, Erkuden Rios, Vasilis Tountopoulos, and Achim D. Brucker. The Aniketos Platform, pages 50–62. Springer International Publishing, Cham, 2014.

[22] Alexander Mirnig, Sandra Trösterer, Elke Beck, and Manfred Tscheligi. To trust or not to trust. In S. Sauer, C. Bogdan, P. Forbrig, R. Bernhaupt, and M. Winckler, editors, Human-Centered Software Engineering, volume 8742 of Lecture Notes in Computer Science, pages 164–181. Springer, Berlin, Heidelberg, 2014.

[23] Alexander G. Mirnig, Philipp Wintersberger, Christine Sutter, and Jürgen Ziegler. A framework for analyzing and calibrating trust in automated vehicles. In Adjunct Proceedings of the 8th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, AutomotiveUI ’16 Adjunct, pages 33–38, New York, NY, USA, 2016. ACM.

[24] Andrew Patrick, Stephen Marsh, and Pamela Briggs. Designing systems that people will trust. 2005.

[25] Aneesh Paul, Rohan Chauhan, Rituraj Srivastava, and Mriganka Baruah. Advanced driver assistance systems. Technical report, SAE Technical Paper, 2016.

[26] J. Piao and Mike McDonald. Advanced driver assistance systems from autonomous to cooperative approach. Transport Reviews, 28(5):659–684, 2008.

[27] D. G. Pruitt, S. H. Kim, and J. Z. Rubin. Social conflict: Escalation, stalemate, and settlement. McGraw-Hill Series in Social Psychology. McGraw-Hill, 2004.

[28] Bryan Reimer. Driver assistance systems and the transition to automated vehicles: A path to increase older adult safety and mobility? Public Policy & Aging Report, 24(1):27–31, 2014.

[29] Jens Riegelsberger, Angela Sasse, and John D. McCarthy. The researcher’s dilemma: Evaluating trust in computer-mediated communication. 58:759–781, June 2003.

[30] Jens Riegelsberger, M. Angela Sasse, and John D. McCarthy. The mechanics of trust: A framework for research and design. Int. J. Hum.-Comput. Stud., 62(3):381–422, March 2005.

[31] Julian B. Rotter. A new scale for the measurement of interpersonal trust. Journal of Personality, 35(4):651–665, 1967.

[32] SAE International. Taxonomy and definitions for terms related to on-road motor vehicle automated driving systems. Standard J3016, 2016.

[33] Tesla S Official Press Materials. Retrieved October 2017.

[34] Wired. Ford’s skipping the trickiest thing about self-driving cars. Online article, October 2015. Retrieved April 2017 from https://www.wired.com/2015/11/ford-self-driving-car-plan-google/.

Published Online: 2018-03-27
Published in Print: 2018-04-25

© 2018 Walter de Gruyter GmbH, Berlin/Boston
