Trust and resilient autonomous driving systems

  • Original Paper
Ethics and Information Technology

Abstract

Autonomous vehicles, and the larger socio-technical systems that they are a part of, are likely to have a deep and lasting impact on our societies. Trust is a key value that will play a role in the development of autonomous driving systems. This paper suggests that trust in autonomous driving systems will affect how these systems are taken up, the norms and laws that guide them, and the design of the systems themselves. Further, in order to have autonomous driving systems that are worthy of our trust, we need a superstructure of oversight and a process that designs trust into these systems from the outset. Rather than banning or avoiding all autonomous vehicles should a tragedy occur, we want resilient systems that, despite carrying some level of risk, can survive tragedies and indeed improve from them. I will argue that trust plays a role in developing these resilient systems.


Notes

  1. By autonomous vehicle here, I mean a complex system that necessarily involves sensors, analytics, actuators and decision-making elements, in which the default setting means humans are not directly involved in decision making—a vehicle that ‘drives itself’: “These refer primarily to future vehicles that may have the ability to operate without human intervention for extended periods of time and to perform a broad range of actions” (Lin 2016, p. 69).

  2. There is an interesting discussion about social amplification of risk that certainly applies here, in which “risk events interact with psychological, social, and cultural processes in ways that can heighten or attenuate public perceptions of risk and related risk behavior. Behavioral patterns, in turn, generate secondary social or economic consequences but may act also to increase or decrease the physical risk itself” (Kasperson et al. 1988, pp. 178–179). Social amplification of risk likely underpins the general claims made in this paper. However, there is no space here to explore social amplification of risk further.

  3. Autonomous driving systems (ADSs) refer not just to autonomous vehicles but also to the larger socio-technical system in which autonomous vehicles operate. ADSs will be explained more in Sect. 2.

  4. Wagner and Koopman’s (2015) paper is one example that looks at trust and autonomous vehicles. However, it focuses on the reliance and predictability elements of trust and does not cover the goodwill, affective or public elements. Moreover, it concerns autonomous vehicles themselves, and only implicitly the autonomous driving systems in which those vehicles will operate.

  5. It is unlikely that an autonomous vehicle meets a strict definition of ‘decide’ that we apply to conscious agents, where the agent has the “ability to understand criticism of one’s agency, along with the ability to defend or alter one’s actions based on one’s principles or principled criticism of one’s agency” (Nyholm 2018a, p. 1208). In line with the account of agent being used, ‘decide’ is used more loosely here to refer to an agent that pursues “a goal on the basis of representations in a way that is regulated by certain rules or principles, while being supervised by some authority who can stop us or to whom control can be ceded, at least within certain limited domains” (Nyholm 2018a, p. 1208).

  6. Certain approaches to autonomous vehicles and safety see a value in retaining a human on the loop, where control can be returned to the driver. “[I]f a driver is present in the vehicle, driver assistance systems can attain a safe state by handing control to the driver or by braking to a standstill” (Reschka 2016, p. 476). Moreover, as some argue, there are ethical issues about handing back control to the driver (Lin 2016, pp. 71–72).

  7. For those who have driven in regions where there is limited law and order around driving, it is descriptively true that driving is a different experience than driving in regions with tight regulations and effective law and order. Though we do adapt. “Anyone who has visited another country as a tourist, whether as a driver or a pedestrian, has experienced for themselves how drastically informal rules, culturally specific modes of behavior, and communication between road users affect traffic. Human drivers who encounter such divergent “canons of rules” initially react with a phase of irritation. In the second phase, the adaptation phase, they unconsciously adapt over time to these new imprecise and unwritten rules. In the consolidation phase these rules seem to be obvious although they are not set in stone and in some cases cannot even be precisely described” (Färber 2016, p. 126).

  8. For instance, Michael Wagner and Philip Koopman recognize that driving in “an “unstructured environment” such as a real-world road network includes plenty of unforeseen conditions. This lack of predictive capability demands new verification techniques to allow us to justify trusting self-driving cars in our everyday lives” (Wagner and Koopman 2015, p. 164).

  9. Which is to recognize that some people do in fact suffer significant trauma from road accidents (Clapp et al. 2014), or have pre-existing anxieties that prevent them from driving in the first place. However, these people are in the minority.

  10. This is taken from Francis Fukuyama (1995).

  11. Rather than seeking a definition of trust, I will use a purposive approach that explains trust by describing the relation between the trustor and the trustee. Here, I am drawing from Mark Coeckelbergh’s relational account of responsibility: “responsibility is always relational, it is responsibility to someone… We are responsible for what we do to others who might be affected by what we do” (Coeckelbergh 2016, pp. 750–751, emphasis original). On this view, trust is always relational; a trustor is always trusting someone to do something.

  12. I credit Shannon Vallor for introducing me to this distinction between affective trust and public trust.

  13. Another approach might include trust that the public has in the system. However, a more relevant form for this discussion is the trust that individuals have in the public system.

  14. Computing disciplines, for instance, cover concepts of ‘trusted networks’ etc. But these, I have suggested elsewhere, run the risk of overlooking the human elements of trust (Henschke and Ford 2016).

  15. In line with the description of autonomous given earlier, I want to avoid discussion about decisions and consciousness. That is, questions of the sort ‘can a non-conscious system make decisions?’ are not of interest here. Particularly when looking at the different elements of trust, what is more important is if and when drivers, other road users and other humans in the autonomous driving system act as if the autonomous vehicles decide. That is, whether a vehicle decides or not is not the point of enquiry here; instead, it is if and when we treat the vehicles as if they make decisions.

  16. Recent work by Kate Darling and others shows that people do display empathy for robots and tend to anthropomorphize them, despite knowing that they are not conscious in any way (Darling 2017; Darling et al. 2015).

  17. I thank one of the anonymous reviewers for pressing this point.


Acknowledgements

This material is based upon work supported by the National Science Foundation under Grant No. 1522240. Preliminary research on this topic was conducted as a Visiting Researcher at the Brocher Foundation (Switzerland). The author thanks the European Research Council and the Australian National University ‘Grand Challenge’, which provided generous support under the Advanced Grant project on Collective Responsibility and Counterterrorism and the ‘Our Health In Our Hands’ grant, respectively. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the National Science Foundation, the Brocher Foundation, the European Research Council or the Australian National University.

Author information

Corresponding author

Correspondence to Adam Henschke.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Henschke, A. Trust and resilient autonomous driving systems. Ethics Inf Technol 22, 81–92 (2020). https://doi.org/10.1007/s10676-019-09517-y

