
Trusting the (ro)botic other: by assumption?

Published: 05 January 2016

Abstract

How may human agents come to trust (sophisticated) artificial agents? At present, since the trust involved is non-normative, this would seem to be a slow process that depends on the outcomes of transactions. More options may soon become available, though. As debated in the literature, humans may meet (ro)bots that are embedded in an institution. If they happen to trust the institution, they will also trust it to have tried and tested the machines in its back corridors; as a consequence, they approach the robots involved as trustworthy ("zones of trust"). Properly speaking, users rely on the overall accountability of the institution. Besides this option we explore some novel routes for trust development: trust becomes normatively laden, and thereby the mechanism of exclusive reliance on the normative force of trust (as-if trust) may come into play, the efficacy of which has already been proven for persons meeting face-to-face or over the Internet (virtual trust). For one thing, machines may evolve into moral machines, or machines skilled in the art of deception. While both developments might seem to facilitate proper trust and turn as-if trust into a feasible option, they are hardly to be taken seriously, being science fiction, immoral, or both. For another, the new trend in robotics is towards coactivity between human and machine operators in a team, away from making robots as autonomous as possible. Inside the team, trust is a necessity for smooth operations. In support of this, humans in particular need to be able to develop and maintain accurate mental models of their machine counterparts. Nevertheless, the trust involved is bound to remain non-normative. It is argued, though, that excellent opportunities exist to build relations of trust with outside users who are pondering their reliance on the coactive team. The task of managing this trust has to be allotted to the team's human operators, who act as the linking pin between the team and the outside world. Since the robotic team has thus been turned into an anthropomorphic team, users may well develop normative trust towards it; correspondingly, trusting the team in as-if fashion becomes feasible.


Cited By

  • (2023) "Shaping a multidisciplinary understanding of team trust in human-AI teams: a theoretical framework," European Journal of Work and Organizational Psychology, vol. 33, no. 2, pp. 158-171. DOI: 10.1080/1359432X.2023.2200172. Online publication date: 20-Apr-2023.
  • (2019) "Applying a Social-Relational Model to Explore the Curious Case of hitchBOT," in On the Cognitive, Ethical, and Scientific Dimensions of Artificial Intelligence, pp. 311-323. DOI: 10.1007/978-3-030-01800-9_17. Online publication date: 29-Jan-2019.
  • (2018) "Can Social Robots Qualify for Moral Consideration? Reframing the Question about Robot Rights," Information, vol. 9, no. 4, article 73. DOI: 10.3390/info9040073. Online publication date: 29-Mar-2018.

    Published In

    ACM SIGCAS Computers and Society, Volume 45, Issue 3
    Special Issue on Ethicomp
    September 2015
    446 pages
    Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 05 January 2016
    Published in SIGCAS Volume 45, Issue 3

    Author Tags

    1. artificial agents
    2. coactivity
    3. institutions
    4. man-machine team
    5. mental modelling
    6. trust

    Qualifiers

    • Research-article

    Article Metrics

    • Downloads (Last 12 months)19
    • Downloads (Last 6 weeks)1
    Reflects downloads up to 20 Jan 2025
