A Method to Check that Participants Really are Imagining Artificial Minds When Ascribing Mental States

  • Conference paper
  • First Online:
HCI International 2022 – Late Breaking Posters (HCII 2022)

Abstract

Written vignettes are often used in experiments to explore differences in how participants interpret the behaviour and mental states of artificial autonomous actors (henceforth A-bots) compared with human actors. A recurring result from this body of research is the similarity of mental-state attributions to A-bots and to humans. This paper reports the results of a short measure consisting of four questions. We find that asking participants whether A-bots can feel pain or pleasure, whether they deserve rights, and whether they would be good parents yields satisfactory differences between the human and A-bot groups. By asking these questions, experimenters can be more confident that participants construct mental representations of A-bots differently from those of humans.



Corresponding author

Correspondence to Matija Franklin.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Ashton, H., Franklin, M. (2022). A Method to Check that Participants Really are Imagining Artificial Minds When Ascribing Mental States. In: Stephanidis, C., Antona, M., Ntoa, S., Salvendy, G. (eds) HCI International 2022 – Late Breaking Posters. HCII 2022. Communications in Computer and Information Science, vol 1655. Springer, Cham. https://doi.org/10.1007/978-3-031-19682-9_59

  • DOI: https://doi.org/10.1007/978-3-031-19682-9_59

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-19681-2

  • Online ISBN: 978-3-031-19682-9

  • eBook Packages: Computer Science (R0)
