Designing medical artificial intelligence for in- and out-groups

https://doi.org/10.1016/j.chb.2021.106929

Highlights

  • Medical artificial intelligence (AI) can deliver worldwide access to healthcare.

  • In three studies, we addressed how the design of medical AI varies between in- and out-groups.

  • We examined how non-medical and medical people vary in designing medical AI for in- and out-groups.

  • Out-group stereotypes shape the design of medical AI.

  • This health inequity has implications for AI stakeholders and health researchers.

Abstract

Medical artificial intelligence (AI) is expected to deliver worldwide access to healthcare. Through three experimental studies with Chinese and American participants, we tested how the design of medical AI varies between in- and out-groups. Participants adopted the role of a medical AI designer and decided how to develop medical AI for in- or out-groups depending on their experimental condition. Study 1 (pre-registered; N = 191) revealed that Chinese participants were less likely to adopt human doctors' assistance in the medical AI system when targeting patients from the US (i.e., out-groups) than patients from China (i.e., in-groups). Study 2 (N = 190) revealed that US participants were less likely to adopt human doctors' assistance in the medical AI system when targeting patients from China (i.e., out-groups) than patients from the US (i.e., in-groups). Study 3 revealed that Chinese medical students (N = 160) selected a smaller training database for an AI diagnosing diabetic retinopathy when it targeted US patients (i.e., out-groups) than Chinese patients (i.e., in-groups), and this effect was stronger among medical students from higher (vs. lower) socioeconomic backgrounds. This design inequity was mediated by individuals' underestimation of out-group heterogeneity. Overall, our evidence suggests that out-group stereotypes shape the design of medical AI, unwittingly undermining healthcare quality. The current findings underline the need for more robust data on medical AI development and for intervention research addressing healthcare inequity.

Section snippets

Medical AI design

The design of AI is contingent on the characteristics of its specific task (Jordan & Mitchell, 2015). The more complex the task, the more sophisticated the system must be: fed more data, trained with more complex algorithms, and so on. For instance, in image recognition, it is easy to construct an AI system that differentiates images of animals from images of humans. Nevertheless, to accurately distinguish a white wolf from a wolf-like breed of white dog, the AI system should be trained with deep-learning
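The coarse-vs.-fine-grained contrast above can be illustrated with a minimal sketch (our illustration, not from the paper): a simple nearest-centroid classifier on synthetic two-dimensional data, written in plain Python, performs near-perfectly when the two classes are well separated but degrades sharply when they overlap, which is why fine-grained tasks call for more data and more sophisticated models.

```python
import random

random.seed(0)

def make_class(center, spread, n=200):
    """Sample n points around a class center with Gaussian noise."""
    return [[random.gauss(c, spread) for c in center] for _ in range(n)]

# Coarse task: well-separated classes (e.g. animal vs. human images).
coarse_a = make_class([0.0, 0.0], 1.0)
coarse_b = make_class([6.0, 6.0], 1.0)

# Fine-grained task: heavily overlapping classes (e.g. white wolf vs.
# wolf-like white dog), handed to the same simple classifier.
fine_a = make_class([0.0, 0.0], 1.0)
fine_b = make_class([0.5, 0.5], 1.0)

def centroid(points):
    dims = len(points[0])
    return [sum(p[i] for p in points) / len(points) for i in range(dims)]

def accuracy(a, b):
    """Classify each point by its nearest class centroid."""
    ca, cb = centroid(a), centroid(b)

    def predict(p):
        da = sum((x - y) ** 2 for x, y in zip(p, ca))
        db = sum((x - y) ** 2 for x, y in zip(p, cb))
        return 0 if da < db else 1

    hits = sum(predict(p) == 0 for p in a) + sum(predict(p) == 1 for p in b)
    return hits / (len(a) + len(b))

print(f"coarse task accuracy: {accuracy(coarse_a, coarse_b):.3f}")
print(f"fine-grained accuracy: {accuracy(fine_a, fine_b):.3f}")
```

On the coarse task the same simple model is close to perfect; on the overlapping classes it drops toward chance, mirroring the snippet's point that harder tasks demand more capable designs.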

The present research

An experimental approach is vital for identifying how specific factors shape people's decision-making, such as designing a product (Herd & Mehta, 2019) or providing healthcare (Godager & Wiesen, 2013; Schram & Sonnemans, 2011). For example, to identify physicians' altruism toward patients' health benefits, Godager and Wiesen (2013) asked medical students to adopt the role of physicians and decide the quantity of medical services for a given

Study 1

Study 1 tested how the target group (in- vs. out-groups) affects people's decision to adopt human doctors' assistance when designing medical AI products. Previous research has shown that incorporating human input into an AI system is an effective way to improve its performance (Holzinger, 2016). Studies targeting potential medical AI users have also highlighted that robots should complement humans rather than replace them (Lehoux & Grimard, 2018). Consumers demonstrated resistance to

Study 2

In Study 2, the specific out-group was changed from American patients to Chinese patients to test the generalizability of this effect. The authors predicted that Americans would perceive out-groups (Chinese patients) as homogeneous. Neglecting out-group diversity would lead to a lower likelihood of adopting human doctors' assistance in the medical AI system for out-groups than for in-groups.

Study 3

The goals of Study 3 were several-fold. First, the authors investigated a new way of accommodating the diversity of patients' medical needs: the size of the training database. Participants decided how large the database should be. The larger the database, the more diverse the medical samples, and, in turn, the better the medical AI would be at handling rare medical cases. The authors sought to document inequity between out- and in-groups.
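The rationale that a larger database is more likely to contain rare presentations can be sketched with a back-of-the-envelope calculation (our illustration, not the authors'; the prevalence value is hypothetical). If records are drawn independently, the chance that a database of n records includes at least one case of a condition with prevalence p is 1 − (1 − p)^n.

```python
def rare_case_coverage(n: int, p: float) -> float:
    """Probability that n independently drawn records contain at least
    one example of a condition with prevalence p: 1 - (1 - p)^n."""
    return 1.0 - (1.0 - p) ** n

p = 0.001  # hypothetical prevalence of a rare presentation
for n in (1_000, 10_000, 100_000):
    print(f"n={n:>7}: coverage = {rare_case_coverage(n, p):.3f}")
```

At this prevalence, a database of 1,000 records misses the rare case over a third of the time, while one of 100,000 records includes it almost surely, which is why shrinking the database for out-group patients plausibly degrades care for their rarer needs.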

Second, whereas Studies 1 and 2 involved the

General discussion

Sophisticated AI systems have been designed to deliver worldwide access to healthcare. However, before implementing them in medical settings, profound questions should be raised: a system that makes even a minor error, for example, could be fatal (Synced, 2018). This research explored how the design of medical AI varies between in- and out-groups. Studies 1 (a pre-registered experiment in China; N = 191) and 2 (an experiment in the U.S.; N = 190) revealed that people were

Conclusion

Medical AI is gaining tremendous popularity as a means of promoting worldwide healthcare. The current literature lacks a systematic consideration of how individuals perceive out-groups when designing medical AI. This work documented an inequity between in- and out-groups in medical AI design. People tend to demonstrate an out-group homogeneity effect, perceiving out-group members as similar to one another. This simplification leads people to underestimate the diversity of out-group members' medical needs and

Author contribution

W.L. and X.Z. designed the studies; W.L., X.Z. and Q.Y. conducted the studies and analyzed the data; W.L., X.Z. and Q.Y. discussed conceptual issues and wrote the manuscript.

Funding

This work was supported by the Key Program and General Program of the National Natural Science Foundation of China, China [grant numbers 31871095; 71672169; 71925005; 71974170]; Soft Science Project of Science Technology Department of Zhejiang Province of China [grant number 2019C35025]; Youth Project of the Center for Health Policy and Hospital Management of Zhejiang University, China [grant number 2019WSZ007]; Leading Innovative and Entrepreneur Team Introduction Program of Zhejiang, China

Declaration of competing interest

None.

References (63)

  • H. Kalantarian et al.

    Labeling images with facial emotion and the potential for pediatric healthcare

    Artificial Intelligence in Medicine

    (2019)
  • J. Lamy et al.

    Explainable artificial intelligence for breast cancer: A visual case-based reasoning approach

    Artificial Intelligence in Medicine

    (2019)
  • N.S. Landale et al.

    What does skin color have to do with infant health? An analysis of low birth weight among mainland and island Puerto Ricans

    Social Science & Medicine

    (2005)
  • Y.J. Lee et al.

    Egoistic and altruistic motivation: How to induce users' willingness to help for imperfect AI

    Computers in Human Behavior

    (2019)
  • P. Lehoux et al.

    When robots care: Public deliberations on how technology and humans may support independent living for older adults

    Social Science & Medicine

    (2018)
  • Y. Mou et al.

    The media inequality: Comparing the initial human-human and human-AI social interactions

    Computers in Human Behavior

    (2017)
  • K. Nash et al.

    The bionic blues: Robot rejection lowers self-esteem

    Computers in Human Behavior

    (2018)
  • J.M. Oakes et al.

    The measurement of SES in health research: Current practice and steps toward a new approach

    Social Science & Medicine

    (2003)
  • D.M. Oppenheimer et al.

    Instructional manipulation checks: Detecting satisficing to increase statistical power

    Journal of Experimental Social Psychology

    (2009)
  • A. Schram et al.

    How individuals choose health insurance: An experimental analysis

    European Economic Review

    (2011)
  • D.B. Shank et al.

    Feeling our way to machine minds: People's emotions when perceiving mind in artificial intelligence

    Computers in Human Behavior

    (2019)
  • J. Stein et al.

    Stay back, clever thing! Linking situational control and human uniqueness concerns to the aversion against autonomous technology

    Computers in Human Behavior

    (2019)
  • H. Suen et al.

    Does the use of synchrony and artificial intelligence in video interviews affect interview ratings and applicant attitudes?

    Computers in Human Behavior

    (2019)
  • E.E. Tripoliti et al.

    A supervised method to assist the diagnosis and monitor progression of Alzheimer's disease using data from an fMRI experiment

    Artificial Intelligence in Medicine

    (2011)
  • A. Valmarska et al.

    Symptoms and medications change patterns for Parkinson's disease patients stratification

    Artificial Intelligence in Medicine

    (2018)
  • R. Watanabe et al.

    Horizontal inequity in healthcare access under the universal coverage in Japan; 1986–2007

    Social Science & Medicine

    (2012)
  • M. Ahlert et al.

    Which patients do I treat? An experimental study with economists and physicians

    Health Economics Review

    (2012)
  • T. Anthony et al.

    Cross-racial facial identification: A social cognitive integration

    Personality and Social Psychology Bulletin

    (1992)
  • S. Barocas et al.

    Big data's disparate impact

    California Law Review

    (2016)
  • C. Chan et al.

    Identifiable but not identical: Combining social identity and uniqueness motives in choice

    Journal of Consumer Research

    (2012)
  • S.K. Chugani et al.

    Happily ever after: The effect of identity-consistency on product satiation

    Journal of Consumer Research

    (2015)