The Ugly Truth About Ourselves and Our Robot Creations: The Problem of Bias and Social Inequity

  • Original Paper
  • Published in Science and Engineering Ethics (2018)

Abstract

Recently, there has been an upsurge of attention focused on bias and its impact on specialized artificial intelligence (AI) applications. Allegations of racism and sexism have permeated the conversation as stories surface about search engines delivering job postings for well-paying technical jobs to men and not women, or providing arrest mugshots when keywords such as “black teenagers” are entered. Learning algorithms are evolving; they are often created from parsing through large datasets of online information while having truth labels bestowed on them by crowd-sourced masses. These specialized AI algorithms have been liberated from the minds of researchers and startups, and released onto the public. Yet intelligent though they may be, these algorithms maintain some of the same biases that permeate society. They find patterns within datasets that reflect implicit biases and, in so doing, emphasize and reinforce these biases as global truth. This paper describes specific examples of how bias has infused itself into current AI and robotic systems, and how it may affect the future design of such systems. More specifically, we draw attention to how bias may affect the functioning of (1) a robot peacekeeper, (2) a self-driving car, and (3) a medical robot. We conclude with an overview of measures that could be taken to mitigate or halt bias from permeating robotic technology.
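As a purely illustrative aside (not part of the original article), the mechanism the abstract describes, in which a learning algorithm absorbs implicit biases encoded in crowd-sourced training labels and then reproduces them at prediction time, can be sketched with a hypothetical, synthetic "job advertisement" classifier. All features, labels, and numbers below are invented for illustration only.

    # Illustrative sketch only (not from the paper): a toy classifier trained on
    # crowd-sourced labels that encode a gender bias reproduces that bias.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 2000

    # Hypothetical features: qualification score (drawn identically for both
    # groups) and a gender indicator (0 or 1).
    qualification = rng.normal(0.0, 1.0, n)
    gender = rng.integers(0, 2, n)
    X = np.column_stack([qualification, gender])

    # Crowd-sourced "show the high-paying job ad?" labels: nominally driven by
    # qualification, but labelers are less likely to answer yes for one group.
    bias_penalty = 1.5 * gender  # implicit bias baked into the labels
    y = (qualification - bias_penalty + rng.normal(0.0, 0.5, n)) > 0

    model = LogisticRegression().fit(X, y)

    # Two equally qualified candidates who differ only in the gender feature:
    equally_qualified = np.array([[1.0, 0.0], [1.0, 1.0]])
    print(model.predict_proba(equally_qualified)[:, 1])
    # The model assigns a markedly lower probability to the second candidate,
    # showing how a pattern in biased training data is learned and reinforced.

The point of the sketch is that nothing in the algorithm itself is "prejudiced"; the disparity arises entirely because the model faithfully fits labels that already reflect a human bias.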

Notes

  1. Refer to http://moralmachine.mit.edu/ (accessed July 3, 2017).

  2. For more information refer to https://implicit.harvard.edu/implicit/ (accessed July 3, 2017).

  3. For example, see IEEE PROJECT: P7003—Algorithmic Bias Considerations, https://standards.ieee.org/develop/project/7003.html (accessed August 24, 2017).

Author information

Corresponding author

Correspondence to Jason Borenstein.

About this article

Cite this article

Howard, A., Borenstein, J. The Ugly Truth About Ourselves and Our Robot Creations: The Problem of Bias and Social Inequity. Sci Eng Ethics 24, 1521–1536 (2018). https://doi.org/10.1007/s11948-017-9975-2
