Fairness in Design: A Tool for Guidance in Ethical Artificial Intelligence Design

  • Conference paper

In: Social Computing and Social Media: Experience Design and Social Network Analysis (HCII 2021)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 12774)

Abstract

As the artificial intelligence (AI) industry booms and the systems it creates increasingly affect our lives, we are beginning to realize that these systems are not as impartial as we thought. Even though they are machines making logical decisions, biases and discrimination can creep into the data and models and affect outcomes, causing harm. This pushes us to re-evaluate the design metrics for such systems and to focus more on integrating human values into them. However, even though awareness of the need for ethical AI systems is high, few methodologies are currently available to help designers and engineers incorporate human values into their designs. Our methodology tool aims to address this gap by helping product teams surface fairness concerns, navigate complex ethical choices around fairness, and overcome blind spots and team biases. It can also help them stimulate perspective-taking among multiple parties and stakeholders. With our tool, we aim to lower the bar for adding fairness to the design discussion, so that more design teams can make better and more informed decisions about fairness in their application scenarios.



Acknowledgements

This research is supported, in part, by Nanyang Technological University, Nanyang Assistant Professorship (NAP); Alibaba Group through Alibaba Innovative Research (AIR) Program and Alibaba-NTU Singapore Joint Research Institute (JRI) (Alibaba-NTU-AIR2019B1), Nanyang Technological University, Singapore; the RIE 2020 Advanced Manufacturing and Engineering (AME) Programmatic Fund (No. A20G8b0102), Singapore; and the Joint SDU-NTU Centre for Artificial Intelligence Research (C-FAIR).

Author information

Corresponding author: Ying Shu


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper

Cite this paper

Shu, Y., Zhang, J., Yu, H. (2021). Fairness in Design: A Tool for Guidance in Ethical Artificial Intelligence Design. In: Meiselwitz, G. (ed.) Social Computing and Social Media: Experience Design and Social Network Analysis. HCII 2021. Lecture Notes in Computer Science, vol. 12774. Springer, Cham. https://doi.org/10.1007/978-3-030-77626-8_34

  • DOI: https://doi.org/10.1007/978-3-030-77626-8_34

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-77625-1

  • Online ISBN: 978-3-030-77626-8

  • eBook Packages: Computer Science; Computer Science (R0)
