FMEA-AI: AI fairness impact assessment using failure mode and effects analysis

  • Original Research
  • Published in: AI and Ethics (2022)

Abstract

Recently, there has been growing demand to address failures in the fairness of artificial intelligence (AI) systems. Current techniques for improving fairness in AI systems focus on broad changes to the norms, procedures and algorithms used by the companies that implement those systems. However, some organizations may require detailed methods to identify which user groups are disproportionately impacted by failures in specific components of their systems. Failure mode and effects analysis (FMEA) is a popular safety engineering method and is proposed here as a vehicle to support the conduct of “AI fairness impact assessments” in organizations. An extension to FMEA called “FMEA-AI” is proposed as a modification to a tool familiar to engineers and manufacturers that can integrate moral sensitivity and ethical considerations into a company’s existing design process. Whereas current impact assessments focus on helping regulators identify an aggregate risk level for an entire AI system, FMEA-AI helps companies identify safety and fairness risks across multiple failure modes of an AI system. It also explicitly identifies user groups and adopts an objective definition of fairness as proportional satisfaction of claims when calculating the likelihood and severity of fairness-related failures. The proposed method can help industry analysts adapt a widely known safety engineering method to incorporate AI fairness considerations, promote moral sensitivity and overcome resistance to change.
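To make this concrete, the sketch below shows how a single FMEA-AI-style worksheet row might combine a failure-mode likelihood with a fairness severity derived from proportional satisfaction of claims (cf. Broome [24]). It is a minimal illustration under our own assumptions: the names, the 0–1 scales, and the shortfall-based severity formula are illustrative choices, not the method defined in the paper.

```python
# Illustrative sketch only: the field names, 0-1 scales, and the
# shortfall-based severity formula are assumptions for exposition,
# not the procedure defined in the paper.
from dataclasses import dataclass


@dataclass
class FailureModeRow:
    component: str       # AI system component under analysis
    failure_mode: str    # how that component can fail
    user_group: str      # user group affected by the failure
    likelihood: float    # estimated probability of the failure mode (0-1)
    claim: float         # strength of the group's claim to the good or service
    satisfaction: float  # degree to which that claim is currently satisfied (0-1)


def fairness_severity(row, total_claims, total_satisfaction):
    """Severity as the group's shortfall from proportional satisfaction of
    claims: under perfect fairness, each group's share of satisfaction
    equals its share of claims."""
    fair_share = row.claim / total_claims
    actual_share = row.satisfaction / total_satisfaction
    return max(0.0, fair_share - actual_share)  # count only the shortfall


def fairness_risk(row, severity):
    """Likelihood x severity, analogous to a classic FMEA risk number."""
    return row.likelihood * severity


rows = [
    FailureModeRow("loan classifier", "elevated false rejections", "group A",
                   likelihood=0.10, claim=1.0, satisfaction=0.6),
    FailureModeRow("loan classifier", "elevated false rejections", "group B",
                   likelihood=0.10, claim=1.0, satisfaction=0.9),
]
total_c = sum(r.claim for r in rows)
total_s = sum(r.satisfaction for r in rows)
for r in rows:
    sev = fairness_severity(r, total_c, total_s)
    print(f"{r.user_group}: severity={sev:.3f}, risk={fairness_risk(r, sev):.4f}")
```

With equal claims, group A’s below-proportional satisfaction yields a positive severity (and hence a nonzero fairness risk for that row), while group B’s above-proportional share yields zero; ranking rows by this risk mirrors how classic FMEA prioritizes failure modes by risk priority number.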

Notes

  1. This work uses the term “impact assessment” rather than “risk assessment”, which past work has defined as an algorithm that assesses the risk an individual poses of defaulting on a loan, repeating a criminal offense, etc. (e.g., [5, 6]). For example, COMPAS is a risk assessment system used in US courts to assess the likelihood that a defendant will reoffend (cf. [6]). In contrast, an impact assessment calculates the risks that an AI system will result in poor performance, breaches of data privacy, bias, etc.

  2. We note that fairness risk is calculated using a probability value for a failure mode, but that formal verification and test methods could be used with AI systems to determine allocations of goods or likelihoods with high certainty.

References

  1. O’Neil, C.: Weapons of math destruction: how big data increases inequality and threatens democracy. Crown, New York (2016)

  2. Shneiderman, B.: Opinion: The dangers of faulty, biased, or malicious algorithms requires independent oversight. Proc. Natl. Acad. Sci. 113, 13538–13540 (2016). https://doi.org/10.1073/pnas.1618211113

  3. Shneiderman, B.: Bridging the gap between ethics and practice: guidelines for reliable, safe, and trustworthy human-centered AI systems. ACM Trans. Interact. Intell. Syst. 10, 1–31 (2020). https://doi.org/10.1145/3419764

  4. Bernstein, M.S., Levi, M., Magnus, D., Rajala, B., Satz, D., Waeiss, C.: ESR: ethics and society review of artificial intelligence research. arXiv (2021). https://doi.org/10.48550/arXiv.2106.11521

  5. Wallace, R.: ‘The names have changed, but the game’s the same’: artificial intelligence and racial policy in the USA. AI Ethics 1, 389–394 (2021). https://doi.org/10.1007/s43681-021-00061-4

  6. Benjamins, R.: A choices framework for the responsible use of AI. AI Ethics 1, 49–53 (2021). https://doi.org/10.1007/s43681-020-00012-5

  7. Yeung, L.A.: Guidance for the development of AI risk and impact assessments. UC Berkeley Center for Long-Term Cybersecurity, Berkeley (2021)

  8. Taddeo, M., Floridi, L.: How AI can be a force for good. Science 361, 751–752 (2018). https://doi.org/10.1126/science.aat5991

  9. Treasury Board of Canada: Algorithmic impact assessment tool. https://www.canada.ca/en/government/system/digital-government/digital-government-innovations/responsible-use-ai/algorithmic-impact-assessment.html (2021). Accessed 31 Aug 2021

  10. Balogun, J., Hailey, V.H.: Exploring strategic change. Pearson Education, London (2008)

  11. Kang, S.: Change management: term confusion and new classifications. Perform. Improv. 54, 26–32 (2015). https://doi.org/10.1002/pfi.21466

  12. Rogers, E.: Diffusion of innovations. The Free Press, New York (1962)

  13. Rothwell, W.J.: Roles, competencies, and outputs of human performance improvement. In: Rothwell, W.J. (ed.) ASTD models for human performance improvement: roles, competencies, and outputs, 2nd edn., pp. 17–32. The American Society for Training and Development, Alexandria (1999)

  14. Elahi, B.: Safety risk management for medical devices. Academic Press (2018)

  15. Bouti, A., Kadi, D.A.: A state-of-the-art review of FMEA/FMECA. Int. J. Reliab. Qual. Saf. Eng. 1, 515–543 (1994). https://doi.org/10.1142/S0218539394000362

  16. Stamatis, D.H.: Failure mode and effect analysis: FMEA from theory to execution. Quality Press, Welshpool (2003)

  17. Meyer, T., Reniers, G.: Engineering risk management. De Gruyter, Berlin (2013)

  18. Stanojević, D., Ćirović, V.: Contribution to development of risk analysis methods by application of artificial intelligence techniques. Qual. Reliab. Eng. Int. 36, 2268–2284 (2020). https://doi.org/10.1002/qre.2695

  19. Galloway, D.L.: A change management, systems thinking, or organizational development approach to the no child left behind act. Perform. Improv. 46, 10–16 (2007). https://doi.org/10.1002/pfi.128

  20. Borenstein, J., Howard, A.: Emerging challenges in AI and the need for AI ethics education. AI Ethics 1, 61–65 (2021). https://doi.org/10.1007/s43681-020-00002-7

  21. Eitel-Porter, R.: Beyond the promise: implementing ethical AI. AI Ethics 1, 73–80 (2021). https://doi.org/10.1007/s43681-020-00011-6

  22. Lauer, D.: You cannot have AI ethics without ethics. AI Ethics 1, 21–25 (2021). https://doi.org/10.1007/s43681-020-00013-4

  23. Rescher, N.: Fairness. Routledge, Milton Park (2018)

  24. Broome, J.: Fairness. Proc. Aristot. Soc. 91, 87–101 (1990)

  25. Heilmann, C., Wintein, S.: No envy: Jan Tinbergen on fairness. Erasmus J. Philos. Econ. 14, 222–245 (2021). https://doi.org/10.23941/ejpe.v14i1.610

  26. Henin, C., Le Métayer, D.: A framework to contest and justify algorithmic decisions. AI Ethics 1, 463–476 (2021). https://doi.org/10.1007/s43681-021-00054-3

  27. You, J.K.: A critique of the ‘as-if’ approach to machine ethics. AI Ethics 1, 545–552 (2021). https://doi.org/10.1007/s43681-021-00070-3

  28. Lee, M.K.: Understanding perception of algorithmic decisions: fairness, trust, and emotion in response to algorithmic management. Big Data Soc. 5, 2053951718756684 (2018). https://doi.org/10.1177/2053951718756684

  29. Mohseni, S., Zarei, N., Ragan, E.D.: A multidisciplinary survey and framework for design and evaluation of explainable AI systems. ACM Trans. Interact. Intell. Syst. 11, 1–45 (2021). https://doi.org/10.1145/3387166

  30. Lepri, B., Oliver, N., Letouzé, E., Pentland, A., Vinck, P.: Fair, transparent, and accountable algorithmic decision-making processes: the premise, the proposed solutions, and the open challenges. Philos. Technol. 31, 611–627 (2018). https://doi.org/10.1007/s13347-017-0279-x

  31. Pedreschi, D., Ruggieri, S., Turini, F.: The discovery of discrimination. In: Custers, B., Calders, T., Schermer, B., Zarsky, T. (eds.) Discrimination and privacy in the information society. Springer, Heidelberg (2013)

  32. Federal Laws of Canada: Canadian human rights act: revised statutes of Canada (1985, c. H-6). https://laws-lois.justice.gc.ca/eng/acts/H-6/ (2021). Accessed 10 Nov 2021

  33. Sambasivan, N., Arnesen, E., Hutchinson, B., Doshi, T., Prabhakaran, V.: Re-imagining algorithmic fairness in India and beyond. In: Proceedings of the 2021 ACM conference on fairness, accountability, and transparency, pp. 315–328. ACM, Virtual Event, Canada (2021)

  34. Barredo Arrieta, A., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., Garcia, S., Gil-Lopez, S., Molina, D., Benjamins, R., Chatila, R., Herrera, F.: Explainable artificial intelligence (XAI): concepts, taxonomies, opportunities and challenges toward responsible AI. Inf. Fusion 58, 82–115 (2020). https://doi.org/10.1016/j.inffus.2019.12.012

  35. Amershi, S., Weld, D., Vorvoreanu, M., Fourney, A., Nushi, B., Collisson, P., Suh, J., Iqbal, S., Bennett, P.N., Inkpen, K., Teevan, J.: Guidelines for human-AI interaction. In: Proceedings of the 2019 CHI conference on human factors in computing systems, pp. 1–13. ACM, Glasgow, Scotland, UK (2019)

  36. Nagbøl, P.R., Müller, O., Krancher, O.: Designing a risk assessment tool for artificial intelligence systems. In: International conference on design science research in information systems and technology (DESRIST 2021), pp. 328–339. Springer, Cham. (2021)

  37. National Science and Technology Council: The national artificial intelligence research and development strategic plan: 2019 update. https://www.nitrd.gov/pubs/National-AI-RD-Strategy-2019.pdf (2019). Accessed 1 Sep 2021

  38. Mantelero, A.: AI and big data: a blueprint for a human rights, social and ethical impact assessment. Comput. Law Secur. Rev. 34, 754–772 (2018). https://doi.org/10.1016/j.clsr.2018.05.017

  39. European Commission: Ethics guidelines for trustworthy AI. https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai (2019). Accessed 1 Sep 2021

  40. Madaio, M.A., Stark, L., Wortman Vaughan, J., Wallach, H.: Co-designing checklists to understand organizational challenges and opportunities around fairness in AI. In: Proceedings of the 2020 CHI conference on human factors in computing systems, pp. 1–14. ACM, Honolulu, HI, USA (2020)

  41. Kamiran, F., Calders, T., Pechenizkiy, M.: Discrimination aware decision tree learning. In: 2010 IEEE international conference on data mining, pp. 869–874 (2010)

  42. Kamishima, T., Akaho, S., Asoh, H., Sakuma, J.: Fairness-aware classifier with prejudice remover regularizer. In: Flach, P.A., De Bie, T., Cristianini, N. (eds.) Machine learning and knowledge discovery in databases: European conference (ECML PKDD 2012), pp. 35–50. Springer, Berlin (2012)

  43. Feldman, M., Friedler, S.A., Moeller, J., Scheidegger, C., Venkatasubramanian, S.: Certifying and removing disparate impact. In: Proceedings of the 21st ACM SIGKDD international conference on knowledge discovery and data mining, pp. 259–268. ACM, Sydney, NSW, Australia (2015)

  44. De Cremer, D., De Schutter, L.: How to use algorithmic decision-making to promote inclusiveness in organizations. AI Ethics 1, 563–567 (2021). https://doi.org/10.1007/s43681-021-00073-0

  45. Kleinberg, J., Mullainathan, S., Raghavan, M.: Inherent trade-offs in the fair determination of risk scores. arXiv:1609.05807v2 (2016)

  46. Joseph, M., Kearns, M., Morgenstern, J., Neel, S., Roth, A.: Fair algorithms for infinite and contextual bandits. arXiv:1610.09559v4 (2016)

  47. Zhang, X., Khalili, M.M., Liu, M.: Long-term impacts of fair machine learning. Ergon. Des. 28, 7–11 (2020). https://doi.org/10.1177/1064804619884160

  48. Raab, C.D.: Information privacy, impact assessment, and the place of ethics. Comput. Law Secur. Rev. 37, 105404 (2020). https://doi.org/10.1016/j.clsr.2020.105404

  49. Kazim, E., Koshiyama, A.: The interrelation between data and AI ethics in the context of impact assessments. AI Ethics 1, 219–225 (2021). https://doi.org/10.1007/s43681-020-00029-w

  50. Moraes, T.G., Almeida, E.C., de Pereira, J.R.L.: Smile, you are being identified! Risks and measures for the use of facial recognition in (semi-)public spaces. AI Ethics 1, 159–172 (2021). https://doi.org/10.1007/s43681-020-00014-3

  51. Lauer, D.: Facebook’s ethical failures are not accidental; they are part of the business model. AI Ethics 1, 395–403 (2021). https://doi.org/10.1007/s43681-021-00068-x

  52. Kazim, E., Denny, D.M.T., Koshiyama, A.: AI auditing and impact assessment: according to the UK information commissioner’s office. AI Ethics 1, 301–310 (2021). https://doi.org/10.1007/s43681-021-00039-2

  53. Information Commissioner’s Office (ICO): Guidance on the AI auditing framework: draft guidance for consultation. https://ico.org.uk/for-organisations/guide-to-data-protection/key-dp-themes/guidance-on-artificial-intelligence-and-data-protection/ (2020). Accessed 1 Nov 2021

  54. Calvo, R.A., Peters, D., Cave, S.: Advancing impact assessment for intelligent systems. Nat. Mach. Intell. 2, 89–91 (2020). https://doi.org/10.1038/s42256-020-0151-z

  55. Mantelero, A., Esposito, M.S.: An evidence-based methodology for human rights impact assessment (HRIA) in the development of AI data-intensive systems. Comput. Law Secur. Rev. 41, 105561 (2021). https://doi.org/10.1016/j.clsr.2021.105561

  56. Bonnefon, J.-F., Shariff, A., Rahwan, I.: The social dilemma of autonomous vehicles. Science (2016). https://doi.org/10.1126/science.aaf2654

  57. Department of Defense: System safety, MIL-STD-882E. http://everyspec.com/MIL-STD/MIL-STD-0800-0899/MIL-STD-882E_41682/ (2012). Accessed 27 Aug 2021

  58. Holmes, A., Illowsky, B., Dean, S., Hadley, K.: Introductory business statistics. Rice University, OpenStax College (2017)

  59. Howell, D.C.: Confidence intervals on effect size, p. 11. University of Vermont, Vermont (2011)

  60. Szczepanek, A.: t-test calculator. In: Omni Calculator. https://www.omnicalculator.com/statistics/t-test (2021). Accessed 11 Feb 2022

  61. Stat Trek: Hypothesis test: difference in means. In: Stat Trek: teach yourself statistics. https://stattrek.com/hypothesis-test/difference-in-means.aspx (2022). Accessed 10 Feb 2022

  62. Automotive Industry Action Group: Potential Failure Mode & Effects Analysis, 4th edn. AIAG, Michigan (2008)

  63. NASA Goddard Space Flight Center: Standard for performing a failure mode and effects analysis (FMEA) and establishing a critical items list (CIL). NASA

  64. Ostrom, L.T., Wilhelmsen, C.A.: Risk assessment: tools, techniques, and their applications. Wiley, New York (2019)

  65. Joshi, G., Joshi, H.: FMEA and alternatives v/s enhanced risk assessment mechanism. Int. J. Comput. Appl. 93, 33–37 (2014)

  66. Herrmann, A.: The quantitative estimation of IT-related risk probabilities. Risk Anal. 33, 1510–1531 (2013). https://doi.org/10.1111/risa.12001

Acknowledgements

This research is funded by the Natural Sciences and Engineering Research Council of Canada (NSERC), Grant No. RGPIN-2021-03139.

Author information

Corresponding author

Correspondence to Jamy Li.

Ethics declarations

Conflict of interest

The authors have no competing interests to declare that are relevant to the content of this article.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Li, J., Chignell, M. FMEA-AI: AI fairness impact assessment using failure mode and effects analysis. AI Ethics 2, 837–850 (2022). https://doi.org/10.1007/s43681-022-00145-9
