Research Article
DOI: 10.1145/3507485.3507508

A Narrative Exploration of Improper AI Design & Execution and Possible Effects on Human Values

Published: 08 March 2022

ABSTRACT

Artificial Intelligence (AI) technology is increasingly being adopted in practice for a wide range of tasks. Many examples now exist of AI being improperly designed or used, which often leads to the violation of human values such as equality or safety. Organizations can ground these human values in AI by composing principles. This paper provides examples from practice of improper design and usage of AI with regard to a selection of human values. In total, 54 examples were identified using a narrative literature search technique, covering seven selected human values. The examples show that problems in design and problems in execution can be interrelated, but need not be. Moreover, improper design and execution of AI solutions are not necessarily intentional; they often result from a lack of understanding of AI technology and its implementation. Lastly, examples of principles that can ground human values are provided.


Published in
    ICSEB '21: Proceedings of the 2021 5th International Conference on Software and e-Business
    December 2021
    153 pages
ISBN: 9781450385831
DOI: 10.1145/3507485

    Copyright © 2021 ACM

    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

    Publisher

    Association for Computing Machinery

    New York, NY, United States
