
Looking Inside the Black Box: Core Semantics Towards Accountability of Artificial Intelligence

Chapter in: From Software Engineering to Formal Methods and Tools, and Back

Abstract

Recent advances in artificial intelligence raise a number of concerns. Among the challenges to be addressed by researchers, accountability of artificial intelligence solutions is one of the most critical. This paper focuses on artificial intelligence applications using natural language, to investigate whether the core semantics defined for a large-scale natural language processing system could assist in addressing accountability issues. Core semantics aims to obtain a full interpretation of the content of natural language texts, representing both implicit and explicit knowledge, using only ‘subj-action-(obj)’ structures and causal, temporal, spatial and personal-world links. The first part of the paper summarises the difficulties to be addressed and the reasons why representing the meaning of a natural language text is relevant for artificial intelligence accountability. The second part illustrates a proof of concept for the application of such a knowledge representation to support accountability, together with a detailed example of the analysis produced by a prototype system named CoreSystem. While only preliminary, these results give some new insights and indicate that the proposed knowledge representation can be used to support accountability, looking inside the box.
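The abstract characterises the representation only informally. As a minimal sketch of how ‘subj-action-(obj)’ structures and causal, temporal, spatial and personal-world links could be encoded, assuming a simple in-memory data model, the following Python fragment may help; the names Unit, Link and Representation and their fields are illustrative assumptions, not taken from CoreSystem.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical encoding of the 'subj-action-(obj)' units and links described
# in the abstract; names and fields are illustrative, not the CoreSystem model.

@dataclass
class Unit:
    ident: str                 # unique identifier for the unit
    subj: str                  # subject of the action
    action: str                # the action (verb/predicate)
    obj: Optional[str] = None  # optional object, hence 'subj-action-(obj)'
    implicit: bool = False     # True if the unit is inferred rather than stated

@dataclass
class Link:
    kind: str        # 'causal', 'temporal', 'spatial' or 'personal-world'
    source: str      # ident of the unit (or entity) the link starts from
    target: str      # ident of the unit (or entity) the link points to
    label: str = ""  # e.g. 'before', 'from', 'suspects'

@dataclass
class Representation:
    units: List[Unit] = field(default_factory=list)
    links: List[Link] = field(default_factory=list)
```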


Notes

  1. The following brief description of the prototype system is provided in order to outline what was used to produce the analysis below. The system is at present not available for external testing; furthermore, as it is under development, no claims are made here about its coverage or efficiency with respect to other NLP systems.


Acknowledgments

As researchers in natural language processing and requirements engineering, the authors have shared a number of papers with Stefania Gnesi and her research group since the early 1990s. She is a passionate scientist, and these exchanges have resulted in a fruitful and enriching relationship.

Author information

Corresponding author

Correspondence to Luisa Mich.

Appendix A

Representation of the Meaning of the Sentence: “A 59-year-old man from York has been arrested on suspicion of murdering missing chef Claudia Lawrence”.

[Figures a, b and c: CoreSystem representation of the sentence]
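The appendix figures contain the actual CoreSystem analysis. Purely as a hedged illustration of the kind of explicit and implicit ‘subj-action-(obj)’ units and links such an analysis could contain, the following Python sketch hand-codes one possible decomposition of the sentence; every unit, link and label in it is an assumption, not CoreSystem output.

```python
# A hand-written, illustrative decomposition of the example sentence; it is
# NOT the CoreSystem output shown in the appendix figures, only a guess at
# the kind of content such a representation could hold.

units = [
    {"id": "u1", "subj": "man", "action": "be-arrested", "obj": None,
     "implicit": False},                      # explicit: the arrest event
    {"id": "u2", "subj": "man", "action": "murder", "obj": "Claudia Lawrence",
     "implicit": True},                       # implicit: only suspected, not asserted
    {"id": "u3", "subj": "Claudia Lawrence", "action": "be-missing", "obj": None,
     "implicit": False},                      # explicit: "missing"
    {"id": "u4", "subj": "Claudia Lawrence", "action": "be", "obj": "chef",
     "implicit": False},                      # explicit: "chef"
]

links = [
    # the suspicion of murder is the grounds for the arrest
    {"kind": "causal", "source": "u2", "target": "u1", "label": "grounds-for"},
    # the man is from York
    {"kind": "spatial", "source": "man", "target": "York", "label": "from"},
    # the (alleged) murder would precede the arrest
    {"kind": "temporal", "source": "u2", "target": "u1", "label": "before"},
    # the suspected murder belongs to the arresting authority's belief world
    {"kind": "personal-world", "source": "police", "target": "u2", "label": "suspects"},
]

# e.g. list the units that the text leaves implicit
for u in units:
    if u["implicit"]:
        print(u["subj"], u["action"], u["obj"])
```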


Copyright information

© 2019 Springer Nature Switzerland AG

About this chapter


Cite this chapter

Garigliano, R., Mich, L. (2019). Looking Inside the Black Box: Core Semantics Towards Accountability of Artificial Intelligence. In: ter Beek, M., Fantechi, A., Semini, L. (eds.) From Software Engineering to Formal Methods and Tools, and Back. Lecture Notes in Computer Science, vol. 11865. Springer, Cham. https://doi.org/10.1007/978-3-030-30985-5_16


  • DOI: https://doi.org/10.1007/978-3-030-30985-5_16

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-30984-8

  • Online ISBN: 978-3-030-30985-5

  • eBook Packages: Computer Science (R0)
