
Explainable AI and Fuzzy Logic Systems

  • Conference paper
Theory and Practice of Natural Computing (TPNC 2018)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 11324)


Abstract

The recent advances in computing power, coupled with the rapid increase in the quantity of available data, have led to a resurgence in the theory and applications of Artificial Intelligence (AI). However, the use of complex AI algorithms such as deep learning and random forests can result in a lack of transparency to users; such systems are termed black-box (or opaque) models. Thus, for AI to be trusted and widely used by governments and industries, there is a need for greater transparency through the creation of explainable AI (XAI) systems. In this paper, we introduce the concepts of XAI and give an overview of hybrid systems that employ fuzzy logic, which hold great promise for creating trusted and explainable AI systems.
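A frequently cited reason fuzzy logic systems are regarded as explainable is that the model is itself a short list of human-readable IF-THEN rules over linguistic terms, so every prediction can be traced back to the rules that fired. The Python sketch below illustrates this idea; it is a hypothetical example, not the authors' system, and the input name risk_score, the membership functions, and the rule consequents are all invented for illustration.

```python
# A minimal sketch of why fuzzy rule-based models are considered
# interpretable: the model itself is a short list of human-readable rules.
# Everything here (the variable "risk_score", the membership functions,
# the consequents) is a hypothetical illustration, not code from the paper.

def triangular(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Linguistic terms for a single input "risk_score" in [0, 100].
# The end-point offsets (-1, 101) keep boundary memberships at 1.
LOW = lambda x: triangular(x, -1.0, 0.0, 50.0)
MEDIUM = lambda x: triangular(x, 25.0, 50.0, 75.0)
HIGH = lambda x: triangular(x, 50.0, 100.0, 101.0)

# Each rule: (antecedent, readable rule text, crisp consequent in [0, 1]).
RULES = [
    (LOW, "IF risk_score is LOW THEN approve", 1.0),
    (MEDIUM, "IF risk_score is MEDIUM THEN review", 0.5),
    (HIGH, "IF risk_score is HIGH THEN reject", 0.0),
]

def infer(x):
    """Zero-order Takagi-Sugeno style inference: the crisp output is the
    firing-strength-weighted average of the rule consequents. Returning
    the per-rule strengths makes every decision traceable to its rules."""
    fired = [(mu(x), text, out) for mu, text, out in RULES]
    den = sum(s for s, _, _ in fired)
    num = sum(s * out for s, _, out in fired)
    return (num / den if den else None), fired

output, fired = infer(62.0)
print(f"output = {output:.2f}")  # ~0.34, leaning towards "reject"
for strength, text, _ in fired:
    if strength > 0:
        print(f"  fired at {strength:.2f}: {text}")
```

For an input of 62, the sketch reports a crisp output of about 0.34 together with the two rules that produced it (MEDIUM firing at 0.52 and HIGH at 0.24), which is exactly the kind of decision trace an opaque model does not offer.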



Author information

Corresponding author

Correspondence to Hani Hagras.


Copyright information

© 2018 Springer Nature Switzerland AG

About this paper


Cite this paper

Chimatapu, R., Hagras, H., Starkey, A., Owusu, G. (2018). Explainable AI and Fuzzy Logic Systems. In: Fagan, D., Martín-Vide, C., O'Neill, M., Vega-Rodríguez, M.A. (eds) Theory and Practice of Natural Computing. TPNC 2018. Lecture Notes in Computer Science, vol 11324. Springer, Cham. https://doi.org/10.1007/978-3-030-04070-3_1

Download citation

  • DOI: https://doi.org/10.1007/978-3-030-04070-3_1

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-04069-7

  • Online ISBN: 978-3-030-04070-3

  • eBook Packages: Computer Science, Computer Science (R0)
