
Research on a Kind of Multi-objective Evolutionary Fuzzy System with a Flowing Data Pool and a Rule Pool for Interpreting Neural Networks

Published in: International Journal of Fuzzy Systems

Abstract

Explaining a neural network with fuzzy rules aligns with the way people reason. Multi-objective evolutionary fuzzy systems can produce fuzzy rules that are both accurate and highly interpretable, but their efficiency is low because they are often limited by the complexity of the problem. This paper therefore proposes a multi-objective evolutionary fuzzy system with a flowing data pool and a rule pool (FPs-MOEFS). Building on multi-objective evolutionary learning of fuzzy rules, an iteratively updated data pool is introduced so that each evolutionary iteration can focus on the data the current rules fail to predict, improving the algorithm's ability to find the global optimum; a fixed-size flowing rule pool is introduced to preserve the diversity of candidate rules during evolution while shortening the encoding length. To further raise the level of evolution, a co-evolution algorithm is integrated into the multi-objective framework, yielding a set of fuzzy rules with strong fitting ability and high interpretability. The fuzzy-rule interpretations of neural networks are visualized on artificial data, and comparative studies against other algorithms on UCI datasets demonstrate the effectiveness of the proposed algorithm. The method can provide accurate, highly interpretable fuzzy-rule explanations for any kind of neural network, and therefore has good application prospects in fields that need to understand the decision-making process of neural networks, especially high-risk decision-making domains.
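The two pooling mechanisms summarized above can be sketched in code. The Python fragment below is a minimal, hypothetical illustration of a flowing data pool that refocuses the next iteration on currently mispredicted samples, and of a fixed-size rule pool that keeps diverse candidate rules while letting chromosomes index into the pool instead of encoding full rules. The function names, pool sizes, and update policy are illustrative assumptions, not the authors' exact procedure.

# Hypothetical sketch of the "flowing data pool" and fixed-size "rule pool"
# ideas from the abstract; names and update policy are assumptions.
import random
from typing import Callable, List, Sequence, Tuple

Sample = Tuple[Sequence[float], int]   # (features, label given by the neural network)
Rule = object                          # stand-in for an encoded fuzzy rule


def update_data_pool(full_data: List[Sample],
                     predict: Callable[[Sequence[float]], int],
                     pool_size: int) -> List[Sample]:
    """Refill the flowing data pool so the next evolutionary iteration
    concentrates on samples the current rule base still predicts wrongly."""
    hard = [s for s in full_data if predict(s[0]) != s[1]]   # mispredicted samples first
    easy = [s for s in full_data if predict(s[0]) == s[1]]
    pool = hard[:pool_size]
    if len(pool) < pool_size:                                # top up with random easy samples
        pool += random.sample(easy, min(pool_size - len(pool), len(easy)))
    return pool


def update_rule_pool(rule_pool: List[Rule],
                     new_rules: List[Rule],
                     quality: Callable[[Rule], float],
                     pool_size: int) -> List[Rule]:
    """Keep a fixed-size pool of candidate rules: merge, rank by a quality
    score, and truncate, so diversity is preserved while the chromosome
    encoding stays short (it only references pool indices)."""
    merged = list({id(r): r for r in rule_pool + new_rules}.values())  # drop duplicates
    merged.sort(key=quality, reverse=True)
    return merged[:pool_size]

In a full multi-objective loop, these two updates would run once per generation, between evaluating the current non-dominated rule sets and producing the next offspring population.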



Data Availability

Not applicable.

Code Availability

Not applicable.


Funding

The work was supported in part by the Defense Industrial Technology Development Program under Grant JCKY2020601B018.

Author information


Contributions

Not applicable.

Corresponding authors

Correspondence to Wen-Ning Hao or Xiao-Han Yu.

Ethics declarations

Conflict of interest

Not applicable.

Rights and permissions

Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Zhang, K., Hao, WN., Yu, XH. et al. Research on a Kind of Multi-objective Evolutionary Fuzzy System with a Flowing Data Pool and a Rule Pool for Interpreting Neural Networks. Int. J. Fuzzy Syst. 25, 575–600 (2023). https://doi.org/10.1007/s40815-022-01392-y

