
Accurate decomposition of standard MLP classification responses into symbolic rules

Biological and Artificial Computation: From Neuroscience to Technology (IWANN 1997)

Part of the book series: Lecture Notes in Computer Science ((LNCS,volume 1240))

Abstract

In this work we determine hyper-plane equations from three MLP models: the standard MLP, the OMLP (Oblique MLP), and the IMLP (Interpretable MLP). For the OMLP and the IMLP, hyper-plane equations are determined easily, whereas for the standard MLP we only give a sufficient condition for detecting potential hyper-plane discriminators.
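The extraction procedure itself is not spelled out in this abstract. As a rough, hypothetical illustration only (not the authors' code), each first-layer unit of an MLP with incoming weights w and bias b defines a candidate discriminating hyper-plane w·x + b = 0; the sketch below, which assumes scikit-learn purely for convenience, reads these coefficients out of a small trained network.

```python
# A minimal, hypothetical sketch (not the authors' implementation): read
# candidate hyper-plane equations  w . x + b = 0  from the first layer of a
# trained MLP.  scikit-learn is used here purely for convenience.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                      # toy 2-D inputs
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)      # toy binary labels

mlp = MLPClassifier(hidden_layer_sizes=(3,), max_iter=2000,
                    random_state=0).fit(X, y)

# Each hidden unit j defines the hyper-plane  sum_i W[i, j] * x_i + b[j] = 0.
W, b = mlp.coefs_[0], mlp.intercepts_[0]           # W: (n_features, n_hidden)
for j in range(W.shape[1]):
    terms = " ".join(f"{W[i, j]:+.3f}*x{i + 1}" for i in range(W.shape[0]))
    print(f"hidden unit {j}: {terms} {b[j]:+.3f} = 0")
```

Per the abstract, the OMLP and IMLP architectures make this reading straightforward, while for a standard MLP only units satisfying the paper's sufficient condition would be retained as potential discriminators.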

Our goal is to justify MLP classification responses in terms of symbolic rules. For this purpose we use a standard MLP network for classification and an IMLP network to justify the MLP responses. The system consists of training the IMLP network on the MLP responses and extracting symbolic rules from the IMLP. The approach is general enough to work even when input variables are continuous. Moreover, the justification provided by the IMLP is accurate: if the MLP and IMLP responses contradict each other on a new, unknown example, the IMLP is retrained with that example added until the system becomes coherent. Finally, we present results from a medical diagnosis application with continuous input variables.
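The abstract does not detail the retraining procedure; the following sketch shows one plausible reading of the coherence loop, with a decision tree standing in for the IMLP and all names (`X_just`, `surrogate`, ...) purely illustrative.

```python
# A hypothetical sketch of the coherence loop, with a decision tree standing
# in for the IMLP (the IMLP architecture itself is not reproduced here).
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 4))
y = (X[:, 0] - X[:, 2] > 0).astype(int)

# 1. Standard MLP trained for classification.
mlp = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                    random_state=1).fit(X, y)

# 2. Interpretable surrogate trained on the MLP's responses, not the true labels.
X_just, y_just = X.copy(), mlp.predict(X)
surrogate = DecisionTreeClassifier(random_state=1).fit(X_just, y_just)

# 3. On a new example, if the two models contradict each other, add the example
#    (with the MLP's response) to the surrogate's training set and retrain
#    until the system is coherent again.
x_new = rng.normal(size=(1, 4))
mlp_label = mlp.predict(x_new)[0]
while surrogate.predict(x_new)[0] != mlp_label:
    X_just = np.vstack([X_just, x_new])
    y_just = np.append(y_just, mlp_label)
    surrogate = DecisionTreeClassifier(random_state=1).fit(X_just, y_just)

print("coherent response:", mlp_label)   # rules can now be read off `surrogate`
```

The point of the loop is that the surrogate's rules are only reported once they reproduce the MLP's response on the example being justified, which is what the abstract means by the justification being accurate.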

Editor information

José Mira, Roberto Moreno-Díaz, Joan Cabestany

Copyright information

© 1997 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Bologna, G., Pellegrini, C. (1997). Accurate decomposition of standard MLP classification responses into symbolic rules. In: Mira, J., Moreno-Díaz, R., Cabestany, J. (eds) Biological and Artificial Computation: From Neuroscience to Technology. IWANN 1997. Lecture Notes in Computer Science, vol 1240. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0032521

  • DOI: https://doi.org/10.1007/BFb0032521

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-63047-0

  • Online ISBN: 978-3-540-69074-0
