Abstract
The representation and learning of a first-order theory using neural networks remains an open problem. We define a propositional theory refinement system that uses min and max as its activation functions, and we extend it to the first-order case. In this extension, the basic computational element of the network is a node capable of complex symbolic processing. We also discuss several issues related to learning in this hybrid model.
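To make the min/max idea concrete, here is a minimal illustrative sketch (not the authors' implementation): a propositional rule network over truth values in [0, 1], where conjunctive rule bodies are AND-nodes computed with min and the heads combine rule activations with an OR-node computed with max. The example theory about `flies` is hypothetical and chosen only for illustration.

```python
def and_node(inputs):
    """Conjunction node: activation is the minimum of the antecedent values."""
    return min(inputs)

def or_node(inputs):
    """Disjunction node: activation is the maximum of the rule activations."""
    return max(inputs)

# Hypothetical propositional theory:
#   flies :- bird, not penguin.
#   flies :- plane.
def flies(bird, penguin, plane):
    rule1 = and_node([bird, 1.0 - penguin])  # bird AND NOT penguin
    rule2 = plane                            # a single-literal body
    return or_node([rule1, rule2])

print(flies(bird=0.9, penguin=0.1, plane=0.0))  # min(0.9, 0.9) -> 0.9
```

Because min and max are piecewise linear, such a network remains amenable to gradient-style refinement of weighted inputs, which is the setting the paper builds on.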
Copyright information
© 2000 Springer-Verlag Berlin Heidelberg
Cite this paper
Hallack, N.A., Zaverucha, G., Barbosa, V.C. (2000). Towards a Hybrid Model of First-Order Theory Refinement. In: Wermter, S., Sun, R. (eds) Hybrid Neural Systems. Hybrid Neural Systems 1998. Lecture Notes in Computer Science(), vol 1778. Springer, Berlin, Heidelberg. https://doi.org/10.1007/10719871_7
DOI: https://doi.org/10.1007/10719871_7
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-67305-7
Online ISBN: 978-3-540-46417-4