Abstract
A first-order Bayesian network (FOBN) extends first-order logic to cope with uncertainty, which makes it an attractive formalism for building effective classifiers. However, because of the complexity of the FOBN, learning it directly from relational data is difficult. This paper proposes an alternative way to learn FOBN classifiers: we adapt Inductive Logic Programming (ILP) and a standard Bayesian network learner to construct the FOBN. To this end, we propose a feature extraction algorithm that generates the significant parts (features) of ILP rules and uses these features as the main structure of the induced FOBN. To learn the remaining parts of the FOBN structure and its conditional probability tables with a standard Bayesian network learner, we also propose an efficient propositionalisation algorithm that translates the original relational data into a single-table format. We provide a preliminary evaluation on the mutagenesis problem, a standard benchmark for relational learning, and compare the results with those of the state-of-the-art ILP learner, the PROGOL system.
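To make the pipeline concrete, the following is a minimal sketch of the propositionalisation step the abstract describes, under the assumption that each feature extracted from an ILP rule can be represented as a conjunction of facts and that coverage reduces to a subset test. It is not the authors' actual algorithm: a real system would keep variables in the features and test coverage by theta-subsumption, and all predicate names and data below are invented for illustration.

# Hypothetical sketch: every feature extracted from an ILP rule becomes one
# boolean column, and every relational example is tested against every
# feature, yielding the single table a standard Bayesian network learner
# expects. Features are kept ground here for simplicity; all names and
# data are invented.

def covers(feature, example_facts):
    # A ground feature covers an example if all of its facts are present
    # in the example's background knowledge.
    return feature.issubset(example_facts)

def propositionalise(examples, features):
    # One row per example, one boolean column per feature, plus the class.
    table = []
    for facts, label in examples:
        row = {f"feat_{i}": covers(f, facts) for i, f in enumerate(features)}
        row["class"] = label
        table.append(row)
    return table

# Toy mutagenesis-style data (invented): features are fragments of rule bodies.
features = [
    frozenset({("atom", "a1", "carbon")}),
    frozenset({("bond", "a1", "a2"), ("atom", "a2", "oxygen")}),
]
examples = [
    ({("atom", "a1", "carbon"), ("bond", "a1", "a2"),
      ("atom", "a2", "oxygen")}, "active"),
    ({("atom", "a1", "nitrogen")}, "inactive"),
]
print(propositionalise(examples, features))
# [{'feat_0': True, 'feat_1': True, 'class': 'active'},
#  {'feat_0': False, 'feat_1': False, 'class': 'inactive'}]

The resulting boolean table is exactly the single-table format that an attribute-value learner can consume; in the approach the abstract describes, it is what the Bayesian network learner would use to fill in the remaining structure and the conditional probability tables.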
References
E. Alphonse and C. Rouveirol. Lazy Propositionalisation for Relational Learning. In W. Horn, editor, Proc. of the 14th European Conference on Artificial Intelligence, Berlin, Germany, pages 256–260. IOS Press, 2000.
M. Botta, A. Giordana, L. Saitta, and M. Sebag. Relational Learning: Hard Problems and Phase Transitions. Selected Papers from AIIA'99, Springer-Verlag, 2000.
D. M. Chickering. Learning Bayesian Networks is NP-Complete. In D. Fisher and H. J. Lenz, editors, Learning from Data: Artificial Intelligence and Statistics V, 1996.
D.M. Chickering. The WinMine Toolkit. Technical Report MSR-TR-2002-103, Microsoft, 2002.
L. De Raedt. Attribute value learning versus inductive logic programming: The missing links (extended abstract). In D. Page, editor, Proc. of the 8th Int. Conference on Inductive Logic Programming, LNAI 1446, pages 1–8. Springer-Verlag, 1998.
S. Dzeroski. Relational Data Mining Applications: An Overview. In S. Dzeroski and N. Lavrac, editors, Relational Data Mining. Springer-Verlag, 2001.
D. Fensel, M. Zickwolff, and M. Weise. Are Substitutions the Better Examples? In L. De Raedt, editor, Proc. of the 5th International Workshop on Inductive Logic Programming, 1995.
P. A. Flach and N. Lachiche. 1BC: A first-order Bayesian classifier. In S. Dzeroski and P. A. Flach, editors, Proc. of the 9th International Workshop on Inductive Logic Programming, LNAI 1634, pages 92–103. Springer-Verlag, 1999.
L. Getoor, N. Friedman, D. Koller, and A. Pfeffer. Learning Probabilistic Relational Models. In S. Dzeroski and N. Lavrac, editors, Relational Data Mining. Springer-Verlag, 2001.
K. Kersting and L. De Raedt. Basic Principles of Learning Bayesian Logic Programs. Technical Report No. 174, Institute for Computer Science, University of Freiburg, Germany, June 2002.
K. Kersting and L. De Raedt. Bayesian Logic Programs. In J. Cussens and A. Frisch, editors, Work-in-Progress Reports of the Tenth International Conference on Inductive Logic Programming (ILP-2000), London, U.K., 2000.
B. Kijsirikul, S. Sinthupinyo, and K. Chongkasemwongse. Approximate Match of Rules Using Backpropagation Neural Networks. Machine Learning, 44(3), September 2001.
D. Koller and A. Pfeffer. Object-Oriented Bayesian Networks. Proc. of UAI, 1997.
S. Kramer, N. Lavrac, and P. Flach. Propositionalization Approaches to Relational Data Mining. In S. Dzeroski and N. Lavrac, editors, Relational Data Mining. Springer-Verlag, 2001.
N. Lavrac and S. Dzeroski. Inductive Logic Programming: Techniques and Applications. Ellis Horwood, New York, 1994.
E. McCreath and A. Sharma. ILP with Noise and Fixed Example Size: A Bayesian Approach. In Proc. of the 15th International Joint Conference on Artificial Intelligence (IJCAI), Nagoya, Japan, August 1997.
S. Muggleton and L. De Raedt. Inductive Logic Programming: Theory and Methods. Journal of Logic Programming, 12:1–80, 1994.
S. Muggleton. Inverse entailment and Progol. New Generation Computing, Special issue on Inductive Logic Programming, 13(3–4):245–286, 1995.
D. Poole. The Independent Choice Logic for modeling multiple agents under uncertainty. Artificial Intelligence, 94(1–2), special issue on economic principles of multi-agent systems: 7–56, 1997.
K. H. Rosen. Discrete Mathematics and Its Applications. 4th Edition, McGraw-Hill, 1998.
M. Sebag and C. Rouveirol. Constraint Inductive Logic Programming. In L. De Raedt, editor, Advances in Inductive Logic Programming, pages 277–294. IOS Press, 1996.
A. Srinivasan and R.D. King. Feature Construction with Inductive Logic Programming: A Study of Quantitative Predictions of Biological Activity Aided by Structural Attributes. Data Mining and Knowledge Discovery, 3(1):37–57, 1999.
A. Srinivasan, R.D. King, and S. Muggleton. The Role of Background Knowledge: Using a Problem from Chemistry to Examine the Performance of an ILP Program. Technical Report PRG-TR-08-99, Oxford University Computing Laboratory, Oxford, 1999.
Copyright information
© 2003 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Chatpatanasiri, R., Kijsirikul, B. (2003). Learning First-Order Bayesian Networks. In: Xiang, Y., Chaib-draa, B. (eds) Advances in Artificial Intelligence. Canadian AI 2003. Lecture Notes in Computer Science, vol 2671. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-44886-1_24
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-40300-5
Online ISBN: 978-3-540-44886-0