Abstract
The advantages of Probabilistic Inductive Logic Programming (PILP) over ILP have not previously been evaluated from a computational learning theory point of view. We propose a PILP framework, projection-based PILP, in which surjective projection functions are used to produce a "lossy" compressed dataset from an ILP dataset. We present sample complexity results, including conditions under which projection-based PILP needs fewer examples than standard PAC learning. We experimentally confirm the theoretical bounds for projection-based PILP in the Blackjack domain using Cellist, a system which machine-learns Probabilistic Logic Automata. In our experiments projection-based PILP shows lower predictive error than the theoretical bounds and achieves substantially lower predictive error than ILP. To the authors' knowledge this is the first paper describing both a computational learning theory and related empirical results on an advantage of PILP over ILP.
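The core idea of the abstract can be illustrated with a minimal sketch. The function names and the Blackjack encoding below are illustrative assumptions, not taken from the paper: a surjective projection maps each ILP example onto a coarser representation, so several distinct examples collapse onto one projected example, yielding a smaller, "lossy" dataset over which probabilistic labels can be estimated.

```python
# Hypothetical sketch of projection-based compression (names and the
# Blackjack hand encoding are illustrative, not from the paper).

def project(example):
    """Surjective projection for a Blackjack-style hand: map the exact
    tuple of card values onto its total only, discarding the cards."""
    return sum(example)

def compress(dataset):
    """Apply the projection and merge examples that collapse onto the
    same projected value, keeping per-label counts so that label
    probabilities can be estimated on the compressed dataset."""
    compressed = {}
    for example, label in dataset:
        key = project(example)
        counts = compressed.setdefault(key, {True: 0, False: 0})
        counts[label] += 1
    return compressed

# Two distinct hands with total 17 collapse onto one projected example.
hands = [((10, 7), True), ((9, 8), True), ((10, 10, 5), False)]
print(compress(hands))  # {17: {True: 2, False: 0}, 25: {True: 0, False: 1}}
```

The projection is lossy because the original hands cannot be recovered from their totals, but the compressed dataset is smaller, which is the mechanism behind the reduced sample complexity claimed in the abstract.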
© 2012 Springer-Verlag Berlin Heidelberg
Cite this paper
Watanabe, H., Muggleton, S.H. (2012). Projection-Based PILP: Computational Learning Theory with Empirical Results. In: Muggleton, S.H., Tamaddoni-Nezhad, A., Lisi, F.A. (eds) Inductive Logic Programming. ILP 2011. Lecture Notes in Computer Science(), vol 7207. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-31951-8_30
DOI: https://doi.org/10.1007/978-3-642-31951-8_30
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-31950-1
Online ISBN: 978-3-642-31951-8