Abstract
Real-world data are often noisy and imprecise. Most traditional logical machine learning methods require the data to be discretized or otherwise pre-processed before they can produce useful output, a shortcoming that often limits their applicability to real-world data. Neural networks, by contrast, are generally robust to noisy data; however, a fully trained neural network does not yield easily interpretable rules that explain the underlying model. In this paper, we propose the Differentiable Learning from Interpretation Transition (\(\delta \)-LFIT) algorithm, which simultaneously outputs logic programs that fully explain the observed state transitions and learns from data containing noise and error.
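To ground the setting, here is a minimal sketch of the symbolic learning task that LFIT addresses: given observed state transitions of a Boolean system, recover logic rules whose bodies explain when each variable is true in the next state. The toy two-variable system, the variable names, and the brute-force search below are illustrative assumptions, not the \(\delta \)-LFIT method itself (which is neural and differentiable); they only sketch the underlying task.

```python
from itertools import combinations, product

# Assumed toy Boolean system over variables a, b (not from the paper):
#   a(t+1) := b(t)    b(t+1) := not a(t)
def step(s):
    a, b = s
    return (b, 1 - a)

# Observe every transition (state, next state) of the system.
transitions = [(s, step(s)) for s in product((0, 1), repeat=2)]

VARS = ["a", "b"]
# A literal is (var_index, polarity): polarity 1 means var, 0 means "not var".
LITERALS = [(i, p) for i in range(len(VARS)) for p in (0, 1)]

def body_holds(body, state):
    return all(state[i] == p for i, p in body)

def learn_rules(head):
    """Brute-force LFIT-style search: keep minimal bodies B such that
    whenever B holds in a state s, `head` is true in step(s), and B
    holds in at least one observed state."""
    kept = []
    for size in range(len(VARS) + 1):
        for body in combinations(LITERALS, size):
            if len({i for i, _ in body}) < size:  # same variable twice
                continue
            covers = [s for s, _ in transitions if body_holds(body, s)]
            if covers and all(nxt[head] == 1 for s, nxt in transitions
                              if body_holds(body, s)):
                # Discard bodies subsumed by an already-kept (smaller) body.
                if not any(set(k) <= set(body) for k in kept):
                    kept.append(body)
    return kept

def show(head, bodies):
    def lit(i, p):
        return VARS[i] if p else "not " + VARS[i]
    return [f"{VARS[head]}(t+1) :- " + ", ".join(lit(i, p) for i, p in b)
            for b in bodies]

rules = {h: learn_rules(h) for h in range(len(VARS))}
# show(0, rules[0]) -> ["a(t+1) :- b"]
# show(1, rules[1]) -> ["b(t+1) :- not a"]
```

This exhaustive search is exponential in the number of variables and breaks down as soon as the transitions contain noise, which is precisely the regime the paper targets with a differentiable formulation.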
Copyright information
© 2020 Springer Nature Switzerland AG
Cite this paper
Phua, Y.J., Inoue, K. (2020). Learning Logic Programs from Noisy State Transition Data. In: Kazakov, D., Erten, C. (eds) Inductive Logic Programming. ILP 2019. Lecture Notes in Computer Science(), vol 11770. Springer, Cham. https://doi.org/10.1007/978-3-030-49210-6_7