Learning Logic Programs from Noisy State Transition Data

  • Conference paper
Inductive Logic Programming (ILP 2019)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 11770)

Abstract

Real-world data are often noisy and fuzzy. Most traditional logical machine learning methods require the data to be discretized or pre-processed before they can produce useful output, a shortcoming that often limits their application to real-world data. Neural networks, on the other hand, are generally robust against noisy data, but a fully trained neural network does not yield easily understandable rules that explain the underlying model. In this paper, we propose a Differentiable Learning from Interpretation Transition (δ-LFIT) algorithm that can simultaneously output logic programs fully explaining the state transitions and learn from data containing noise and error.
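Only the abstract is shown here, so as a rough, hypothetical illustration of the underlying task, the Python sketch below brute-forces propositional rules that are exactly consistent with a small set of invented, noiseless Boolean state transitions. The variables, transitions, and helper names are made up for the example; this is not the authors' δ-LFIT algorithm, which is differentiable and tolerates noise.

from itertools import combinations

# Toy illustration of learning from interpretation transitions (LFIT):
# given observed Boolean state transitions, find, for each head variable,
# the rule bodies whose truth exactly predicts that variable in the next state.
# Hypothetical sketch only; NOT the delta-LFIT algorithm from the paper.

VARS = ["p", "q"]

# Invented noiseless transitions of a tiny Boolean system where
# next p = q and next q = p AND q.
TRANSITIONS = [
    ({"p": 0, "q": 0}, {"p": 0, "q": 0}),
    ({"p": 0, "q": 1}, {"p": 1, "q": 0}),
    ({"p": 1, "q": 0}, {"p": 0, "q": 0}),
    ({"p": 1, "q": 1}, {"p": 1, "q": 1}),
]

def body_holds(body, state):
    # body is a frozenset of literals such as {("p", 1), ("q", 0)}
    return all(state[var] == val for var, val in body)

def candidate_bodies():
    # every non-contradictory conjunction of literals over VARS
    literals = [(v, b) for v in VARS for b in (0, 1)]
    for size in range(len(VARS) + 1):
        for body in combinations(literals, size):
            if len({var for var, _ in body}) == size:
                yield frozenset(body)

def learn_rules(head):
    # keep bodies that agree with the observed next value of head in
    # every transition (exact consistency, hence no noise tolerance)
    return [
        body
        for body in candidate_bodies()
        if all(body_holds(body, s) == bool(t[head]) for s, t in TRANSITIONS)
    ]

if __name__ == "__main__":
    for head in VARS:
        for body in learn_rules(head):
            cond = ", ".join(
                ("not " if val == 0 else "") + var for var, val in sorted(body)
            ) or "true"
            print(f"{head}(t+1) :- {cond}.")

On this toy system the sketch prints p(t+1) :- q. and q(t+1) :- p, q. Because the check requires exact agreement with every transition, even a single mislabeled transition can change or eliminate the learned rules; per the abstract, handling such noise while still producing explanatory logic programs is what δ-LFIT contributes.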

Author information

Corresponding author

Correspondence to Yin Jun Phua.

Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Phua, Y.J., Inoue, K. (2020). Learning Logic Programs from Noisy State Transition Data. In: Kazakov, D., Erten, C. (eds) Inductive Logic Programming. ILP 2019. Lecture Notes in Computer Science (LNAI), vol. 11770. Springer, Cham. https://doi.org/10.1007/978-3-030-49210-6_7

  • DOI: https://doi.org/10.1007/978-3-030-49210-6_7

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-49209-0

  • Online ISBN: 978-3-030-49210-6

  • eBook Packages: Computer Science, Computer Science (R0)
