Abstract
The integration of knowledge representation, reasoning and learning into a robust and computationally effective model is a key challenge in Artificial Intelligence. Temporal models are fundamental to describing the behaviour of computing and information systems. In addition, acquiring a description of the desired behaviour of a system is a complex task in several AI domains. In this paper, we evaluate a neural framework capable of adapting temporal models according to properties and of learning from the observation of examples. In this framework, a symbolically described model is translated into a recurrent neural network, and algorithms are proposed to integrate learning both from examples and from properties. Finally, the knowledge is again represented symbolically, incorporating both the initial model and the learned specification, as our case study shows. The case study illustrates how integrating methodologies and principles from distinct AI areas can be relevant to building robust intelligent systems.
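The abstract describes translating a symbolically described temporal model into a recurrent neural network. As a minimal illustration only (the toy program, atom names, and encoding below are our assumptions, sketched in the style of CILP-like translations, not the paper's actual algorithm), each temporal rule can map to a hidden threshold unit, and each rule head to an output unit whose value feeds back as input at the next time step:

```python
# Hedged sketch: compiling a toy temporal specification into a recurrent
# network of threshold units. The program and atom names are illustrative.

# Toy temporal program (hypothetical):
#   out(t) <- a(t-1), b(t-1)      (out holds if a and b held previously)
#   a(t)   <- not b(t-1)          (a holds if b did not hold previously)
# Each body literal becomes a weighted connection: +1 for a positive
# literal, -1 for a negated one.
rules = {
    "out": [{"a": +1, "b": +1}],
    "a":   [{"b": -1}],
}

def fires(body, state):
    """Hidden threshold unit: its input sums to len(body) exactly when
    every positive literal is true and every negated literal is false."""
    s = sum(state.get(atom, 0) if w > 0 else 1 - state.get(atom, 0)
            for atom, w in body.items())
    return 1 if s > len(body) - 0.5 else 0

def tick(rules, state):
    """One recurrent step: the previous time-point's atom values feed the
    hidden layer; each output unit (a rule head) takes the OR of the
    hidden units encoding its rules, then feeds back for the next step."""
    return {head: max(fires(body, state) for body in bodies)
            for head, bodies in rules.items()}

# Unfold the network over a few time steps from an initial state.
state = {"a": 1, "b": 1}
for t in range(3):
    state = tick(rules, state)
    print(t, state)
```

Because the network's weights and thresholds are set directly from the rules, the symbolic model and the network compute the same time-step semantics; gradient-based learning can then adjust the weights before rules are extracted back out.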
Copyright information
© 2010 Springer-Verlag Berlin Heidelberg
Cite this paper
Borges, R.V., d’Avila Garcez, A., Lamb, L.C. (2010). Representing, Learning and Extracting Temporal Knowledge from Neural Networks: A Case Study. In: Diamantaras, K., Duch, W., Iliadis, L.S. (eds) Artificial Neural Networks – ICANN 2010. ICANN 2010. Lecture Notes in Computer Science, vol 6353. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-15822-3_13
DOI: https://doi.org/10.1007/978-3-642-15822-3_13
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-15821-6
Online ISBN: 978-3-642-15822-3