Abstract
This paper discusses learning in hybrid models that goes beyond the simple extraction of classification rules from backpropagation networks. Although simple rule extraction has received considerable research attention, hybrid models that learn autonomously, acquiring both symbolic and subsymbolic knowledge within an integrated architecture, remain underdeveloped. This paper describes the extraction of planning knowledge from neural reinforcement learning, which goes beyond extracting simple rules. It covers two approaches to extracting planning knowledge: the extraction of symbolic rules from neural reinforcement learners, and the extraction of complete plans. This work points toward a general framework for achieving the subsymbolic-to-symbolic transition in an integrated autonomous learning architecture.
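To make the first approach concrete, the following is a minimal sketch (not the authors' actual algorithm) of extracting symbolic rules from a reinforcement learner: a tabular Q-learner is trained on a tiny invented corridor task, and a naive extraction pass then turns clearly dominant Q-values into state-to-action rules. All task details (states, actions, thresholds) are assumptions for illustration only.

```python
import random

N_STATES = 5          # states 0..4; reaching state 4 yields reward
ACTIONS = [-1, +1]    # move left / move right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

# Q-table initialised to zero
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

random.seed(0)
for _ in range(500):                       # training episodes
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy action selection
        a = (random.choice(ACTIONS) if random.random() < EPS
             else max(ACTIONS, key=lambda a: Q[(s, a)]))
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        # standard Q-learning update
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2

# Rule extraction: keep (state, action) pairs whose Q-value clearly
# dominates the alternative action in that state.
rules = {s: max(ACTIONS, key=lambda a: Q[(s, a)])
         for s in range(N_STATES - 1)
         if abs(Q[(s, ACTIONS[0])] - Q[(s, ACTIONS[1])]) > 0.01}

for s, a in sorted(rules.items()):
    print(f"IF state == {s} THEN move {'right' if a > 0 else 'left'}")
```

After training converges, the extracted rules simply recommend moving right in every non-goal state; extracting complete plans, the paper's second approach, would instead chain such state-action steps into an explicit sequence.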
Copyright information
© 2002 Springer-Verlag London Limited
Cite this paper
Sun, R., Peterson, T., Sessions, C. (2002). Beyond Simple Rule Extraction: Acquiring Planning Knowledge from Neural Networks. In: Tagliaferri, R., Marinaro, M. (eds) Neural Nets WIRN Vietri-01. Perspectives in Neural Computing. Springer, London. https://doi.org/10.1007/978-1-4471-0219-9_32
Publisher Name: Springer, London
Print ISBN: 978-1-85233-505-2
Online ISBN: 978-1-4471-0219-9