Abstract
Several ways of using symbolic methods to enhance reinforcement learning are identified and discussed in some detail. Each demonstrates, to some extent, the potential advantages of combining RL and symbolic methods. Unlike existing work on combining RL and symbolic methods, we focus on autonomous learning from scratch without a priori domain-specific knowledge; the role of symbolic methods thus truly lies in enhancing learning, not in providing a priori domain-specific knowledge. The methods discussed point to both the possibilities and the challenges in this line of research.
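To make the general idea concrete, the following is a minimal sketch (in Python) of one way a numeric reinforcement learner can be supplemented by a symbolic component that is itself acquired during learning rather than given a priori. The toy corridor task, the tabular Q-learning simplification, and the rule-extraction criterion are all illustrative assumptions made for this sketch, not the algorithm proposed in the paper.

import random
from collections import defaultdict

N_STATES = 6          # corridor states 0..5; the goal is the rightmost state
ACTIONS = (-1, +1)    # move left or move right
GAMMA, ALPHA, EPS = 0.9, 0.5, 0.1

Q = defaultdict(float)   # numeric ("bottom-level") knowledge: (state, action) -> value
rules = {}               # symbolic ("top-level") knowledge: state -> recommended action

def step(state, action):
    # Toy environment: reward 1.0 only for reaching the rightmost state.
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

def greedy(state):
    # Greedy action with respect to Q, breaking ties randomly.
    best = max(Q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(state, a)] == best])

def choose_action(state):
    # Prefer an extracted symbolic rule when one exists; otherwise epsilon-greedy on Q.
    if random.random() < EPS:
        return random.choice(ACTIONS)
    return rules.get(state, greedy(state))

def maybe_extract_rule(state, action, td_error):
    # Illustrative extraction criterion: once an action's value estimate has
    # stabilized (tiny TD error) and clearly dominates the alternatives,
    # cache it as an explicit IF-state-THEN-action rule.
    others = [Q[(state, a)] for a in ACTIONS if a != action]
    if abs(td_error) < 0.01 and Q[(state, action)] > max(others):
        rules[state] = action

for episode in range(500):
    s, done = 0, False
    while not done:
        a = choose_action(s)
        s2, r, done = step(s, a)
        target = r + (0.0 if done else GAMMA * max(Q[(s2, b)] for b in ACTIONS))
        td_error = target - Q[(s, a)]
        Q[(s, a)] += ALPHA * td_error        # numeric RL update (Q-learning)
        maybe_extract_rule(s, a, td_error)   # symbolic supplement, learned from scratch
        s = s2

print("Extracted rules (state -> action):", dict(sorted(rules.items())))

In this sketch the extracted rules are consulted before the numeric policy, mirroring the idea that the symbolic level supplements, rather than replaces, the underlying RL learner, and that it is built up from the learner's own experience instead of being supplied as domain knowledge.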
Copyright information
© 2000 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Sun, R. (2000). Supplementing Neural Reinforcement Learning with Symbolic Methods. In: Wermter, S., Sun, R. (eds) Hybrid Neural Systems. Hybrid Neural Systems 1998. Lecture Notes in Computer Science, vol 1778. Springer, Berlin, Heidelberg. https://doi.org/10.1007/10719871_23
DOI: https://doi.org/10.1007/10719871_23
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-67305-7
Online ISBN: 978-3-540-46417-4