Supplementing Neural Reinforcement Learning with Symbolic Methods

  • Conference paper
Hybrid Neural Systems (Hybrid Neural Systems 1998)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 1778)


Abstract

Several different ways of using symbolic methods to enhance reinforcement learning (RL) are identified and discussed in some detail. Each demonstrates, to some extent, the potential advantages of combining RL and symbolic methods. Unlike existing work on such combinations, we focus on autonomous learning from scratch, without a priori domain-specific knowledge; the role of symbolic methods thus lies truly in enhancing learning, not in supplying domain knowledge in advance. The methods discussed point to both the possibilities and the challenges in this line of research.
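One concrete instance of the kind of combination the abstract points to is extracting symbolic rules from a reinforcement learner while it learns, so that the rules come from the agent's own experience rather than from a priori domain knowledge. The sketch below illustrates the general idea only: it assumes a tabular Q-function standing in for the neural bottom level, and the action set, the extraction criterion, and EXTRACTION_THRESHOLD are hypothetical choices made for illustration, not details taken from the paper.

```python
import random
from collections import defaultdict

# Hypothetical toy action set and learning parameters (illustration only).
ACTIONS = ("up", "down", "left", "right")
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
EXTRACTION_THRESHOLD = 0.05  # invented criterion: "reliable" = small TD error

Q = defaultdict(float)  # bottom level: learned value function (tabular here)
rules = {}              # top level: extracted symbolic rules, state -> action

def choose_action(state):
    """Fire an extracted symbolic rule if one matches the current state;
    otherwise act epsilon-greedily on the bottom-level Q-values."""
    if state in rules:
        return rules[state]
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """One Q-learning step, followed by a rule extraction/deletion check."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    td_error = reward + GAMMA * best_next - Q[(state, action)]
    Q[(state, action)] += ALPHA * td_error

    # Extract a rule when the step looks clearly successful (positive reward,
    # value estimate already accurate); delete a rule once it starts to fail.
    if reward > 0 and abs(td_error) < EXTRACTION_THRESHOLD:
        rules[state] = action
    elif rules.get(state) == action and td_error < -EXTRACTION_THRESHOLD:
        del rules[state]
```

Because extraction and deletion are driven entirely by the learner's own rewards and TD errors, nothing domain-specific is supplied up front; the symbolic level is built bottom-up from the subsymbolic one and can then guide subsequent action selection.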





Copyright information

© 2000 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Sun, R. (2000). Supplementing Neural Reinforcement Learning with Symbolic Methods. In: Wermter, S., Sun, R. (eds) Hybrid Neural Systems. Hybrid Neural Systems 1998. Lecture Notes in Computer Science (LNAI), vol 1778. Springer, Berlin, Heidelberg. https://doi.org/10.1007/10719871_23


  • DOI: https://doi.org/10.1007/10719871_23

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-67305-7

  • Online ISBN: 978-3-540-46417-4

