Feature-Discovering Approximate Value Iteration Methods

  • Conference paper
Abstraction, Reformulation and Approximation (SARA 2005)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 3607)

Abstract

Sets of features in Markov decision processes can play a critical role in approximately representing value and in abstracting the state space. Feature selection is crucial to a system's success and is most often done by hand. We study the problem of automatically selecting problem features, and we propose and evaluate a simple approach that reduces the selection of a new feature to standard classification learning: we learn a classifier that predicts the sign of the Bellman error over a training set of states. By iteratively adding the resulting classifiers as features, retraining with approximate value iteration between iterations, we find a Tetris feature set that significantly outperforms randomly constructed features and achieves about three-tenths of the highest score obtained with a carefully hand-constructed feature set. We also show that features learned with this method outperform those learned with the earlier method of Patrascu et al. [4] on the SysAdmin domain used for evaluation there.
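
The loop described in the abstract is straightforward to picture in code. The following is a minimal sketch, not the authors' implementation: the MDP interface (mdp.actions, mdp.transitions, mdp.reward, mdp.state_vector), the discount factor, and the use of a scikit-learn decision tree as a stand-in for a C4.5-style learner [5] are all assumptions made for illustration.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier  # stand-in for a C4.5-style learner [5]

    GAMMA = 0.95  # discount factor (assumed)

    def bellman_backup(mdp, V, s):
        # One-step lookahead: max over actions of expected reward plus
        # discounted value of the successor state.
        return max(
            sum(p * (mdp.reward(s, a, s2) + GAMMA * V(s2))
                for s2, p in mdp.transitions(s, a))
            for a in mdp.actions(s))

    def fit_weights(mdp, features, states, iters=20):
        # Linear approximate value iteration: repeatedly regress the
        # feature weights onto one-step Bellman backups of the current
        # value estimate over the training states.
        X = np.array([[f(s) for f in features] for s in states])
        w = np.zeros(len(features))
        for _ in range(iters):
            V = lambda s: float(np.dot(w, [f(s) for f in features]))
            targets = np.array([bellman_backup(mdp, V, s) for s in states])
            w, *_ = np.linalg.lstsq(X, targets, rcond=None)
        return w

    def discover_features(mdp, features, train_states, rounds=10):
        # Each round: fit a linear value function on the current features,
        # label every training state with the sign of its Bellman error,
        # train a classifier on those labels, and add the classifier's
        # 0/1 prediction as a new binary feature.
        for _ in range(rounds):
            w = fit_weights(mdp, features, train_states)
            V = lambda s: float(np.dot(w, [f(s) for f in features]))
            y = [int(bellman_backup(mdp, V, s) - V(s) > 0) for s in train_states]
            X = [mdp.state_vector(s) for s in train_states]
            clf = DecisionTreeClassifier(max_depth=5).fit(X, y)
            features.append(
                lambda s, c=clf: float(c.predict([mdp.state_vector(s)])[0]))
        return features

    # Usage (hypothetical): seed with a constant feature and grow the set.
    # features = discover_features(my_mdp, [lambda s: 1.0], my_states)

Each discovered feature is just a trained classifier's 0/1 output on a state, so the linear architecture gains a weight on the region where the current value estimate is systematically too low or too high; that is the sense in which selecting a new feature reduces to classification.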

References

  1. Bellman, R., Kalaba, R., Kotkin, B.: Polynomial approximation – a new computational technique in dynamic programming. Math. Comp. 17(8), 155–161 (1963)

  2. Bertsekas, D.P., Tsitsiklis, J.N.: Neuro-Dynamic Programming. Athena Scientific, Belmont (1996)

  3. Mitchell, T.M.: Machine Learning. McGraw-Hill, New York (1997)

  4. Patrascu, R., Poupart, P., Schuurmans, D., Boutilier, C., Guestrin, C.: Greedy linear value-approximation for factored Markov decision processes. In: AAAI (2002)

  5. Quinlan, J.R.: C4.5: Programs for Machine Learning. Morgan Kaufmann, San Francisco (1993)

  6. Sutton, R.S.: Learning to predict by the methods of temporal differences. Machine Learning 3, 9–44 (1988)

  7. Sutton, R.S., Barto, A.G.: Reinforcement Learning: An Introduction. MIT Press, Cambridge (1998)

  8. Tesauro, G.: Temporal difference learning and TD-Gammon. Comm. ACM 38(3), 58–68 (1995)

  9. Utgoff, P.E., Precup, D.: Constructive function approximation. In: Motoda, H., Liu, H. (eds.) Feature extraction, construction, and selection: A data-mining perspective, pp. 219–235. Kluwer, Dordrecht (1998)

  10. Widrow, B., Hoff Jr., M.E.: Adaptive switching circuits. IRE WESCON Convention Record, 96–104 (1960)

  11. Williams, R.J., Baird, L.C.: Tight performance bounds on greedy policies based on imperfect value functions. Technical report, Northeastern University (1993)

Copyright information

© 2005 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Wu, JH., Givan, R. (2005). Feature-Discovering Approximate Value Iteration Methods. In: Zucker, JD., Saitta, L. (eds) Abstraction, Reformulation and Approximation. SARA 2005. Lecture Notes in Computer Science (LNAI), vol 3607. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11527862_25

  • DOI: https://doi.org/10.1007/11527862_25

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-27872-6

  • Online ISBN: 978-3-540-31882-8

  • eBook Packages: Computer Science, Computer Science (R0)
