
Synthesizing Understandable Strategies

  • Conference paper
Engineering of Computer-Based Systems (ECBS 2023)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14390)


Abstract

The result of reinforcement learning is often obtained in the form of a Q-table, mapping state-action pairs to expected future rewards. We propose using SMT solvers and strategy trees to generate a representation of a learned strategy in a format that is understandable to a human. We present the methodology and demonstrate it on a small game.
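Only the abstract is available in this preview, so as context for the approach, the following is a minimal sketch, not the paper's code, of the kind of artifact the method starts from: tabular Q-learning on the Nim-like stick game described in the notes below. The hyperparameters, the win convention (taking the last stick wins), and all names here are assumptions.

```python
import random

N_STICKS = 20
ALPHA, GAMMA, EPS, EPISODES = 0.5, 0.9, 0.2, 50_000

# Q[s][a]: estimated future reward of action a with s sticks left.
# Action a means "remove a + 1 sticks"; a = 1 is illegal when s == 1.
Q = {s: {a: 0.0 for a in ((0,) if s == 1 else (0, 1))}
     for s in range(1, N_STICKS + 1)}

def opponent_move(s):
    """Random legal reply; returns the number of sticks left afterwards."""
    return s - random.choice([1] if s == 1 else [1, 2])

for _ in range(EPISODES):
    s = random.randint(1, N_STICKS)
    while s > 0:
        legal = list(Q[s])
        # Epsilon-greedy action selection.
        a = random.choice(legal) if random.random() < EPS else max(legal, key=Q[s].get)
        s_after = s - (a + 1)
        if s_after == 0:                 # we took the last stick: win
            reward, s_next = 1.0, 0
        else:
            s_next = opponent_move(s_after)
            reward = -1.0 if s_next == 0 else 0.0  # opponent took the last stick
        target = reward + (GAMMA * max(Q[s_next].values()) if s_next > 0 else 0.0)
        Q[s][a] += ALPHA * (target - Q[s][a])
        s = s_next

# The greedy policy read off the table; on winnable states it should
# approach a = (s % 3) - 1 (cf. note 4 below).
policy = {s: max(Q[s], key=Q[s].get) for s in sorted(Q)}
print(policy)
```

The resulting policy dictionary is exactly the kind of opaque object that the paper's synthesis step is meant to turn into something a human can read.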


Notes

  1. As well as a set of meta-parameters for the learning algorithm, e.g., the learning rate.

  2. For simplicity, action \(a_2\) is forbidden when \(s = 1\), i.e., it is impossible to pick more sticks than remain.

  3. Here, % denotes the remainder operator.

  4. The subtraction of one comes from the action space being defined as \(\{a_1 = 0, a_2 = 1\}\) rather than as the number of sticks removed (\(\{a_1 = 1, a_2 = 2\}\)); see the sketch after these notes.

  5. Interval constraints are added on edges, limiting the functions' domains for efficiency.
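Taken together, notes 2-4 describe the running example: the state \(s\) counts remaining sticks, the actions \(\{a_1 = 0, a_2 = 1\}\) encode removing one or two sticks, and the strategy to be recovered has the shape \(a = (s \,\%\, 3) - 1\). As a hedged illustration of the SMT step only, assuming a simple arithmetic template rather than the paper's strategy-tree construction, one can ask an off-the-shelf solver such as Z3 (via its Python bindings) to find constants that make a closed-form expression agree with the greedy policy on every observed state:

```python
# Sketch of an SMT-based fitting step (assumed template, not the paper's
# construction): find integers c1, c2 such that a = (s % c1) - c2
# reproduces the greedy policy on every observed state.
from z3 import Int, Solver, sat

# Greedy policy on the winnable states of the stick game under the
# "taking the last stick wins" convention: remove s % 3 sticks.
policy = {s: (s % 3) - 1 for s in range(1, 21) if s % 3 != 0}

c1, c2 = Int("c1"), Int("c2")
solver = Solver()
solver.add(c1 > 1)                 # keep the modulus non-degenerate
for s, a in policy.items():
    solver.add(s % c1 - c2 == a)   # template must match each data point

if solver.check() == sat:
    m = solver.model()
    print(f"a = (s % {m[c1]}) - {m[c2]}")  # expected: a = (s % 3) - 1
else:
    print("no strategy of this shape fits the data")
```

In the paper, queries of this kind are presumably discharged per node of a strategy tree, with the interval constraints of note 5 restricting each candidate function's domain.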


Acknowledgements

This work was supported by the Knowledge Foundation in Sweden through the ACICS project (20190038).

Author information

Correspondence to Peter Backeman.



Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Backeman, P. (2024). Synthesizing Understandable Strategies. In: Kofroň, J., Margaria, T., Seceleanu, C. (eds) Engineering of Computer-Based Systems. ECBS 2023. Lecture Notes in Computer Science, vol 14390. Springer, Cham. https://doi.org/10.1007/978-3-031-49252-5_15

Download citation

  • DOI: https://doi.org/10.1007/978-3-031-49252-5_15

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-49251-8

  • Online ISBN: 978-3-031-49252-5

  • eBook Packages: Computer Science (R0)
