An Agent-Based Market Simulator for Back-Testing Deep Reinforcement Learning Based Trade Execution Strategies

  • Conference paper
  • First Online:
Neural Information Processing (ICONIP 2021)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 13110)

Included in the following conference series: International Conference on Neural Information Processing (ICONIP)

Abstract

Researchers have reported success in developing autonomous trade execution systems based on Deep Reinforcement Learning (DRL) techniques that aim to minimize execution costs. However, these systems have all been back-tested on historical datasets. One of the biggest drawbacks of back-testing on historical data is that it cannot account for the permanent market impact caused by interactions among trading agents, nor for real-world factors such as network latency and computational delays.

In this article, we investigate an agent-based market simulator as a back-testing tool. More specifically, we design agents that use the trade execution policies learned by two previously proposed Deep Reinforcement Learning algorithms, a modified Deep Q-Network (DQN) and Proximal Policy Optimization with Long Short-Term Memory networks (PPO LSTM), to execute trades and interact with each other in the market simulator.
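
To make the back-testing setup concrete, the following is a minimal, hypothetical Python sketch of how a learned execution policy might be wrapped as an agent and run inside an agent-based market simulator. The class and method names (ExecutionAgent, policy.predict, simulator.observe, simulator.step) are assumptions made for illustration; they are not the authors' implementation and only show the general pattern of several policy-driven agents submitting orders into the same simulated market so that their orders can affect one another.

    # Hypothetical sketch: the simulator and policy interfaces below are assumed
    # for illustration and are not taken from the paper or any specific library.

    class ExecutionAgent:
        """Slices a parent order into child orders using a learned DRL policy."""

        def __init__(self, policy, total_shares):
            self.policy = policy            # e.g., a trained DQN or PPO-LSTM policy
            self.remaining = total_shares   # shares still to be executed

        def act(self, observation):
            # The policy maps the observed market state (order-book features,
            # remaining inventory, elapsed time, ...) to a child-order size.
            size = min(int(self.policy.predict(observation)), self.remaining)
            self.remaining -= size
            return size

    def backtest(simulator, agents, horizon):
        """Run all execution agents in one simulated market so that their orders
        interact and generate simulated market impact."""
        for _ in range(horizon):
            state = simulator.observe()                      # current market state
            orders = {name: agent.act(state) for name, agent in agents.items()}
            simulator.submit_market_orders(orders)           # child orders hit the book
            simulator.step()                                 # background agents react
        return simulator.execution_report()                  # fills, prices, costs

In a loop of this form, effects that historical back-tests cannot capture, such as the permanent impact of one agent's orders on the prices other agents see, or delays between decision and order arrival, could in principle be modelled by the simulator itself.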


Notes

  1. For example, the differences of immediate rewards between different time steps, and indicators representing various market scenarios (e.g., regime shifts, trends in price changes).

  2. TWAP denotes the volume projection of the TWAP strategy in a single step; a worked example appears after these notes.

  3. Implementation Shortfall = (arrival price − executed price) × traded volume; a worked example appears after these notes.
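
As a concrete illustration of notes 2 and 3, here is a minimal sketch that computes the per-step TWAP volume projection and the Implementation Shortfall of a set of fills. The function names, the summation over individual fills, and the numbers in the example are assumptions made for illustration only.

    # Illustrative helpers for the quantities defined in notes 2 and 3.
    # Names and the per-fill summation are assumptions, not the paper's code.

    def twap_volume_per_step(total_shares, num_steps):
        """Note 2: TWAP projects an equal slice of the parent order onto each step."""
        return total_shares / num_steps

    def implementation_shortfall(arrival_price, fills):
        """Note 3: sum of (arrival price - executed price) * traded volume over fills."""
        return sum((arrival_price - price) * volume for price, volume in fills)

    # Example: liquidate 10,000 shares over 20 steps -> 500 shares per step.
    slice_size = twap_volume_per_step(10_000, 20)

    # Two fills slightly below an arrival price of 100.0:
    fills = [(99.95, 500), (99.90, 500)]
    shortfall = implementation_shortfall(100.0, fills)   # 0.05*500 + 0.10*500 = 75.0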

Author information

Corresponding author

Correspondence to Siyu Lin.

Copyright information

© 2021 Springer Nature Switzerland AG

About this paper

Cite this paper

Lin, S., Beling, P.A. (2021). An Agent-Based Market Simulator for Back-Testing Deep Reinforcement Learning Based Trade Execution Strategies. In: Mantoro, T., Lee, M., Ayu, M.A., Wong, K.W., Hidayanto, A.N. (eds) Neural Information Processing. ICONIP 2021. Lecture Notes in Computer Science, vol. 13110. Springer, Cham. https://doi.org/10.1007/978-3-030-92238-2_53

  • DOI: https://doi.org/10.1007/978-3-030-92238-2_53

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-92237-5

  • Online ISBN: 978-3-030-92238-2

  • eBook Packages: Computer Science (R0)
