Abstract
Market making (MM) is a trading activity in which an individual market participant or a member firm of an exchange buys and sells the same securities, with the primary goal of profiting from the bid-ask spread; in doing so, the market maker contributes to market liquidity. Reinforcement learning (RL) is emerging as a popular method for automated market making, as it is for many other financial problems. The current state of the art in RL-based MM comprises two recent benchmarks: temporal-difference learning with tile coding, and Deep Q-Networks (DQN). Both benchmark approaches model a single asset, which limits their applicability in realistic scenarios where an MM agent is required to trade a collection of assets; moreover, multi-asset trading reduces the risk associated with returns. We therefore design a multi-asset market making (MAMM) model, MTDRLMM, based on multi-task deep RL. From a multi-task learning perspective, the assets are treated as multiple tasks of the same nature: they share common characteristics while retaining individual traits. Experimental results show that MAMM is, in general, more profitable than single-asset MM, and that the MTDRLMM model achieves state-of-the-art investment returns on a collection of assets.
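The abstract's idea that assets share common characteristics while retaining individual traits maps naturally onto a multi-task value network with a shared trunk and one output head per asset. The paper's actual architecture is not reproduced in this excerpt; the PyTorch sketch below is only a minimal illustration of such a design, and all names, layer sizes, and the state/action encoding are hypothetical assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class MultiTaskDQN(nn.Module):
    """Illustrative multi-task Q-network (assumed design): a shared trunk
    encodes features common to all assets; one head per asset outputs
    Q-values for that asset's discrete quoting actions."""

    def __init__(self, state_dim: int, n_actions: int, n_assets: int, hidden: int = 64):
        super().__init__()
        # Shared layers: market-making knowledge pooled across all assets.
        self.trunk = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # One output head per asset: captures asset-specific traits.
        self.heads = nn.ModuleList(
            nn.Linear(hidden, n_actions) for _ in range(n_assets)
        )

    def forward(self, state: torch.Tensor, asset_id: int) -> torch.Tensor:
        # Route the shared representation through the selected asset's head.
        return self.heads[asset_id](self.trunk(state))


# Usage: greedy quoting action for asset 0 from a dummy market state
# (the feature layout and action set are placeholders).
net = MultiTaskDQN(state_dim=10, n_actions=9, n_assets=3)
state = torch.randn(1, 10)           # e.g. spread, book imbalance, inventory, ...
q_values = net(state, asset_id=0)    # shape: (1, n_actions)
action = q_values.argmax(dim=1)      # index into a discrete set of quote pairs
```

Under this kind of design, each asset's transitions update both the shared trunk and that asset's own head during DQN training, which is one common way to realize the cross-asset knowledge sharing described above.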
Cite this paper
Haider, A., Hawe, G.I., Wang, H., Scotney, B. (2022). Multi-Asset Market Making via Multi-Task Deep Reinforcement Learning. In: Nicosia, G., et al. (eds.) Machine Learning, Optimization, and Data Science. LOD 2021. Lecture Notes in Computer Science, vol. 13164. Springer, Cham. https://doi.org/10.1007/978-3-030-95470-3_27