Risk-Aware Reinforcement Learning for Multi-Period Portfolio Selection

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 13718)

Abstract

Portfolio management is the task of selecting a portfolio allocation at every time step of an investment period while adjusting the portfolio's risk-return profile to the investor's individual level of risk preference. In practice, it can be hard for an investor to quantify this risk preference. As an alternative, approximating the risk-return Pareto front allows different optimized portfolio allocations to be compared, and hence the most suitable risk level to be selected. Furthermore, an approximation of the Pareto front allows the analysis of the overall risk sensitivity of various investment policies. In this paper, we propose a deep reinforcement learning (RL) based approach in which a single meta agent generates optimized portfolio allocation policies for any level of risk preference in a given interval. Our method is more efficient than previous approaches, as it requires training only a single agent to approximate the full risk-return Pareto front. It is also more stable in training and only requires per-time-step market risk estimates that are independent of the policy. Such per-time-step risk control is a common regulatory requirement, e.g., for insurance companies. We benchmark our meta agent against other state-of-the-art risk-aware RL methods in a realistic environment based on real-world Nasdaq-100 data. Our evaluation shows that the proposed meta agent outperforms various benchmark approaches by generating strategies with better risk-return profiles.
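
To make the meta-agent idea concrete, the sketch below shows one common way a risk-conditioned policy can be set up. It assumes a standard scalarization of the risk-return trade-off, J_lambda(pi) = E[return of pi] - lambda * risk(pi), so that sweeping the risk-aversion level lambda over an interval traces an approximate Pareto front. All names (MetaPolicy, state_dim, n_assets) and the PyTorch/Dirichlet design are illustrative assumptions made for this summary, not the authors' implementation.

import torch
import torch.nn as nn

class MetaPolicy(nn.Module):
    # Maps (market state, risk-aversion level lambda) to a distribution
    # over portfolio weights. Conditioning on lambda lets one set of
    # parameters represent a policy for every risk level in an interval.
    def __init__(self, state_dim: int, n_assets: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + 1, hidden),  # +1 input for lambda
            nn.ReLU(),
            nn.Linear(hidden, n_assets),
        )

    def forward(self, state, risk_level):
        x = torch.cat([state, risk_level], dim=-1)
        # Dirichlet concentration parameters yield valid long-only
        # allocations: non-negative weights that sum to one.
        conc = nn.functional.softplus(self.net(x)) + 1e-3
        return torch.distributions.Dirichlet(conc)

# Approximating the Pareto front with a single trained agent:
# evaluate it at several risk levels instead of training one agent
# per risk level.
policy = MetaPolicy(state_dim=10, n_assets=5)
state = torch.randn(1, 10)
for lam in torch.linspace(0.0, 1.0, steps=5):
    dist = policy(state, lam.reshape(1, 1))
    weights = dist.mean  # deterministic allocation for evaluation
    print(f"lambda={float(lam):.2f} weights={weights.squeeze(0).tolist()}")

Conditioning on lambda as an input is what lets one trained network stand in for a continuum of fixed-risk policies; evaluating it at several lambda values yields the risk-return pairs that approximate the Pareto front.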


Notes

  1. https://www.eiopa.europa.eu/browse/solvency-2_en
  2. https://github.com/microsoft/qlib/tree/main
  3. https://docs.ray.io/en/master/rllib/index.html
  4. https://github.com/ShangtongZhang/DeepRL

Acknowledgments

This work has been funded by the German Federal Ministry of Education and Research (BMBF) under Grant No. 01IS18036A. The authors of this work take full responsibility for its content.

Author information

Correspondence to David Winkel.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Winkel, D., Strauß, N., Schubert, M., Seidl, T. (2023). Risk-Aware Reinforcement Learning for Multi-Period Portfolio Selection. In: Amini, M.R., Canu, S., Fischer, A., Guns, T., Kralj Novak, P., Tsoumakas, G. (eds) Machine Learning and Knowledge Discovery in Databases. ECML PKDD 2022. Lecture Notes in Computer Science, vol 13718. Springer, Cham. https://doi.org/10.1007/978-3-031-26422-1_12


  • DOI: https://doi.org/10.1007/978-3-031-26422-1_12

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-26421-4

  • Online ISBN: 978-3-031-26422-1

  • eBook Packages: Computer Science, Computer Science (R0)
