Abstract
In this paper, we consider different financial trading systems (FTSs) based on a Reinforcement Learning (RL) methodology known as Q-Learning (QL). QL is a machine learning method that optimizes its behavior in real time according to the responses it receives from the environment as a consequence of its actions. In the paper, we first introduce the aspects of RL and QL that are essential for our purposes, then present some original, differently configured FTSs based on QL, and finally apply these FTSs to eight time series of daily closing stock returns from the Italian stock market.
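As a rough illustration of the methodology the abstract describes, the sketch below runs tabular Q-learning on a synthetic return series. Everything here is an illustrative assumption rather than the paper's actual configuration: the state is a coarse sign-based discretization of the last two returns, the actions are short/flat/long, the reward is the position's next-period return net of a proportional transaction cost, and the learning parameters are arbitrary.

```python
import numpy as np

# Hedged sketch: state = sign pattern of the last two returns (4 discrete states),
# actions = {0: short, 1: flat, 2: long}; reward = position times next return,
# minus a proportional cost when the position changes. Discretization, cost
# level, and learning parameters are all illustrative assumptions.

rng = np.random.default_rng(0)
returns = rng.normal(0.0, 0.01, size=2000)    # synthetic daily log returns

alpha, gamma, eps, cost = 0.1, 0.95, 0.1, 0.001
Q = np.zeros((4, 3))                          # Q-table: 4 states x 3 actions

def state(t):
    # Encode the signs of the two most recent returns as an integer in {0,...,3}.
    return 2 * int(returns[t - 1] > 0) + int(returns[t] > 0)

pos = 0  # start flat
for t in range(1, len(returns) - 1):
    s = state(t)
    # Epsilon-greedy action selection.
    a = int(rng.integers(3)) if rng.random() < eps else int(np.argmax(Q[s]))
    new_pos = a - 1                           # map action index to {-1, 0, +1}
    r = new_pos * returns[t + 1] - cost * abs(new_pos - pos)
    s_next = state(t + 1)
    # Standard Q-learning update on the temporal-difference error.
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    pos = new_pos

print(Q.round(4))
```

The learned greedy policy is `np.argmax(Q, axis=1) - 1`, i.e. a position in {-1, 0, +1} for each discretized state.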
Notes
1. In Sect. 4, we specify the policy improvement we consider in the applications.
2. For simplicity's sake, in the remainder of the paper we use only the term "net" for the expression "net-of-transaction-cost".
3. Note that the need to specify such an approximator is due to the fact that some of the state variables, namely the logarithmic rates of return, are continuous.
4. When \(k=0\), the parameters are randomly initialized following a \(\mathcal {U}(-1, 1)^{N+2}\) distribution.
5. Note that, in order to determine the optimal parameters, we perform a mean-square-error minimization through a gradient descent-based method.
6. In this context, "annualized" and "monthly" are to be understood as referring to the stock market year and the stock market month, respectively.
7. From here on, by the expression \(\ll \)[\(\ldots \)] stocks that contribute most to this result [\(\ldots \)]\(\gg \), or equivalent, we mean stocks whose percentages of success are greater than or equal to \(60\%\).
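Notes 3–5 above can be pieced together into a short sketch: a linear approximator for the action-value function over continuous log-return features, with its \(N+2\) parameters per action initialized from \(\mathcal{U}(-1,1)\) and updated by gradient descent on the squared temporal-difference error. The feature map (intercept, \(N\) past log returns, current position) and all dimensions are illustrative assumptions, not the paper's actual specification.

```python
import numpy as np

# Hedged sketch of notes 3-5: linear approximation Q(s, a) = theta[a] . phi(s)
# over continuous log-return features. Parameters are initialized U(-1, 1)
# (note 4) and fitted by gradient descent on the squared TD error (note 5).
# The feature map and the N + 2 layout are illustrative assumptions.

rng = np.random.default_rng(1)
N = 5                                             # past log returns in the state
theta = rng.uniform(-1.0, 1.0, size=(3, N + 2))   # one parameter vector per action

def features(past_returns, position):
    # phi(s): intercept, N continuous log returns, current position.
    return np.concatenate(([1.0], past_returns, [position]))

def td_step(phi, a, reward, phi_next, alpha=0.01, gamma=0.95):
    """One gradient-descent step on 0.5 * delta^2, the squared TD error."""
    target = reward + gamma * max(theta[b] @ phi_next for b in range(3))
    delta = target - theta[a] @ phi               # TD error
    theta[a] += alpha * delta * phi               # gradient w.r.t. theta[a]

phi = features(rng.normal(0.0, 0.01, N), 0.0)
phi_next = features(rng.normal(0.0, 0.01, N), 1.0)
td_step(phi, a=2, reward=0.002, phi_next=phi_next)
print(theta.shape)
```

Since the features are continuous, this replaces the table lookup of plain QL: the update moves only the parameter vector of the action taken, in the direction that reduces the squared TD error at the visited state.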
© 2021 The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Corazza, M. (2021). Q-Learning-Based Financial Trading: Some Results and Comparisons. In: Esposito, A., Faundez-Zanuy, M., Morabito, F., Pasero, E. (eds) Progresses in Artificial Intelligence and Neural Systems. Smart Innovation, Systems and Technologies, vol 184. Springer, Singapore. https://doi.org/10.1007/978-981-15-5093-5_31
Print ISBN: 978-981-15-5092-8
Online ISBN: 978-981-15-5093-5