
Profit-based deep architecture with integration of reinforced data selector to enhance trend-following strategy

Published in: World Wide Web

Abstract

Despite the popularity of trend-following strategies in financial markets, they often lack adaptability to diverse, evolving market conditions. Recently, deep learning (DL) methods have demonstrated their effectiveness in stock-market analysis, and applying them to enhance trend-following strategies has consequently received substantial attention. However, two key challenges must be addressed before DL methods can be adopted for this purpose: (1) how to design an effective data selector that includes more related data, and (2) how to design a profit-based model that enhances the strategy. To address these two challenges, this paper contributes a new framework, namely the profit-based deep architecture with reinforced data selector (PDA-RDS), to improve the effectiveness of DL methods. In particular, the profit-based deep architecture (PDA) integrates a dynamic profit weight and a focal loss function to pursue high profits. In addition, the reinforced data selector (RDS) is constructed to select high-quality training samples, and a training-aware immediate reward is designed to improve its effectiveness. Extensive experiments on both U.S. and China stock-market datasets demonstrate that PDA-RDS outperforms state-of-the-art baseline methods in terms of cumulative percentage rate and average percentage rate, both of which are crucial to investment strategies.
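The abstract describes the PDA component as combining a dynamic profit weight with a focal loss. As a rough illustration only (the paper's exact formulation is not given here), the sketch below combines the standard binary focal loss of Lin et al. with a per-sample weight derived from profit magnitude; the function name, the normalization of the weights, and the use of absolute profit are all assumptions for illustration:

```python
import numpy as np

def profit_weighted_focal_loss(probs, labels, profits, gamma=2.0, eps=1e-7):
    """Binary focal loss scaled by a per-sample profit weight (illustrative).

    probs   : predicted probability of the positive class, shape (N,)
    labels  : binary ground-truth labels in {0, 1}, shape (N,)
    profits : per-sample profit magnitudes used as dynamic weights, shape (N,)
    """
    probs = np.clip(probs, eps, 1.0 - eps)
    # p_t is the model's probability assigned to the true class.
    p_t = np.where(labels == 1, probs, 1.0 - probs)
    # The focal term (1 - p_t)^gamma down-weights easy, confidently
    # classified samples, focusing training on hard ones.
    focal = -((1.0 - p_t) ** gamma) * np.log(p_t)
    # Assumed profit weighting: samples associated with larger absolute
    # profit contribute more to the objective (normalized to sum to 1).
    weights = np.abs(profits) / (np.abs(profits).sum() + eps)
    return float((weights * focal).sum())
```

Under this sketch, a confidently correct prediction contributes almost nothing to the loss, while a confidently wrong prediction on a high-profit sample dominates it, which matches the intuition of steering the model toward profitable trades rather than raw classification accuracy.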


[Figures 1–3 are available in the full article.]


Notes

  1. https://www.investopedia.com/terms/t/trendtrading.asp

  2. https://drive.google.com/file/d/1Iowk-9946O53Okk6vDPnfgRDCGa5sJng/view?usp=sharing

  3. https://drive.google.com/file/d/1InH3nnNEE2lFbWnrrT_67jZBjiH2Hfb5/view?usp=sharing


Funding

This research is supported by the Key-Area Research and Development Program of Guangdong Province (2020B010165003), the National Natural Science Foundation of China (62032025), the Technology Program of Guangzhou, China (202103050004), and the Faculty Research Grants (DB22A5 and DB22B7) of Lingnan University, Hong Kong.

Author information


Corresponding author

Correspondence to Yang Li.

Ethics declarations

Conflict of Interest

The authors declare that they have no conflict of interest.

Additional information

This article belongs to the Topical Collection: Web-based Intelligent Financial Services

Guest Editors: Hong-Ning Dai, Xiaohui Tao, Haoran Xie, and Miguel Martinez.

Rights and permissions

Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Li, Y., Zheng, Z., Dai, HN. et al. Profit-based deep architecture with integration of reinforced data selector to enhance trend-following strategy. World Wide Web 26, 1685–1705 (2023). https://doi.org/10.1007/s11280-022-01112-4

