
Predicting the Product Life Cycle of Songs on the Radio

How Record Labels can Manage Product Portfolios and Prioritise Artists by Using Machine Learning Techniques

  • Conference paper
  • First Online:
Intelligent Computing

Part of the book series: Lecture Notes in Networks and Systems (LNNS, volume 284)


Abstract

In determining the success of a musical artist's song, there is a positive correlation between radio play success and music sales success. The ability to forecast the future radio plays of a song can therefore serve as a powerful risk management and product portfolio management tool for record labels and other stakeholders in a song. This research strives to predict the remaining product life cycle of a song on the radio after it has been played for one or two months. The best results were achieved by using a k-d tree to identify the songs most similar to the test songs and a Random Forest model to forecast radio plays. Accuracies of 82.78% and 83.44% were achieved for the two time periods, respectively. This exploratory research produced over 4500 test metrics in the search for the best combination of models and pre-processing techniques. Other algorithms tested were KNN, MLP, and CNN. The features consist solely of daily radio plays; no musical features are used.
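The approach described in the abstract (nearest-neighbour retrieval over early play curves, followed by a Random Forest forecast of the remaining life cycle) can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' implementation: the play counts, window length, neighbour count, and model parameters are all assumptions.

```python
# Hypothetical sketch: find training songs whose early radio-play curves
# resemble a new song's observed curve, then forecast the remaining plays
# with a Random Forest fitted on those neighbours.
import numpy as np
from sklearn.neighbors import KDTree
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic daily play counts: 200 songs, 180 days each (illustrative only).
plays = rng.poisson(lam=20, size=(200, 180)).astype(float)

OBSERVED = 30  # days of plays known for a new song (roughly one month)

X_train = plays[:150, :OBSERVED]   # early curve = features
y_train = plays[:150, OBSERVED:]   # remaining life cycle = targets
X_test = plays[150:, :OBSERVED]

# k-d tree over the observed windows to retrieve the most similar songs.
tree = KDTree(X_train)
_, idx = tree.query(X_test[:1], k=25)  # 25 nearest neighbours of one test song

# Fit a Random Forest only on the neighbours and forecast the remainder.
rf = RandomForestRegressor(n_estimators=100, random_state=0)
rf.fit(X_train[idx[0]], y_train[idx[0]])
forecast = rf.predict(X_test[:1])

print(forecast.shape)  # (1, 150): predicted plays for each remaining day
```

Fitting the forest only on the retrieved neighbours, rather than on all training songs, mirrors the paper's idea of conditioning the forecast on songs with similar early trajectories; scikit-learn's `RandomForestRegressor` handles the multi-day target natively as multi-output regression.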



Author information


Correspondence to O. F. Grooss.


Copyright information

© 2021 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Grooss, O.F., Holm, C.N., Alphinas, R.A. (2021). Predicting the Product Life Cycle of Songs on the Radio. In: Arai, K. (eds) Intelligent Computing. Lecture Notes in Networks and Systems, vol 284. Springer, Cham. https://doi.org/10.1007/978-3-030-80126-7_34
