Comparative Research of Hyper-Parameters Mathematical Optimization Algorithms for Automatic Machine Learning in New Generation Mobile Network

Abstract

In new-generation communication networks, machine-learning algorithms are widely used for network optimization and for predicting mobile user behavior, so hyper-parameter optimization methods have considerable room for development in the mobile communication field. For non-specialists, however, the bottleneck that restricts the further development and application of machine learning is the selection of a suitable learning algorithm and the determination of appropriate hyper-parameters for it. Researchers have proposed automatic machine learning (AutoML) to address this problem. This article summarizes the main hyper-parameter optimization methods and presents the corresponding algorithm frameworks, forming a technical reference that researchers can consult easily. By comparing these optimization methods, we highlight the characteristics and deficiencies of each algorithm in new-generation mobile networks and put forward suggestions for future improvement.
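
As a concrete point of reference for the methods the article surveys, the sketch below shows the simplest baseline in the hyper-parameter optimization family, random search, using scikit-learn. The model (a random forest), the dataset (digits), and the search space are illustrative assumptions for this sketch, not details taken from the paper.

```python
# Minimal sketch of random-search hyper-parameter optimization.
# Model, dataset, and search space are illustrative assumptions.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = load_digits(return_X_y=True)

# Hypothetical search space: each entry is one hyper-parameter to tune.
param_distributions = {
    "n_estimators": np.arange(50, 301, 50),
    "max_depth": [None, 5, 10, 20],
    "min_samples_split": [2, 5, 10],
}

# Randomly sample 20 configurations and score each by 3-fold cross-validation.
search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions,
    n_iter=20,
    cv=3,
    random_state=0,
)
search.fit(X, y)

print("Best hyper-parameters:", search.best_params_)
print("Best CV accuracy: %.3f" % search.best_score_)
```

Random search treats the learner as a black box: each sampled configuration is trained and scored by cross-validation, and the best-scoring one is kept. This is the template that the Bayesian and bandit-based methods compared in the article refine by choosing successive configurations more cleverly.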

Author information

Corresponding author

Correspondence to Yuqi Li.

About this article

Cite this article

Zhang, X., Li, Y. & Li, Z. Comparative Research of Hyper-Parameters Mathematical Optimization Algorithms for Automatic Machine Learning in New Generation Mobile Network. Mobile Netw Appl 27, 928–935 (2022). https://doi.org/10.1007/s11036-022-01913-x
