From Feature Selection to Continuous Optimization

  • Conference paper
Artificial Evolution (EA 2019)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 12052)


Abstract

Metaheuristic algorithms (MAs) have seen unprecedented growth thanks to their successful applications in fields including engineering and the health sciences. In this work, we investigate the use of a deep learning (DL) model as an alternative tool for solving continuous optimization problems. The proposed method, called MaNet, is motivated by the fact that DL models routinely solve massive optimization problems consisting of millions of parameters. Feature selection is the main concept adopted in MaNet: it helps the algorithm skip irrelevant or partially relevant parameters and concentrate on the design variables that contribute most to the overall performance. The introduced model is applied to several unimodal and multimodal continuous problems. The experiments indicate that MaNet yields competitive results, in terms of solution accuracy and scalability, compared to one of the best hand-designed algorithms for these problems.
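
The abstract describes the idea only at a high level. As a rough sketch of the general principle (not the authors' MaNet architecture), the Python snippet below ranks design variables by how strongly small perturbations change the objective and then restricts a simple random local search to the most influential ones; the function names (sphere, select_features, optimize) and all parameter values are hypothetical choices made for illustration.

```python
import numpy as np


def sphere(x):
    """Unimodal benchmark: f(x) = sum(x_i ** 2), minimum at the origin."""
    return float(np.sum(x ** 2))


def select_features(f, x, eps=1e-3, k=5):
    """Rank design variables by how much a small perturbation changes f and
    keep the indices of the k most influential ones (a crude stand-in for the
    feature-selection idea described in the abstract)."""
    base = f(x)
    scores = np.empty(x.size)
    for i in range(x.size):
        probe = x.copy()
        probe[i] += eps
        scores[i] = abs(f(probe) - base)
    return np.argsort(scores)[-k:]


def optimize(f, dim=20, iters=500, k=5, step=0.1, seed=0):
    """Greedy random local search that only perturbs the selected variables."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, dim)
    best = f(x)
    for _ in range(iters):
        idx = select_features(f, x, k=k)       # focus on influential variables
        cand = x.copy()
        cand[idx] += step * rng.standard_normal(idx.size)
        val = f(cand)
        if val < best:                          # keep the move only if it improves f
            x, best = cand, val
    return x, best


if __name__ == "__main__":
    _, best = optimize(sphere)
    print(f"best sphere value found: {best:.6f}")
```

On a separable problem such as the sphere function this selection simply singles out the variables farthest from the optimum; in the paper this selection step is carried out by a DL model rather than by the hand-coded sensitivity probe used here.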

Notes

  1.

    Netron Visualizer is used to illustrate the model. The tool is available online at: https://github.com/lutzroeder/netron.

  2.

    The code for the CEC problems and the jSO algorithm is publicly available at: http://www.ntu.edu.sg/home/EPNSugan/index_files/CEC2017/CEC2017.htm.

  3.

    F2 has been excluded by the organizers because it shows unstable behavior, especially in higher dimensions [4].

References

  1. Alom, M.Z., et al.: The history began from AlexNet: a comprehensive survey on deep learning approaches. arXiv preprint arXiv:1803.01164 (2018)

  2. Amos, B., Kolter, J.Z.: OptNet: differentiable optimization as a layer in neural networks. In: Proceedings of the 34th International Conference on Machine Learning, vol. 70, pp. 136–145. JMLR.org (2017)

  3. Andrychowicz, M., et al.: Learning to learn by gradient descent by gradient descent. In: Advances in Neural Information Processing Systems, pp. 3981–3989 (2016)

  4. Awad, N., Ali, M., Liang, J., Qu, B., Suganthan, P.: Problem definitions and evaluation criteria for the CEC 2017 special session and competition on single objective real-parameter numerical optimization. Technical report (2016)

  5. Brest, J., Maučec, M.S., Bošković, B.: iL-SHADE: improved L-SHADE algorithm for single objective real-parameter optimization. In: 2016 IEEE Congress on Evolutionary Computation (CEC), pp. 1188–1195. IEEE (2016)

  6. Brest, J., Maučec, M.S., Bošković, B.: Single objective real-parameter optimization: algorithm jSO. In: 2017 IEEE Congress on Evolutionary Computation (CEC), pp. 1311–1318. IEEE (2017)

  7. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016)

  8. Hinton, G., et al.: Deep neural networks for acoustic modeling in speech recognition. Signal Process. Mag. 29, 82–97 (2012)

  9. Kang, K., Bae, C., Yeung, H.W.F., Chung, Y.Y.: A hybrid gravitational search algorithm with swarm intelligence and deep convolutional feature for object tracking optimization. Appl. Soft Comput. 66, 319–329 (2018)

  10. Kennedy, M.P., Chua, L.O.: Neural networks for nonlinear programming. IEEE Trans. Circuits Syst. 35(5), 554–562 (1988)

  11. Kingma, D.P., Ba, J.: Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)

  12. Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep convolutional neural networks. In: Advances in Neural Information Processing Systems, pp. 1097–1105 (2012)

  13. Lei Ba, J., Kiros, J.R., Hinton, G.E.: Layer normalization. arXiv preprint arXiv:1607.06450 (2016)

  14. Li, K., Malik, J.: Learning to optimize. arXiv preprint arXiv:1606.01885 (2016)

  15. Loshchilov, I.: CMA-ES with restarts for solving CEC 2013 benchmark problems. In: 2013 IEEE Congress on Evolutionary Computation, pp. 369–376. IEEE (2013)

  16. Qin, A.K., Suganthan, P.N.: Self-adaptive differential evolution algorithm for numerical optimization. In: 2005 IEEE Congress on Evolutionary Computation, vol. 2, pp. 1785–1791. IEEE (2005)

  17. Rakhshani, H., Idoumghar, L., Lepagnot, J., Brévilliers, M.: MAC: many-objective automatic algorithm configuration. In: Deb, K., et al. (eds.) EMO 2019. LNCS, vol. 11411, pp. 241–253. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-12598-1_20

  18. Rakhshani, H., Rahati, A.: Snap-drift cuckoo search: a novel cuckoo search optimization algorithm. Appl. Soft Comput. 52, 771–794 (2017)

  19. Salimans, T., Kingma, D.P.: Weight normalization: a simple reparameterization to accelerate training of deep neural networks. In: Advances in Neural Information Processing Systems, pp. 901–909 (2016)

  20. Santurkar, S., Tsipras, D., Ilyas, A., Madry, A.: How does batch normalization help optimization? In: Advances in Neural Information Processing Systems, pp. 2483–2493 (2018)

  21. Senjyu, T., Saber, A., Miyagi, T., Shimabukuro, K., Urasaki, N., Funabashi, T.: Fast technique for unit commitment by genetic algorithm based on unit clustering. IEE Proc.-Gener. Transm. Distrib. 152(5), 705–713 (2005)

  22. Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014)

  23. Snoek, J., et al.: Scalable Bayesian optimization using deep neural networks. In: International Conference on Machine Learning, pp. 2171–2180 (2015)

  24. Storn, R., Price, K.: Differential evolution – a simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 11(4), 341–359 (1997)

  25. Tanabe, R., Fukunaga, A.: Success-history based parameter adaptation for differential evolution. In: 2013 IEEE Congress on Evolutionary Computation, pp. 71–78. IEEE (2013)

  26. Tanabe, R., Fukunaga, A.S.: Improving the search performance of SHADE using linear population size reduction. In: 2014 IEEE Congress on Evolutionary Computation (CEC), pp. 1658–1665. IEEE (2014)

  27. Zhang, J., Sanderson, A.C.: JADE: adaptive differential evolution with optional external archive. IEEE Trans. Evol. Comput. 13(5), 945–958 (2009)


Acknowledgments

This research was supported through computational resources provided by Mésocentre of Strasbourg: https://services-numeriques.unistra.fr/.

Author information

Corresponding authors

Correspondence to Hojjat Rakhshani, Lhassane Idoumghar, Julien Lepagnot or Mathieu Brévilliers.



Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Rakhshani, H., Idoumghar, L., Lepagnot, J., Brévilliers, M. (2020). From Feature Selection to Continuous Optimization. In: Idoumghar, L., Legrand, P., Liefooghe, A., Lutton, E., Monmarché, N., Schoenauer, M. (eds.) Artificial Evolution. EA 2019. Lecture Notes in Computer Science, vol. 12052. Springer, Cham. https://doi.org/10.1007/978-3-030-45715-0_1

  • DOI: https://doi.org/10.1007/978-3-030-45715-0_1

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-45714-3

  • Online ISBN: 978-3-030-45715-0

  • eBook Packages: Computer Science, Computer Science (R0)
