
Comparison Between Stochastic Gradient Descent and VLE Metaheuristic for Optimizing Matrix Factorization

  • Conference paper

Part of the book series: Communications in Computer and Information Science ((CCIS,volume 1173))

Abstract

Matrix factorization is used by recommender systems in collaborative filtering to build prediction models based on a pair of latent-factor matrices. These models are usually trained with the stochastic gradient descent algorithm, which learns the latent factors by minimizing the prediction error on the training data. The resulting models are then validated against an error criterion by predicting test data. Since model generation can be tackled as an optimization problem over a huge space of possible solutions, we propose metaheuristics as alternative solving methods for matrix factorization. In this work we applied a novel metaheuristic for continuous optimization inspired by the vapour-liquid equilibrium. We considered a particular application of matrix factorization: the student performance prediction problem. The obtained results clearly surpassed the accuracy provided by stochastic gradient descent.
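For readers unfamiliar with the baseline, the sketch below illustrates matrix factorization trained with stochastic gradient descent on the observed entries of a rating matrix, i.e. the kind of model generation the paper compares against. It is a minimal, hypothetical example: the function name `mf_sgd`, the toy matrix, and all hyperparameter values are illustrative assumptions, not the authors' implementation. The paper's contribution replaces this gradient-based training with the vapour-liquid equilibrium metaheuristic, which searches the same latent-factor space directly using the prediction error as its objective.

```python
import numpy as np

def mf_sgd(R, k=2, lr=0.01, reg=0.02, epochs=200, seed=0):
    """Minimal sketch: factorize R (users x items) as P @ Q.T with SGD.

    Only observed (non-NaN) entries contribute to the squared error;
    a regularization term keeps the latent factors small.
    """
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    P = 0.1 * rng.standard_normal((n_users, k))   # user latent factors
    Q = 0.1 * rng.standard_normal((n_items, k))   # item latent factors
    observed = np.argwhere(~np.isnan(R))          # indices of known ratings

    for _ in range(epochs):
        rng.shuffle(observed)                     # visit ratings in random order
        for u, i in observed:
            pu = P[u].copy()
            err = R[u, i] - pu @ Q[i]             # prediction error for this entry
            P[u] += lr * (err * Q[i] - reg * pu)  # gradient step on user factors
            Q[i] += lr * (err * pu - reg * Q[i])  # gradient step on item factors
    return P, Q

# Toy usage on a small rating matrix with missing entries (NaN)
R = np.array([[5.0, 3.0, np.nan],
              [4.0, np.nan, 1.0],
              [np.nan, 2.0, 5.0]])
P, Q = mf_sgd(R)
print(np.round(P @ Q.T, 2))  # reconstruction; NaN positions are the predictions
```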



Acknowledgments

The authors gratefully acknowledge the following grants: Juan A. Gómez-Pulido is supported by grant IB16002 (Junta Extremadura, Spain); Enrique Cortés-Toro is supported by grant INF-PUCV 2015; Broderick Crawford is supported by grant Conicyt/Fondecyt/Regular/1171243; Ricardo Soto is supported by grant Conicyt/Fondecyt/Regular/1160455.

Author information

Correspondence to Juan A. Gómez-Pulido.


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Gómez-Pulido, J.A., Cortés-Toro, E., Durán-Domínguez, A., Lanza-Gutiérrez, J.M., Crawford, B., Soto, R. (2020). Comparison Between Stochastic Gradient Descent and VLE Metaheuristic for Optimizing Matrix Factorization. In: Dorronsoro, B., Ruiz, P., de la Torre, J., Urda, D., Talbi, EG. (eds) Optimization and Learning. OLA 2020. Communications in Computer and Information Science, vol 1173. Springer, Cham. https://doi.org/10.1007/978-3-030-41913-4_13

  • DOI: https://doi.org/10.1007/978-3-030-41913-4_13

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-41912-7

  • Online ISBN: 978-3-030-41913-4

  • eBook Packages: Computer Science, Computer Science (R0)
