
Variable sample-size optimistic mirror descent algorithm for stochastic mixed variational inequalities

Published in: Journal of Global Optimization

Abstract

In this paper, we propose a variable sample-size optimistic mirror descent algorithm under the Bregman distance for a class of stochastic mixed variational inequalities. Unlike conventional variable sample-size extragradient algorithms, which evaluate the expected mapping twice at each iteration, our algorithm requires only one evaluation of the expected mapping per iteration and hence significantly reduces the computational load. In the monotone case, the proposed algorithm achieves an \({\mathcal {O}}(1/t)\) ergodic convergence rate in terms of the expected restricted gap function; under the strongly generalized monotonicity condition, it attains a local linear convergence rate, measured by the Bregman distance between iterates and solutions, when the sample size increases geometrically. Furthermore, we derive some results on stochastic local stability under the generalized monotonicity condition. Numerical experiments indicate that the proposed algorithm compares favorably with some existing methods.
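To make the scheme concrete, the following is a minimal Python sketch, not the authors' exact method. It assumes the stochastic mixed variational inequality of finding \(x^{*}\) with \(\langle {\mathbb {E}}[F(x^{*},\xi )],x-x^{*}\rangle +g(x)-g(x^{*})\ge 0\) for all feasible \(x\), and it specializes the Bregman distance to the Euclidean one, so each mirror step reduces to a proximal-gradient step. The names `sample_F`, `prox_g`, and `batch_size` are illustrative placeholders. The optimistic (leading) step reuses the previous operator estimate, which is exactly what cuts the per-iteration cost to a single evaluation of the expected mapping.

```python
import numpy as np


def vss_omd(x0, sample_F, prox_g, step, batch_size, n_iters, rng=None):
    """Sketch of variable sample-size optimistic mirror descent.

    Illustrative assumptions: the Bregman distance is Euclidean, so each
    mirror step reduces to a proximal-gradient step; sample_F(x, n, rng)
    averages n noisy evaluations of the expected mapping F at x; and
    batch_size(t) gives the sample size N_t used at iteration t.
    """
    if rng is None:
        rng = np.random.default_rng()
    x = np.asarray(x0, dtype=float).copy()
    y = x.copy()
    F_prev = sample_F(y, batch_size(0), rng)  # initial operator estimate
    running_sum = np.zeros_like(x)
    for t in range(1, n_iters + 1):
        # The optimistic (leading) step reuses the stored estimate F_prev,
        # so each iteration needs only ONE fresh evaluation of F.
        y = prox_g(x - step * F_prev, step)
        F_prev = sample_F(y, batch_size(t), rng)  # the single evaluation
        x = prox_g(x - step * F_prev, step)       # update the base point
        running_sum += y
    # Ergodic (averaged) iterate, the quantity behind the O(1/t) rate.
    return running_sum / n_iters


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = np.array([[0.0, 1.0], [-1.0, 0.0]])  # skew-symmetric => monotone
    b = np.array([0.5, -0.3])

    def sample_F(x, n, rng):
        # Mini-batch average of n noisy evaluations of F(x) = A x + b.
        noise = rng.normal(size=(n, x.size)).mean(axis=0)
        return A @ x + b + noise

    def prox_g(v, step):
        # g = indicator of the box [-1, 1]^2, so the prox is a projection.
        return np.clip(v, -1.0, 1.0)

    x_bar = vss_omd(np.zeros(2), sample_F, prox_g, step=0.1,
                    batch_size=lambda t: 2 ** min(t, 12),
                    n_iters=200, rng=rng)
    print("ergodic iterate:", x_bar)
```

The geometrically increasing `batch_size` in the example mirrors the sample-size growth the abstract associates with the local linear rate; with a constant or slowly growing batch, only the ergodic \({\mathcal {O}}(1/t)\) guarantee should be expected.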


Data availability

The datasets generated during the current study are available from the corresponding author on reasonable request.


Author information


Corresponding author

Correspondence to Gui-Hua Lin.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This work was supported in part by NSFC (Nos. 12101262, 12071280, 12001072, 12271067), the Guangdong Basic and Applied Basic Research Foundation (No. 2022A1515010263), the Chongqing Natural Science Foundation (cstc2019jcyj-zdxmX0016, CSTB2022NSCQ-MSX1318), the Key Laboratory for Optimization and Control of Ministry of Education, Chongqing Normal University (No. CSSXKFKTZ202001), and the Group Building Scientific Innovation Project for universities in Chongqing (CXQT21021).

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Yang, ZP., Zhao, Y. & Lin, GH. Variable sample-size optimistic mirror descent algorithm for stochastic mixed variational inequalities. J Glob Optim 89, 143–170 (2024). https://doi.org/10.1007/s10898-023-01346-0



