
Parametric Algorithm for Finding a Guaranteed Solution to a Quantile Optimization Problem

  • STOCHASTIC SYSTEMS

Published in Automation and Remote Control.

Abstract

The problem of stochastic programming with a quantile criterion for a normal distribution is studied in the case of a loss function that is piecewise linear in the random parameters and convex in the strategy. Using the confidence method, the original problem is approximated by a deterministic minimax problem parameterized by the radius of a ball inscribed in a confidence polyhedral set. The approximating problem is reduced to a convex programming problem. The behavior of the measure of the confidence set as the radius of the ball changes is investigated. An algorithm is proposed for finding the radius of a ball that provides a guaranteeing solution to the problem. A method for obtaining a lower estimate of the optimal value of the criterion function is described. Theorems on the convergence of the algorithm with any prescribed probability and on the accuracy of the resulting solution are proved.


Fig. 1.

REFERENCES

  1. Kibzun, A.I. and Kan, Y.S., Stochastic Programming Problems with Probability and Quantile Functions, Chichester: John Wiley & Sons, 1996.


  2. Kibzun, A.I. and Kan, Yu.S., Zadachi stokhasticheskogo programmirovaniya s veroyatnostnymi kriteriyami (Stochastic Programming Problems with Probabilistic Criteria), Moscow: Fizmatlit, 2009.

  3. Kibzun, A.I. and Naumov, A.V., A Guaranteeing Algorithm for Quantile Optimization, Kosm. Issled., 1995, vol. 33, no. 2, pp. 160–165.


  4. Naumov, A.V. and Ivanov, S.V., On Stochastic Linear Programming Problems with the Quantile Criterion, Autom. Remote Control, 2011, vol. 72, no. 2, pp. 353–369.


  5. Kan, Yu.S., An Extension of the Quantile Optimization Problem with a Loss Function Linear in Random Parameters, Autom. Remote Control, 2020, vol. 81, no. 12, pp. 2194–2205.


  6. Vasil’eva, S.N. and Kan, Yu.S., A Method for Solving Quantile Optimization Problems with a Bilinear Loss Function, Autom. Remote Control, 2015, vol. 76, no. 9, pp. 1582–1597.


  7. Vasil’eva, S.N. and Kan, Yu.S., Approximation of Probabilistic Constraints in Stochastic Programming Problems with a Probability Measure Kernel, Autom. Remote Control, 2019, vol. 80, no. 11, pp. 2005–2016.


  8. Prékopa, A., Stochastic Programming, Dordrecht: Kluwer, 1995.

  9. Shapiro, A., Dentcheva, D., and Ruszczyński, A., Lectures on Stochastic Programming. Modeling and Theory, Philadelphia: Society for Industrial and Applied Mathematics (SIAM), 2014.

  10. Lejeune, M.A. and Prékopa, A., Relaxations for Probabilistically Constrained Stochastic Programming Problems: Review and Extensions, Ann. Oper. Res., 2018. https://doi.org/10.1007/s10479-018-2934-8

  11. Dentcheva, D., Prékopa, A., and Ruszczyński, A., On Convex Probabilistic Programming with Discrete Distributions, Nonlinear Anal.-Theor., 2001, vol. 47, no. 3, pp. 1997–2009.

  12. Van Ackooij, W., Berge, V., de Oliveira, W., and Sagastizábal, C., Probabilistic Optimization via Approximate p-Efficient Points and Bundle Methods, Comput. Oper. Res., 2017, vol. 77, pp. 177–193.

  13. Ivanov, S.V. and Kibzun, A.I., General Properties of Two-Stage Stochastic Programming Problems with Probabilistic Criteria, Autom. Remote Control, 2019, vol. 80, no. 6, pp. 1041–1057.


  14. Boyd, S. and Vandenberghe, L., Convex Optimization, Cambridge: Cambridge University Press, 2009.

  15. Shiryaev, A.N., Probability, New York: Springer, 1996.



Funding

The work was supported by the Russian Science Foundation (project no. 22-21-00213, https://rscf.ru/project/22-21-00213/).

Author information


Correspondence to S. V. Ivanov, A. I. Kibzun or V. N. Akmaeva.

Additional information

This paper was recommended for publication by E.Ya. Rubinovich, a member of the Editorial Board.

APPENDIX

Proof of Theorem 1. Conditions 2 and 3 ensure that all constraints in problem (5) are active, i.e., every face of the set Cr touches the ball Br. As r increases over the segment [0, R], the faces of Cr translate in parallel while remaining tangent to Br, so the set Cr expands as r grows. Therefore the function h, defined as the measure of Cr, is non-decreasing. Theorem 1 is proved.

Proof of Theorem 2. Let \(\gamma \in (0, 1)\). The set \(C_{\rho_\gamma}\) is defined as the intersection of k half-spaces, each of measure no less than \(\gamma\). Denote these half-spaces by \(L_i\), \(i = \overline{1,k}\). Then

$$h(\rho_\gamma) = \mathbf{P}\left\{X \in \bigcap_{i=1}^{k} L_i\right\} = 1 - \mathbf{P}\left\{X \in \bigcup_{i=1}^{k} \left(\mathbb{R}^m \setminus L_i\right)\right\} \;\geqslant\; 1 - \sum_{i=1}^{k} \mathbf{P}\{X \notin L_i\} = 1 - (1 - \gamma)k.$$

Thus, \(h(\rho_\gamma) \geqslant \alpha\) for \(\alpha \leqslant 1 - (1 - \gamma)k\), which is equivalent to \(\gamma \geqslant \beta = 1 - \frac{1 - \alpha}{k}\). Theorem 2 is proved.
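The threshold from the proof of Theorem 2 can be computed directly. A minimal sketch in Python (the function name `beta_level` is ours, not the paper's):

```python
def beta_level(alpha: float, k: int) -> float:
    """Per-half-space confidence level beta = 1 - (1 - alpha)/k.

    If each of the k half-spaces has measure at least beta, the union
    bound above guarantees the intersection has measure at least alpha.
    """
    return 1.0 - (1.0 - alpha) / k

# Example: alpha = 0.95 with k = 10 half-spaces requires beta = 0.995 each.
print(beta_level(0.95, 10))
```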

Proof of Theorem 3. Since at each iteration the search segment is halved, the number of iterations K of the algorithm is the minimal natural number satisfying the inequality

$$\frac{\left|\bar{R}_\alpha - \rho_\alpha\right|}{2^K} \;\leqslant\; \delta.$$

It follows from this inequality that \(K = \left\lceil \log_2 \frac{|\bar{R}_\alpha - \rho_\alpha|}{\delta} \right\rceil\). The algorithm can err only if at some iteration \(\hat{h}(r) \geqslant \alpha + \varepsilon\) although in fact \(h(r) < \alpha\). It is easy to see that the random variable \(s(r)\) is binomially distributed with success probability \(h(r) - \mu(r)\). The following inequality is known ([15], Chapter 1, Section 6):

$$\mathbf{P}\{\hat{h}(r) - h(r) \geqslant \varepsilon\} = \mathbf{P}\left\{\frac{s(r)}{N} - (h(r) - \mu(r)) \geqslant \varepsilon\right\} \;\leqslant\; e^{-2N\varepsilon^2}.$$

Therefore, if we assume that \(h(r) < \alpha\), then \(\mathbf{P}\{\hat{h}(r) \geqslant \alpha + \varepsilon\} \leqslant e^{-2N\varepsilon^2}\). Since the samples used to estimate the measure are independent, the probability that the algorithm works correctly is at least \(\left(1 - e^{-2N\varepsilon^2}\right)^K\). Hence, to guarantee the probability p of successful operation of Algorithm 1, the inequality

$$p \;\leqslant\; \left(1 - e^{-2N\varepsilon^2}\right)^K \;\Leftrightarrow\; N \;\geqslant\; \frac{\ln\left(1/\left(1 - \sqrt[K]{p}\right)\right)}{2\varepsilon^2}$$

must be satisfied.

Theorem 3 is proved.
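The two formulas from this proof translate directly into code. A hedged sketch in Python (function names are ours; `iterations` evaluates the ceiling formula for K, and `sample_size` inverts the exponential bound for N):

```python
import math

def iterations(R_bar: float, rho: float, delta: float) -> int:
    """Minimal K with |R_bar - rho| / 2**K <= delta (bisection step count)."""
    return math.ceil(math.log2(abs(R_bar - rho) / delta))

def sample_size(p: float, K: int, eps: float) -> int:
    """Minimal N with (1 - exp(-2*N*eps**2))**K >= p."""
    return math.ceil(math.log(1.0 / (1.0 - p ** (1.0 / K))) / (2.0 * eps ** 2))

K = iterations(1.0, 0.0, 1e-3)      # 10 bisection steps for this segment
N = sample_size(0.99, K, 0.01)      # per-iteration sample size
# Sanity check: the success-probability bound holds at the returned N.
assert (1.0 - math.exp(-2.0 * N * 0.01 ** 2)) ** K >= 0.99
```

Note how N grows only logarithmically in 1/(1 − p) but quadratically in 1/ε, so the accuracy parameter ε dominates the sampling cost.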

Proof of Theorem 4. Let \(\Psi(u, r) \triangleq \max_{x \in B_r} \Phi(u, x) = \Phi(u, x^0(r))\), where \(x^0(r)\) is a point on the boundary of the ball \(B_r\) at which this maximum is attained. Since \(B_\rho \subset B_R\), we have \(\Psi(u, \rho) \leqslant \Psi(u, R)\). Since the point \(y = \frac{\rho}{R} x^0(R)\) lies on the boundary of the ball \(B_\rho\), we have \(\Phi(u, y) \leqslant \Psi(u, \rho)\). Therefore

$$0 \;\leqslant\; \Psi(u, R) - \Psi(u, \rho) \;\leqslant\; \Phi(u, x^0(R)) - \Phi(u, y) \;\leqslant\; L\left\|x^0(R) - y\right\| = (R - \rho)L.$$

Thus, the inequalities

$$\Psi(u, \rho) \;\leqslant\; \Psi(u, R) \;\leqslant\; \Psi(u, \rho) + (R - \rho)L \tag{A.1}$$

are true. Minimizing the left- and right-hand sides of the first inequality in (A.1) with respect to \(u \in U\) subject to \(\max_{j = \overline{1,k_2}} \{b_{2j}(u) + \|B_{2j}(u)\| R\} \leqslant 0\) (the constraints of problem (4) for r = R), we obtain the first inequality to be proved, \(\psi(\rho) \leqslant \psi(R)\) (here we take into account that \(\psi(\rho)\) is defined at least on a wider set). From (11) and the second inequality in (A.1) it follows that

$$\psi(R) \;\leqslant\; \Psi(u(\rho), R) \;\leqslant\; \Psi(u(\rho), \rho) + (R - \rho)L = \psi(\rho) + (R - \rho)L.$$

This estimate implies the second inequality to be proved. Theorem 4 is proved.
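The sandwich estimate (A.1) can be illustrated numerically. A small sketch under the assumption of a linear loss \(\Phi(u, x) = \langle u, x \rangle\) (our illustrative choice, not the paper's general piecewise linear loss), for which \(\Psi(u, r) = r\|u\|\) and the Lipschitz constant in x is \(L = \|u\|\), so the bound holds with equality:

```python
import math

def psi_linear(u, r):
    """Psi(u, r) = max over the ball B_r of <u, x>, which equals r * ||u||
    for the illustrative linear loss Phi(u, x) = <u, x>."""
    return r * math.hypot(*u)

u = (3.0, 4.0)
rho, R = 0.5, 0.8
L = math.hypot(*u)                       # Lipschitz constant in x, here 5.0
lo, hi = psi_linear(u, rho), psi_linear(u, R)
assert lo <= hi <= lo + (R - rho) * L    # the two inequalities of (A.1)
```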


Cite this article

Ivanov, S.V., Kibzun, A.I. & Akmaeva, V.N. Parametric Algorithm for Finding a Guaranteed Solution to a Quantile Optimization Problem. Autom Remote Control 84, 848–857 (2023). https://doi.org/10.1134/S0005117923080039

