Abstract
The stochastic programming problem with a quantile criterion is studied for a normal distribution in the case of a loss function that is piecewise linear in the random parameters and convex in the strategy. Using the confidence method, the original problem is approximated by a deterministic minimax problem parameterized by the radius of a ball inscribed in a confidence polyhedral set. The approximating problem reduces to a convex programming problem. The behavior of the measure of the confidence set as the radius of the ball varies is investigated. An algorithm is proposed for finding the radius of a ball that yields a guaranteeing solution to the problem, and a method for obtaining a lower bound on the optimal value of the criterion function is described. Theorems on the convergence of the algorithm with any predetermined probability and on the accuracy of the resulting solution are proved.
REFERENCES
1. Kibzun, A.I. and Kan, Y.S., Stochastic Programming Problems with Probability and Quantile Functions, Chichester: John Wiley & Sons, 1996.
2. Kibzun, A.I. and Kan, Yu.S., Zadachi stokhasticheskogo programmirovaniya s veroyatnostnymi kriteriyami (Stochastic Programming Problems with Probabilistic Criteria), Moscow: Fizmatlit, 2009.
3. Kibzun, A.I. and Naumov, A.V., A Guaranteeing Algorithm for Quantile Optimization, Kosm. Issled., 1995, vol. 33, no. 2, pp. 160–165.
4. Naumov, A.V. and Ivanov, S.V., On Stochastic Linear Programming Problems with the Quantile Criterion, Autom. Remote Control, 2011, vol. 72, no. 2, pp. 353–369.
5. Kan, Yu.S., An Extension of the Quantile Optimization Problem with a Loss Function Linear in Random Parameters, Autom. Remote Control, 2020, vol. 81, no. 12, pp. 2194–2205.
6. Vasil’eva, S.N. and Kan, Yu.S., A Method for Solving Quantile Optimization Problems with a Bilinear Loss Function, Autom. Remote Control, 2015, vol. 76, no. 9, pp. 1582–1597.
7. Vasil’eva, S.N. and Kan, Yu.S., Approximation of Probabilistic Constraints in Stochastic Programming Problems with a Probability Measure Kernel, Autom. Remote Control, 2019, vol. 80, no. 11, pp. 2005–2016.
8. Prékopa, A., Stochastic Programming, Dordrecht: Kluwer, 1995.
9. Shapiro, A., Dentcheva, D., and Ruszczyński, A., Lectures on Stochastic Programming: Modeling and Theory, Philadelphia: Society for Industrial and Applied Mathematics (SIAM), 2014.
10. Lejeune, M.A. and Prékopa, A., Relaxations for Probabilistically Constrained Stochastic Programming Problems: Review and Extensions, Ann. Oper. Res., 2018. https://doi.org/10.1007/s10479-018-2934-8
11. Dentcheva, D., Prékopa, A., and Ruszczyński, A., On Convex Probabilistic Programming with Discrete Distributions, Nonlinear Anal.-Theor., 2001, vol. 47, no. 3, pp. 1997–2009.
12. Van Ackooij, W., Berge, V., de Oliveira, W., and Sagastizábal, C., Probabilistic Optimization via Approximate p-Efficient Points and Bundle Methods, Comput. Oper. Res., 2017, vol. 77, pp. 177–193.
13. Ivanov, S.V. and Kibzun, A.I., General Properties of Two-Stage Stochastic Programming Problems with Probabilistic Criteria, Autom. Remote Control, 2019, vol. 80, no. 6, pp. 1041–1057.
14. Boyd, S. and Vandenberghe, L., Convex Optimization, Cambridge: Cambridge University Press, 2009.
15. Shiryaev, A.N., Probability, New York: Springer, 1996.
Funding
The work was supported by the Russian Science Foundation (project no. 22-21-00213, https://rscf.ru/project/22-21-00213/).
Additional information
This paper was recommended for publication by E.Ya. Rubinovich, a member of the Editorial Board.
APPENDIX
Proof of Theorem 1. Conditions 2 and 3 ensure that all constraints in problem (5) are active, i.e., all faces of the set Cr touch the ball Br. As r increases on the segment [0, R], the faces of Cr shift parallel to themselves while remaining tangent to the ball Br, so the set Cr expands as r grows. Therefore, the function h, defined as the measure of Cr, is non-decreasing. Theorem 1 is proved.
Proof of Theorem 2. Let γ ∈ (0, 1). The set \(C_{\rho_\gamma}\) is defined as the intersection of k half-planes of measure no less than γ. Denote these half-planes by \(L_i\), \(i = \overline{1,k}\). Then, by the union bound,
\[
h(\rho_\gamma) = \mathbf{P}\Big\{\bigcap_{i=1}^{k} L_i\Big\} \geqslant 1 - \sum_{i=1}^{k} \big(1 - \mathbf{P}\{L_i\}\big) \geqslant 1 - k(1 - \gamma).
\]
Thus, \(h(\rho_\gamma) \geqslant \alpha\) for \(\alpha \leqslant 1 - k(1 - \gamma)\), which is equivalent to \(\gamma \geqslant \beta = 1 - \frac{1 - \alpha}{k}\). Theorem 2 is proved.
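As an illustration (not part of the original proof), the bound of Theorem 2 can be checked numerically. The sketch below computes \(\beta = 1 - (1-\alpha)/k\) and verifies the union bound for a two-dimensional standard normal measure; the values of k and γ and the half-plane directions are ours and purely illustrative.

```python
import random
from math import cos, sin, pi
from statistics import NormalDist

def beta(alpha: float, k: int) -> float:
    """Confidence level from Theorem 2: gamma >= beta guarantees h(rho_gamma) >= alpha."""
    return 1.0 - (1.0 - alpha) / k

# Monte Carlo check of the union bound for a 2-D standard normal measure.
# Each half-plane L_i = {x : a_i . x <= c} has Gaussian measure exactly gamma.
random.seed(0)
k, gamma = 4, 0.95
c = NormalDist().inv_cdf(gamma)                         # quantile of level gamma
dirs = [(cos(2 * pi * i / k), sin(2 * pi * i / k)) for i in range(k)]
n_samples = 200_000
hits = 0
for _ in range(n_samples):
    x, y = random.gauss(0, 1), random.gauss(0, 1)       # one sample point per trial
    if all(a * x + b * y <= c for a, b in dirs):
        hits += 1
est = hits / n_samples
# union bound: the measure of the intersection is at least 1 - k*(1 - gamma)
assert est >= 1 - k * (1 - gamma) - 0.01                # 0.01 Monte Carlo tolerance
```

For these directions the intersection is a square, so the true measure (2γ − 1 per axis, squared) exceeds the union-bound value 1 − k(1 − γ), as the estimate confirms.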
Proof of Theorem 3. Since at each iteration the search segment is halved, the number of iterations K of the algorithm is the minimum natural number K satisfying the inequality
\[
\frac{\left|\bar{R}_\alpha - \rho_\alpha\right|}{2^K} \leqslant \delta.
\]
It follows from this inequality that \(K = \left\lceil \log_2 \frac{\left|\bar{R}_\alpha - \rho_\alpha\right|}{\delta} \right\rceil\). The algorithm can err only if at some iteration it turns out that \(\hat{h}(r) \geqslant \alpha + \varepsilon\), although in fact \(h(r) < \alpha\). It is easy to see that the random variable s(r) is distributed according to the binomial law with success probability \(h(r) - \mu(r)\). The following inequality is known ([15], Chapter 1, Section 6):
\[
\mathbf{P}\left\{\frac{s(r)}{N} - \big(h(r) - \mu(r)\big) \geqslant \varepsilon\right\} \leqslant e^{-2N\varepsilon^2}.
\]
Therefore, if we assume that \(h(r) < \alpha\), then \(\mathbf{P}\{\hat{h}(r) \geqslant \alpha + \varepsilon\} \leqslant e^{-2N\varepsilon^2}\). Since the samples used to estimate the measure are independent, the probability that the algorithm works correctly is at least \(\left(1 - e^{-2N\varepsilon^2}\right)^K\). Hence, in order to ensure the probability p of successful operation of Algorithm 1, the inequality
\[
\left(1 - e^{-2N\varepsilon^2}\right)^K \geqslant p
\]
must be satisfied.
Theorem 3 is proved.
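The two quantities in this proof are directly computable: the number K of bisection steps needed to shrink the search segment below δ, and the smallest sample size N for which \(\left(1 - e^{-2N\varepsilon^2}\right)^K \geqslant p\). A minimal sketch follows; the function names and example parameters are ours, for illustration only.

```python
from math import ceil, exp, log, log2

def bisection_steps(r_upper: float, r_lower: float, delta: float) -> int:
    """Minimum K with |r_upper - r_lower| / 2**K <= delta."""
    return max(1, ceil(log2(abs(r_upper - r_lower) / delta)))

def sample_size(K: int, eps: float, p: float) -> int:
    """Smallest N with (1 - exp(-2*N*eps**2))**K >= p."""
    return ceil(-log(1.0 - p ** (1.0 / K)) / (2.0 * eps ** 2))

K = bisection_steps(1.0, 0.0, 1e-3)   # halve a unit segment down to 1e-3
N = sample_size(K, eps=0.05, p=0.99)
# the success-probability guarantee holds by construction of N
assert (1.0 - exp(-2 * N * 0.05 ** 2)) ** K >= 0.99
```

Note how N grows only logarithmically in 1/(1 − p^{1/K}) but quadratically in 1/ε, so the accuracy of the measure estimate, not the confidence level, dominates the sampling cost.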
Proof of Theorem 4. Let \(\Psi(u, r) \triangleq \max_{x \in B_r} \Phi(u, x) = \Phi(u, x^0(r))\), where \(x^0(r)\) is the point on the boundary of the ball \(B_r\) at which this maximum is attained. Since \(B_\rho \subset B_R\), we have \(\Psi(u, \rho) \leqslant \Psi(u, R)\). Since the point \(y = \frac{\rho}{R} x^0(R)\) lies on the boundary of the ball \(B_\rho\), it follows that \(\Phi(u, y) \leqslant \Psi(u, \rho)\). Hence
Thus, the inequalities
are true. Minimizing the left-hand and right-hand sides of the first inequality in (A.1) with respect to \(u \in U\) subject to \(\max_{j = \overline{1, k_2}} \{ b_{2j}(u) + \|B_{2j}(u)\| R \} \leqslant 0\) (the constraints of problem (4) for r = R), we obtain the first inequality to be proved, \(\psi(\rho) \leqslant \psi(R)\) (here we take into account that \(\psi(\rho)\) is defined on at least a wider set). From (11) and the second inequality in (A.1) it follows that
This estimate implies the second inequality to be proved. Theorem 4 is proved.
Cite this article
Ivanov, S.V., Kibzun, A.I. & Akmaeva, V.N. Parametric Algorithm for Finding a Guaranteed Solution to a Quantile Optimization Problem. Autom Remote Control 84, 848–857 (2023). https://doi.org/10.1134/S0005117923080039