
On the adaptivity gap in two-stage robust linear optimization under uncertain packing constraints

Full Length Paper · Series A · Mathematical Programming

Abstract

In this paper, we study the performance of static solutions in two-stage adjustable robust packing linear optimization problems with uncertain constraint coefficients. Such problems arise in many important applications, such as revenue management and resource allocation, where demand requests have uncertain resource requirements. The goal is to find a two-stage solution that maximizes the worst-case objective value over all possible realizations of the second-stage constraints from a given uncertainty set. We consider the case where the uncertainty set is column-wise and constraint-wise (any constraint describing the set involves entries of only a single column or a single row). This is a fairly general class of uncertainty sets for modeling constraint coefficient uncertainty. We show that the two-stage adjustable robust problem is \(\varOmega (\log n)\)-hard to approximate. On the positive side, we show that a static solution is an \(O\big (\log n \cdot \min (\log \varGamma , \log (m+n))\big )\)-approximation for the two-stage adjustable robust problem, where m and n denote the numbers of rows and columns of the constraint matrix and \(\varGamma \) is the maximum possible ratio of upper bounds of the uncertain constraint coefficients. Therefore, for constant \(\varGamma \), the performance bound for static solutions, and hence the adaptivity gap, surprisingly matches the hardness of approximation for the adjustable problem. Furthermore, in general, the static solution provides nearly the best efficient approximation for the two-stage adjustable robust problem.


References

  1. Arora, S., Babai, L., Stern, J., Sweedyk, Z.: The hardness of approximate optima in lattices, codes, and systems of linear equations. In: 34th Annual Symposium on Foundations of Computer Science, 1993. Proceedings, pp. 724–733. IEEE (1993)

  2. Ben-Tal, A., El Ghaoui, L., Nemirovski, A.: Robust Optimization. Princeton University Press, Princeton (2009)


  3. Ben-Tal, A., Nemirovski, A.: Robust convex optimization. Math. Oper. Res. 23(4), 769–805 (1998)


  4. Ben-Tal, A., Nemirovski, A.: Robust solutions of uncertain linear programs. Oper. Res. Lett. 25(1), 1–14 (1999)


  5. Ben-Tal, A., Nemirovski, A.: Robust optimization-methodology and applications. Math. Program. 92(3), 453–480 (2002)


  6. Bertsimas, D., Brown, D.B., Caramanis, C.: Theory and applications of robust optimization. SIAM Rev. 53(3), 464–501 (2011)


  7. Bertsimas, D., de Ruiter, F.J.C.T.: Duality in two-stage adaptive linear optimization: faster computation and stronger bounds. INFORMS J. Comput. 28(3), 500–511 (2016)


  8. Bertsimas, D., Goyal, V.: On the power of robust solutions in two-stage stochastic and adaptive optimization problems. Math. Oper. Res. 35, 284–305 (2010)


  9. Bertsimas, D., Goyal, V.: On the power and limitations of affine policies in two-stage adaptive optimization. Math. Program. 134(2), 491–531 (2012)


  10. Bertsimas, D., Goyal, V.: On the approximability of adjustable robust convex optimization under uncertainty. Math. Methods Oper. Res. 77(3), 323–343 (2013)


  11. Bertsimas, D., Goyal, V., Lu, B.Y.: A tight characterization of the performance of static solutions in two-stage adjustable robust linear optimization. Math. Program. 150(2), 281–319 (2015)


  12. Bertsimas, D., Goyal, V., Sun, X.A.: A geometric characterization of the power of finite adaptability in multistage stochastic and adaptive optimization. Math. Oper. Res. 36(1), 24–54 (2011)


  13. Bertsimas, D., Natarajan, K., Teo, C.-P.: Applications of semidefinite optimization in stochastic project scheduling. Technical report, High Performance Computation for Engineered Systems, Singapore–MIT Alliance (2002)

  14. Bertsimas, D., Sim, M.: Robust discrete optimization and network flows. Math. Program. Ser. B 98, 49–71 (2003)


  15. Bertsimas, D., Sim, M.: The price of robustness. Oper. Res. 52(1), 35–53 (2004)


  16. Dean, B.C., Goemans, M.X., Vondrák, J.: Adaptivity and approximation for stochastic packing problems. In: Proceedings of the Sixteenth Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 395–404. Society for Industrial and Applied Mathematics (2005)

  17. El Ghaoui, L., Lebret, H.: Robust solutions to least-squares problems with uncertain data. SIAM J. Matrix Anal. Appl. 18, 1035–1064 (1997)


  18. Feige, U.: A threshold of ln n for approximating set cover. J. ACM (JACM) 45(4), 634–652 (1998)


  19. Feige, U., Jain, K., Mahdian, M., Mirrokni, V.: Robust combinatorial optimization with exponential scenarios. Lect. Notes Comput. Sci. 4513, 439–453 (2007)


  20. Goel, A., Indyk, P.: Stochastic load balancing and related problems. In: 40th Annual Symposium on Foundations of Computer Science, 1999, pp. 579–586. IEEE (1999)

  21. Goldfarb, D., Iyengar, G.: Robust portfolio selection problems. Math. Oper. Res. 28(1), 1–38 (2003)


  22. Goyal, V., Ravi, R.: A PTAS for the chance-constrained knapsack problem with random item sizes. Oper. Res. Lett. 38(3), 161–164 (2010)


  23. Hadjiyiannis, M.J., Goulart, P.J., Kuhn, D.: A scenario approach for estimating the suboptimality of linear decision rules in two-stage robust optimization. In: 2011 50th IEEE Conference on Decision and Control and European Control Conference (CDC-ECC), pp. 7386–7391. IEEE (2011)

  24. Kall, P., Wallace, S.W.: Stochastic Programming. Wiley, New York (1994)


  25. Prékopa, A.: Stochastic Programming. Kluwer Academic Publishers, Dordrecht (1995)


  26. Shapiro, A.: Stochastic programming approach to optimization under uncertainty. Math. Program. Ser. B 112(1), 183–220 (2008)


  27. Shapiro, A., Dentcheva, D., Ruszczyński, A.: Lectures on Stochastic Programming: Modeling and Theory. Society for Industrial and Applied Mathematics, Philadelphia (2009)


  28. Shapiro, A., Nemirovski, A.: On complexity of stochastic programming problems. In: Jeyakumar, V., Rubinov, A.M. (eds.) Continuous Optimization: Current Trends and Applications, pp. 111–144. Springer, Berlin (2005)


  29. Soyster, A.: Convex programming with set-inclusive constraints and applications to inexact linear programming. Oper. Res. 21(5), 1154–1157 (1973)


  30. Vazirani, V.: Approximation Algorithms. Springer, Berlin (2013)


  31. Wiesemann, W., Kuhn, D., Rustem, B.: Robust resource allocations in temporal networks. Math. Program. 135(1–2), 437–471 (2012)



Author information


Corresponding author

Correspondence to Vineet Goyal.

Additional information

Vineet Goyal has been supported by NSF Grant CMMI-1201116, CMMI-1351838 (CAREER), Google Faculty Research Award and IBM Faculty Award. Brian Y. Lu has been supported by NSF Grant CMMI-1201116.

Appendices

Appendix A: Proof of Theorem 2

In this section, we show that the general two-stage adjustable robust problem \(\varPi _\mathsf{AR}^\mathsf{Gen}\) (2.1) is \(\varOmega (2^{\log ^{1-\epsilon }m})\)-hard to approximate for any constant \(0<\epsilon <1\). We prove this by an approximation-preserving reduction from the Label-Cover-Problem. The reduction is similar in spirit to the reduction from the Set-Cover-Problem to the two-stage adjustable robust problem.

Label-Cover-Problem We are given a finite set V (\(|V|=m\)), a family of subsets \(\{{\mathcal V}_1,\ldots ,{\mathcal V}_K\}\) of V, and a graph \(G=(V, E)\). Let H be a supergraph with vertices \(\{{\mathcal V}_1,\ldots ,{\mathcal V}_K\}\) and edge set F, where \(({\mathcal V}_i, {\mathcal V}_j)\in F\) if there exists \((k,l)\in E\) such that \(k\in {\mathcal V}_i, l\in {\mathcal V}_j\). The goal is to find a smallest cardinality set \(C\subseteq V\) such that F is covered, i.e., for each \(({\mathcal V}_i, {\mathcal V}_j)\in F\), there exist \(k\in {\mathcal V}_i\cap C, l\in {\mathcal V}_j\cap C\) such that \((k,l)\in E\).
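As a concrete illustration of the covering condition, the following Python sketch (ours, with an assumed toy instance; not part of the reduction) checks whether a candidate set C covers F:

```python
def is_label_cover(C, subsets, E, F):
    """Check that C covers every supergraph edge in F: for each (i, j) in F
    there must be k in subsets[i] & C and l in subsets[j] & C with (k, l) in E."""
    E, C = set(E), set(C)
    for (i, j) in F:
        covered = any((k, l) in E
                      for k in subsets[i] & C
                      for l in subsets[j] & C)
        if not covered:
            return False
    return True

# Assumed toy instance: V = {1,2,3,4}, E = {(1,3),(2,4)},
# subsets V_1 = {1,2}, V_2 = {3,4}, and one supergraph edge (V_1, V_2).
subsets = {1: {1, 2}, 2: {3, 4}}
E = {(1, 3), (2, 4)}
F = [(1, 2)]
assert is_label_cover({1, 3}, subsets, E, F)      # edge (1,3) covers (V_1, V_2)
assert not is_label_cover({2, 3}, subsets, E, F)  # (2,3) is not an edge of G
```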

The Label-Cover-Problem is \(\varOmega (2^{\log ^{1-\epsilon }m})\)-hard to approximate for any constant \(0<\epsilon <1\), i.e., there is no polynomial time approximation algorithm that gives an \(O(2^{\log ^{1-\epsilon }m})\)-approximation for any constant \(0<\epsilon <1\) unless \(\mathbf {NP}\subseteq \mathbf {DTIME}(m^{\text {polylog}(m)})\) [1].

Proof of Theorem 2

Consider an instance \(\mathcal I\) of the Label-Cover-Problem with ground elements V (\(|V|=m\)), graph \(G=(V, E)\), a family of subsets of V: \(({\mathcal V}_1,\ldots ,{\mathcal V}_K)\) and a supergraph \(H=(\{{\mathcal V}_1,\ldots ,{\mathcal V}_K\}, F)\) where \(|F|=n\). We construct the following instance \(\mathcal{I}'\) of the general adjustable robust problem \(\varPi _\mathsf{AR}^\mathsf{Gen}\) (2.1):

$$\begin{aligned} \varvec{A}=\varvec{0},\;\varvec{c}=\varvec{0},\;\varvec{d}=\left( \begin{array}{c}\varvec{e}\\ -\varvec{e}\end{array}\right) \in {\mathbb R}^{n+m},\; \varvec{h}=\varvec{e}\in {\mathbb R}^m,\; \mathcal{U}=\left\{ [\varvec{B}\;-\varvec{I}_m]\;|\;\varvec{B}\in \mathcal{U}_F\right\} \end{aligned}$$

where \(d_1=d_2=\cdots =d_n=1\), \(\varvec{I}_m\) is the m-dimensional identity matrix and each column set of \(\mathcal{U}_F\subseteq {\mathbb R}^{m\times n}_+\) corresponds to an edge \(({\mathcal V}_i, {\mathcal V}_j)\in F\) with

$$\begin{aligned} \mathcal{U}_{({\mathcal V}_i, {\mathcal V}_j)}=\mathsf{conv}\left( \{\varvec{0}\}\bigcup \left\{ \left. \frac{1}{2}(\varvec{e}_k+\varvec{e}_l)\;\right| \;(k,l)\in E, k\in {\mathcal V}_i, l\in {\mathcal V}_j\right\} \right) \subseteq {\mathbb R}^m_+. \end{aligned}$$

Therefore, \(\mathcal U\) is column-wise with column sets \(\mathcal{U}_{({\mathcal V}_i, {\mathcal V}_j)}, \forall ({\mathcal V}_i, {\mathcal V}_j)\in F\) and \(\mathcal{U}_j,j\in [m]\) where \(\mathcal{U}_j=\{-\varvec{e}_j\}\), i.e., there is no uncertainty in \(\mathcal{U}_j\). The instance \(\mathcal{I}'\) of \(\varPi _\mathsf{AR}^\mathsf{Gen}\) can be formulated as

$$\begin{aligned} \begin{aligned} z_\mathsf{AR}^\mathsf{Gen}&=\min _{\varvec{B}\in \mathcal{U}_F} \max _{\varvec{y}\ge \varvec{0},\varvec{z}\ge \varvec{0}} \{\varvec{e}^T \varvec{y}-\varvec{e}^T\varvec{z}\;|\;\varvec{B}\varvec{y}-\varvec{z} \le \varvec{e}, \varvec{y}\ge \varvec{0}, \varvec{z}\ge \varvec{0}\}\\&=\min _{\varvec{b}_{({\mathcal V}_i, {\mathcal V}_j)}\in \mathcal{U}_{({\mathcal V}_i, {\mathcal V}_j)}} \max _{\varvec{y}\ge \varvec{0},\varvec{z}\ge \varvec{0}} \left\{ \varvec{e}^T \varvec{y}-\varvec{e}^T\varvec{z}\;\left| \;\sum _{({\mathcal V}_i, {\mathcal V}_j)\in F} y_{({\mathcal V}_i, {\mathcal V}_j)}\varvec{b}_{({\mathcal V}_i, {\mathcal V}_j)}-\varvec{z} \le \varvec{e}, \varvec{y}\ge \varvec{0}, \varvec{z}\ge \varvec{0}\right. \right\} . \end{aligned} \end{aligned}$$

Suppose \((\hat{\varvec{y}},\hat{\varvec{z}}, {\hat{\varvec{b}}}_{({\mathcal V}_i, {\mathcal V}_j)}, ({\mathcal V}_i, {\mathcal V}_j)\in F)\) is a feasible solution for instance \(\mathcal{I}'\). Then, we can compute a label cover of instance \(\mathcal I\) with cardinality at most \(\varvec{e}^T\hat{\varvec{y}}-\varvec{e}^T\hat{\varvec{z}}\). From strong duality, there exists an optimal solution \({\hat{\varvec{\mu }}}\) for

$$\begin{aligned} \min \left\{ \varvec{e}^T\varvec{\mu }\;|\;{\hat{\varvec{b}}}_{({\mathcal V}_i, {\mathcal V}_j)}^T\varvec{\mu }\ge 1, \quad \forall ({\mathcal V}_i, {\mathcal V}_j)\in F, \varvec{\mu }\in [0,1]^{m}\right\} \end{aligned}$$

and \(\varvec{e}^T{\hat{\varvec{\mu }}}=\varvec{e}^T\hat{\varvec{y}}-\varvec{e}^T\hat{\varvec{z}}\). For each \(({\mathcal V}_i, {\mathcal V}_j)\in F\), consider a basic optimal solution \(({\tilde{\varvec{b}}}_{({\mathcal V}_i, {\mathcal V}_j)},({\mathcal V}_i, {\mathcal V}_j)\in F)\) where

$$\begin{aligned} {\tilde{\varvec{b}}}_{({\mathcal V}_i, {\mathcal V}_j)}\in \arg \max \left\{ \varvec{b}^T{\hat{\varvec{\mu }}}\;|\;\varvec{b}\in \mathcal{U}_{({\mathcal V}_i, {\mathcal V}_j)}\right\} . \end{aligned}$$

Therefore, \({\tilde{\varvec{b}}}_{({\mathcal V}_i, {\mathcal V}_j)}\) is a vertex of \(\mathcal{U}_{({\mathcal V}_i, {\mathcal V}_j)}\) for each \(({\mathcal V}_i, {\mathcal V}_j)\in F\), which implies that \({\tilde{\varvec{b}}}_{({\mathcal V}_i, {\mathcal V}_j)}=\frac{1}{2}(\varvec{e}_{k_i}+\varvec{e}_{l_j})\) for some \((k_i,l_j) \in E\) and \(k_i\in {\mathcal V}_i, l_j\in {\mathcal V}_j\). Also, \({\tilde{\varvec{b}}}_{({\mathcal V}_i, {\mathcal V}_j)}^T{\hat{\varvec{\mu }}}\ge 1, \forall ({\mathcal V}_i, {\mathcal V}_j)\in F\). Now, let \(\tilde{\varvec{\mu }}\) be an optimal solution of the following LP:

$$\begin{aligned} \min \left\{ \varvec{e}^T\varvec{\mu }\;|\;{\tilde{\varvec{b}}}_{({\mathcal V}_i, {\mathcal V}_j)}^T\varvec{\mu }\ge 1, \quad \forall ({\mathcal V}_i, {\mathcal V}_j)\in F,\varvec{0}\le \varvec{\mu }\le \varvec{e}\right\} . \end{aligned}$$

Clearly, \(\varvec{e}^T\tilde{\varvec{\mu }}\le \varvec{e}^T{\hat{\varvec{\mu }}}\). Also, since \({\tilde{\varvec{b}}}_{({\mathcal V}_i, {\mathcal V}_j)}=\frac{1}{2}(\varvec{e}_{k_i}+\varvec{e}_{l_j})\), \({\tilde{\varvec{b}}}_{({\mathcal V}_i, {\mathcal V}_j)}^T\tilde{\varvec{\mu }}\ge 1\) and \(\tilde{\varvec{\mu }}\le \varvec{e}\), we must have \(\tilde{\mu }_{k_i}=\tilde{\mu }_{l_j}=1\). Therefore, \(\tilde{\varvec{\mu }}\in \{0,1\}^{m}\). Let

$$\begin{aligned} C=\{j\;|\;\tilde{\mu }_j=1\}. \end{aligned}$$

Clearly, C is a valid label cover for F and \(|C|=\varvec{e}^T\tilde{\varvec{\mu }}\le \varvec{e}^T{\hat{\varvec{\mu }}}=\varvec{e}^T\hat{\varvec{y}}-\varvec{e}^T\hat{\varvec{z}}\).

Conversely, given a label cover C of instance \(\mathcal I\), for any \(j\in [m]\), let \(\bar{\mu }_j=1\) if \(j\in C\) and zero otherwise. This implies that \(\varvec{e}^T\bar{\varvec{\mu }}=|C|\). For any \({({\mathcal V}_i, {\mathcal V}_j)}\in F\), let \(\bar{\varvec{b}}_{({\mathcal V}_i, {\mathcal V}_j)}=\frac{1}{2}(\varvec{e}_{k_i}+\varvec{e}_{l_j})\) where \(k_i\in {\mathcal V}_i\cap C, l_j\in {\mathcal V}_j\cap C\) such that \((k_i,l_j) \in E\). Then, let \(\varvec{\mu }'\) be an optimal solution for the following LP

$$\begin{aligned} \min \left\{ \varvec{e}^T\varvec{\mu }\;|\;\bar{\varvec{b}}_{({\mathcal V}_i, {\mathcal V}_j)}^T\varvec{\mu }\ge 1, \quad \forall ({\mathcal V}_i, {\mathcal V}_j)\in F,\varvec{0}\le \varvec{\mu }\le \varvec{e}\right\} . \end{aligned}$$

Then, \(\varvec{e}^T\varvec{\mu }'\le \varvec{e}^T\bar{\varvec{\mu }}\) as \(\bar{\varvec{\mu }}\) is feasible for the above LP. From strong duality, there exists \(\bar{\varvec{y}}\in {\mathbb R}^n_+\) and \(\bar{\varvec{z}}\in {\mathbb R}^m_+\) such that \((\bar{\varvec{y}},\bar{\varvec{z}}, \bar{\varvec{b}}_{({\mathcal V}_i, {\mathcal V}_j)},{({\mathcal V}_i, {\mathcal V}_j)}\in F)\) is a feasible solution for instance \(\mathcal{I}'\) of \(\varPi _\mathsf{AR}^\mathsf{Gen}\) with cost \(\varvec{e}^T\bar{\varvec{y}}-\varvec{e}^T\bar{\varvec{z}}=\varvec{e}^T\varvec{\mu }'\le \varvec{e}^T\bar{\varvec{\mu }}=|C|\). \(\square \)

Appendix B: Approximate separation to optimization

For any \(\varvec{x} \in {\mathbb R}^n_+\), let

$$\begin{aligned} Q^*(\varvec{x})=\min _{\varvec{B}\in \mathcal{U}}\max _{\varvec{y}\ge \varvec{0}}\left\{ \varvec{d}^T\varvec{y}\;|\;\varvec{B}\varvec{y}\le \varvec{h}-\varvec{A}\varvec{x}\right\} . \end{aligned}$$

We show that if we can approximate the separation problem, we can also approximate \(\varPi _\mathsf{AR}\). Let \(\mathcal{A}\) be a \(\gamma \)-approximate algorithm for the separation problem (3.1), i.e., \(\mathcal{A}\) computes a \(\gamma \)-approximation algorithm for the min-max problem in (3.1). For any \(\varvec{x} \in {\mathbb R}^n_+\), let \(\varvec{B}^\mathcal{A}(\varvec{x})\) denote the matrix returned by \(\mathcal{A}\) and let

$$\begin{aligned} Q^\mathcal{A}(\varvec{x})=\max _{\varvec{y}\ge \varvec{0}}\left\{ \varvec{d}^T\varvec{y}\;|\;\varvec{B}^\mathcal{A}(\varvec{x})\varvec{y}\le \varvec{h}-\varvec{A}\varvec{x}\right\} . \end{aligned}$$

Therefore, the approximate separation based on Algorithm \(\mathcal{A}\) is as follows: for any \((\varvec{x}, z)\), return feasible if \(Q^\mathcal{A}(\varvec{x}) \ge z\). Otherwise give a violating hyperplane corresponding to \(\varvec{B}^\mathcal{A}(\varvec{x})\). Now, we prove the following theorem.

Theorem 12

Suppose we have an Algorithm \(\mathcal{A}\) that is a \(\gamma \)-approximation for the separation problem (3.1). Then we can compute a \(\gamma \)-approximation for the two-stage adjustable robust problem \(\varPi _\mathsf{AR}\) (1.1).

Proof

Since \(\mathcal{A}\) is a \(\gamma \)-approximation to the min-max problem in (3.1), for any \(\varvec{x} \in {\mathbb R}^n_+\),

$$\begin{aligned} Q^*(\varvec{x})\le Q^\mathcal{A}(\varvec{x})\le \gamma \cdot Q^*(\varvec{x}). \end{aligned}$$

Let \((\varvec{x}^*, z^*)\) be an optimal solution for \(\varPi _\mathsf{AR}\) and let

$$\begin{aligned} \mathsf{OPT} = \varvec{c}^T \varvec{x}^* + z^*. \end{aligned}$$

Consider the optimization algorithm based on the approximate separation algorithm \(\mathcal{A}\) and suppose it returns the solution \((\hat{\varvec{x}}, \hat{z})\). Note that \((\varvec{x}^*, z^*)\) is feasible according to the approximate separation algorithm \(\mathcal{A}\) as \(Q^\mathcal{A}(\varvec{x}^*)\ge Q^*(\varvec{x}^*)=z^*\). Therefore,

$$\begin{aligned} \varvec{c}^T\hat{\varvec{x}}+\hat{z}\ge \varvec{c}^T\varvec{x}^*+z^*. \end{aligned}$$
(B.1)

Note that \(\hat{z}\) is an approximation for the worst case second-stage objective value when the first stage solution is \(\hat{\varvec{x}}\). The true objective value for the first stage solution \(\hat{\varvec{x}}\) is given by

$$\begin{aligned} \varvec{c}^T\hat{\varvec{x}}+ Q^*(\hat{\varvec{x}})\ge & {} \varvec{c}^T\hat{\varvec{x}}+ \frac{1}{\gamma } Q^\mathcal{A}(\hat{\varvec{x}})\nonumber \\\ge & {} \varvec{c}^T\hat{\varvec{x}}+ \frac{1}{\gamma } \hat{z} \nonumber \\\ge & {} \frac{1}{\gamma }(\varvec{c}^T\hat{\varvec{x}}+\hat{z}) \nonumber \\\ge & {} \frac{1}{\gamma }\mathsf{OPT}, \end{aligned}$$
(B.2)

where the first inequality follows as \(\mathcal{A}\) is a \(\gamma \)-approximation and \(Q^\mathcal{A}(\hat{\varvec{x}}) \le \gamma \cdot Q^*(\hat{\varvec{x}})\). Inequality (B.2) follows as \((\hat{\varvec{x}}, \hat{z})\) is feasible according to \(\mathcal{A}\) and therefore, \(\hat{z} \le Q^\mathcal{A}(\hat{\varvec{x}})\) and the last inequality follows from (B.1). Therefore, the optimization problem based on algorithm \(\mathcal{A}\) computes a \(\gamma \)-approximation for \(\varPi _\mathsf{AR}\). \(\square \)
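The inequality chain in the proof of Theorem 12 can be illustrated on a toy instance in which the inner maximization has a single second-stage variable and therefore a closed form (the binding ratio); the finite uncertainty set, the simulated \(\gamma \)-approximate oracle, and all names below are our own illustrative assumptions, not constructs from the paper:

```python
def inner_value(b, rhs, d):
    """max { d*y : b_i * y <= rhs_i for all i, y >= 0 } for a single
    second-stage variable y: the binding ratio determines the optimum."""
    ratios = [r / bi for bi, r in zip(b, rhs) if bi > 0]
    return d * min(ratios) if ratios else float("inf")

def exact_separation(U, rhs, d):
    """Q*(x): the adversary picks the column minimizing the inner value."""
    return min(inner_value(b, rhs, d) for b in U)

def approx_separation(U, rhs, d, gamma):
    """Simulated gamma-approximate oracle: returns the inner value Q^A(x)
    of some column within a factor gamma of the exact minimum."""
    q_star = exact_separation(U, rhs, d)
    for b in U:
        if inner_value(b, rhs, d) <= gamma * q_star:
            return inner_value(b, rhs, d)

# Assumed toy data: rhs plays the role of h - A x; three candidate columns.
U = [(1.0, 2.0), (2.0, 1.0), (0.5, 0.5)]
rhs, d, gamma = (4.0, 4.0), 1.0, 2.0
q_star = exact_separation(U, rhs, d)
q_a = approx_separation(U, rhs, d, gamma)
assert q_star <= q_a <= gamma * q_star  # the sandwich Q* <= Q^A <= gamma Q*
```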

Appendix C: Transformation of the adjustable robust problem

Let \(\varvec{x}^*\) be the optimal first-stage solution for \(\varPi _\mathsf{AR}\), i.e.,

$$\begin{aligned} z_\mathsf{AR}=\varvec{c}^T\varvec{x}^*+\min _{\varvec{B}}\max _{\varvec{y}}\left\{ \varvec{d}^T\varvec{y}\;|\;\varvec{B}\varvec{y}\le \varvec{h}-\varvec{A}\varvec{x}^*, \varvec{B}\in \mathcal{U}, \varvec{y}\ge \varvec{0}\right\} . \end{aligned}$$

Note that \((\varvec{x}^*, \varvec{0})\) is a feasible solution for \(\varPi _\mathsf{Rob}\). We have

$$\begin{aligned} z_\mathsf{Rob}\ge \varvec{c}^T\varvec{x}^*+\max _{\varvec{y}\ge \varvec{0}}\left\{ \varvec{d}^T\varvec{y}\;|\;\varvec{B}\varvec{y}\le \varvec{h}-\varvec{A}\varvec{x}^*, \quad \forall \varvec{B}\in \mathcal{U}\right\} . \end{aligned}$$

Since \(\varvec{c}\) and \(\varvec{x}^*\) are both non-negative, to prove Theorem 7, it suffices to show

$$\begin{aligned}&\min _{\varvec{B}\in \mathcal{U}}\max _{\varvec{y}\ge \varvec{0}}\left\{ \varvec{d}^T\varvec{y}\;|\;\varvec{B}\varvec{y}\le \varvec{h}-\varvec{A}\varvec{x}^*\right\} \\&\quad \le O(\log (m+n)\log n)\cdot \max _{\varvec{y}\ge \varvec{0}}\left\{ \varvec{d}^T\varvec{y}\;|\;\varvec{B}\varvec{y}\le \varvec{h}-\varvec{A}\varvec{x}^*, \quad \forall \varvec{B}\in \mathcal{U}\right\} . \end{aligned}$$

In this section, we show that we can assume without loss of generality that \((\varvec{h}-\varvec{A}\varvec{x}^*)>\varvec{0}\); otherwise, the static solution is optimal for the two-stage adjustable robust problem \(\varPi _\mathsf{AR}\) (1.1), i.e., \(z_\mathsf{AR}=z_\mathsf{Rob}\). Note that \((\varvec{h}-\varvec{A}\varvec{x}^*)\ge \varvec{0}\), since otherwise the inner problem becomes infeasible. Now, suppose that \((\varvec{h}-\varvec{A}\varvec{x}^*)_i=0\) for some \(i\in [m]\). Since \(\mathcal U\) is a full-dimensional convex set, there exists \(\varvec{B}^*\in \mathcal{U}\) such that \(B_{ij}^*>0\) for all \(j\in [n]\). Therefore,

$$\begin{aligned} \min _{\varvec{B}\in \mathcal{U}}\max _{\varvec{y}\ge \varvec{0}}\left\{ \varvec{d}^T\varvec{y}\;|\;\varvec{B}\varvec{y}\le \varvec{h}-\varvec{A}\varvec{x}^*\right\} \le \max _{\varvec{y}\ge \varvec{0}}\left\{ \varvec{d}^T\varvec{y}\;|\;\varvec{B}^*\varvec{y}\le \varvec{h}-\varvec{A}\varvec{x}^*\right\} =0, \end{aligned}$$

which implies that \(z_\mathsf{AR}=\varvec{c}^T\varvec{x}^*\) since \(\varvec{d},\varvec{y}\) are non-negative. On the other hand, \((\varvec{x}^*, \varvec{0})\) is a feasible solution for \(\varPi _\mathsf{Rob}\). Therefore,

$$\begin{aligned} z_\mathsf{Rob}\ge \varvec{c}^T\varvec{x}^*=z_\mathsf{AR}. \end{aligned}$$

Conversely, suppose \((\bar{\varvec{x}}, \bar{\varvec{y}})\) is an optimal solution for \(\varPi _\mathsf{Rob}\); then \(\varvec{x}=\bar{\varvec{x}}\), \(\varvec{y}(\varvec{B})=\bar{\varvec{y}}\) for all \(\varvec{B}\in \mathcal{U}\) is feasible for \(\varPi _\mathsf{AR}\). Therefore, \(z_\mathsf{AR}\ge z_\mathsf{Rob}\).
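The degenerate case \((\varvec{h}-\varvec{A}\varvec{x}^*)_i=0\) can be checked numerically on a toy instance with one second-stage variable, where the inner LP reduces to a minimum ratio; the numbers below are illustrative assumptions only:

```python
def inner_value(b, rhs, d):
    # max { d*y : b_i * y <= rhs_i for all i, y >= 0 }, single variable y
    ratios = [r / bi for bi, r in zip(b, rhs) if bi > 0]
    return d * min(ratios) if ratios else float("inf")

rhs = (1.0, 0.0)              # assumed: (h - A x*)_2 = 0
U = [(1.0, 0.5), (2.0, 1.0)]  # every column has a positive entry in row 2
z_second_stage = min(inner_value(b, rhs, d=1.0) for b in U)
assert z_second_stage == 0.0  # the adversary forces y = 0, so z_AR = c^T x*
```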

Appendix D: Proof of Theorem 3

Let \(\varvec{y}^*\) be such that \({\hat{\varvec{B}}}\varvec{y}^*\le \varvec{h}\). For any \(\varvec{B}\in \mathcal{U}\), we have \(\varvec{B}\le {\hat{\varvec{B}}}\) component-wise by construction. Note that \(\varvec{y}^*\ge \varvec{0}\), this implies \(\varvec{B}\varvec{y}^*\le {\hat{\varvec{B}}}\varvec{y}^*\le \varvec{h}\) for all \(\varvec{B}\in \mathcal{U}\).

Conversely, suppose \(\tilde{\varvec{y}}\) satisfies \(\varvec{B}\tilde{\varvec{y}}\le \varvec{h}\) for all \(\varvec{B}\in \mathcal{U}\). For each \(i\in [m]\), note that \(\mathsf{diag}(\varvec{e}_i){\hat{\varvec{B}}}\in \mathcal{U}\) by construction. Therefore, \(\varvec{e}_i^T{\hat{\varvec{B}}}\tilde{\varvec{y}}\le h_i\) for all \(i\in [m]\), which implies that \({\hat{\varvec{B}}}\tilde{\varvec{y}}\le \varvec{h}\).
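A quick numerical sanity check of this equivalence (with assumed toy data, sampling component-wise smaller matrices as stand-ins for members of \(\mathcal U\)): any \(\varvec{y}\) feasible for \({\hat{\varvec{B}}}\) remains feasible for every \(\varvec{B}\le {\hat{\varvec{B}}}\).

```python
import random

random.seed(0)
m, n = 3, 4
B_hat = [[1.0] * n for _ in range(m)]  # assumed component-wise upper bounds
h = [2.0] * m
y = [0.5] * n                          # satisfies B_hat y <= h (row sums = 2)

for _ in range(1000):
    # random B <= B_hat component-wise, a stand-in for a member of U
    B = [[random.uniform(0, B_hat[i][j]) for j in range(n)] for i in range(m)]
    assert all(sum(B[i][j] * y[j] for j in range(n)) <= h[i] for i in range(m))
```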

Appendix E: Proof of Lemma 1

Let

$$\begin{aligned} {\hat{B}}_{ij}=\frac{1}{(n+i-j+1)\bmod {n}}. \end{aligned}$$

From Theorem 3, \(\varPi _\mathsf{Rob}\) is equivalent to

$$\begin{aligned} z_\mathsf{Rob}=\max \left\{ \varvec{e}^T\varvec{y}\;|\;{\hat{\varvec{B}}}\varvec{y}\le \varvec{e},\varvec{y}\ge \varvec{0}\right\} .\end{aligned}$$

The dual problem is

$$\begin{aligned} z_\mathsf{Rob}=\min \left\{ \varvec{e}^T\varvec{z}\;|\;{\hat{\varvec{B}}}^T\varvec{z}\ge \varvec{e},\varvec{z}\ge \varvec{0}\right\} .\end{aligned}$$

Let

$$\begin{aligned} s=\sum _{i=1}^n \frac{1}{i} = \varTheta (\log n). \end{aligned}$$

It is easy to observe that \(\frac{1}{s}\varvec{e}\) is a feasible solution for both the primal and the dual formulations of \(z_\mathsf{Rob}\). Moreover, they have the same objective value. Therefore,

$$\begin{aligned} z_\mathsf{Rob}=\frac{n}{s}. \end{aligned}$$
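This value can be verified numerically for small n with exact rational arithmetic: each row and each column of \({\hat{\varvec{B}}}\) contains every value \(1, 1/2, \ldots , 1/n\) exactly once, so \(\frac{1}{s}\varvec{e}\) is feasible for both the primal and the dual with objective n / s. The sketch below is ours, and assumes the convention that the mod in the definition of \({\hat{B}}_{ij}\) takes values in \(\{1,\ldots ,n\}\):

```python
from fractions import Fraction

def modn(a, n):
    # assumed convention: residues taken in {1, ..., n}
    return (a - 1) % n + 1

n = 6
B = [[Fraction(1, modn(n + i - j + 1, n)) for j in range(1, n + 1)]
     for i in range(1, n + 1)]
s = sum(Fraction(1, i) for i in range(1, n + 1))  # harmonic number H_n
y = [Fraction(1, 1) / s] * n                      # candidate (1/s) e

# primal feasibility: each row of B sums to H_n, so B y = e
assert all(sum(row) == s for row in B)
# dual feasibility: each column of B also sums to H_n, so B^T (1/s)e = e
assert all(sum(B[i][j] for i in range(n)) == s for j in range(n))
# common objective value: z_Rob = n / H_n
assert sum(y) == Fraction(n, 1) / s
```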

On the other hand, for each \(j\in [n]\), denote

$$\begin{aligned} \mathcal{U}_j=\left\{ \varvec{b}\in {\mathbb R}^n_+\;\left| \;\sum _{i=1}^{n} [(n+i-j+1)\bmod {n}]\cdot b_i\le 1\right. \right\} . \end{aligned}$$

By writing the dual of the inner maximization problem of \(\varPi _\mathsf{AR}\), we have

$$\begin{aligned}\begin{aligned} z_\mathsf{AR}&=\min \left\{ \varvec{e}^T\varvec{\alpha }\;|\;\varvec{B}^T\varvec{\alpha }\ge \varvec{e},\varvec{\alpha }\ge \varvec{0},\varvec{B}\in \mathcal{U}\right\} \\&=\min \left\{ \lambda \;|\;\lambda \varvec{B}^T\varvec{\mu }\ge \varvec{e},\varvec{e}^T\varvec{\mu }=1,\varvec{\mu }\ge \varvec{0},\varvec{B}\in \mathcal{U}\right\} \\&=\min \left\{ \left. \frac{1}{\theta }\; \right| \;\varvec{b}_j^T\varvec{\mu }\ge \theta ,\varvec{b}_j\in \mathcal{U}_j,\varvec{e}^T\varvec{\mu }=1,\varvec{\mu }\ge \varvec{0}\right\} . \end{aligned} \end{aligned}$$

Therefore, we just need to solve

$$\begin{aligned} \frac{1}{z_\mathsf{AR}}=\max \left\{ \theta \;|\;\varvec{b}_j^T\varvec{\mu }\ge \theta ,\varvec{b}_j\in \mathcal{U}_j,\varvec{e}^T\varvec{\mu }=1,\varvec{\mu }\ge \varvec{0}\right\} \end{aligned}$$
(E.1)

Suppose \(({\hat{\theta }},{\hat{\varvec{\mu }}}, {\hat{\varvec{b}}}_j, j\in [n])\) is an optimal solution for (E.1). For each \(j\in [n]\), consider a basic optimal solution \({\tilde{\varvec{b}}}_j\) of the following LP:

$$\begin{aligned} {\tilde{\varvec{b}}}_j\in \arg \max \left\{ \varvec{b}^T{\hat{\varvec{\mu }}}\;|\;\varvec{b}\in \mathcal{U}_j\right\} . \end{aligned}$$

Therefore, \({\tilde{\varvec{b}}}_j\) is a vertex of \(\mathcal{U}_j\), which implies that \({\tilde{\varvec{b}}}_j={\hat{B}}_{i_j j}\varvec{e}_{i_j}\) for some \(i_j\in [n]\) and \({\tilde{\varvec{b}}}_j^T{\hat{\varvec{\mu }}}\ge {\hat{\theta }}\). For each \(i\in [n]\), let \({\mathcal S}_i=\{j\;|\;i_j=i\}\). We have \(\sum _{i=1}^n |{\mathcal S}_i|=n\). For each \(i\in [n]\) such that \({\mathcal S}_i\ne \emptyset \), \({\hat{B}}_{ij}\) can only take values in \(\{1, 1/2,\ldots , 1/n\}\) for \(j\in {\mathcal S}_i\). Moreover, \({\hat{B}}_{ij}\ne {\hat{B}}_{ik}\) for \(j\ne k\). Therefore, there exists \(l_i\in {\mathcal S}_i\) such that

$$\begin{aligned} {\hat{B}}_{i l_i}\le \frac{1}{|{\mathcal S}_i|}, \text{ and } {\tilde{\varvec{b}}}_{l_i}^T{\hat{\varvec{\mu }}}={\hat{B}}_{i l_i}{\hat{\mu }}_{i}\ge {\hat{\theta }}. \end{aligned}$$

We have

$$\begin{aligned} 1\ge \sum _{i:{\mathcal S}_i\ne \emptyset } {\hat{\mu }}_i \ge \sum _{i:{\mathcal S}_i\ne \emptyset } \frac{{\hat{\theta }}}{{\hat{B}}_{i l_i}}\ge \sum _{i:{\mathcal S}_i\ne \emptyset }{\hat{\theta }}|{\mathcal S}_i|={\hat{\theta }}n. \end{aligned}$$

Therefore, \({\hat{\theta }}\le \frac{1}{n}\), which implies that \(z_\mathsf{AR}\ge n\).

On the other hand, it is easy to observe that \(z_\mathsf{AR} \le n\): \(\varvec{b}_j=\varvec{e}_j\), \(\varvec{\mu } = 1/n \cdot \varvec{e}\) and \(\theta = 1/n\) is a feasible solution for (E.1). Therefore,

$$\begin{aligned} z_\mathsf{AR}= n=\sum _{i=1}^n \frac{1}{i}\cdot z_\mathsf{Rob}=\varTheta (\log n)\cdot z_\mathsf{Rob}, \end{aligned}$$

which completes the proof.
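The feasibility of the certificate \(\varvec{b}_j=\varvec{e}_j\), \(\varvec{\mu }=\frac{1}{n}\varvec{e}\), \(\theta =\frac{1}{n}\) for (E.1) can also be checked directly; the sketch below is ours and assumes, as before, the convention that the mod takes values in \(\{1,\ldots ,n\}\):

```python
from fractions import Fraction

def modn(a, n):
    # assumed convention: residues taken in {1, ..., n}
    return (a - 1) % n + 1

n = 6
mu = [Fraction(1, n)] * n
theta = Fraction(1, n)

for j in range(1, n + 1):
    b = [Fraction(1 if i == j else 0) for i in range(1, n + 1)]  # b_j = e_j
    # membership in U_j: sum_i [(n+i-j+1) mod n] * b_i <= 1
    assert sum(modn(n + i - j + 1, n) * b[i - 1] for i in range(1, n + 1)) <= 1
    # coverage constraint of (E.1): b_j^T mu >= theta
    assert sum(bi * mi for bi, mi in zip(b, mu)) >= theta
```

Since (E.1) is a maximization of \(\theta = 1/z_\mathsf{AR}\), this certificate shows \(z_\mathsf{AR}\le n\), matching the lower bound from the counting argument.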


Cite this article

Awasthi, P., Goyal, V. & Lu, B.Y. On the adaptivity gap in two-stage robust linear optimization under uncertain packing constraints. Math. Program. 173, 313–352 (2019). https://doi.org/10.1007/s10107-017-1222-8
