Abstract
This paper analyzes the Lipschitz behavior of the feasible set mapping associated with linear and convex inequality systems in \({\mathbb {R}}^{n}\). To start with, we deal with the parameter space of linear (finite/semi-infinite) systems identified with the corresponding sets of coefficient vectors, which are assumed to be closed subsets of \({\mathbb {R}} ^{n+1}\). In this framework the size of perturbations is measured by means of the (extended) Hausdorff distance. A direct antecedent, extensively studied in the literature, comes from considering the parameter space of all linear systems with a fixed index set, T, where the Chebyshev (extended) distance is used to measure perturbations. In the present work we propose an appropriate indexation strategy which allows us to establish the equality of the Lipschitz moduli of the feasible set mappings in both parametric contexts, as well as to benefit from existing results in the Chebyshev setting for transferring them to the Hausdorff one. In a second stage, the possibility of perturbing directly the set of coefficient vectors of a linear system leads to new contributions on the Lipschitz behavior of convex systems via linearization techniques.
1 Introduction
This paper is initially focussed on the Lipschitz behavior of the feasible set mapping associated with a parametric family of linear inequality systems of the form:
\[
\left\langle a,x\right\rangle \le b,\quad \left( a,b\right) \in U\in CL\left( {\mathbb {R}}^{n+1}\right) , \tag{1}
\]
where \(x\in {\mathbb {R}}^{n}\) is the vector of variables and \(CL\left( {\mathbb {R}}^{n+1}\right) \) is the parameter space of all nonempty closed subsets of \({\mathbb {R}} ^{n+1}.\) The elements of a set \(U\in CL\left( {\mathbb {R}} ^{n+1}\right) \) are denoted by \(\left( a,b\right) ,\) where \(a\in {\mathbb {R}} ^{n}\) and \(b\in {\mathbb {R}}.\) Given \(x,y\in {\mathbb {R}}^{n},\) \(\left\langle x,y\right\rangle \) represents the usual inner product of x and y. When U is an infinite set, (1) is a linear semi-infinite inequality system. Observe that, in this framework, perturbations fall on U; in particular, two different systems, associated with different sets \( U_{1},U_{2}\in CL\left( {\mathbb {R}}^{n+1}\right) ,\) can have different cardinalities.
This setting includes, as a particular case, the parametric family of linear systems coming from linearizing convex inequalities of the form
\[
f\left( x\right) \le 0, \tag{2}
\]
where \(f\in \Gamma \), and \(\Gamma \) is the family of all the finite-valued convex functions defined on \({\mathbb {R}}^{n}\). Specifically, the feasible set of (2) coincides with the set of solutions of the following linear system
\[
\left\{ \left\langle a,x\right\rangle \le \left\langle a,z\right\rangle -f\left( z\right) ,\ \left( z,a\right) \in \mathrm {gph}\partial f\right\} =\left\{ \left\langle a,x\right\rangle \le f^{*}\left( a\right) ,\ a\in \mathrm {rge}\partial f\right\} , \tag{3}
\]
where \(\mathrm {gph}\partial f\) and \(\mathrm {rge}\partial f\) represent the graph and the range (or image) of the subdifferential mapping \(\partial f\), respectively, and \(f^{*}\) is the Fenchel conjugate of f (the last equality above comes from [19, Theorem 23.5]).
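As a numerical aside, this standard linearization can be checked on a simple example. The sketch below is only an illustration (it takes \(f(x)=x^{2}-1\), a smooth choice not taken from the paper, so that \(\partial f(z)=\{2z\}\)) and samples the index set \(\mathrm {gph}\partial f\) over a finite grid; a point solves the sampled linear system exactly when it satisfies the convex inequality.

```python
def f(x):                 # convex, finite-valued; f(x) <= 0 iff |x| <= 1
    return x * x - 1.0

def subgrad(z):           # f is differentiable, so the subdifferential is {2z}
    return 2.0 * z

def solves_linearization(x, zs):
    # x solves the linear system  a*x <= a*z - f(z)  with a in ∂f(z),
    # over a finite sample zs of indices (z, a) in gph ∂f
    return all(subgrad(z) * x <= subgrad(z) * z - f(z) + 1e-9 for z in zs)

zs = [k / 200.0 for k in range(-1000, 1001)]     # z ranging over [-5, 5]
for x in [k / 20.0 for k in range(-40, 41)]:     # x ranging over [-2, 2]
    assert solves_linearization(x, zs) == (f(x) <= 1e-9)
```

The grid is, of course, only a finite surrogate for the full index set; the displayed equivalence holds exactly for the full system.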
Remark 1.1
(a) The finite-valuedness of the functions in \(\Gamma \) is not too restrictive. In fact, given a convex function \(g:{\mathbb {R}} ^{n}\longrightarrow {\mathbb {R}}\cup \left\{ +\infty \right\} \), since our approach is local and we mainly work in a ball \(x_{0}+\alpha {\mathbb {B}}\) (where \({\mathbb {B}}\) stands for the closed unit ball in \( {\mathbb {R}}^{n}\)) contained in the interior (assumed non-empty) of the effective domain of g, we can replace g with the function \(f\in \Gamma \) given by
\[
f\left( x\right) :=\max \left\{ g\left( z\right) +\left\langle a,x-z\right\rangle :z\in x_{0}+\alpha {\mathbb {B}},\ a\in \partial g\left( z\right) \right\} , \tag{4}
\]
as f and g coincide on such a ball. Observe that the maximum in (4) is attained due to the compactness of the set \(\{\left( z,a\right) :z\in x_{0}+\alpha {\mathbb {B}},~a\in \partial g\left( z\right) \}\) (see, e.g., [19, Theorems 24.4 and 24.7]).
(b) An alternative linearization of the inequality \(f\left( x\right) \le 0\) is the linear inequality system
\[
\left\langle a,x\right\rangle \le f^{*}\left( a\right) ,\quad a\in {\mathrm{ri}}({\mathrm{dom}}f^{*}), \tag{5}
\]
where \({\mathrm{ri}}({\mathrm{dom}}f^{*})\) is the relative interior of the effective domain of \(f^{*}\). In fact, applying [19, Corollary 12.2.2],
\[
f\left( x\right) =\sup \left\{ \left\langle a,x\right\rangle -f^{*}\left( a\right) :a\in {\mathrm{ri}}({\mathrm{dom}}f^{*})\right\} \ \text{ for all }x\in {\mathbb {R}}^{n}.
\]
The main objectives of this work are to analyze the Lipschitzian behavior of the parametrized linear system (1) and to apply the results obtained to derive new contributions on the convex case (2) via the standard linearization (3). We emphasize the fact that previous results about the stability of subdifferentials (traced out from [2]) are also used in the study of this convex case.
Formally, associated with (1), we consider the feasible set mapping, \(\mathcal {F}:CL\left( {\mathbb {R}}^{n+1}\right) \rightrightarrows {\mathbb {R}}^{n},\) which assigns to each \(U\in CL\left( {\mathbb {R}} ^{n+1}\right) \) the set of solutions of the corresponding system:
\[
\mathcal {F}\left( U\right) :=\left\{ x\in {\mathbb {R}}^{n}:\left\langle a,x\right\rangle \le b\ \text{ for all }\left( a,b\right) \in U\right\} .
\]
The parameter space, \(CL\left( {\mathbb {R}}^{n+1}\right) ,\) will be endowed with the Pompeiu–Hausdorff distance (from now on, Hausdorff distance, for simplicity; see Sect. 2 for details). For convenience (in order to ensure the existence of projections) we deal with closed sets, but the study could be carried out with general nonempty sets, since neither the feasible set mapping nor the Hausdorff distance distinguishes between sets and their closures.
The current paper is firstly concerned with analyzing the Lipschitz modulus of \(\mathcal {F}\) at \(\left( U_{0},x_{0}\right) \in \mathrm {gph} \mathcal {F}\) and, in a second stage, with deriving a Lipschitzian-type condition for the feasible set of the parametrized convex inequality (2). Roughly speaking, we provide measures (or estimates) of the rate of variation of feasible points, around a nominal one \(x_{0}\in {\mathbb {R}}^{n},\) with respect to perturbations of a nominal parameter set \( U_{0}\in CL\left( {\mathbb {R}}^{n+1}\right) \) in the case of explicit linear systems, and of a nominal function \(f_{0}\in \Gamma \) in the case of linear systems obtained implicitly from a convex inequality.
We can find in the literature classical studies on convex multifunctions and convex systems. The reader is referred to [18, Corollary 2 and p. 140] for the analysis of the Lipschitz behavior of feasible sets under right-hand side perturbations; or to [16] for a survey on characterizations of metric regularity (see Sect. 2).
As immediate antecedents of the present work we cite [3] and [4]. The first paper deals with the Lipschitz modulus of the feasible set mapping in the context of linear systems with a fixed index set T of the form
\[
\left\{ \left\langle a_{t},x\right\rangle \le b_{t},\ t\in T\right\} , \tag{6}
\]
where \(x\in {\mathbb {R}}^{n}\) is the variable and \(\left( a_{t},b_{t}\right) _{t\in T}\in \left( {\mathbb {R}}^{n+1}\right) ^{T}.\) The parameter space considered there, \(\left( {\mathbb {R}}^{n+1}\right) ^{T},\) is formed by all functions from T to \({\mathbb {R}}^{n+1}\) and it is endowed with the (extended) Chebyshev distance. The reader is referred to the monograph [12] for a comprehensive study of such systems.
The results of [3] do not apply directly to our current setting. A first connection between both parameter spaces \(CL\left( {\mathbb {R}} ^{n+1}\right) \) and \(\left( {\mathbb {R}}^{n+1}\right) ^{T}\) was established in [4], which provides some motivation and background for the present paper from the methodological point of view. That paper is focussed on the calmness modulus (see again Sect. 2), and takes advantage of previous results developed in the context of systems (6) to derive new contributions for the parametrized system (1). Formally, [4] introduces an appropriate indexation scheme assigning to each set in \(CL\left( {\mathbb {R}}^{n+1}\right) \) an element in \( \left( {\mathbb {R}}^{n+1}\right) ^{T}\) in such a way that the Hausdorff distance around \(U_{0}\in CL\left( {\mathbb {R}}^{n+1}\right) \) translates into the Chebyshev distance around its image in \(\left( {\mathbb {R}}^{n+1}\right) ^{T}\). That indexation strategy is shown to be inappropriate for studying the Lipschitz (instead of calmness) modulus, where we need to index simultaneously pairs of systems around \(U_{0}\), as shown in Lemma 3.1 in Sect. 3.
The problem of analyzing the relationship among different parametric contexts was also addressed in [5] and [6] from a different perspective, mainly focussed on the lower semicontinuity of the feasible set mapping.
Now we summarize the structure of the paper. Section 2 gathers some definitions and key results of the background on the Lipschitz modulus in the context of systems (6), indexations, and stability of subdifferentials. Section 3 develops the study of the Lipschitz modulus of \( \mathcal {F},\) including the definition of an appropriate indexation which allows us to take advantage of the background about systems (6). Finally, Sect. 4 applies the results of the previous section to tackle the convex case.
2 Preliminaries and first results
To start with, recall that a set-valued mapping \(\mathcal {M} :Y\rightrightarrows X\) between metric spaces (both distances denoted by d) has the Aubin property (also called pseudo-Lipschitz, cf. [15], or Lipschitz-like, cf. [17]) at \(\left( y_{0},x_{0}\right) \in \mathrm {gph}\mathcal {M}\) if there exist a constant \( \kappa \ge 0\) and neighborhoods W of \(x_{0}\) and V of \(y_{0}\) such that
\[
d\left( x_{1},\mathcal {M}\left( y_{2}\right) \right) \le \kappa \,d\left( y_{1},y_{2}\right) \ \text{ whenever }y_{1},y_{2}\in V\text{ and }x_{1}\in \mathcal {M}\left( y_{1}\right) \cap W. \tag{7}
\]
The infimum of constants \(\kappa \) over all \(\left( \kappa ,W,V\right) \) satisfying (7) is called the Lipschitz modulus of \( \mathcal {M}\) at \(\left( y_{0},x_{0}\right) \), denoted by \(\mathrm {lip} \mathcal {M}\left( y_{0},x_{0}\right) \), and it is defined as \(+\infty \) when the Aubin property fails at \(\left( y_{0},x_{0}\right) \). The Aubin property of \(\mathcal {M}\) at \(\left( y_{0},x_{0}\right) \in \mathrm {gph}\mathcal {M}\) is known to be equivalent to the metric regularity of its inverse mapping \( \mathcal {M}^{-1}\) at \(\left( x_{0},y_{0}\right) \); moreover, \(\mathrm {lip} \mathcal {M}\left( y_{0},x_{0}\right) \) is known to coincide with the modulus of metric regularity of \(\mathcal {M}^{-1}\) at \(\left( x_{0},y_{0}\right) \). So, we can write
\[
\mathrm {lip}\,\mathcal {M}\left( y_{0},x_{0}\right) =\limsup _{\left( x,y\right) \rightarrow \left( x_{0},y_{0}\right) }\frac{d\left( x,\mathcal {M}\left( y\right) \right) }{d\left( y,\mathcal {M}^{-1}\left( x\right) \right) }, \tag{8}
\]
under the convention \(\frac{0}{0}:=0.\)
The particularization of (7) to \(y_{2}=y_{0}\) yields the definition of calmness of \(\mathcal {M}\) at \(\left( y_{0},x_{0}\right) \), whose associated calmness modulus, \(\mathrm {clm}\mathcal {M}\left( y_{0},x_{0}\right) \), is defined analogously. It is also known that the calmness of \(\mathcal {M}\) at \(\left( y_{0},x_{0}\right) \) is equivalent to the metric subregularity of \(\mathcal {M}^{-1}\) at \(\left( x_{0},y_{0}\right) \), and that the corresponding moduli do coincide; so,
\[
\mathrm {clm}\,\mathcal {M}\left( y_{0},x_{0}\right) =\limsup _{x\rightarrow x_{0}}\frac{d\left( x,\mathcal {M}\left( y_{0}\right) \right) }{d\left( y_{0},\mathcal {M}^{-1}\left( x\right) \right) }. \tag{9}
\]
Clearly \(\mathrm {clm}\mathcal {M}\left( y_{0},x_{0}\right) \le \mathrm {lip} \mathcal {M}\left( y_{0},x_{0}\right) .\) For additional information about the Aubin property, metric regularity, calmness, metric subregularity, and related topics of variational analysis, the reader is referred to [11, 14, 15, 17, 20].
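For intuition, the moduli just defined can be sampled numerically on a toy mapping (not taken from the paper): \(\mathcal {M}\left( y\right) :=\,]-\infty ,y]\), the feasible set of the single inequality \(x\le y\) in \({\mathbb {R}}\), for which the Aubin inequality (7) holds with \(\kappa =1\).

```python
def dist_to_M(x, y):
    # M(y) = (-inf, y]; the point-to-set distance is d(x, M(y)) = max(0, x - y)
    return max(0.0, x - y)

# brute-force sampling of the ratio in the Aubin inequality around (y0, x0) = (0, 0)
ys = [k / 50.0 for k in range(-10, 11)]
ratios = [dist_to_M(x1, y2) / abs(y1 - y2)
          for y1 in ys for y2 in ys if y1 != y2
          for x1 in ys if x1 <= y1]          # x1 runs over M(y1)
kappa = max(ratios)
assert kappa == 1.0   # consistent with lip M(0, 0) = 1 for this toy mapping
```

The worst ratio is attained at \(x_{1}=y_{1}>y_{2}\), which is exactly the configuration that makes the Lipschitz modulus equal to 1 here.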
Throughout the paper we use the following standard notation: given \(X\subset {\mathbb {R}}^{k},\) \(k\in \mathbb {N},\) we denote by \(\mathrm {conv}X\) the convex hull of X, and \(\mathrm {int}X,\) \(\mathrm {cl}X\) and \(\mathrm {bd}X\) stand, respectively, for the interior, the closure and the boundary of X.
2.1 Indexation strategies and calmness of linear systems
For comparative purposes and as a motivation of the results of Sect. 3, this subsection recalls some details about the indexation scheme introduced in [4]. First, we fix the topologies considered in the space of variables, \({\mathbb {R}}^{n},\) and the parameter spaces, \(CL\left( {\mathbb {R}} ^{n+1}\right) \) and \(\left( {\mathbb {R}}^{n+1}\right) ^{T},\) for a given index set T.
Unless otherwise stated, \({\mathbb {R}}^{n}\) (space of variables) is equipped with an arbitrary norm, \(\left\| \cdot \right\| ,\) while \({\mathbb {R}} ^{n+1}\) (space of coefficient vectors of linear systems) is endowed with the norm
\[
\left\| \left( a,b\right) \right\| :=\max \left\{ \left\| a\right\| _{*},\left| b\right| \right\} ,
\]
where \(\left\| \cdot \right\| _{*}\) represents the dual norm of \( \left\| \cdot \right\| \) in \({\mathbb {R}}^{n}\), which is given by \(\left\| a\right\| _{*}=\sup _{\left\| x\right\| \le 1}\left\langle a,x\right\rangle \).
The space \(CL\left( {\mathbb {R}}^{n+1}\right) \) is endowed with the (extended) Hausdorff distance \(d_{H}:CL\left( {\mathbb {R}}^{n+1}\right) \times CL\left( {\mathbb {R}}^{n+1}\right) \rightarrow [0,+\infty ]\) given by
\[
d_{H}\left( U_{1},U_{2}\right) :=\max \left\{ e\left( U_{1},U_{2}\right) ,e\left( U_{2},U_{1}\right) \right\} ,
\]
where \(e\left( U_{i},U_{j}\right) ,\, i,j=1,2,\) represents the excess of \( U_{i}\) over \(U_{j},\)
\[
e\left( U_{i},U_{j}\right) :=\inf \left\{ \varepsilon >0:U_{i}\subset U_{j}+\varepsilon {\mathbb {B}}\right\} ,
\]
where here \({\mathbb {B}}\) denotes the closed unit ball in \({\mathbb {R}}^{n+1}\). See [1, Section 3.2] for details about the Hausdorff distance in general settings. In particular, the triangle inequality is satisfied by the excess.
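For finite sets both excesses, and hence the Hausdorff distance, are directly computable. The following minimal sketch (finite subsets of the plane with the Euclidean norm, an illustrative choice) shows, in particular, that the two excesses need not coincide.

```python
def excess(U, V):
    # e(U, V) = sup_{u in U} d(u, V), here for finite subsets of R^2
    return max(min(((u[0] - v[0]) ** 2 + (u[1] - v[1]) ** 2) ** 0.5 for v in V)
               for u in U)

def d_H(U, V):
    # Hausdorff distance: the larger of the two excesses
    return max(excess(U, V), excess(V, U))

U1 = [(0.0, 0.0), (1.0, 0.0)]
U2 = [(0.0, 0.0), (3.0, 0.0)]
assert excess(U1, U2) == 1.0        # (1,0) lies at distance 1 from U2
assert excess(U2, U1) == 2.0        # (3,0) lies at distance 2 from U1
assert d_H(U1, U2) == 2.0

# the excess also satisfies the triangle inequality e(A, C) <= e(A, B) + e(B, C)
U3 = [(5.0, 0.0)]
assert excess(U1, U3) <= excess(U1, U2) + excess(U2, U3)
```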
In \(\left( {\mathbb {R}}^{n+1}\right) ^{T}\) we consider the (extended) Chebyshev (or supremum) distance, \(d_{\infty }:\left( {\mathbb {R}}^{n+1}\right) ^{T}\times \left( {\mathbb {R}}^{n+1}\right) ^{T}\rightarrow [0,+\infty ]\), given by
\[
d_{\infty }\left( \sigma _{1},\sigma _{2}\right) :=\sup _{t\in T}\left\| \sigma _{1}\left( t\right) -\sigma _{2}\left( t\right) \right\| .
\]
As commented in the introduction, paper [4] analyzes the calmness of \(\mathcal {F}\) at \(\left( U_{0},x_{0}\right) \in \mathrm {gph}\mathcal {F}\) via the calmness of the feasible set mapping associated with systems (6) with an appropriate index set T. To do this, a particular indexation scheme
is introduced. Recall that \(\sigma _{U}\in \left( {\mathbb {R}}^{n+1}\right) ^{T}\) is said to be an indexation of \(U\in CL\big ( {\mathbb {R}} ^{n+1}\big ) \) if
\[
\mathrm {cl}\left\{ \sigma _{U}\left( t\right) :t\in T\right\} =U;
\]
specifically, [4] considers \(T:={\mathbb {R}}^{n+1}\ \)and assigns to each \(U\in CL\left( {\mathbb {R}}^{n+1}\right) \) an indexation \({\sigma }_{U}\in \left( {\mathbb {R}}^{n+1}\right) ^{{\mathbb {R}}^{n+1}}\) defined as
\[
\sigma _{U}\left( t\right) :=P_{U}\left( P_{U_{0}}\left( t\right) \right) ,\quad t\in {\mathbb {R}}^{n+1}, \tag{10}
\]
where, for each \(U\in CL\left( {\mathbb {R}}^{n+1}\right) ,\) \(P_{U}:{\mathbb {R}}^{n+1}\rightarrow {\mathbb {R}}^{n+1}\) is a particular selection of the metric projection multifunction on U; i.e., \(P_{U}\left( t\right) \) is a best approximation of \(t\in {\mathbb {R}}^{n+1}\) on U. Observe that, in particular, \(\sigma _{U_{0}}=P_{U_{0}}.\) A comparative analysis with other possible indexations, and particularly one given in [7], is carried out in [4, Section 3]. Theorem 3.1 in [4] shows that
\[
\mathrm {clm\,}\mathcal {F}\left( U_{0},x_{0}\right) =\mathrm {clm\,}\mathcal {F}^{T}\left( \sigma _{U_{0}},x_{0}\right) ,
\]
where \(\mathcal {F}^{T}\) denotes the feasible set mapping associated with systems (6) for \(T={\mathbb {R}}^{n+1}\) (see Sect. 3).
Example 3.1 in the same paper shows that \(U\mapsto P_{U}\) is not an adequate indexation scheme in relation to calmness, since Chebyshev distances between projections, \(d_{\infty }\left( P_{U},P_{U_{0}}\right) \), can be much larger than Hausdorff distances between the sets themselves, \(d_{H}\left( U,U_{0}\right) \).
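Assuming the scheme (10) composes projections as \(\sigma _{U}=P_{U}\circ P_{U_{0}}\) (a reading consistent with the fact \(\sigma _{U_{0}}=P_{U_{0}}\) noted above; the precise formula is that of [4]), the contrast between the naive indexation \(U\mapsto P_{U}\) and the composite one can be reproduced on the real line:

```python
def proj(t, U):
    # a selection of the metric projection of t onto the finite set U
    return min(U, key=lambda u: abs(u - t))

U0 = [-1.0, 1.0]
U  = [-1.0, 1.1]          # d_H(U, U0) = 0.1

# naive indexation U -> P_U: projections at a single index t can be far
# apart even though the sets are close (t sits between the switching points)
t = 0.02
assert abs(proj(t, U) - proj(t, U0)) == 2.0     # = |-1 - 1|, much larger than 0.1

# composite indexation sigma_U := P_U o P_{U0}, so that sigma_{U0} = P_{U0}
sigma_U0 = lambda s: proj(s, U0)
sigma_U = lambda s: proj(proj(s, U0), U)
ts = [k / 100.0 for k in range(-300, 301)]
gap = max(abs(sigma_U(s) - sigma_U0(s)) for s in ts)
assert gap <= 0.1 + 1e-12   # d_inf(sigma_U, sigma_U0) = e(U0, U) <= d_H(U, U0)
```

The naive projections disagree by 2 at an index between the two switching points, while the composite indexation stays within \(e\left( U_{0},U\right) \le d_{H}\left( U,U_{0}\right) =0.1\).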
The indexation scheme in (10) is suitable for the study of the calmness property of \(\mathcal {F},\) but it is no longer appropriate for the Aubin property, for which more is needed. Specifically, the current paper introduces a new indexation scheme working on pairs of systems. Formally, given \(U_{1},U_{2}\in CL\left( {\mathbb {R}}^{n+1}\right) \), we define in an appropriate way (see Lemma 3.1) \(\sigma _{1},\sigma _{2}\in \left( {\mathbb {R}}^{n+1}\right) ^{T}\) such that \(\sigma _{i}\) is an indexation of \( U_{i},\) \(i=1,2,\) and \(d_{\infty }\left( \sigma _{1},\sigma _{2}\right) =d_{H}\left( U_{1},U_{2}\right) \) when \(U_{1}\) and \(U_{2}\) are close enough to the nominal set \(U_{0}.\) In addition, we exhibit a Lipschitzian dependence of \(d_{\infty }\left( \sigma _{i},\sigma _{0}\right) ,\) \(i=1,2,\) on the quantities \(d_{H}\left( U_{1},U_{0}\right) \) and \(d_{H}\left( U_{2},U_{0}\right) \) taken together, where \(\sigma _{0}\) is the standard indexation of \(U_{0}\) given by a particular selection of the metric projection onto \(U_{0}.\)
2.2 On the stability of subdifferentials
This subsection gathers some results from [2] about stability of subdifferentials of convex functions at a point \(x_{0}\in {\mathbb {R}}^{n}\), and provides some extensions and consequences on the stability over a compact set \(K_{0}\subset {\mathbb {R}}^{n}.\) These results will be used in Sect. 4.
Given any two functions \(f_{1},f_{2}\in \Gamma \) and a subset \(K\subset {\mathbb {R}}^{n}\) we use the notation
\[
d_{K}\left( f_{1},f_{2}\right) :=\sup _{x\in K}\left| f_{1}\left( x\right) -f_{2}\left( x\right) \right| . \tag{12}
\]
If K is compact, the supremum in (12) is a maximum.
The following theorem gathers two stability conditions for subdifferentials. The first one, which is a direct consequence of [19, Theorem 24.5], provides the Hausdorff upper semicontinuity of the multifunction which assigns to each pair \(\left( f,x\right) \in \Gamma \times {\mathbb {R}}^{n}\) the subdifferential of f at x, \(\partial f\left( x\right) \). On the other hand, condition (ii) expresses a certain uniform lower Hölder-type property.
Theorem 2.1
Let \(x_{0}\in {\mathbb {R}}^{n},\) \(\alpha >0,\) and \(K:=x_{0}+\alpha {\mathbb {B}}.\) One has:
\(\left( i\right) \) Given \(f_{0}\in \Gamma \) and \(\varepsilon >0,\) there exists \(\delta >0\) such that
\[
\partial f\left( x\right) \subset \partial f_{0}\left( x_{0}\right) +\varepsilon {\mathbb {B}}\ \text{ for all }x\in x_{0}+\delta {\mathbb {B}},
\]
provided that \(f\in \Gamma \) satisfies \(d_{K}\left( f,f_{0}\right) \le \delta . \)
\(\left( ii\right) \) [2, Theorem 3.4] For any \(0<\delta \le \alpha ^{2},\) and any \(f_{1},f_{2}\in \Gamma \) such that \( d_{K}(f_{1},f_{2})\le \delta ,\) we have
where in (ii) \({\mathbb {B}}\) denotes the Euclidean closed unit ball.
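The \(\sqrt{\delta }\) scaling in (ii) can be glimpsed numerically. The smoothing below is a hypothetical example (the precise constants of [2, Theorem 3.4] are not reproduced here): subgradients of \(f_{2}\) taken within \(\sqrt{\delta }\) of the kink of \(f_{1}=\left| \cdot \right| \) recover the extreme subgradient \(1\in \partial f_{1}\left( 0\right) \) up to an error of order \(\sqrt{\delta }\).

```python
import math

delta = 1e-4
f1 = abs                                 # kink at 0: ∂f1(0) = [-1, 1]
f2 = lambda x: math.hypot(x, delta)      # smooth and uniformly delta-close to f1

# check the uniform bound d_K(f1, f2) <= delta on K = [-1, 1]
assert max(abs(f2(k / 100.0) - f1(k / 100.0)) for k in range(-100, 101)) <= delta

# f2 is differentiable, so its only subgradient at x is f2'(x)
df2 = lambda x: x / math.hypot(x, delta)

# the extreme subgradient 1 of f1 at 0 is recovered, up to an error of order
# sqrt(delta), by subgradients of f2 taken within sqrt(delta) of the kink
r = math.sqrt(delta)
assert 1.0 - df2(r) <= r
```

No such recovery is possible from \(\partial f_{2}\left( 0\right) =\left\{ 0\right\} \) alone, which is why the enlargement of the reference point by a \(\sqrt{\delta }\)-ball is essential in (ii).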
Corollary 2.1
Let \(K_{0}\subset {\mathbb {R}}^{n}\) be a compact set, \( \alpha >0,\) and \(K:=K_{0}+\alpha {\mathbb {B}}.\) One has:
\(\left( i\right) \) Given \(f_{0}\in \Gamma \) and \(\varepsilon >0,\) there exists \(\delta >0\) such that
\[
\partial f\left( x\right) \subset \partial f_{0}\left( K_{0}\right) +\varepsilon {\mathbb {B}}\ \text{ for all }x\in K_{0}+\delta {\mathbb {B}},
\]
provided that \(f\in \Gamma \) satisfies \(d_{K}\left( f,f_{0}\right) \le \delta .\)
\(\left( ii\right) \) For any \(0<\delta \le \alpha ^{2},\) and any \( f_{1},f_{2}\in \Gamma \) such that \(d_{K}(f_{1},f_{2})\le \delta ,\) we have
where here in (ii) \({\mathbb {B}}\) denotes the Euclidean closed unit ball.
\(\left( iii\right) \) Given \(f_{0}\in \Gamma \) and \(\varepsilon >0,\) there exists \(\delta _{0}>0\) such that for any \(0<\delta \le \delta _{0},\) and any \(f\in \Gamma ,\) with \(d_{K}\left( f,f_{0}\right) \le \delta ,\) one has
Proof
\(\left( i\right) \) follows the same argument as the proof of Theorem 2.1\(\left( i\right) .\) Here we present a sketch for completeness. Arguing by contradiction, assume the existence of sequences \(\left\{ f_{r}\right\} \subset \Gamma \) and \(\{\left( x_{r},u_{r}\right) \}\subset {\mathbb {R}}^{n}\times {\mathbb {R}}^{n}\) such that \(d_{K}\left( f_{r},f_{0}\right) \le \frac{1}{r},\) \(x_{r}\in K_{0}+\frac{1}{r}{\mathbb {B}},\) and \(u_{r}\in \partial f_{r}\left( x_{r}\right) \setminus \left( \partial f_{0}\left( K_{0}\right) +\varepsilon {\mathbb {B}}\right) ,\) \(r=1,2,\ldots ;\) passing to a subsequence if necessary, we may assume that \(\{x_{r}\}\) converges to a certain \(x_{0}\in K_{0}.\) In this way we attain a contradiction with [19, Theorem 24.5].
\(\left( ii\right) \) Comes straightforwardly from Theorem 2.1\(\left( ii\right) .\) Indeed, let \(0<\delta \le \alpha ^{2},\) let \(f_{1},f_{2}\in \Gamma \) be such that \(d_{K}(f_{1},f_{2})\le \delta ,\) and take any \(x_{0}\in K_{0}.\) We have \(d_{x_{0}+\alpha {\mathbb {B}}}(f_{1},f_{2})\le d_{K}(f_{1},f_{2})\le \delta ,\) which entails
\(\left( iii\right) \) Since all norms in \({\mathbb {R}}^{n}\) are topologically equivalent, it is enough to prove the assertion for the Euclidean norm. Take \(f_{0}\in \Gamma \) and \(\varepsilon >0.\) From the statement in \(\left( i\right) \ \)there exists \(\delta _{1}>0\) such that
We may assume \(\delta _{1}\le 1\). Define the number \(\delta _{0}\) by
\[
\delta _{0}:=\min \left\{ \delta _{1}^{2},\alpha ^{2},\left( \frac{\varepsilon }{4}\right) ^{2}\right\} ,
\]
and take \(0<\delta \le \delta _{0},\) and \(f\in \Gamma \) such that \( d_{K}\left( f,f_{0}\right) \le \delta .\) Then, since \(\sqrt{\delta }\le \sqrt{\delta _{0}}\le \delta _{1}\) and \(\delta \le \delta _{1}^{2}\le \delta _{1}\) (which yields \(d_{K}\left( f,f_{0}\right) \le \delta _{1}),\) one has
On the other hand since \(0<\delta \le \alpha ^{2},\) condition \(\left( ii\right) \) yields
where the last inclusion comes from \(\delta \le \left( \frac{\varepsilon }{4 }\right) ^{2}\). \(\square \)
3 Lipschitz modulus of \(\mathcal {F}\) in the Hausdorff setting
In this section we prove that \(\mathrm {lip\,}\mathcal {F}\left( U_{0},x_{0}\right) \) coincides with its counterpart in the Chebyshev setting (6), \(\mathrm {lip\,}\mathcal {F}^{T}\left( \sigma _{0},x_{0}\right) \), where from now on \(T={\mathbb {R}}^{n+1},\) \(\sigma _{0}=P_{U_{0}}\) (see Sect. 2.1) and \(\mathcal {F}^{T}:\left( {\mathbb {R}} ^{n+1}\right) ^{T}\rightrightarrows {\mathbb {R}}^{n}\) is given by
\[
\mathcal {F}^{T}\left( \sigma \right) :=\left\{ x\in {\mathbb {R}}^{n}:\left\langle a_{t},x\right\rangle \le b_{t}\ \text{ for all }t\in T\right\}
\]
for \(\sigma =\left( a_{t},b_{t}\right) _{t\in T}\in \left( {\mathbb {R}} ^{n+1}\right) ^{T}.\)
To do this, the following lemma constitutes a key step. In it we construct appropriate indexations of sets \(U_{1},U_{2}\in CL\left( {\mathbb {R}} ^{n+1}\right) \), denoted by \(\sigma _{1},\sigma _{2}\), which preserve the distance between them; i.e., \(d_{H}\left( U_{1},U_{2}\right) =d_{\infty }\left( \sigma _{1},\sigma _{2}\right) ,\) and we obtain Lipschitz estimates for each \(d_{\infty }\left( \sigma _{i},\sigma _{0}\right) \) in terms of \( d_{H}\left( U_{j},U_{0}\right) ,\) \(j=1,2.\)
Remark 3.1
The easily checked fact that, for any \(\sigma _{1},\sigma _{2}\in \left( {\mathbb {R}}^{n+1}\right) ^{T}\), one has
\[
d_{H}\left( \mathrm {cl}\left\{ \sigma _{1}\left( t\right) :t\in T\right\} ,\mathrm {cl}\left\{ \sigma _{2}\left( t\right) :t\in T\right\} \right) \le d_{\infty }\left( \sigma _{1},\sigma _{2}\right) , \tag{15}
\]
could be used to develop our study for ‘closed convex systems’; i.e., U belonging to the family of closed convex subsets of \({\mathbb {R}} ^{n+1}\) instead of \(CL\left( {\mathbb {R}}^{n+1}\right) \). The advantage of dealing with closed, not necessarily convex, sets U is that this allows more general perturbations, such as discretizations of \(U_{0}\) by grids (see, e.g., [12, Chapter 11], [21], and, more recently, [13], as well as references therein).
For simplicity in the notation, in the sequel we write \(P_{i}\left( t\right) \) instead of \(P_{U_{i}}\left( t\right) \), for \(t\in {\mathbb {R}}^{n+1}\) and \( i=0,1,2\).
Lemma 3.1
Let \(U_{0}\in CL\left( {\mathbb {R}}^{n+1}\right) ,\) \(T:={\mathbb {R}} ^{n+1},\) and \(\sigma _{0}:=P_{0}\in \left( {\mathbb {R}}^{n+1}\right) ^{T}\). Associated with each pair \(U_{1},U_{2}\in CL\left( {\mathbb {R}}^{n+1}\right) \) let us define a pair of functions \(\sigma _{1},\sigma _{2}\in \left( \mathbb { R}^{n+1}\right) ^{T}\) as follows: for each \(t\in {\mathbb {R}}^{n+1}\),
and
Then we have
Proof
Take any pair \(U_{1},U_{2}\in CL\left( {\mathbb {R}}^{n+1}\right) ,\) and the associated functions \(\sigma _{1},\sigma _{2}\in \left( {\mathbb {R}} ^{n+1}\right) ^{T}\) defined in (16) and (17). First, let us see that
For \(t\in U_{1}\) we have
For \(t\in U_{2}\) we have
For \(t\notin U_{1}\cup U_{2}\) we have
In summary, (18) holds in any case.
Now, let us check that
For \(t\in U_{1}\cup U_{2}\) the arguments are completely analogous to those of \(\sigma _{1}.\) For \(t\notin U_{1}\cup U_{2}\) we have
So, we have established (19).
The last step consists of checking \(d_{\infty }\left( \sigma _{1},\sigma _{2}\right) =d_{H}\left( U_{1},U_{2}\right) .\) On the one hand,
and, analogously, \(\sup _{t\in U_{2}}\left\| \sigma _{1}\left( t\right) -\sigma _{2}\left( t\right) \right\| =e\left( U_{2},U_{1}\right) .\) On the other hand, for all \(t\notin U_{1}\cup U_{2}\) we have
\(\square \)
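The exact formulas (16) and (17) are those of the paper; as a hypothetical stand-in, the sketch below keeps each index \(t\in U_{i}\) fixed under \(\sigma _{i}\) and routes all remaining indices through projections. On random finite sets this construction reproduces the identity \(d_{\infty }\left( \sigma _{1},\sigma _{2}\right) =d_{H}\left( U_{1},U_{2}\right) \) established by Lemma 3.1.

```python
import numpy as np

def dist(t, U):
    return min(float(np.linalg.norm(np.subtract(u, t))) for u in U)

def proj(t, U):
    # a selection of the metric projection onto the finite set U
    return min(U, key=lambda u: np.linalg.norm(np.subtract(u, t)))

def excess(U, V):
    return max(dist(u, V) for u in U)

def d_H(U, V):
    return max(excess(U, V), excess(V, U))

def paired_indexations(U0, U1, U2):
    # hypothetical stand-in for (16)-(17): each sigma_i fixes the indices
    # lying in U_i and routes every other index through projections
    def s1(t):
        if t in U1: return t
        if t in U2: return proj(t, U1)
        return proj(proj(t, U0), U1)
    def s2(t):
        if t in U2: return t
        if t in U1: return proj(t, U2)
        return proj(s1(t), U2)
    return s1, s2

rng = np.random.default_rng(0)
U0 = [tuple(p) for p in rng.normal(size=(6, 2))]
U1 = [tuple(p) for p in rng.normal(size=(5, 2))]
U2 = [tuple(p) for p in rng.normal(size=(7, 2))]
s1, s2 = paired_indexations(U0, U1, U2)

# sampling the supremum defining d_inf over random indices plus U1 and U2
# recovers exactly the Hausdorff distance between the two sets
ts = [tuple(p) for p in rng.normal(size=(200, 2))] + U1 + U2
sup = max(float(np.linalg.norm(np.subtract(s1(t), s2(t)))) for t in ts)
assert abs(sup - d_H(U1, U2)) < 1e-9
```

Indeed, indices in \(U_{1}\) contribute exactly \(e\left( U_{1},U_{2}\right) \), indices in \(U_{2}\) contribute \(e\left( U_{2},U_{1}\right) \), and all remaining indices contribute at most \(e\left( U_{1},U_{2}\right) \).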
Theorem 3.1
Let \(U_{0}\in CL\left( {\mathbb {R}}^{n+1}\right) \) and let \( \sigma _{0}:=P_{0}\in \left( {\mathbb {R}}^{n+1}\right) ^{T}.\) We have, for every \(x_{0}\in \mathcal {F}\left( U_{0}\right) \),
\[
\mathrm {lip\,}\mathcal {F}\left( U_{0},x_{0}\right) =\mathrm {lip\,}\mathcal {F}^{T}\left( \sigma _{0},x_{0}\right) . \tag{20}
\]
Proof
In order to prove ‘\(\le \)’ in (20) we assume the nontrivial case \( \mathrm {lip\,}\mathcal {F}^{T}\left( \sigma _{0},x_{0}\right) <+\infty .\) Let \(\varepsilon >0\) be arbitrarily given. By the definition of \( \mathrm {lip\,}\mathcal {F}^{T}\left( \sigma _{0},x_{0}\right) \), there exists \(\delta >0\) such that, for all \(\sigma _{1},\sigma _{2}\in \left( {\mathbb {R}} ^{n+1}\right) ^{T}\) with \(d_{\infty }\left( \sigma _{i},\sigma _{0}\right) <\delta ,\) \(i=1,2,\) and all \(x_{1}\in \mathcal {F}^{T}\left( \sigma _{1}\right) \) with \(\left\| x_{1}-x_{0}\right\| <\delta ,\) one has
\[
d\left( x_{1},\mathcal {F}^{T}\left( \sigma _{2}\right) \right) \le \left( \mathrm {lip\,}\mathcal {F}^{T}\left( \sigma _{0},x_{0}\right) +\varepsilon \right) d_{\infty }\left( \sigma _{1},\sigma _{2}\right) . \tag{21}
\]
We are going to prove that, for all \(U_{1},U_{2}\in CL\left( {\mathbb {R}} ^{n+1}\right) \) with \(d_{H}\left( U_{i},U_{0}\right) <\delta /3,\) \(i=1,2,\) and all \(x_{1}\in \mathcal {F}\left( U_{1}\right) \) with \(\left\| x_{1}-x_{0}\right\| <\delta ,\) one has
\[
d\left( x_{1},\mathcal {F}\left( U_{2}\right) \right) \le \left( \mathrm {lip\,}\mathcal {F}^{T}\left( \sigma _{0},x_{0}\right) +\varepsilon \right) d_{H}\left( U_{1},U_{2}\right) .
\]
Once this is proved, we will conclude that \(\mathrm {lip\,}\mathcal { F}\left( U_{0},x_{0}\right) \le \mathrm {lip\,}\mathcal {F}^{T}\left( \sigma _{0},x_{0}\right) .\) Let \(U_{1},\) \(U_{2},\) and \(x_{1}\) be given as above. Associated with the pair \(U_{1},U_{2}\) consider the pair of indexations \( \sigma _{1},\sigma _{2}\in \left( {\mathbb {R}}^{n+1}\right) ^{T}\) proposed in the previous lemma. Then, we have \(d_{\infty }\left( \sigma _{1},\sigma _{2}\right) =d_{H}\left( U_{1},U_{2}\right) \) and, by the estimates in Lemma 3.1, \(d_{\infty }\left( \sigma _{i},\sigma _{0}\right) <\delta ,\) \(i=1,2.\)
Then, the aimed result follows straightforwardly from (21). Specifically,
\[
d\left( x_{1},\mathcal {F}\left( U_{2}\right) \right) =d\left( x_{1},\mathcal {F}^{T}\left( \sigma _{2}\right) \right) \le \left( \mathrm {lip\,}\mathcal {F}^{T}\left( \sigma _{0},x_{0}\right) +\varepsilon \right) d_{\infty }\left( \sigma _{1},\sigma _{2}\right) =\left( \mathrm {lip\,}\mathcal {F}^{T}\left( \sigma _{0},x_{0}\right) +\varepsilon \right) d_{H}\left( U_{1},U_{2}\right) .
\]
This finishes the proof of the inequality ‘\(\le \)’ in (20).
To prove the opposite inequality, we suppose again the nontrivial case \( \mathrm {lip\,}\mathcal {F}\left( U_{0},x_{0}\right) <+\infty \). Then, for an arbitrary \(\varepsilon >0,\) there exists \(\delta >0\) such that for all \( U_{1},U_{2}\in CL\left( {\mathbb {R}}^{n+1}\right) \) with \(d_{H}\left( U_{i},U_{0}\right) <\delta ,\) \(i=1,2,\) and all \(x_{1}\in \mathcal {F}\left( U_{1}\right) \) with \(\left\| x_{1}-x_{0}\right\| <\delta ,\) the following inequality holds
\[
d\left( x_{1},\mathcal {F}\left( U_{2}\right) \right) \le \left( \mathrm {lip\,}\mathcal {F}\left( U_{0},x_{0}\right) +\varepsilon \right) d_{H}\left( U_{1},U_{2}\right) . \tag{24}
\]
For the same \(\delta \), consider any pair \(\sigma _{1},\sigma _{2}\in \left( {\mathbb {R}}^{n+1}\right) ^{T}\) with \(d_{\infty }\left( \sigma _{i},\sigma _{0}\right) <\delta ,\) \(i=1,2,\) and any \(x_{1}\in \mathcal {F}^{T}\left( \sigma _{1}\right) \) with \(\Vert x_{1}-x_{0}\Vert < \delta \). Then, appealing to (15) we have
and we conclude, from (15) and (24), that
Hence \(\mathrm {lip\,}\mathcal {F}^{T}\left( \sigma _{0},x_{0}\right) \le \mathrm {lip\,}\mathcal {F}\left( U_{0},x_{0}\right) ,\) since \(\varepsilon >0\) was arbitrarily chosen,
and we are done. \(\square \)
The following corollary characterizes the Aubin property of \(\mathcal {F}\) at \(\left( U_{0},x_{0}\right) \) in terms of the well-known strong Slater condition (SSC, in brief), which in this context reads as follows: \( U_{0}\in CL\left( {\mathbb {R}}^{n+1}\right) \) satisfies the SSC when there exists \(\widehat{x}\in {\mathbb {R}}^{n}\) (called an SS point of \(U_{0}\) ) such that \(\sup _{\left( a,b\right) \in U_{0}}\left( \left\langle a, \widehat{x}\right\rangle -b\right) <0\).
Corollary 3.1
Let \(\left( U_{0},x_{0}\right) \in \mathrm {gph}\mathcal {F} \). The following statements are equivalent:
(i) \(\mathcal {F}\) has the Aubin property at \(\left( U_{0},x_{0}\right) \);

(ii) \(U_{0}\) satisfies the SSC;

(iii) \(0_{n+1}\notin {\mathrm{cl}}\,{\mathrm{conv}}\,U_{0}\).
Proof
From the previous theorem, it is clear that \(\mathcal {F}\) has the Aubin property at \(\left( U_{0},x_{0}\right) \) if and only if \(\mathcal {F}^{T}\) enjoys the same property at \(\left( \sigma _{0},x_{0}\right) ,\) with \(\sigma _{0}{:}{=}P_{0},\) which is known to be equivalent to the SSC for \(\sigma _{0};\) i.e., there exists \(\widehat{x}\in {\mathbb {R}}^{n}\) such that \(\sup _{t\in T}\left( \left\langle a_{t}^{0},\widehat{x}\right\rangle -b_{t}^{0}\right) <0,\) with \(\sigma _{0}=\left( a_{t}^{0},b_{t}^{0}\right) _{t\in T}\in \left( {\mathbb {R}}^{n+1}\right) ^{T}\) (see, e.g., [12, Theorem 6.1] and [5, Corollary 5]). Finally, the SSC for \(\sigma _{0}\) is trivially equivalent to the same property for \(U_{0}.\) This proves the equivalence between (i) and (ii). The equivalence with (iii) comes from [12, Theorem 6.1]. \(\square \)
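The equivalences of Corollary 3.1 can be observed on two small finite systems in \({\mathbb {R}}\) (illustrative choices; the brute-force Slater search below is a numerical device, not part of the theory):

```python
def satisfies_ssc(U, xs):
    # crude search for a strong Slater point among the candidate points xs
    return any(max(a * x - b for (a, b) in U) < 0 for x in xs)

U_good = [(1.0, 1.0), (-1.0, 1.0)]    # system  x <= 1, -x <= 1   (n = 1)
U_bad  = [(1.0, 0.0), (-1.0, 0.0)]    # system  x <= 0, -x <= 0

xs = [k / 10.0 for k in range(-50, 51)]
assert satisfies_ssc(U_good, xs)      # x = 0 has slack 1 in both inequalities
assert not satisfies_ssc(U_bad, xs)   # max(x, -x) >= 0 for every real x

# (iii): 0_{n+1} lies in conv U_bad (it is the midpoint of the two points),
# while conv U_good is a segment at height b = 1 missing the origin
mid = lambda U: ((U[0][0] + U[1][0]) / 2.0, (U[0][1] + U[1][1]) / 2.0)
assert mid(U_bad) == (0.0, 0.0)
assert mid(U_good) == (0.0, 1.0)
```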
In the particular case when \(\left\{ a\in {\mathbb {R}}^{n}:\left( a,b\right) \in U_{0}\right\} \) is bounded, we can provide a data-based formula for \( \mathrm {lip\,}\mathcal {F}\left( U_{0},x_{0}\right) \) by applying the following result.
Theorem 3.2
(see [3, Theorem 1]) Let \(\left( \sigma _{0},x_{0}\right) \in \mathrm {gph}\mathcal {F}^{T}\), with \(\sigma _{0}=\left( a_{t}^{0},b_{t}^{0}\right) _{t\in T}\in \left( {\mathbb {R}} ^{n+1}\right) ^{T}\). Assume that \(\left\{ a_{t}^{0},\text { }t\in T\right\} \) is bounded. Then
\[
\mathrm {lip\,}\mathcal {F}^{T}\left( \sigma _{0},x_{0}\right) =\frac{1}{d_{*}\left( 0_{n},C_{0}\right) },
\]
where \(d_{*}\) represents the distance associated with \(\Vert \cdot \Vert _{*}\) and
Remark 3.2
To the best of the authors’ knowledge, the validity of the previous theorem without the boundedness assumption on \(\left\{ a_{t}^{0},\,t\in T\right\} \) remains an open problem. The theorem includes the cases \(\mathrm {lip\,}\mathcal {F}^{T}\left( \sigma _{0},x_{0}\right) =0\), i.e. \(C_{0}=\emptyset ,\) under the convention \(d_{*}\left( 0_{n},\emptyset \right) =+\infty ,\) and \(\mathrm {lip\,}\mathcal {F} ^{T}\left( \sigma _{0},x_{0}\right) =+\infty ,\) i.e. \(0_{n}\in C_{0}\) (or, equivalently, the SSC fails at \(\sigma _{0}\)). When \(\left\{ a_{t}^{0},\,t\in T\right\} \) is bounded, \( \mathrm {lip\,}\mathcal {F}^{T}\left( \sigma _{0},x_{0}\right) =0\) is equivalent to the fact that \(x_{0}\) is an SS point of \(\sigma _{0},\) i.e. \(\sup _{t\in T}\left( \left\langle a_{t}^{0},x_{0}\right\rangle -b_{t}^{0}\right) <0\) (see [3, Theorem 1] for details). This is no longer true when \(\left\{ a_{t}^{0},\,t\in T\right\} \) is unbounded, as the following example shows.
Example 3.1
Consider the system, in \({\mathbb {R}}\) with the usual metric, \( \sigma _{0}{:}{=}\{ tx\le 1/t,~t\in T{:}{=}[ 1,+\infty [ \} .\) One can easily check that \(x_{0}{:}{=}0\) is not an SS point of \( \sigma _{0}\) whereas \(C_{0}=\emptyset \). In order to see that \(\mathrm {lip}\,\mathcal {F}^{T}\left( \sigma _{0},x_{0}\right) =0\), consider any \(\sigma _{i}=\left( a_{t}^{i},b_{t}^{i}\right) _{t\in T}\in \left( {\mathbb {R}}^{n+1}\right) ^{T}\) with \(d_{\infty }\left( \sigma _{i},\sigma _{0}\right)<\varepsilon <1\), for \(i=1,2\); then \( \mathcal {F}^{T}\left( \sigma _{i}\right) =] -\infty ,u\left( \sigma _{i}\right) ] \), where \(u\left( \sigma _{i}\right) {:}{=}\inf _{t\ge 1/\varepsilon }\left( b_{t}^{i}/a_{t}^{i}\right) \le 0\). Denoting by \(\delta {:}{=}d_{\infty }\left( \sigma _{1},\sigma _{2}\right) \) and writing \(u\left( \sigma _{2}\right) =\lim _{r\rightarrow \infty }\left( b_{t_{r}}^{2}/a_{t_{r}}^{2}\right) \) for some sequence \(\left\{ t_{r}\right\} _{r\in \mathbb {N}}\subset [ 1/\varepsilon ,+\infty [\), we can easily check that
By symmetry we can replace \(u\left( \sigma _{1}\right) -u\left( \sigma _{2}\right) \) with \(\left| u\left( \sigma _{1}\right) -u\left( \sigma _{2}\right) \right| \) in the previous inequality. Accordingly, for any \(x_{1}\in \mathcal {F}\left( \sigma _{1}\right) \), one has
Letting \(\varepsilon \searrow 0\) and recalling (7) we conclude \(\mathrm {lip}\mathcal {F}^{T}\left( \sigma _{0},x_{0}\right) =0\).
The following corollary comes straightforwardly from the two previous theorems.
Corollary 3.2
Let \(U_{0}\in CL\left( {\mathbb {R}}^{n+1}\right) \) be such that \(\left\{ a\in {\mathbb {R}}^{n}: \exists b \in \mathbb {R} \hbox { with } \left( a,b\right) \in U_{0}\right\} \) is bounded. Then,
where
4 Application to convex inequalities
This section is devoted to applying the previous results about linear systems to the convex inequality (2). Throughout this section \({{\mathbb {R}}}^{n}\) is endowed with the Euclidean norm, also denoted by \(\left\| \cdot \right\| \) for simplicity, and \({\mathbb {B}}\) is the corresponding closed unit ball.
We consider the feasible set mapping \(\mathcal {L}:\Gamma \rightrightarrows {{\mathbb {R}}}^{n}\) assigning to each function \(f\in \Gamma \) its zero-(sub)level set
\[
\mathcal {L}\left( f\right) :=\left\{ x\in {{\mathbb {R}}}^{n}:f\left( x\right) \le 0\right\} .
\]
It is well-known that, for each \(f\in \Gamma ,\) \(\mathcal {L}\left( f\right) \subset {\mathbb {R}}^{n}\) is a closed convex set and, as commented in Sect. 1, via linearization, it can be written as the feasible set of a linear semi-infinite inequality system of the form (3); i.e.,
\[
\mathcal {L}\left( f\right) =\left\{ x\in {\mathbb {R}}^{n}:\left\langle a,x\right\rangle \le \left\langle a,z\right\rangle -f\left( z\right) ,\ \left( z,a\right) \in \mathrm {gph}\partial f\right\} . \tag{25}
\]
First, let us see that we can reduce the index set of system (25) to a certain subset of \(\mathrm {gph}\partial f\), provided that \(\mathcal {L} \left( f\right) \ne \emptyset .\)
Lemma 4.1
Let \(f\in \Gamma \) with \(\mathcal {L}\left( f\right) \ne \emptyset \) and let \(X\subset {\mathbb {R}}^{n}\) be a neighborhood of \(\mathcal {L}\left( f\right) \) (i.e., X contains an open set containing \( \mathcal {L}\left( f\right) \)). Then,
\[
\mathcal {L}\left( f\right) =\left\{ x\in {\mathbb {R}}^{n}:\left\langle a,x\right\rangle \le \left\langle a,z\right\rangle -f\left( z\right) ,\ \left( z,a\right) \in \mathrm {gph}\partial f,\ z\in X\right\} . \tag{26}
\]
Proof
The inclusion ‘\(\subset \)’ is trivial from (25). Let us prove ‘\( \supset \)’ reasoning by contradiction. Assume the existence of \(x_{0}\in {\mathbb {R}}^{n}\) such that
\[
\left\langle a,x_{0}\right\rangle \le \left\langle a,z\right\rangle -f\left( z\right) \ \text{ for all }\left( z,a\right) \in \mathrm {gph}\partial f\text{ with }z\in X, \tag{27}
\]
and \(x_{0}\notin \mathcal {L}\left( f\right) ,\) which entails \(f\left( x_{0}\right) >0.\) First, note that \(x_{0}\notin X;\) otherwise, taking \( z=x_{0} \) in (27), we would have \(\left\langle a,x_{0}\right\rangle \le \left\langle a,x_{0}\right\rangle -f\left( x_{0}\right) ,\) for any \(a\in \partial f\left( x_{0}\right) \), yielding the contradiction \(f\left( x_{0}\right) \le 0.\) Once we know that \(x_{0}\notin X,\) let \(x_{1}\) be the Euclidean projection of \(x_{0}\) on \(\mathcal {L}\left( f\right) \subset X,\) and define \(x^{\lambda }{:}{=}\left( 1-\lambda \right) x_{1}+\lambda x_{0}\), which does not belong to \(\mathcal {L}\left( f\right) \) for any \(0<\lambda <1. \) Observe that for each \(0<\lambda <1,\) \(x^{\lambda }\) also verifies the linear inequalities of the right member in (27), i.e.,
\[
\left\langle a,x^{\lambda }\right\rangle \le \left\langle a,z\right\rangle -f\left( z\right) ,\quad \left( z,a\right) \in \mathrm {gph}\partial f,\ z\in X.
\]
Then, arguing as in the previous paragraph, \(x^{\lambda }\notin X\), \( 0<\lambda <1\), which represents a contradiction since we can choose \( x^{\lambda }\) sufficiently close to \(x^{1}\) to ensure \(x^{\lambda }\in X\).
Remark 4.1
The assumption \(\mathcal {L}\left( f\right) \ne \emptyset \) in the previous lemma is not superfluous. Just consider \(f:{\mathbb {R}}\longrightarrow {\mathbb {R}}\) given by \(f\left( x\right) {:}{=}e^{x}\) and any nonempty subset \(X\subset {\mathbb {R}}\) which is bounded from below. Indeed, if \(\beta {:}{=}\inf X,\) one can check that the set in the right-hand side of (26) equals \(] -\infty ,\beta -1] .\)
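The computation in this remark can be checked numerically. The following sketch (our illustration, not part of the original argument) samples the index set \(X=[0,\infty )\) and recovers \(\beta -1=-1\) as the supremum of the linearized solution set:

```python
import numpy as np

# Numeric check of Remark 4.1 (a sketch; the index set X is sampled, not exhaustive).
# f(x) = exp(x) has empty zero-sublevel set, yet the linearized system
#   <a, x> <= <a, z> - f(z),  a in subdiff f(z) = {exp(z)},  z in X,
# has the nonempty solution set (-inf, beta - 1] with beta = inf X.
f = np.exp
df = np.exp  # f is differentiable, so the subdifferential reduces to the derivative

X = np.linspace(0.0, 10.0, 2001)          # a sample of X = [0, inf), so beta = 0
beta = X.min()

# each z contributes the constraint exp(z)*x <= exp(z)*z - exp(z), i.e. x <= z - 1
upper_bounds = (df(X) * X - f(X)) / df(X)
sup_of_solution_set = upper_bounds.min()  # the solution set is (-inf, this value]

print(sup_of_solution_set)  # -1.0 = beta - 1
```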
From now on \(f_{0}\in \Gamma \) is our nominal convex function, with \( \mathcal {L}(f_{0})\) assumed to be nonempty, \(\alpha _{0}>0\) is a fixed scalar, and \(E_{0}\subset {\mathbb {R}}^{n}\) is the \(\alpha _{0}\)-enlargement of the nominal feasible set \(\mathcal {L}(f_{0});\) i.e.,
Observe that \(E_{0}\) is a closed convex set. As a consequence of the previous lemma, we have
Note that, in the previous expression, \(E_{0}\) cannot be replaced with \( \mathcal {L}\left( f_{0}\right) ,\) as the reader can easily check by considering \(f_{0}:{\mathbb {R}}\longrightarrow {\mathbb {R}}\) given by \( f_{0}\left( x\right) {:}{=}x^{2}.\) Going further, the following result ensures that we can keep the same \(E_{0}\) in the linear representation of \(\mathcal {L}(f)\) provided that \(f\in \Gamma \) is close enough to \(f_{0}\) with respect to the pseudo-distance \(d_{E_{0}}\) defined in (12).
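For \(f_{0}(x)=x^{2}\) the phenomenon can be seen numerically: indexing the linearization only by \(z\in \mathcal {L}(f_{0})=\{0\}\) yields the inequality \(0\le 0\), satisfied by every x, while indexing over the enlargement \(E_{0}\) recovers \(\{0\}\). A sampled sketch of ours (with a small tolerance for the discretization):

```python
import numpy as np

# Why E0 (and not L(f0) itself) is needed in (26), sketched for f0(x) = x**2,
# where L(f0) = {0} and E0 = [-a0, a0].
a0 = 1.0
f0 = lambda x: x**2
df0 = lambda x: 2*x

Z = np.linspace(-a0, a0, 4001)   # sample of the index set E0
xs = np.linspace(-2.0, 2.0, 801) # candidate points

# constraint indexed by z:  df0(z)*x <= df0(z)*z - f0(z)  (= z**2 here)
feas_E0 = [x for x in xs if np.all(df0(Z)*x <= df0(Z)*Z - f0(Z) + 1e-9)]
# indexing only by z = 0 gives the single constraint 0 <= 0
feas_L = [x for x in xs if df0(0.0)*x <= df0(0.0)*0.0 - f0(0.0)]

print(min(feas_E0), max(feas_E0))  # both ~0: the linearization over E0 gives {0}
print(len(feas_L) == len(xs))      # True: indexing only by z = 0 gives all of R
```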
Theorem 4.1
Let \(f_{0}\in \Gamma \) be such that
with \(E_{0}\) defined in (28). Then
for every \(f\in \Gamma \) such that \(d_{E_{0}}\left( f,f_{0}\right) <m/2.\)
Proof
Consider any convex function \(f\in \Gamma \) such that \(d_{E_{0}}\left( f,f_{0}\right) <m/2\). Let us see that
Then, the statement of the theorem follows from the previous lemma.
For any \(x\in \mathrm {bd}\,E_{0}\) we have
while, for \(x\in \mathcal {L}\left( f_{0}\right) \) one has
Now, arguing by contradiction, assume that there exists \(x_{1}\in \mathcal {L} \left( f\right) \setminus \mathrm {int}\,E_{0},\) take any \(x_{0}\in \mathcal {L}\left( f_{0}\right) \subset \mathrm {int}\,E_{0}\), and let \( \lambda \in ] 0,1]\) be such that
Then we attain the following contradiction with (29):
\(\square \)
Remark 4.2
With the notation of the previous theorem, \(m>0\) whenever \(\mathcal {L}\left( f_{0}\right) \) is bounded, since in that case m is the minimum of the continuous function \(f_{0}\) on the compact set \(\mathrm {bd}\,E_{0},\) where the function is positive.
The following example shows that the assertion of the previous theorem may fail when \(m=0.\)
Example 4.1
Let us consider the continuously differentiable convex function \( g(x,y)=\frac{x^{2}}{y}\) defined on \(S{:}{=}\{(x,y):y>|x|\}\subset {\mathbb {R}}^{2}\). Since its gradient is bounded in norm by \(\sqrt{5} \) on S, the mean value theorem allows us to see that g is \(\sqrt{5}\)-Lipschitz on S, and Theorem 4.1.7 in [8] establishes that
is a convex extension of g to the whole space \({\mathbb {R}} ^{2}\), and the Lipschitz constant \(\sqrt{5}\) is preserved by the extension (see also Theorem 1 in [9]). The function \(f_{0}\) is not everywhere differentiable.
First, let us see that, with the notation of the previous theorem, \( m=0.\) It is easy to see that \(f_{0}\) only takes nonnegative values, \(\mathcal {L}\left( f_{0}\right) =\left\{ 0\right\} \times {\mathbb {R}} _{+},\) and \(\left( \alpha _{0},r\right) \in \mathrm {bd}\,E_{0}\) for all \(r\in \mathbb {N}\). Moreover, for all \(r>\alpha _{0}\) one has \(f_{0}\left( \alpha _{0},r\right) =\frac{\alpha _{0}^{2}}{r} \rightarrow 0\) as \(r\rightarrow \infty ,\) which shows that \(m=0\). Now, for all \(\varepsilon >0\) and all \(\left( x,y\right) \in {\mathbb {R}}^{2}\), define
where \(\left[ z\right] _{+}{:}{=}\max \left\{ z,0\right\} \) denotes the positive part of \(z\in {\mathbb {R}}\). Observe that \([ \left| x\right| -\alpha _{0}-\varepsilon ] _{+}\) is precisely the distance from \(\left( x,y\right) \) to the strip \(\left[ -\alpha _{0}-\varepsilon ,\alpha _{0}+\varepsilon \right] \times {\mathbb {R}},\) which is a neighborhood of \(E_{0}\). Therefore, \(\widetilde{f}_{\varepsilon }\) and \(f_{\varepsilon }\) coincide on \(\left[ -\alpha _{0}-\varepsilon ,\alpha _{0}+\varepsilon \right] \times {\mathbb {R}}\) and their subdifferentials coincide on \(E_{0}\). Then
Reasoning by contradiction, let us suppose that the conclusion of Theorem 4.1 remains valid in this case; i.e., that there exists \(\eta >0\) such that, for \(\varepsilon <\eta ,\) we have
Then, on the one hand, \(\left| x\right| >\alpha _{0}+2\varepsilon \) implies \(\widetilde{f}_{\varepsilon }\left( x,y\right) >-\varepsilon +\varepsilon =0\) which entails
On the other hand,
In particular, \(\mathcal {L}\left( f_{\varepsilon }\right) \) contains points outside \(\left[ -\alpha _{0}-2\varepsilon ,\alpha _{0}+2\varepsilon \right] \times {\mathbb {R}},\) contradicting (30).
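The two quantitative claims of this example, the gradient bound \(\sqrt{5}\) on S and \(m=0\), can be checked numerically. A sketch of ours (random sampling, an illustration rather than a proof):

```python
import numpy as np

# Numeric sanity checks for Example 4.1.
g = lambda x, y: x**2 / y

rng = np.random.default_rng(0)
# sample points of S = {(x, y): y > |x|}
x = rng.uniform(-10, 10, 10000)
y = np.abs(x) + rng.uniform(1e-3, 10, 10000)

# gradient of g is (2x/y, -(x/y)**2); with t = x/y in (-1, 1),
# its squared norm is 4*t**2 + t**4 < 5
grads = np.stack([2*x/y, -(x/y)**2])
norms = np.linalg.norm(grads, axis=0)
print(norms.max() <= np.sqrt(5))  # True

# m = 0: along the boundary points (a0, r) of E0, r -> inf,
# f0(a0, r) = g(a0, r) = a0**2 / r -> 0, while f0 > 0 there
a0 = 1.0
vals = [g(a0, r) for r in (10, 100, 1000, 10000)]
print(vals)  # a0**2 / r, decreasing towards 0
```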
Remark 4.3
In relation to the previous example, we note the following:
1)
Another possible convex extension of g to \({\mathbb {R}}^{2}\) is given by
$$\begin{aligned} \widetilde{f}_{0}\left( x,y\right) =\sup \limits _{\left| u\right| <v}\left\{ g\left( u,v\right) +\left\langle \nabla g\left( u,v\right) ,\left( x-u,y-v\right) \right\rangle \right\} , \end{aligned}$$

yielding \(\widetilde{f}_{0}\left( x,y\right) =2\left| x\right| -y\) if \(\left| x\right| \ge y.\)
2)
Note that \(\left( x,y\right) \mapsto \dfrac{x^{2}}{y}\) cannot be extended as a convex function from \(\left\{ \left( x,y\right) \in {\mathbb {R}} ^{2}\mid y>0\right\} \) (where it is also convex) to the whole plane (see [22]).
3)
The values of the extension \(f_{0}\) outside \(S\cup \{(0,0)\}\) are positive, but they are not used in the reasoning above.
The following lemma constitutes a key tool for our purposes. It concerns sets in \({\mathbb {R}}^{n+1}\) of the form
with \(K\subset {\mathbb {R}}^{n}\) and \(f\in \Gamma .\)
Lemma 4.2
Let \(K_{1},K_{2}\subset {\mathbb {R}}^{n}\) be compact sets, \( f_{1},f_{2}\in \Gamma \), and consider the sets \(U(f_{i},K_{i}),\ i=1,2.\) Then,
where \(\rho {:}{=}\max \{1+\left\| x\right\| :x\in K_{1}\cup K_{2}\}.\)
Proof
Take \(K_{i},f_{i},U(f_{i},K_{i}),\) for \(i=1,2,\) and \(\rho ,\) as in the statement of the lemma. Let us establish the inequality
which yields, by symmetry, the desired inequality (31). For simplicity, in this proof we use the notation
Since \(\partial f_{1}\left( K_{1}\right) \) and \(\partial f_{2}\left( K_{2}\right) \) are compact subsets in \({\mathbb {R}}^{n}\) (see again [19, Theorem 24.7]), \(\xi \) is finite, and in particular we have that
Now, take any \(\left( a_{1},b_{1}\right) \in U(f_{1},K_{1}),\) and let us prove the existence of \(\left( a_{2},b_{2}\right) \in U(f_{2},K_{2})\) such that
By definition, \(\left( a_{1},b_{1}\right) \in U(f_{1},K_{1})\) entails the existence of \(z_{1}\in K_{1}\) such that
and, by (33), we can write
Define
and let us establish (34) for such an element \(\left( a_{2},b_{2}\right) \). On the one hand,
where for the first inequality we have applied the fact that \(a_{2}\in \partial f_{2}\left( z_{2}\right) .\)
On the other hand, since \(a_{1}\in \partial f_{1}\left( z_{1}\right) ,\) we have
So, we have established
Finally, we have (recall that \(\rho >1\))
which yields (34) and the proof is complete. \(\square \)
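Lemma 4.2 can be illustrated numerically in the smooth one-dimensional case, where \(\partial f_{i}(z)=\{f_{i}'(z)\}\) and \(U(f_{i},K)\) is a curve in \({\mathbb {R}}^{2}\). The bound tested below, \(d_{H}(U_{1},U_{2})\le \rho \left( d_{H}(\partial f_{1}(K),\partial f_{2}(K))+d_{K}(f_{1},f_{2})\right) \), is our reading of (31) in the light of Remark 4.5; the exact weighting in the statement may differ. A sampled sketch:

```python
import numpy as np

def hausdorff(A, B):
    # discrete Hausdorff distance between two point clouds (rows = points)
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return max(D.min(axis=1).max(), D.min(axis=0).max())

K = np.linspace(-1.0, 1.0, 401)                       # sample of K1 = K2 = [-1, 1]
f1, df1 = (lambda x: x**2),         (lambda x: 2*x)
f2, df2 = (lambda x: x**2 + 0.1*x), (lambda x: 2*x + 0.1)

# U(f, K) = {(a, <a, z> - f(z)) : z in K, a in subdiff f(z)}
U1 = np.column_stack([df1(K), df1(K)*K - f1(K)])
U2 = np.column_stack([df2(K), df2(K)*K - f2(K)])

lhs = hausdorff(U1, U2)
d_grads = hausdorff(df1(K)[:, None], df2(K)[:, None])  # = 0.1 here
d_K = np.max(np.abs(f1(K) - f2(K)))                    # = 0.1 here
rho = 1 + np.max(np.abs(K))                            # = 2 here
print(lhs <= rho * (d_grads + d_K))                    # True in this sample
```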
In order to present the announced results on the Lipschitz behavior of the feasible set of convex inequalities, we consider the set \(U(f_{0},E_{0})\) and appeal to the constant
where, as in Corollary 3.2,
Remark 4.4
If \(\mathcal {L}(f_{0})\) is nonempty and bounded, [19, Theorem 24.7] ensures that \(\partial f_{0}\left( E_{0}\right) \) is compact, whence we easily deduce that \( U(f_{0},E_{0})\) is compact as well. In more detail, the b-coordinates in \(U(f_{0},E_{0})\) are bounded because of the compactness of \(\partial f_{0}\left( E_{0}\right) \) and the continuity of \( f_{0} \). To verify the closedness of \(U(f_{0},E_{0}),\) assume \(U(f_{0},E_{0})\ni \left( a^{r},b^{r}\right) \rightarrow \left( a,b\right) \in {\mathbb {R}}^{n+1}\) with \(a^{r}\in \partial f_{0}\left( z^{r}\right) \), \(b^{r}=\left\langle a^{r},z^{r}\right\rangle -f_{0}\left( z^{r}\right) ,\) \(z^{r}\in E_{0}\) (compact); then we may assume, by passing to an appropriate subsequence, that \(z^{r}\rightarrow z\in E_{0},\) whence, by applying [19, Theorem 24.5], \(a\in \partial f_{0}\left( z\right) \) and \(b=\left\langle a,z\right\rangle -f_{0}\left( z\right) ,\) yielding \(\left( a,b\right) \in U(f_{0},E_{0}).\) Once we know that \(U(f_{0},E_{0})\) is compact, its convex hull is compact as well, so that we may remove the closure in \(\mathrm {conv\,}U(f_{0},E_{0})\) in the definition of \(C_{U(f_{0},E_{0})}.\)
The boundedness assumption on \(\mathcal {L}(f_{0})\) in the previous remark is not superfluous for the compactness of \(\mathrm {conv\,}U(f_{0},E_{0}),\) as we see by considering \(f_{0}:{\mathbb {R}}\longrightarrow {\mathbb {R}}\) given by
Clearly \(E_{0}=] -\infty ,\alpha _{0}] \) and
which satisfies \(\left( 0,\pi /2\right) \in \left( \mathrm {cl\,} U(f_{0},E_{0})\right) \backslash U(f_{0},E_{0}).\) Note that \(\left( 0,\pi /2\right) \notin \mathrm {conv\,}U(f_{0},E_{0}),\) since the projection of \( U(f_{0},E_{0})\) on the first coordinate is \(] 0,1] .\)
The next result concerns the Lipschitz behavior of \(\mathcal {L}\) around \( f_{0}\) with respect to the distance in \(\Gamma \) given by
This metric equips \(\Gamma \) with the topology of uniform convergence on bounded sets of \({\mathbb {R}}^{n},\) so it involves the values of the functions on the whole space \({\mathbb {R}}^{n}\). In contrast, the subsequent Theorem 4.2 appeals to the pseudo-distance \(d_{E}\), where E is an appropriate enlargement of \(\mathcal {L}(f_{0}),\) assumed to be bounded.
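A metric of this kind can be realized, for instance, by the classical weighted-sum construction sketched below (our illustration for n = 1; the concrete formula adopted in the paper may differ in its weights):

```python
import numpy as np

# One standard metric inducing uniform convergence on bounded sets (a sketch):
#   d(f, g) = sum_{k >= 1} 2**(-k) * min(1, sup_{|x| <= k} |f(x) - g(x)|),
# with the sup over each ball k*B approximated on a finite grid.

def d(f, g, kmax=30, grid=2001):
    total = 0.0
    for k in range(1, kmax + 1):
        xs = np.linspace(-k, k, grid)  # sample of k*B in R (n = 1)
        total += 2.0**(-k) * min(1.0, float(np.max(np.abs(f(xs) - g(xs)))))
    return total

f0 = lambda x: x**2
f1 = lambda x: x**2 + 0.01*np.abs(x)   # a small convex perturbation

print(d(f0, f0))                       # 0.0
print(d(f0, f1) < d(f0, lambda x: x**2 + np.abs(x)))  # True: smaller perturbation
```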
Proposition 4.1
Let \((f_{0},x_{0})\in \mathrm {gph}\mathcal {L}.\) The following statements are equivalent:
(i) \(\mathcal {L}:\left( \Gamma ,\mathbf {d}\right) \rightrightarrows {\mathbb {R}}^{n}\) has the Aubin property at \((f_{0},x_{0}),\)
(ii) There exists \(z_{0}\in {\mathbb {R}}^{n}\) such that \(f_{0}\left( z_{0}\right) <0,\)
(iii) \(0_{n+1}\notin \mathrm {cl\ conv\,}U(f_{0},E_{0});\) in other words, \( 0_{n}\notin C_{U(f_{0},E_{0})}.\)
Moreover, if \(\mathcal {L}(f_{0})\) is bounded, \(\left( iii\right) \) reads as
\((iii^{\prime })\) \(0_{n+1}\notin \mathrm {conv\,}U(f_{0},E_{0}).\)
Proof
\((i)\Leftrightarrow (ii).\) This follows from Theorems 1 and 2 in [10], applied to the simplest case in which the abstract constraint set C is the whole space \({\mathbb {R}}^{n}\) and the system consists of a single convex inequality.
\((ii)\Leftrightarrow (iii).\) According to Corollary 3.1, we only have to prove that the existence of \(z_{0}\in {\mathbb {R}}^{n}\) such that \( f_{0}\left( z_{0}\right) <0\) is equivalent to the SSC at \( U(f_{0},E_{0}).\)
Take \(z_{0}\in {\mathbb {R}}^{n}\) such that \(f_{0}\left( z_{0}\right) <0\) and let us see that \(z_{0}\) is an SS point of \(U(f_{0},E_{0}).\) Observe that
yielding
So, \(\sup _{\left( {\begin{array}{c}a\\ v\end{array}}\right) \in U(f_{0},E_{0})}\left( \left\langle a,z_{0}\right\rangle -v\right) \le f_{0}\left( z_{0}\right) <0.\) Conversely, let \(z_{0}\) be an SS point of \(U(f_{0},E_{0})\). Lemma 4.1 yields \(z_{0}\in \mathcal {L}(f_{0})\subset E_{0}\) and, so, taking any \(a\in \partial f_{0}\left( z_{0}\right) ,\) we have that
which entails \(f_{0}\left( z_{0}\right) <0.\)
Finally, if \(\mathcal {L}(f_{0})\) is bounded, \((iii)\Leftrightarrow (iii^{\prime })\) follows trivially from Remark 4.4. \(\square \)
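The equivalence \((ii)\Leftrightarrow (iii)\) can be observed numerically for, say, \(f_{0}(x)=x^{2}-1\) with the Slater point \(z_{0}=0\). A sampled sketch of ours:

```python
import numpy as np

# Slater point vs. 0_{n+1} not in cl conv U(f0, E0), sketched for
# f0(x) = x**2 - 1 (n = 1, a0 = 0.5), where L(f0) = [-1, 1] and z0 = 0.
a0 = 0.5
E0 = np.linspace(-1 - a0, 1 + a0, 3001)   # sample of the enlargement of L(f0)
f0, df0 = (lambda x: x**2 - 1), (lambda x: 2*x)

A = df0(E0)                               # a-coordinates of U(f0, E0)
V = df0(E0)*E0 - f0(E0)                   # b-coordinates: z**2 + 1 here

z0 = 0.0
sup_val = np.max(A*z0 - V)                # sup over U of <a, z0> - v
print(sup_val <= f0(z0) < 0)              # True: z0 is a strong Slater point

# every point of U has v >= 1, hence so does every convex combination:
# 0_2 cannot lie in cl conv U(f0, E0), matching (iii)
print(V.min() >= 1.0)                     # True
```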
The next result describes the Lipschitz behavior of the feasible set of the convex inequality (2) around a given \(f_{0}\in \Gamma \) and a given solution \(x_{0}\in \mathcal {L}(f_{0})\). Roughly speaking, the variation of feasible points is controlled by the variations of the functions and their subdifferentials. We point out that the constant \(\kappa _{0}\) therein is (conceptually) computable, insofar as it depends only on the nominal data \( f_{0}\) and \(x_{0}.\)
Theorem 4.2
Assume that \(\mathcal {L}(f_{0})\) is nonempty and bounded, and \(0_{n}\notin C_{U(f_{0},E_{0})}\). Let \(\kappa >\kappa _{0}\) with \(\kappa _{0}\) defined in (35), \(E{:}{=}E_{0}+\alpha {\mathbb {B}}\) with \(\alpha >0,\) \(E_{0}\) defined in (28), and \(\rho {:}{=}\max \{1+\left\| x\right\| :x\in E\}.\) Then, there exists \(\delta _{0}>0\) such that \(0<\delta \le \delta _{0}\) implies
provided that \(f_{1},f_{2}\in \Gamma \), \(x_{1}\in \mathcal {L}(f_{1}),\) \( d_{_{E}}\left( f_{i},f_{0}\right) \le \delta ,\) \(i=1,2,\) and \(\left\| x_{1}-x_{0}\right\| \le \delta .\)
Proof
Take \(\kappa >\kappa _{0}.\) Corollary 3.2 ensures the existence of \(\delta _{1}>0\) such that \(d_{H}\left( U_{i},U(f_{0},E_{0})\right) \le \delta _{1},\) \(U_{i}\in CL\left( {\mathbb {R}}^{n+1}\right) ,\) \(i=1,2,\) and \( \left\| x_{1}-x_{0}\right\| \le \delta _{1},\) \(x_{1}\in \mathcal {F} \left( U_{1}\right) ,\) imply
On the other hand, according to Corollary 2.1\(\left( iii\right) ,\) choose \(\delta _{2}>0\) such that \(d_{_{E}}\left( f,f_{0}\right) \le \delta \le \delta _{2}\) implies
Let \(m>0\) be as in Theorem 4.1 (see Remark 4.2) and consider
Now take \(0<\delta \le \delta _{0},\) and \(f_{1},f_{2}\in \Gamma \), with \(d_{_{E}}\left( f_{i},f_{0}\right) \le \delta .\) Consider also the sets \(U\left( f_{i},E_{0}+\sqrt{\delta }{\mathbb {B}}\right) ,\ i=1,2.\) Appealing to Lemma 4.2, and taking into account that \(E_{0}+ \sqrt{\delta }{\mathbb {B}}\subset E\), we have, for \(i=1,2,\)
Moreover, since \(d_{_{E}}\left( f_{i},f_{0}\right) \le \delta \le m/2,\) we have from Theorem 4.1
Consequently, applying Lemma 4.2 to the sets \(U\left( f_{i},E_{0}+\sqrt{\delta }{\mathbb {B}}\right) ,\) \(i=1,2,\) we conclude from (36)
\(\square \)
Remark 4.5
Observe that the inequality established in Theorem 4.2 is based on (36) and Lemma 4.2, the latter being used to provide an upper bound on \(d_{H}\left( U\left( f_{1},E_{0}+\sqrt{\delta } {\mathbb {B}}\right) , U\left( f_{2},E_{0}+\sqrt{\delta } {\mathbb {B}}\right) \right) \). This upper bound is a weighted sum of the Hausdorff distance between the subdifferential images \(\partial f_{1}(E_{0}+\sqrt{\delta } \mathbb {B})\) and \(\partial f_{2}(E_{0}+\sqrt{\delta } \mathbb {B})\) and the restricted distance \(d_{E}\left( f_{1},f_{2}\right) \). Thus, the distance \(\mathbf {d}(f_{1},f_{2})\), which involves the values of \(f_{1}\) and \(f_{2}\) at every point of the whole space \({\mathbb {R}}^{n}\), is not needed. Both ingredients concern the enlargements \(E_{0}+\sqrt{\delta } {\mathbb {B}}\) and E of the nominal feasible set \(\mathcal {L}(f_{0}),\) which can be taken arbitrarily close to \(\mathcal {L}(f_{0})\) by choosing \(\alpha _{0}\) and \(\alpha \) sufficiently small. The need for two different scalars, \(\alpha _{0}\) and \(\alpha ,\) comes from the proof, which requires the inclusion \(E_{0}+\sqrt{\delta } {\mathbb {B}}\subset E.\)
Our approach in this section strongly relies on the homogeneous linearization of the involved functions by means of sets of subgradients (see Theorem 4.1), as well as on their stability as studied in Sect. 2.2. Therefore, it is a linear approach in its essence.
4.1 The convex differentiable case
Throughout this subsection we assume that our nominal function \(f_{0}\in \Gamma \) is differentiable everywhere, so that we write \(\nabla f_{0}\) instead of \(\partial f_{0}\). The following theorem provides the counterpart of Corollary 2.1\(\left( iii\right) \) under differentiability of \(f_{0}\).
Theorem 4.3
Let \(K_{0}\subset {\mathbb {R}}^{n}\) be a compact set, \(\alpha >0, \) and \(K{:}{=}K_{0}+\alpha {\mathbb {B}}.\) Given \(\varepsilon >0,\) there exists \(\delta >0\) such that, for any \(f\in \Gamma \) with \(d_{K}\left( f,f_{0}\right) \le \delta ,\) one has
Proof
From Corollary 2.1(i) there exists \(\delta _{1}>0 \) such that
provided that \(d_{K}\left( f,f_{0}\right) \le \delta _{1},\) \(f\in \Gamma \). In particular, \(\partial f\left( K_{0}\right) \subset \nabla f_{0}\left( K_{0}\right) +\varepsilon {\mathbb {B}}\), if \(d_{K}\left( f,f_{0}\right) \le \delta _{1},\) \(f\in \Gamma \).
Let us prove the existence of \(\delta _{2}>0\) such that
Having obtained \(\delta _2\), just take \(\delta {:}{=}\min \{\delta _{1},\delta _{2}\}\) to finish the proof.
Arguing by contradiction, assume the existence of a sequence \(\{f_{r}\}\subset \Gamma \) with \(d_{K}\left( f_{r},f_{0}\right) \le \frac{1}{r}\) such that \(\nabla f_{0}\left( K_{0}\right) \not \subset \partial f_{r}\left( K_{0}\right) +\varepsilon {\mathbb {B}}\) for all r. For each r, let \(x_{r}\in K_{0}\) be such that
The compactness of \(K_{0},\) and consequently of \(\nabla f_{0}\left( K_{0}\right) \) (since \(\nabla f_{0}\) is continuous), allows us to assume that \(\{x_{r}\}\) and \(\{\nabla f_{0}\left( x_{r}\right) \}\) converge to \(\overline{x}\in K_{0}\) and \(\nabla f_{0}\left( \overline{x} \right) ,\) respectively (see [19, Theorem 24.4]). This fact, together with (37), yields the existence of \(r_{0}\in \mathbb {N}\) such that
On the other hand, [19, Theorem 24.5] guarantees, for r large enough,
which represents a contradiction. \(\square \)
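The stability asserted in Theorem 4.3 can be visualized numerically: for \(f_{0}(x)=x^{2}\) and the convex perturbations \(f_{r}(x)=x^{2}+\left| x\right| /r\) (for which \(d_{K}(f_{r},f_{0})\rightarrow 0\)), the sampled Hausdorff gap between \(\partial f_{r}(K_{0})\) and \(\nabla f_{0}(K_{0})\) shrinks like 1/r. A sketch of ours (the kink at the origin makes \(f_{r}\) nondifferentiable):

```python
import numpy as np

def hausdorff_1d(A, B):
    # discrete Hausdorff distance between two samples of subsets of R
    D = np.abs(np.asarray(A)[:, None] - np.asarray(B)[None, :])
    return max(D.min(axis=1).max(), D.min(axis=0).max())

K0 = np.linspace(-1.0, 1.0, 2001)
grad_f0 = 2*K0                      # nabla f0(K0) = [-2, 2], sampled

gaps = []
for r in (10, 100, 1000):
    # subdifferential of f_r(x) = x**2 + |x|/r on K0:
    #   {2x + sign(x)/r} away from 0, plus the interval [-1/r, 1/r] at x = 0
    sub_fr = np.concatenate([2*K0 + np.sign(K0)/r,
                             np.linspace(-1/r, 1/r, 51)])
    gaps.append(hausdorff_1d(grad_f0, sub_fr))

print(gaps)  # gap of order 1/r (plus grid error): both eps-inclusions hold
```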
Following the proof of Theorem 4.2 and appealing to the previous theorem instead of Corollary 2.1\(\left( iii\right) ,\) we derive the following corollary. Recall that \(E_{0}\) and \( \kappa _{0}\) are defined in (28) and (35), respectively. Moreover, the differentiability of \(f_{0}\) entails
Corollary 4.1
Assume that \(\mathcal {L}(f_{0})\) is nonempty and bounded, and \(0_{n}\notin C_{U(f_{0},E_{0})}\). Let \(\kappa >\kappa _{0}\), \(\alpha >0\), \( E{:}{=}E_{0}+\alpha {\mathbb {B}}\), and \(\rho _{0}{:}{=}\max \{1+\left\| x\right\| :x\in E_{0}\}\). Then there exists \(\delta >0\) such that, for any \(f_{1},f_{2}\in \Gamma \) with \(d_{_{E}}\left( f_{i},f_{0}\right) \le \delta \), \(i=1,2\), and any \(x_{1}\in \mathcal {L}(f_{1})\), with \(\left\| x_{1}-x_{0}\right\| \le \delta \), one has
Proof
(sketch) Take the same \(\delta _{1}\) as in the proof of Theorem 4.2. From Theorem 4.3 take \(\delta _{2}>0\) such that
Set \(\delta _{0}=\min \left\{ \dfrac{\delta _{1}}{2},\delta _{2},\dfrac{m}{2} \right\} ,\) where m comes from Theorem 4.1. Then, as in the proof of Theorem 4.2, Lemma 4.2 applied with \( K_{1}=K_{2}=E_{0}\) entails \(d_{H}\left( U\left( f_{i},E_{0}\right) ,U\left( f_{0},E_{0}\right) \right) \le \delta _{1},\) whenever \(d_{_{E}}\left( f_{i},f_{0}\right) \le \delta ,\ i=1,2.\) Finally, appealing again to Lemma 4.2 together with (36), we obtain (39). \(\square \)
Change history
22 December 2021
A Correction to this paper has been published: https://doi.org/10.1007/s10107-021-01751-x
References
Beer, G.: Topologies on Closed and Closed Convex Sets. Kluwer Academic Publishers, Dordrecht (1993)
Beer, G., Cánovas, M.J., López, M.A., Parra, J.: A uniform approach to Hölder calmness of subdifferentials. J. Convex Anal. 27, 167–180 (2020)
Cánovas, M.J., Gómez-Senent, F.J., Parra, J.: Regularity modulus of arbitrarily perturbed linear inequality systems. J. Math. Anal. Appl. 343, 315–327 (2008)
Cánovas, M.J., Henrion, R., López, M.A., Parra, J.: Indexation strategies and calmness constants for uncertain linear inequality systems. In: Gil, E., et al. (eds.) The Mathematics of the Uncertain: A Tribute to Pedro Gil. Studies in Systems, Decision and Control, vol. 142, pp. 831–843. Springer, Berlin (2018)
Cánovas, M.J., López, M.A., Parra, J.: Stability of linear inequality systems in a parametric setting. J. Optim. Theory Appl. 125, 275–297 (2005)
Cánovas, M.J., López, M.A., Parra, J.: On the equivalence of parametric contexts for linear inequality systems. J. Comput. Appl. Math. 217, 448–456 (2008)
Chan, T.C.Y., Mar, P.A.: Stability and continuity in robust optimization. SIAM J. Optim. 27, 817–841 (2017)
Cobzaş, Ş., Miculescu, R., Nicolae, A.: Lipschitz Functions: Lecture Notes in Mathematics, vol. 2241. Springer, Cham (2019)
Cobzaş, Ş., Mustăţa, C.: Norm preserving extensions of convex Lipschitz functions. J. Approx. Theory 24, 236–244 (1978)
Dinh, N., Goberna, M.A., López, M.A.: On the stability of the optimal value and the optimal set in optimization problems. J. Convex Anal. 19, 927–953 (2012)
Dontchev, A.L., Rockafellar, R.T.: Implicit Functions and Solution Mappings: A View from Variational Analysis. Springer, New York (2009)
Goberna, M.A., López, M.A.: Linear Semi-Infinite Optimization. Wiley, Chichester (1998)
Goberna, M.A., López, M.A.: Recent contributions to linear semi-infinite optimization: an update. Ann. Oper. Res. 271, 237–278 (2018)
Ioffe, A.D.: Variational Analysis of Regular Mappings, Theory and Applications. Springer Monographs in Mathematics. Springer, Cham (2017)
Klatte, D., Kummer, B.: Nonsmooth Equations in Optimization: Regularity, Calculus, Methods and Applications - Nonconvex Optimization and its Applications, vol. 60. Kluwer Academic, Dordrecht (2002)
Li, W., Singer, I.: Global error bounds for convex multifunctions and applications. Math. Oper. Res. 23, 443–462 (1998)
Mordukhovich, B.S.: Variational Analysis and Generalized Differentiation, I: Basic Theory. Springer, Berlin (2006)
Robinson, S.M.: Regularity and stability for convex multivalued functions. Math. Oper. Res. 1, 130–143 (1976)
Rockafellar, R.T.: Convex Analysis. Princeton University Press, Princeton (1970)
Rockafellar, R.T., Wets, R.J.-B.: Variational Analysis. Springer, Berlin (1998)
Still, G.: Discretization in semi-infinite programming: the rate of convergence. Math. Program. 91, 53–69 (2001)
Schulz, K., Schwartz, B.: Finite extensions of convex functions. Math. Operationsforsch. Stat. Ser. Optim. 10, 501–509 (1979)
Dedicated by his coauthors to Marco A. López on his 70th birthday.
The original online version of this article was revised due to a retrospective Open Access order.
This research has been partially supported by Grants MTM2014-59179-C2-(1-2)-P and PGC2018-097960-B-C2(1-2) from MINECO/MICINN, Spain, and ERDF, “A way to make Europe”, European Union. The third author was partially supported by the Australian Research Council, Project DP180100602.
Cite this article
Beer, G., Cánovas, M.J., López, M.A. et al. Lipschitz modulus of linear and convex inequality systems with the Hausdorff metric. Math. Program. 189, 75–98 (2021). https://doi.org/10.1007/s10107-020-01543-9