1 Introduction

This paper is initially focussed on the Lipschitz behavior of the feasible set mapping associated with a parametric family of linear inequality systems of the form:

$$\begin{aligned} \{\left\langle a,x\right\rangle \le b,( a,b) \in U\},\quad U\in CL\big ( {\mathbb {R}}^{n+1}\big ) , \end{aligned}$$
(1)

where \(x\in {\mathbb {R}}^{n}\) is the vector of variables and \(CL\left( {\mathbb {R}}^{n+1}\right) \) is the parameter space of all nonempty closed subsets of \({\mathbb {R}}^{n+1}.\) The elements of a set \(U\in CL\left( {\mathbb {R}} ^{n+1}\right) \) are denoted by \(\left( a,b\right) ,\) where \(a\in {\mathbb {R}} ^{n}\) and \(b\in {\mathbb {R}}.\) Given \(x,y\in {\mathbb {R}}^{n},\) \(\left\langle x,y\right\rangle \) represents the usual inner product of x and y. When U is an infinite set, (1) is a linear semi-infinite inequality system. Observe that, in this framework, perturbations fall on U; consequently, two different systems, associated with different sets \( U_{1},U_{2}\in CL\left( {\mathbb {R}}^{n+1}\right) ,\) may have different cardinalities.

This setting includes, as a particular case, the parametric family of linear systems coming from linearizing convex inequalities of the form

$$\begin{aligned} f\left( x\right) \le 0, \end{aligned}$$
(2)

where \(f\in \Gamma \), and \(\Gamma \) is the family of all finite-valued convex functions defined on \({\mathbb {R}}^{n}\). Specifically, the feasible set of (2) coincides with the set of solutions of the following linear system

$$\begin{aligned} \sigma&{:}{=}\{\left\langle a,x\right\rangle \le \left\langle a,z\right\rangle -f\left( z\right) :\ z\in {\mathbb {R}}^{n},a\in \partial f(z)\} \nonumber \\&=\{\left\langle a,x\right\rangle \le \left\langle a,z\right\rangle -f\left( z\right) :\ \left( z,a\right) \in \mathrm {gph}\partial f\} \nonumber \\&=\{\left\langle a,x\right\rangle \le f^{*}(a):\ a\in \mathrm {rge} \partial f\}, \end{aligned}$$
(3)

where \(\mathrm {gph}\partial f\) and \(\mathrm {rge}\partial f\) represent the graph and the range (or image) of the subdifferential mapping \(\partial f\), respectively, and \(f^{*}\) is the Fenchel conjugate of f (the last equality above comes from [19, Theorem 23.5]).
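
To illustrate the linearization (3), take \(n=1\) and \(f\left( x\right) {:}{=}x^{2}-1.\) Then \(\partial f\left( z\right) =\left\{ 2z\right\} \) for all \(z\in {\mathbb {R}},\) \(\mathrm {rge}\partial f={\mathbb {R}},\) and \(f^{*}\left( a\right) =\frac{a^{2}}{4}+1,\) so that (3) becomes

$$\begin{aligned} \sigma =\left\{ 2zx\le z^{2}+1:\ z\in {\mathbb {R}}\right\} =\left\{ ax\le \frac{a^{2}}{4}+1:\ a\in {\mathbb {R}}\right\} . \end{aligned}$$

Its solution set is \(\left[ -1,1\right] =\left\{ x\in {\mathbb {R}}:f\left( x\right) \le 0\right\} ,\) since \(2zx\le z^{2}+1\) for all \(z\in {\mathbb {R}}\) amounts to \(z^{2}-2zx+1\ge 0\) for all \(z\in {\mathbb {R}},\) i.e., to \(x^{2}\le 1.\)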

Remark 1.1

(a) The finite-valuedness of the functions in \(\Gamma \) is not too restrictive. In fact, given a convex function \(g:{\mathbb {R}} ^{n}\longrightarrow {\mathbb {R}}\cup \left\{ +\infty \right\} \), since our approach is local and we mainly work in a ball \(x_{0}+\alpha {\mathbb {B}}\) (where \({\mathbb {B}}\) stands for the closed unit ball in \( {\mathbb {R}}^{n}\)) contained in the interior (assumed non-empty) of the effective domain of g,  we can replace g with the function \(f\in \Gamma \) given by

$$\begin{aligned} f\left( x\right) :=\max \left\{ g\left( z\right) +\left\langle a,x-z\right\rangle :z\in x_{0}+\alpha {\mathbb {B}},~a\in \partial g\left( z\right) \right\} , \end{aligned}$$
(4)

as f and g coincide on such a ball. Observe that the maximum in (4) is attained due to the compactness of the set \(\{\left( z,a\right) :z\in x_{0}+\alpha {\mathbb {B}},~a\in \partial g\left( z\right) \}\) (see, e.g., [19, Theorems 24.4 and 24.7]).

(b) An alternative linearization of the inequality \(f\left( x\right) \le 0\) is the linear inequality system

$$\begin{aligned} \widetilde{\sigma }:=\{\left\langle a,x\right\rangle \le f^{*}(a):\ a\in {\mathrm{ri}}({\mathrm{dom}}f^{*})\}, \end{aligned}$$

where \({\mathrm{ri}}({\mathrm{dom}}f^{*})\) is the relative interior of the effective domain of \(f^{*}\). In fact, applying [19, Corollary 12.2.2]

$$\begin{aligned} f(x)\le 0\Leftrightarrow f^{**}(x)\le 0\Leftrightarrow \left\langle a,x\right\rangle \le f^{*}(a)\ \text {for all }a\in {\mathrm{ri}}({\mathrm{dom}}f^{*}). \end{aligned}$$

The main objectives of this work consist of analyzing the Lipschitzian behavior of the parametrized linear system (1), and of applying the results obtained to derive new contributions for the convex case (2) via the standard linearization (3). We emphasize the fact that previous results about stability of subdifferentials (traced out from [2]) are also used in the study of this convex case.

Formally, associated with (1), we consider the feasible set mapping, \(\mathcal {F}:CL\left( {\mathbb {R}}^{n+1}\right) \rightrightarrows {\mathbb {R}}^{n},\) which assigns to each \(U\in CL\left( {\mathbb {R}} ^{n+1}\right) \) the set of solutions of the corresponding system:

$$\begin{aligned} \mathcal {F}\left( U\right) :=\left\{ x\in {\mathbb {R}}^{n}:\left\langle a,x\right\rangle \le b\text { for all }\left( a,b\right) \in U\right\} , \text { }U\in CL\big ( {\mathbb {R}}^{n+1}\big ) . \end{aligned}$$
(5)

The parameter space, \(CL\left( {\mathbb {R}}^{n+1}\right) ,\) will be endowed with the Pompeiu–Hausdorff distance (from now on, Hausdorff distance, for simplicity; see Sect. 2 for details). For convenience (in order to ensure the existence of projections) we deal with closed sets, but the study could be carried out with general nonempty sets, since neither the feasible set mapping nor the Hausdorff distance distinguishes between a set and its closure.

The current paper is firstly concerned with analyzing the Lipschitz modulus of \(\mathcal {F}\) at \(\left( U_{0},x_{0}\right) \in \mathrm {gph} \mathcal {F}\) and, in a second stage, with deriving a Lipschitzian-type condition for the feasible set of the parametrized convex inequality (2). Roughly speaking, we provide measures (or estimates) of the rate of variation of feasible points, around a nominal one \(x_{0}\in {\mathbb {R}}^{n},\) with respect to perturbations of a nominal parameter set \( U_{0}\in CL\left( {\mathbb {R}}^{n+1}\right) \) in the case of explicit linear systems, and of a nominal function \(f_{0}\in \Gamma \) in the case of linear systems obtained implicitly from a convex inequality.

We can find in the literature classical studies on convex multifunctions and convex systems. The reader is referred to [18, Corollary 2 and p. 140] for the analysis of the Lipschitz behavior of feasible sets under right-hand side perturbations; or to [16] for a survey on characterizations of metric regularity (see Sect. 2).

As immediate antecedents of the present work we cite [3] and [4]. The first paper deals with the Lipschitz modulus of the feasible set mapping in the context of linear systems with a fixed index set T of the form

$$\begin{aligned} \left\{ \left\langle a_{t},x\right\rangle \le b_{t},\,t\in T\right\} , \end{aligned}$$
(6)

where \(x\in {\mathbb {R}}^{n}\) is the variable and \(\left( a_{t},b_{t}\right) _{t\in T}\in \left( {\mathbb {R}}^{n+1}\right) ^{T}.\) The parameter space considered there, \(\left( {\mathbb {R}}^{n+1}\right) ^{T},\) is formed by all functions from T to \({\mathbb {R}}^{n+1}\) and it is endowed with the (extended) Chebyshev distance. The reader is addressed to the monograph [12] for a comprehensive study of such systems.

The results of [3] do not apply directly to our current setting. A first connection between both parameter spaces \(CL\left( {\mathbb {R}} ^{n+1}\right) \) and \(\left( {\mathbb {R}}^{n+1}\right) ^{T}\) was established in [4], which provides some motivation and background for the present paper from the methodological point of view. That paper is focussed on the calmness modulus (see again Sect. 2), and takes advantage of previous results developed in the context of systems (6) to derive new contributions for the parametrized system (1). Formally, [4] introduces an appropriate indexation scheme assigning to each set in \(CL\left( {\mathbb {R}}^{n+1}\right) \) an element in \( \left( {\mathbb {R}}^{n+1}\right) ^{T}\) in such a way that the Hausdorff distance around \(U_{0}\in CL\left( {\mathbb {R}}^{n+1}\right) \) translates into the Chebyshev distance around its image in \(\left( {\mathbb {R}}^{n+1}\right) ^{T}\). That indexation strategy, however, turns out to be inappropriate for studying the Lipschitz (instead of calmness) modulus, for which we need to index pairs of systems around \(U_{0}\) simultaneously, as done in Lemma 3.1 in Sect. 3.

The problem of analyzing the relationship among different parametric contexts was also addressed in [5] and [6] from a different perspective, mainly focussed on the lower semicontinuity of the feasible set mapping.

Now we summarize the structure of the paper. Section 2 gathers some definitions and key results of the background on the Lipschitz modulus in the context of systems (6), indexations, and stability of subdifferentials. Section 3 develops the study of the Lipschitz modulus of \( \mathcal {F},\) including the definition of an appropriate indexation which allows us to take advantage of the background about systems (6). Finally, Sect. 4 applies the results of the previous section to tackle the convex case.

2 Preliminaries and first results

To start with, recall that a set-valued mapping \(\mathcal {M} :Y\rightrightarrows X\) between metric spaces (both distances denoted by d) has the Aubin property (also called pseudo-Lipschitz, cf. [15], or Lipschitz-like, cf. [17]) at \(\left( y_{0},x_{0}\right) \in \mathrm {gph}\mathcal {M}\) if there exist a constant \( \kappa \ge 0\) and neighborhoods W of \(x_{0}\) and V of \(y_{0}\) such that

$$\begin{aligned} d\left( x_{1},\mathcal {M}\left( y_{2}\right) \right) \le \kappa d\left( y_{1},y_{2}\right) \text { for all }y_{1},y_{2} \in V\text { and all } x_{1}\in \mathcal {M}\left( y_{1}\right) \cap W. \end{aligned}$$
(7)

The infimum of constants \(\kappa \) over all \(\left( \kappa ,W,V\right) \) satisfying (7) is called the Lipschitz modulus of \( \mathcal {M}\) at \(\left( y_{0},x_{0}\right) \), denoted by \(\mathrm {lip} \mathcal {M}\left( y_{0},x_{0}\right) \), and it is defined as \(+\infty \) when the Aubin property fails at \(\left( y_{0},x_{0}\right) \). The Aubin property of \(\mathcal {M}\) at \(\left( y_{0},x_{0}\right) \in \mathrm {gph}\mathcal {M}\) is known to be equivalent to the metric regularity of its inverse mapping \( \mathcal {M}^{-1}\) at \(\left( x_{0},y_{0}\right) \); moreover, \(\mathrm {lip} \mathcal {M}\left( y_{0},x_{0}\right) \) is known to coincide with the modulus of metric regularity of \(\mathcal {M}^{-1}\) at \(\left( x_{0},y_{0}\right) \). So, we can write

$$\begin{aligned} \mathrm {lip}\mathcal {M}\left( y_{0},x_{0}\right) =\underset{\left( x,y\right) \rightarrow \left( x_{0},y_{0}\right) }{\limsup }\frac{d\left( x, \mathcal {M}(y)\right) }{d\left( y,\mathcal {M}^{-1}(x)\right) }, \end{aligned}$$

under the convention \(\frac{0}{0}:=0.\)

The particularization of (7) to \(y_{2}=y_{0}\) yields the definition of calmness of \(\mathcal {M}\) at \(\left( y_{0},x_{0}\right) \), whose associated calmness modulus, \(\mathrm {clm}\mathcal {M}\left( y_{0},x_{0}\right) \), is defined analogously. It is also known that the calmness of \(\mathcal {M}\) at \(\left( y_{0},x_{0}\right) \) is equivalent to the metric subregularity of \(\mathcal {M}^{-1}\) at \(\left( x_{0},y_{0}\right) \), and that the corresponding moduli do coincide; so,

$$\begin{aligned} \mathrm {clm}\mathcal {M}\left( y_{0},x_{0}\right) =\underset{x\rightarrow x_{0}}{\limsup }\frac{d\left( x,\mathcal {M}(y_{0})\right) }{d\left( y_{0}, \mathcal {M}^{-1}(x)\right) }. \end{aligned}$$

Clearly \(\mathrm {clm}\mathcal {M}\left( y_{0},x_{0}\right) \le \mathrm {lip} \mathcal {M}\left( y_{0},x_{0}\right) .\) For additional information about the Aubin property, metric regularity, calmness, metric subregularity, and related topics of variational analysis, the reader is addressed to [11, 14, 15, 17, 20].
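
As a simple illustration of both moduli, consider \(\mathcal {M}:{\mathbb {R}}\rightrightarrows {\mathbb {R}}\) given by \(\mathcal {M}\left( y\right) {:}{=}\,]-\infty ,y],\) so that \(\mathcal {M}^{-1}\left( x\right) =[x,+\infty [.\) Then \(d\left( x,\mathcal {M}(y)\right) =d\left( y,\mathcal {M}^{-1}(x)\right) =\max \left\{ x-y,0\right\} ,\) and the quotients in the two formulas above equal 1 whenever \(x>y.\) Consequently, \(\mathrm {clm}\mathcal {M}\left( y_{0},x_{0}\right) =\mathrm {lip}\mathcal {M}\left( y_{0},x_{0}\right) =1\) whenever \(x_{0}=y_{0},\) while both moduli vanish when \(x_{0}<y_{0}\) (by the convention \(\frac{0}{0}{:}{=}0\)).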

Throughout the paper we use the following standard notation: Given \(X\subset {\mathbb {R}}^{k},\) \(k\in \mathbb {N},\) we denote by \(\mathrm {conv}X\) the convex hull of X, and \(\mathrm {int}X,\) \(\mathrm {cl}X\) and \(\mathrm {bd}X\) stand, respectively, for the interior, the closure and the boundary of X.

2.1 Indexation strategies and calmness of linear systems

For comparative purposes and as a motivation of the results of Sect. 3, this subsection recalls some details about the indexation scheme introduced in [4]. First, we fix the topologies considered in the space of variables, \({\mathbb {R}}^{n},\) and the parameter spaces, \(CL\left( {\mathbb {R}} ^{n+1}\right) \) and \(\left( {\mathbb {R}}^{n+1}\right) ^{T},\) for a given index set T.

Unless otherwise stated, \({\mathbb {R}}^{n}\) (space of variables) is equipped with an arbitrary norm, \(\left\| \cdot \right\| ,\) while \({\mathbb {R}} ^{n+1}\) (space of coefficient vectors of linear systems) is endowed with the norm

$$\begin{aligned} \left\| \left( a,b\right) \right\| =\max \left\{ \left\| a\right\| _{*},\left| b\right| \right\} ,\text { }\left( a,b\right) \in {\mathbb {R}}^{n+1}, \end{aligned}$$
(8)

where \(\left\| \cdot \right\| _{*}\) represents the dual norm of \( \left\| \cdot \right\| \) in \({\mathbb {R}}^{n}\), which is given by \(\left\| a\right\| _{*}=\sup _{\left\| x\right\| \le 1}\left\langle a,x\right\rangle \).

The space \(CL\left( {\mathbb {R}}^{n+1}\right) \) is endowed with the (extended) Hausdorff distance \(d_{H}:CL\left( {\mathbb {R}}^{n+1}\right) \times CL\left( {\mathbb {R}}^{n+1}\right) \rightarrow [0,+\infty ]\) given by

$$\begin{aligned} d_{H}\left( U_{1},U_{2}\right) :=\max \{e\left( U_{1},U_{2}\right) ,e\left( U_{2},U_{1}\right) \}, \end{aligned}$$

where \(e\left( U_{i},U_{j}\right) ,\, i,j=1,2,\) represents the excess of \( U_{i}\) over \(U_{j},\)

$$\begin{aligned} e\left( U_{i},U_{j}\right) :=\inf \left\{ \varepsilon >0:\ U_{i}\subset U_{j}+\varepsilon {\mathbb {B}}\right\} =\sup \left\{ d\left( x,U_{j}\right) :x\in U_{i}\right\} , \end{aligned}$$

where here \({\mathbb {B}}\) denotes the closed unit ball in \({\mathbb {R}}^{n+1}\). See [1, Section 3.2] for details about the Hausdorff distance in general settings. In particular, the triangle inequality is satisfied by the excess.
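
For finite sets these notions are straightforward to compute. The following minimal sketch (in Python, with ad hoc helper names, and using the Euclidean norm instead of (8)) illustrates the excess and the Hausdorff distance:

```python
import numpy as np

def excess(U1, U2):
    # e(U1, U2) = sup_{u in U1} d(u, U2): distance from each point of U1 to its
    # nearest point of U2, and then the largest of these distances
    dists = np.linalg.norm(U1[:, None, :] - U2[None, :, :], axis=2)
    return dists.min(axis=1).max()

def hausdorff(U1, U2):
    # d_H(U1, U2) = max{e(U1, U2), e(U2, U1)}
    return max(excess(U1, U2), excess(U2, U1))

U1 = np.array([[0.0, 0.0], [1.0, 0.0]])
U2 = np.array([[0.0, 0.5]])
print(excess(U1, U2), excess(U2, U1), hausdorff(U1, U2))
# approximately 1.118, 0.5 and 1.118: the two excesses need not coincide
```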

The space \(\left( {\mathbb {R}}^{n+1}\right) ^{T}\) is endowed with the (extended) Chebyshev (or supremum) distance \(d_{\infty }:\left( {\mathbb {R}}^{n+1}\right) ^{T}\times \left( {\mathbb {R}}^{n+1}\right) ^{T}\rightarrow [0,+\infty ]\) given by

$$\begin{aligned} d_{\infty }\left( \sigma _{1},\sigma _{2}\right) :=\sup _{t\in T}\left\| \sigma _{1}\left( t\right) -\sigma _{2}\left( t\right) \right\| . \end{aligned}$$
(9)

As commented in the introduction, paper [4] analyzes the calmness of \(\mathcal {F}\) at \(\left( U_{0},x_{0}\right) \in \mathrm {gph}\mathcal {F}\) via the calmness of the feasible set mapping associated with systems (6) with an appropriate index set T. To do this, a particular indexation scheme

$$\begin{aligned} CL\big ( {\mathbb {R}}^{n+1}\big ) \ni U\mapsto \sigma _{U}\in \big ({\mathbb {R}}^{n+1}\big ) ^{T} \end{aligned}$$

is introduced. Recall that \(\sigma _{U}\in \left( {\mathbb {R}}^{n+1}\right) ^{T}\) is said to be an indexation of \(U\in CL\big ( {\mathbb {R}} ^{n+1}\big ) \) if

$$\begin{aligned} \mathrm {rge}\left( \sigma _{U}\right) =U; \end{aligned}$$

specifically [4] considers \(T:={\mathbb {R}}^{n+1}\ \)and assigns to each \(U\in CL\left( {\mathbb {R}}^{n+1}\right) \) an indexation \({\sigma }_{U}\in \left( {\mathbb {R}}^{n+1}\right) ^{{\mathbb {R}}^{n+1}}\) defined as

$$\begin{aligned} \sigma _{U}\left( t\right) :=\left\{ \begin{array}{ll} t, &{} \quad \text {if }t\in U, \\ (P_{U}\circ P_{U_{0}})(t), &{} \quad \text {if }t\notin U, \end{array} \right. \end{aligned}$$
(10)

where, for each \(U\in \) \(CL\left( {\mathbb {R}}^{n+1}\right) ,\) \(P_{U}:{\mathbb {R}}^{n+1}\rightarrow {\mathbb {R}}^{n+1}\) is a particular selection of the metric projection multifunction on U;  i.e., \(P_{U}\left( t\right) \) is a best approximation of \(t\in {\mathbb {R}}^{n+1}\) on U. Observe that, in particular, \(\sigma _{U_{0}}=P_{U_{0}}.\) A comparative analysis with other possible indexations, and particularly one given in [7], is carried out in [4, Section 3]. Theorem 3.1 in [4] shows that

$$\begin{aligned} d_{\infty }\left( \sigma _{U},\sigma _{U_{0}}\right) =d_{H}\left( U,U_{0}\right) \text { for all }U\in CL\big ( {\mathbb {R}}^{n+1}\big ) . \end{aligned}$$
(11)

Example 3.1 in the same paper shows that \(U\mapsto P_{U}\) is not an adequate indexation scheme in relation to calmness, since Chebyshev distances between projections, \(d_{\infty }\left( P_{U},P_{U_{0}}\right) \), can be much larger than Hausdorff distances between the corresponding sets, \(d_{H}\left( U,U_{0}\right) \).
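
For finite sets, the indexation scheme (10) and the equality (11) can be checked numerically. The sketch below (in Python, with ad hoc helper names, using the Euclidean norm instead of (8), and a finite sample of indices t instead of the whole of \({\mathbb {R}}^{n+1}\)) evaluates \(\sigma _{U}\) and \(\sigma _{U_{0}}=P_{U_{0}}\):

```python
import numpy as np

def proj(U, t):
    # a particular selection of the metric projection of t onto the finite set U
    return U[np.argmin(np.linalg.norm(U - t, axis=1))]

def sigma(U, U0, t):
    # indexation scheme (10): sigma_U(t) = t if t belongs to U,
    # and sigma_U(t) = P_U(P_{U0}(t)) otherwise; in particular sigma_{U0} = P_{U0}
    if any(np.allclose(t, u) for u in U):
        return t
    return proj(U, proj(U0, t))

def excess(A, B):
    # e(A, B) = sup_{a in A} d(a, B) for finite sets of row vectors
    return max(np.linalg.norm(B - a, axis=1).min() for a in A)

U0 = np.array([[0.0, 0.0], [2.0, 0.0]])
U  = np.array([[0.0, 0.3], [2.0, 0.0], [5.0, 5.0]])

# sample of indices t: the points of U and U0 together with a random cloud
ts = np.vstack([U, U0, np.random.default_rng(0).uniform(-3, 6, (200, 2))])

d_inf = max(np.linalg.norm(sigma(U, U0, t) - sigma(U0, U0, t)) for t in ts)
d_H   = max(excess(U, U0), excess(U0, U))
print(d_inf, d_H)   # both values coincide, in agreement with (11)
```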

The indexation scheme in (10) is suitable for the study of the calmness property of \(\mathcal {F},\) but it is no longer appropriate for the Aubin property, for which more is needed. Specifically, the current paper introduces a new indexation scheme working on pairs. Formally, given \(U_{1},U_{2}\in CL\left( {\mathbb {R}}^{n+1}\right) \), we define, in an appropriate way (see Lemma 3.1), \(\sigma _{1},\sigma _{2}\in \left( {\mathbb {R}}^{n+1}\right) ^{T}\) such that \(\sigma _{i}\) is an indexation of \( U_{i},\) \(i=1,2,\) and \(d_{\infty }\left( \sigma _{1},\sigma _{2}\right) =d_{H}\left( U_{1},U_{2}\right) \) when \(U_{1}\) and \(U_{2}\) are close enough to the nominal set \(U_{0}.\) In addition, we exhibit a Lipschitzian dependence of \(d_{\infty }\left( \sigma _{i},\sigma _{0}\right) ,\) \(i=1,2,\) on the quantities \(d_{H}\left( U_{1},U_{0}\right) \) and \(d_{H}\left( U_{2},U_{0}\right) \) taken together, where \(\sigma _{0}\) is the standard indexation of \(U_{0}\) given by a particular selection of the metric projection onto \(U_{0}.\)

2.2 On the stability of subdifferentials

This subsection gathers some results from [2] about stability of subdifferentials of convex functions at a point \(x_{0}\in {\mathbb {R}}^{n}\), and provides some extensions and consequences on the stability over a compact set \(K_{0}\subset {\mathbb {R}}^{n}.\) These results will be used in Sect. 4.

Given any two functions \(f_{1},f_{2}\in \Gamma \) and a subset \(K\subset {\mathbb {R}}^{n}\) we use the notation

$$\begin{aligned} d_{K}\left( f_{1},f_{2}\right) =\sup _{x\in K}\left| f_{1}\left( x\right) -f_{2}\left( x\right) \right| . \end{aligned}$$
(12)

If K is compact, the supremum in (12) is a maximum.

The following theorem gathers two stability conditions for subdifferentials. The first one, which is a direct consequence of [19, Theorem 24.5], provides the Hausdorff upper semicontinuity of the multifunction which assigns to each pair \(\left( f,x\right) \in \Gamma \times {\mathbb {R}}^{n}\) the subdifferential of f at x, \(\partial f\left( x\right) \). On the other hand, condition (ii) expresses a certain uniform lower Hölder-type property.

Theorem 2.1

Let \(x_{0}\in {\mathbb {R}}^{n},\) \(\alpha >0,\) and \(K:=x_{0}+\alpha {\mathbb {B}}.\) One has:

\(\left( i\right) \) Given \(f_{0}\in \Gamma \) and \(\varepsilon >0,\) there exists \(\delta >0\) such that

$$\begin{aligned} \partial f\left( x_{0}+\delta {\mathbb {B}}\right) \subset \partial f_{0}\left( x_{0}\right) +\varepsilon {\mathbb {B}}, \end{aligned}$$

provided that \(f\in \Gamma \) satisfies \(d_{K}\left( f,f_{0}\right) \le \delta . \)

\(\left( ii\right) \) [2, Theorem 3.4] For any \(0<\delta \le \alpha ^{2},\) and any \(f_{1},f_{2}\in \Gamma \) such that \( d_{K}(f_{1},f_{2})\le \delta ,\) we have

$$\begin{aligned} \partial f_{1}\left( x_{0}\right) \subset \partial f_{2}(x_{0}+\sqrt{\delta } \mathbb {B})+4\sqrt{\delta }{\mathbb {B}}, \end{aligned}$$
(13)

where in (ii) \({\mathbb {B}}\) denotes the Euclidean closed unit ball.

Corollary 2.1

Let \(K_{0}\subset {\mathbb {R}}^{n}\) be a compact set, \( \alpha >0,\) and \(K:=K_{0}+\alpha {\mathbb {B}}.\) One has:

\(\left( i\right) \) Given \(f_{0}\in \Gamma \) and \(\varepsilon >0,\) there exists \(\delta >0\) such that

$$\begin{aligned} \partial f\left( K_{0}+\delta {\mathbb {B}}\right) \subset \partial f_{0}\left( K_{0}\right) +\varepsilon {\mathbb {B}}, \end{aligned}$$

provided that \(f\in \Gamma \) satisfies \(d_{K}\left( f,f_{0}\right) \le \delta .\)

\(\left( ii\right) \) For any \(0<\delta \le \alpha ^{2},\) and any \( f_{1},f_{2}\in \Gamma \) such that \(d_{K}(f_{1},f_{2})\le \delta ,\) we have

$$\begin{aligned} \partial f_{1}\left( K_{0}\right) \subset \partial f_{2}(K_{0}+\sqrt{\delta } {\mathbb {B}})+4\sqrt{\delta }{\mathbb {B}}, \end{aligned}$$

where here in (ii) \({\mathbb {B}}\) denotes the Euclidean closed unit ball.

\(\left( iii\right) \) Given \(f_{0}\in \Gamma \) and \(\varepsilon >0,\) there exists \(\delta _{0}>0\) such that for any \(0<\delta \le \delta _{0},\) and any \(f\in \Gamma ,\) with \(d_{K}\left( f,f_{0}\right) \le \delta ,\) one has

$$\begin{aligned} d_{H}(\partial f(K_{0}+\sqrt{\delta }{\mathbb {B}}),\partial f_{0}\left( K_{0}\right) )\le \varepsilon . \end{aligned}$$

Proof

\(\left( i\right) \) follows by the same argument as the proof of Theorem 2.1\(\left( i\right) ;\) here we present a sketch for completeness. Arguing by contradiction, assume the existence of sequences \(\left\{ f_{r}\right\} \subset \Gamma \) and \(\{\left( x_{r},u_{r}\right) \}\subset {\mathbb {R}}^{n}\times {\mathbb {R}}^{n}\) such that \(d_{K}\left( f_{r},f_{0}\right) \le \frac{1}{r},\) \(x_{r}\in K_{0}+\frac{1}{r}{\mathbb {B}},\) and \(u_{r}\in \partial f_{r}\left( x_{r}\right) \diagdown \left( \partial f_{0}\left( K_{0}\right) +\varepsilon {\mathbb {B}}\right) ,\) \(r=1,2,\ldots ,\) where we may also assume (by passing to a subsequence if necessary) that \(\{x_{r}\}\) converges to a certain \(x_{0}\in K_{0}.\) In this way we attain a contradiction with [19, Theorem 24.5].

\(\left( ii\right) \) comes straightforwardly from Theorem 2.1\( \left( ii\right) .\) Indeed, let \(0<\delta \le \alpha ^{2},\) let \( f_{1},f_{2}\in \Gamma \) be such that \(d_{K}(f_{1},f_{2})\le \delta ,\) and take any \(x_{0}\in K_{0}.\) We have \(d_{x_{0}+\alpha {\mathbb {B}} }(f_{1},f_{2})\le d_{K}(f_{1},f_{2})\le \delta ,\) which entails

$$\begin{aligned} \partial f_{1}\left( x_{0}\right) \subset \partial f_{2}(x_{0}+\sqrt{\delta } \mathbb {B})+4\sqrt{\delta }{\mathbb {B}}\subset \partial f_{2}(K_{0}+\sqrt{ \delta }\mathbb {B})+4\sqrt{\delta }{\mathbb {B}}. \end{aligned}$$

\(\left( iii\right) \) Since all norms in \({\mathbb {R}}^{n}\) are topologically equivalent, it is enough to prove the assertion for the Euclidean norm. Take \(f_{0}\in \Gamma \) and \(\varepsilon >0.\) From the statement in \(\left( i\right) \ \)there exists \(\delta _{1}>0\) such that

$$\begin{aligned} \partial f\left( K_{0}+\delta _{1}{\mathbb {B}}\right) \subset \partial f_{0}\left( K_{0}\right) +\varepsilon {\mathbb {B}},\text { provided that } d_{K}\left( f,f_{0}\right) \le \delta _{1},\text { }f\in \Gamma . \end{aligned}$$

We may assume \(\delta _{1}\le 1\). Define the number \(\delta _0\) by

$$\begin{aligned} 0<\delta _{0}:=\min \left\{ \delta _{1}^{2},\alpha ^{2},\left( \frac{ \varepsilon }{4}\right) ^{2}\right\} , \end{aligned}$$

and take \(0<\delta \le \delta _{0},\) and \(f\in \Gamma \) such that \( d_{K}\left( f,f_{0}\right) \le \delta .\) Then, since \(\sqrt{\delta }\le \sqrt{\delta _{0}}\le \delta _{1}\) and \(\delta \le \delta _{1}^{2}\le \delta _{1}\) (which yields \(d_{K}\left( f,f_{0}\right) \le \delta _{1}),\) one has

$$\begin{aligned} \partial f(K_{0}+\sqrt{\delta }{\mathbb {B}})\subset \partial f\left( K_{0}+\delta _{1}{\mathbb {B}}\right) \subset \partial f_{0}\left( K_{0}\right) +\varepsilon {\mathbb {B}}. \end{aligned}$$

On the other hand since \(0<\delta \le \alpha ^{2},\) condition \(\left( ii\right) \) yields

$$\begin{aligned} \partial f_{0}\left( K_{0}\right) \subset \partial f(K_{0}+\sqrt{\delta } {\mathbb {B}})+4\sqrt{\delta }\mathbb {B\subset }\partial f(K_{0}+\sqrt{\delta } {\mathbb {B}})+\varepsilon {\mathbb {B}}, \end{aligned}$$

where the last inclusion comes from \(\delta \le \left( \frac{\varepsilon }{4 }\right) ^{2}\). The two inclusions together yield \(d_{H}(\partial f(K_{0}+\sqrt{\delta }{\mathbb {B}}),\partial f_{0}\left( K_{0}\right) )\le \varepsilon \). \(\square \)

3 Lipschitz modulus of \(\mathcal {F}\) in the Hausdorff setting

In this section we prove that \(\mathrm {lip\,}\mathcal {F}\left( U_{0},x_{0}\right) \) coincides with its counterpart in the Chebyshev setting (6), \(\mathrm {lip\,}\mathcal {F}^{T}\left( \sigma _{0},x_{0}\right) \), where from now on \(T={\mathbb {R}}^{n+1},\) \(\sigma _{0}=P_{U_{0}}\) (see Sect. 2.1) and \(\mathcal {F}^{T}:\left( {\mathbb {R}} ^{n+1}\right) ^{T}\rightrightarrows {\mathbb {R}}^{n}\) is given by

$$\begin{aligned} \mathcal {F}^{T}\left( \sigma \right) :=\left\{ x\in {\mathbb {R}} ^{n}:\left\langle a_{t},x\right\rangle \le b_{t},\text { }t\in T\right\} , \end{aligned}$$
(14)

for \(\sigma =\left( a_{t},b_{t}\right) _{t\in T}\in \left( {\mathbb {R}} ^{n+1}\right) ^{T}.\)

To do this, the following lemma constitutes a key step. In it we construct appropriate indexations of sets \(U_{1},U_{2}\in CL\left( {\mathbb {R}} ^{n+1}\right) \), denoted by \(\sigma _{1},\sigma _{2}\), which preserve the distance between them; i.e., \(d_{H}\left( U_{1},U_{2}\right) =d_{\infty }\left( \sigma _{1},\sigma _{2}\right) ,\) and we obtain Lipschitz estimates for each \(d_{\infty }\left( \sigma _{i},\sigma _{0}\right) \) in terms of \( d_{H}\left( U_{j},U_{0}\right) ,\) \(j=1,2.\)

Remark 3.1

The easily checked fact that, for any \(\sigma _{1},\sigma _{2}\in \left( {\mathbb {R}}^{n+1}\right) ^{T}\), one has

$$\begin{aligned} d_{H}\left( \mathrm {cl\,conv\,rge\,}\sigma _{1},\mathrm {cl\,conv\,rge\,} \sigma _{2}\right) \le d_{H}\left( \mathrm {cl\,rge\,}\sigma _{1},\mathrm { cl\,rge\,}\sigma _{2}\right) \le d_{\infty }\left( \sigma _{1},\sigma _{2}\right) , \end{aligned}$$
(15)

could be used to develop our study for ‘closed convex systems’; i.e., U belonging to the family of closed convex subsets of \({\mathbb {R}} ^{n+1}\) instead of \(CL\left( {\mathbb {R}}^{n+1}\right) \). The advantage of dealing with closed, not necessarily convex, sets U is that this allows more general perturbations, such as discretizations of \(U_{0}\) by grids (see, e.g., [12, Chapter 11], [21] and, more recently, [13], as well as the references therein).

For simplicity in the notation, in the sequel we write \(P_{i}\left( t\right) \) instead of \(P_{U_{i}}\left( t\right) \), for \(t\in {\mathbb {R}}^{n+1}\) and \( i=0,1,2\).

Lemma 3.1

Let \(U_{0}\in CL\left( {\mathbb {R}}^{n+1}\right) ,\) \(T:={\mathbb {R}} ^{n+1},\) and \(\sigma _{0}:=P_{0}\in \left( {\mathbb {R}}^{n+1}\right) ^{T}\). Associated with each pair \(U_{1},U_{2}\in CL\left( {\mathbb {R}}^{n+1}\right) \) let us define a pair of functions \(\sigma _{1},\sigma _{2}\in \left( \mathbb { R}^{n+1}\right) ^{T}\) as follows: for each \(t\in {\mathbb {R}}^{n+1}\),

$$\begin{aligned} \sigma _{1}\left( t\right) :=\left\{ \begin{array}{ll} P_{1}\left( t\right) , &{} \quad \text {if }t\in U_{1}\cup U_{2}, \\ (P_{1}\circ P_{0})\left( t\right) , &{} \quad \text {if }t\notin U_{1}\cup U_{2}, \end{array} \right. \end{aligned}$$
(16)

and

$$\begin{aligned} \sigma _{2}\left( t\right) :=\left\{ \begin{array}{ll} P_{2}\left( t\right) , &{} \quad \text {if }t\in U_{1}\cup U_{2}, \\ (P_{2}\circ P_{1}\circ P_{0})\left( t\right) , &{} \quad \text {if }t\notin U_{1}\cup U_{2}. \end{array} \right. \end{aligned}$$
(17)

Then we have

$$\begin{aligned} d_{\infty }\left( \sigma _{i},\sigma _{0}\right)\le & {} 3\max \{d_{H}\left( U_{1},U_{0}\right) ,d_{H}\left( U_{2},U_{0}\right) \},\text { }i=1,2, \\ d_{\infty }\left( \sigma _{1},\sigma _{2}\right)= & {} d_{H}\left( U_{1},U_{2}\right) . \end{aligned}$$

Proof

Take any pair \(U_{1},U_{2}\in CL\left( {\mathbb {R}}^{n+1}\right) ,\) and the associated functions \(\sigma _{1},\sigma _{2}\in \left( {\mathbb {R}} ^{n+1}\right) ^{T}\) defined in (16) and (17). First, let us see that

$$\begin{aligned} d_{\infty }\left( \sigma _{1},\sigma _{0}\right) \le 3\max \{d_{H}\left( U_{1},U_{0}\right) ,d_{H}\left( U_{2},U_{0}\right) \}. \end{aligned}$$
(18)

For \(t\in U_{1}\) we have

$$\begin{aligned} \left\| \sigma _{1}\left( t\right) -\sigma _{0}\left( t\right) \right\| =\left\| t-P_{0}\left( t\right) \right\| \le e\left( U_{1},U_{0}\right) \le d_{H}\left( U_{1},U_{0}\right) . \end{aligned}$$

For \(t\in U_{2}\) we have

$$\begin{aligned} \left\| \sigma _{1}\left( t\right) -\sigma _{0}\left( t\right) \right\|\le & {} \left\| \sigma _{1}\left( t\right) -\sigma _{2}\left( t\right) \right\| +\left\| \sigma _{2}\left( t\right) -\sigma _{0}\left( t\right) \right\| \\= & {} \left\| P_{1}\left( t\right) -t\right\| +\left\| t-P_{0}\left( t\right) \right\| \\\le & {} e\left( U_{2},U_{1}\right) +e\left( U_{2},U_{0}\right) \le 2d_{H}\left( U_{2},U_{0}\right) +d_{H}\left( U_{1},U_{0}\right) \\\le & {} 3\max \{d_{H}\left( U_{1},U_{0}\right) ,d_{H}\left( U_{2},U_{0}\right) \}. \end{aligned}$$

For \(t\notin U_{1}\cup U_{2}\) we have

$$\begin{aligned} \left\| \sigma _{1}\left( t\right) -\sigma _{0}\left( t\right) \right\| =\left\| P_{1}(P_{0}\left( t\right) )-P_{0}\left( t\right) \right\| \le e\left( U_{0},U_{1}\right) \le d_{H}\left( U_{1},U_{0}\right) . \end{aligned}$$

In summary, (18) holds in any case.

Now, let us check that

$$\begin{aligned} d_{\infty }\left( \sigma _{2},\sigma _{0}\right) \le 3\max \{d_{H}\left( U_{1},U_{0}\right) ,d_{H}\left( U_{2},U_{0}\right) \}. \end{aligned}$$
(19)

For \(t\in U_{1}\cup U_{2}\) the arguments are completely analogous to those of \(\sigma _{1}.\) For \(t\notin U_{1}\cup U_{2}\) we have

$$\begin{aligned} \left\| \sigma _{2}\left( t\right) -\sigma _{0}\left( t\right) \right\|\le & {} \left\| \sigma _{2}\left( t\right) -\sigma _{1}\left( t\right) \right\| +\left\| \sigma _{1}\left( t\right) -\sigma _{0}\left( t\right) \right\| \\= & {} \left\| P_{2}(P_{1}(P_{0}\left( t\right) ))-P_{1}(P_{0}\left( t\right) )\right\| +\left\| P_{1}(P_{0}\left( t\right) )-P_{0}\left( t\right) \right\| \\\le & {} e\left( U_{1},U_{2}\right) +e\left( U_{0},U_{1}\right) \le 2d_{H}\left( U_{1},U_{0}\right) +d_{H}\left( U_{2},U_{0}\right) \\\le & {} 3\max \{d_{H}\left( U_{1},U_{0}\right) ,d_{H}\left( U_{2},U_{0}\right) \}. \end{aligned}$$

So, we have established (19).

The last step consists of checking \(d_{\infty }\left( \sigma _{1},\sigma _{2}\right) =d_{H}\left( U_{1},U_{2}\right) .\) On the one hand,

$$\begin{aligned} \sup _{t\in U_{1}}\left\| \sigma _{1}\left( t\right) -\sigma _{2}\left( t\right) \right\| =\sup _{t\in U_{1}}\left\| t-P_{2}\left( t\right) \right\| =e\left( U_{1},U_{2}\right) , \end{aligned}$$

and, analogously, \(\sup _{t\in U_{2}}\left\| \sigma _{1}\left( t\right) -\sigma _{2}\left( t\right) \right\| =e\left( U_{2},U_{1}\right) .\) On the other hand, for all \(t\notin U_{1}\cup U_{2}\) we have

$$\begin{aligned} \left\| \sigma _{1}\left( t\right) -\sigma _{2}\left( t\right) \right\| =\left\| P_{1}(P_{0}\left( t\right) )-P_{2}(P_{1}(P_{0}\left( t\right) ))\right\| \le e\left( U_{1},U_{2}\right) . \end{aligned}$$

Hence \(d_{\infty }\left( \sigma _{1},\sigma _{2}\right) =\max \{e\left( U_{1},U_{2}\right) ,e\left( U_{2},U_{1}\right) \}=d_{H}\left( U_{1},U_{2}\right) .\)

\(\square \)
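
The construction (16)-(17) admits an analogous numerical illustration for finite sets. The sketch below (in Python, with ad hoc helper names, Euclidean norm, and a finite sample of indices) builds the paired indexations and compares \(d_{\infty }\left( \sigma _{1},\sigma _{2}\right) \) with \(d_{H}\left( U_{1},U_{2}\right) \):

```python
import numpy as np

def proj(U, t):
    # a selection of the metric projection of t onto the finite set U
    return U[np.argmin(np.linalg.norm(U - t, axis=1))]

def paired_sigmas(U0, U1, U2, t):
    # construction (16)-(17): P_1, resp. P_2, on the union of U1 and U2, and the
    # compositions P_1(P_0(.)), resp. P_2(P_1(P_0(.))), outside that union
    in_union = any(np.allclose(t, u) for u in np.vstack([U1, U2]))
    s1 = proj(U1, t) if in_union else proj(U1, proj(U0, t))
    s2 = proj(U2, t) if in_union else proj(U2, proj(U1, proj(U0, t)))
    return s1, s2

def excess(A, B):
    return max(np.linalg.norm(B - a, axis=1).min() for a in A)

U0 = np.array([[0.0, 0.0], [2.0, 0.0]])
U1 = np.array([[0.1, 0.0], [2.0, 0.1]])
U2 = np.array([[0.0, -0.2], [2.3, 0.0]])

ts = np.vstack([U1, U2, U0, np.random.default_rng(1).uniform(-2, 4, (200, 2))])
d_inf = max(np.linalg.norm(np.subtract(*paired_sigmas(U0, U1, U2, t))) for t in ts)
d_H   = max(excess(U1, U2), excess(U2, U1))
print(d_inf, d_H)   # both values coincide, as stated in Lemma 3.1
```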

Theorem 3.1

Let \(U_{0}\in CL\left( {\mathbb {R}}^{n+1}\right) \) and let \( \sigma _{0}:=P_{0}\in \left( {\mathbb {R}}^{n+1}\right) ^{T}.\) We have, for every \(x_{0}\in \mathcal {F}\left( U_{0}\right) \),

$$\begin{aligned} \mathrm {lip\,}\mathcal {F}\left( U_{0},x_{0}\right) =\mathrm {lip\,}\mathcal {F} ^{T}\left( \sigma _{0},x_{0}\right) . \end{aligned}$$
(20)

Proof

In order to prove ‘\(\le \)’ in (20) we assume the nontrivial case \( \mathrm {lip\,}\mathcal {F}^{T}\left( \sigma _{0},x_{0}\right) <+\infty .\) Let \(\varepsilon >0\) be arbitrarily given. By the definition of \( \mathrm {lip\,}\mathcal {F}^{T}\left( \sigma _{0},x_{0}\right) \), there exists \(\delta >0\) such that, for all \(\sigma _{1},\sigma _{2}\in \left( {\mathbb {R}} ^{n+1}\right) ^{T}\) with \(d_{\infty }\left( \sigma _{i},\sigma _{0}\right) <\delta ,\) \(i=1,2,\) and all \(x_{1}\in \mathcal {F}^{T}\left( \sigma _{1}\right) \) with \(\left\| x_{1}-x_{0}\right\| <\delta ,\) one has

$$\begin{aligned} d\left( x_{1},\mathcal {F}^{T}\left( \sigma _{2}\right) \right) \le \left( \mathrm {lip\,}\mathcal {F}^{T}\left( \sigma _{0},x_{0}\right) +\varepsilon \right) d_{\infty }\left( \sigma _{1},\sigma _{2}\right) . \end{aligned}$$
(21)

We are going to prove that, for all \(U_{1},U_{2}\in CL\left( {\mathbb {R}} ^{n+1}\right) \) with \(d_{H}\left( U_{i},U_{0}\right) <\delta /3,\) \(i=1,2,\) and all \(x_{1}\in \mathcal {F}\left( U_{1}\right) \) with \(\left\| x_{1}-x_{0}\right\| <\delta ,\) one has

$$\begin{aligned} d\left( x_{1},\mathcal {F}\left( U_{2}\right) \right) \le \left( \mathrm { lip\,}\mathcal {F}^{T}\left( \sigma _{0},x_{0}\right) +\varepsilon \right) d_{H}\left( U_{1},U_{2}\right) . \end{aligned}$$
(22)

Once this is proved, we will conclude that \(\mathrm {lip\,}\mathcal { F}\left( U_{0},x_{0}\right) \le \mathrm {lip\,}\mathcal {F}^{T}\left( \sigma _{0},x_{0}\right) .\) Let \(U_{1},\) \(U_{2},\) and \(x_{1}\) be given as above. Associated with the pair \(U_{1},U_{2}\) consider the pair of indexations \( \sigma _{1},\sigma _{2}\in \left( {\mathbb {R}}^{n+1}\right) ^{T}\) proposed in the previous lemma. Then, we have \(d_{\infty }\left( \sigma _{1},\sigma _{2}\right) =d_{H}\left( U_{1},U_{2}\right) \) and

$$\begin{aligned} d_{\infty }\left( \sigma _{i},\sigma _{0}\right) \le 3\max \{d_{H}\left( U_{1},U_{0}\right) ,d_{H}\left( U_{2},U_{0}\right) \}<\delta . \end{aligned}$$

Then, the desired inequality (22) follows straightforwardly from (21). Specifically,

$$\begin{aligned} d\left( x_{1},\mathcal {F}\left( U_{2}\right) \right)= & {} d\left( x_{1}, \mathcal {F}^{T}\left( \sigma _{2}\right) \right) \le \left( \mathrm {lip\,} \mathcal {F}^{T}\left( \sigma _{0},x_{0}\right) +\varepsilon \right) d_{\infty }\left( \sigma _{1},\sigma _{2}\right) \\= & {} \left( \mathrm {lip\,}\mathcal {F}^{T}\left( \sigma _{0},x_{0}\right) +\varepsilon \right) d_{H}\left( U_{1},U_{2}\right) . \end{aligned}$$

This finishes the proof of

$$\begin{aligned} \mathrm {lip\,}\mathcal {F}\left( U_{0},x_{0}\right) \le \mathrm {lip\,} \mathcal {F}^{T}\left( \sigma _{0},x_{0}\right) . \end{aligned}$$
(23)

To prove the opposite inequality, we suppose again the nontrivial case \( \mathrm {lip\,}\mathcal {F}\left( U_{0},x_{0}\right) <+\infty \). Then, for an arbitrary \(\varepsilon >0,\) there exists \(\delta >0\) such that for all \( U_{1},U_{2}\in CL\left( {\mathbb {R}}^{n+1}\right) \) with \(d_{H}\left( U_{i},U_{0}\right) <\delta ,\) \(i=1,2,\) and all \(x_{1}\in \mathcal {F}\left( U_{1}\right) \) with \(\left\| x_{1}-x_{0}\right\| <\delta ,\) the following inequality holds

$$\begin{aligned} d\left( x_{1},\mathcal {F}\left( U_{2}\right) \right) \le \left( \mathrm { lip\,}\mathcal {F}\left( U_{0},x_{0}\right) +\varepsilon \right) d_{H}\left( U_{1},U_{2}\right) . \end{aligned}$$
(24)

For the same \(\delta \), consider any pair \(\sigma _{1},\sigma _{2}\in \left( {\mathbb {R}}^{n+1}\right) ^{T}\) with \(d_{\infty }\left( \sigma _{i},\sigma _{0}\right) <\delta ,\) \(i=1,2,\) and any \(x_{1}\in \mathcal {F}^{T}\left( \sigma _{1}\right) \) with \(\Vert x_{1}-x_{0}\Vert < \delta \). Then, appealing to (15) we have

$$\begin{aligned} d_{H}\left( \mathrm {cl\,rge\,}\sigma _{i},\mathrm {cl\,rge\,}\sigma _{0}\right) \le d_{\infty }\left( \sigma _{i},\sigma _{0}\right) <\delta ,\ i=1,2, \end{aligned}$$

and we conclude, from (15) and (24), that

$$\begin{aligned} d\left( x_{1},\mathcal {F}^{T}\left( \sigma _{2}\right) \right)= & {} d\left( x_{1},\mathcal {F}\left( \mathrm {cl\,rge\,}\sigma _{2}\right) \right) \\\le & {} \left( \mathrm {lip\,}\mathcal {F}\left( U_{0},x_{0}\right) +\varepsilon \right) d_{H}\left( \mathrm {cl\,rge\,}\sigma _{1},\mathrm { cl\,rge\,}\sigma _{2}\right) \\\le & {} \left( \mathrm {lip\,}\mathcal {F}\left( U_{0},x_{0}\right) +\varepsilon \right) d_{\infty }\left( \sigma _{1},\sigma _{2}\right) . \end{aligned}$$

Hence

$$\begin{aligned} \mathrm {lip\,}\mathcal {F}^{T}\left( \sigma _{0},x_{0}\right) \le \mathrm { lip\,}\mathcal {F}\left( U_{0},x_{0}\right) , \end{aligned}$$

and we are done. \(\square \)

The following corollary characterizes the Aubin property of \(\mathcal {F}\) at \(\left( U_{0},x_{0}\right) \) in terms of the well-known strong Slater condition (SSC, in brief), which in this context reads as follows: \( U_{0}\in CL\left( {\mathbb {R}}^{n+1}\right) \) satisfies the SSC when there exists \(\widehat{x}\in {\mathbb {R}}^{n}\) (called an SS point of \(U_{0}\) ) such that \(\sup _{\left( a,b\right) \in U_{0}}\left( \left\langle a, \widehat{x}\right\rangle -b\right) <0\).

Corollary 3.1

Let \(\left( U_{0},x_{0}\right) \in \mathrm {gph}\mathcal {F} \). The following statements are equivalent:

(i) \(\mathcal {F}\) has the Aubin property at \(\left( U_{0},x_{0}\right) \);

(ii) \(U_{0}\) satisfies the SSC;

(iii) \(0_{n+1}\notin {\mathrm{cl}}\,{\mathrm{conv}}\,U_{0}\).

Proof

From the previous theorem, it is clear that \(\mathcal {F}\) has the Aubin property at \(\left( U_{0},x_{0}\right) \) if and only if \(\mathcal {F}^{T}\) enjoys the same property at \(\left( \sigma _{0},x_{0}\right) ,\) with \(\sigma _{0}{:}{=}P_{0},\) which is known to be equivalent to the SSC for \(\sigma _{0};\) i.e., there exists \(\widehat{x}\in {\mathbb {R}}^{n}\) such that \(\sup _{t\in T}\left( \left\langle a_{t}^{0},\widehat{x}\right\rangle -b_{t}^{0}\right) <0,\) with \(\sigma _{0}=\left( a_{t}^{0},b_{t}^{0}\right) _{t\in T}\in \left( {\mathbb {R}}^{n+1}\right) ^{T}\) (see, e.g., [12, Theorem 6.1] and [5, Corollary 5]). Finally, the SSC for \(\sigma _{0}\) is trivially equivalent to the same property for \(U_{0}.\) This proves the equivalence between (i) and (ii). The equivalence with (iii) comes from [12, Theorem 6.1]. \(\square \)

In the particular case when \(\left\{ a\in {\mathbb {R}}^{n}:\left( a,b\right) \in U_{0}\text { for some }b\in {\mathbb {R}}\right\} \) is bounded, we can provide a data-based formula for \( \mathrm {lip\,}\mathcal {F}\left( U_{0},x_{0}\right) \) by applying the following result.

Theorem 3.2

(see [3, Theorem 1]) Let \(\left( \sigma _{0},x_{0}\right) \in \mathrm {gph}\mathcal {F}^{T}\), with \(\sigma _{0}=\) \(\left( a_{t}^{0},b_{t}^{0}\right) _{t\in T}\in \left( {\mathbb {R}} ^{n+1}\right) ^{T}\). Assume that \(\left\{ a_{t}^{0},\text { }t\in T\right\} \) is bounded. Then

$$\begin{aligned} \mathrm {lip}\,\mathcal {F}^{T}\left( \sigma _{0},x_{0}\right) =\frac{ \left\| x_{0}\right\| +1}{d_{*}\left( 0_{n},C_{0}\right) }, \end{aligned}$$

where \(d_{*}\) represents the distance associated with \(\Vert \cdot \Vert _{*}\) and

$$\begin{aligned} C_{0}:=\left\{ u\in {\mathbb {R}}^{n}:\left( u,\left\langle u,x_{0}\right\rangle \right) \in \mathrm {cl\,conv\,}\left\{ \left( a_{t}^{0},b_{t}^{0}\right) ,~t\in T\right\} \right\} . \end{aligned}$$

Remark 3.2

To the best of the authors’ knowledge, the validity of the previous theorem without the boundedness assumption on \(\left\{ a_{t}^{0},\,t\in T\right\} \) remains an open problem. The theorem includes the cases \(\mathrm {lip\,}\mathcal {F}^{T}\left( \sigma _{0},x_{0}\right) =0\), i.e. \(C_{0}=\emptyset ,\) under the convention \(d_{*}\left( 0_{n},\emptyset \right) =+\infty ,\) and \(\mathrm {lip\,}\mathcal {F} ^{T}\left( \sigma _{0},x_{0}\right) =+\infty ,\) i.e. \(0_{n}\in C_{0}\) (or, equivalently, the SSC fails at \(\sigma _{0}\)). When \(\left\{ a_{t}^{0},\,t\in T\right\} \) is bounded, \( \mathrm {lip\,}\mathcal {F}^{T}\left( \sigma _{0},x_{0}\right) =0\) is equivalent to the fact that \(x_{0}\) is an SS point of \(\sigma _{0},\) i.e. \(\sup _{t\in T}\left( \left\langle a_{t}^{0},x_{0}\right\rangle -b_{t}^{0}\right) <0\) (see [3, Theorem 1] for details). This is no longer true when \(\left\{ a_{t}^{0},\,t\in T\right\} \) is unbounded, as the following example shows.

Example 3.1

Consider the system, in \({\mathbb {R}}\) with the usual metric, \( \sigma _{0}{:}{=}\{ tx\le 1/t,~t\in T{:}{=}[ 1,+\infty [ \} .\) One can easily check that \(x_{0}{:}{=}0\) is not an SS point of \( \sigma _{0}\) whereas \(C_{0}=\emptyset \). In order to see that \(\mathrm {lip}\,\mathcal {F}^{T}\left( \sigma _{0},x_{0}\right) =0\), consider any \(\sigma _{i}=\left( a_{t}^{i},b_{t}^{i}\right) _{t\in T}\in \left( {\mathbb {R}}^{n+1}\right) ^{T}\) with \(d_{\infty }\left( \sigma _{i},\sigma _{0}\right)<\varepsilon <1\), for \(i=1,2\); then \( \mathcal {F}^{T}\left( \sigma _{i}\right) =] -\infty ,u\left( \sigma _{i}\right) ] \), where \(u\left( \sigma _{i}\right) {:}{=}\inf _{t\ge 1/\varepsilon }\left( b_{t}^{i}/a_{t}^{i}\right) \le 0\). Setting \(\delta {:}{=}d_{\infty }\left( \sigma _{1},\sigma _{2}\right) \) and writing \(u\left( \sigma _{2}\right) =\lim _{r\rightarrow \infty }\left( b_{t_{r}}^{2}/a_{t_{r}}^{2}\right) \) for some sequence \(\left\{ t_{r}\right\} _{r\in \mathbb {N}}\subset [ 1/\varepsilon ,+\infty [\), we can easily check that

$$\begin{aligned} u\left( \sigma _{1}\right) -u\left( \sigma _{2}\right)\le & {} \underset{ r\rightarrow \infty }{\limsup }\left( \frac{b_{t_{r}}^{1}}{a_{t_{r}}^{1}}- \frac{b_{t_{r}}^{2}}{a_{t_{r}}^{2}}\right) \\= & {} \underset{r\rightarrow \infty }{\limsup }\left( \frac{b_{t_{r}}^{1}\left( a_{t_{r}}^{2}-a_{t_{r}}^{1}\right) }{a_{t_{r}}^{1}a_{t_{r}}^{2}}+\frac{ b_{t_{r}}^{1}-b_{t_{r}}^{2}}{a_{t_{r}}^{2}}\right) \\\le & {} \frac{2\varepsilon \delta }{\left( \frac{1}{\varepsilon }-\varepsilon \right) ^{2}}+\frac{ \delta }{\frac{1}{\varepsilon }-\varepsilon }=\frac{\varepsilon +\varepsilon ^{3}}{\left( 1-\varepsilon ^{2}\right) ^{2}}\delta . \end{aligned}$$

By symmetry we can replace \(u\left( \sigma _{1}\right) -u\left( \sigma _{2}\right) \) with \(\left| u\left( \sigma _{1}\right) -u\left( \sigma _{2}\right) \right| \) in the previous inequality. Accordingly, for any \(x_{1}\in \mathcal {F}^{T}\left( \sigma _{1}\right) \), one has

$$\begin{aligned} d\left( x_{1},\mathcal {F}^{T}\left( \sigma _{2}\right) \right) \le \left| u\left( \sigma _{1}\right) -u\left( \sigma _{2}\right) \right| \le \frac{\varepsilon +\varepsilon ^{3}}{\left( 1-\varepsilon ^{2}\right) ^{2}} d_{\infty }\left( \sigma _{1},\sigma _{2}\right) . \end{aligned}$$

Letting \(\varepsilon \searrow 0\) and recalling (7) we conclude \(\mathrm {lip}\mathcal {F}^{T}\left( \sigma _{0},x_{0}\right) =0\).

The following corollary comes straightforwardly from the two previous theorems.

Corollary 3.2

Let \(U_{0}\in CL\left( {\mathbb {R}}^{n+1}\right) \) be such that \(\left\{ a\in {\mathbb {R}}^{n}: \exists b \in \mathbb {R} \hbox { with } \left( a,b\right) \in U_{0}\right\} \) is bounded. Then,

$$\begin{aligned} \mathrm {lip\,}\mathcal {F}\left( U_{0},x_{0}\right) =\frac{\left\| x_{0}\right\| +1}{d_{*}\left( 0_{n},C_{U_{0}}\right) }, \end{aligned}$$

where

$$\begin{aligned} C_{U_{0}}{:}{=}\left\{ u\in {\mathbb {R}}^{n}:\left( u,\left\langle u,x_{0}\right\rangle \right) \in \mathrm {cl\,conv\,}U_{0}\right\} . \end{aligned}$$
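
As an illustration of the previous formula, take \(n=1,\) \(U_{0}{:}{=}\left\{ \left( 1,1\right) ,\left( -1,1\right) \right\} \) (corresponding to the system \(\left\{ x\le 1,\,-x\le 1\right\} \), with \(\mathcal {F}\left( U_{0}\right) =\left[ -1,1\right] \)), and \(x_{0}{:}{=}1.\) Then \(\mathrm {cl\,conv\,}U_{0}=\left[ -1,1\right] \times \left\{ 1\right\} \) and \(C_{U_{0}}=\left\{ u\in {\mathbb {R}}:\left( u,u\right) \in \mathrm {cl\,conv\,}U_{0}\right\} =\left\{ 1\right\} ,\) so that

$$\begin{aligned} \mathrm {lip\,}\mathcal {F}\left( U_{0},x_{0}\right) =\frac{\left| x_{0}\right| +1}{d_{*}\left( 0,\left\{ 1\right\} \right) }=2. \end{aligned}$$

This value is consistent, for instance, with the perturbed sets \(U_{\varepsilon }{:}{=}\left\{ \left( 1+\varepsilon ,1-\varepsilon \right) ,\left( -1,1\right) \right\} ,\) \(\varepsilon >0,\) for which \(d_{H}\left( U_{\varepsilon },U_{0}\right) =\varepsilon ,\) while \(\mathcal {F}\left( U_{\varepsilon }\right) =\left[ -1,\frac{1-\varepsilon }{1+\varepsilon }\right] \) and, hence, \(d\left( x_{0},\mathcal {F}\left( U_{\varepsilon }\right) \right) =\frac{2\varepsilon }{1+\varepsilon }.\)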

4 Application to convex inequalities

This section is devoted to applying the previous results about linear systems to the convex inequality (2). Throughout this section \({{\mathbb {R}}}^{n}\) is endowed with the Euclidean norm, also denoted by \(\left\| \cdot \right\| \) for simplicity, and \({\mathbb {B}}\) is the corresponding closed unit ball.

We consider the feasible set mapping \(\mathcal {L}:\Gamma \rightrightarrows {{\mathbb {R}}}^{n}\) assigning to each function \(f\in \Gamma \) its zero-(sub)level set

$$\begin{aligned} \mathcal {L}\left( f\right) {:}{=}\left\{ x\in {\mathbb {R}}^{n}:f\left( x\right) \le 0\right\} . \end{aligned}$$

It is well-known that, for each \(f\in \Gamma ,\) \(\mathcal {L}\left( f\right) \subset {\mathbb {R}}^{n}\) is a closed convex set and, as commented in Sect. 1, via linearization, it can be written as the feasible set of a linear semi-infinite inequality system of the form (3); i.e.,

$$\begin{aligned} \mathcal {L}\left( f\right) =\left\{ x\in {\mathbb {R}}^{n}:\left\langle a,x\right\rangle \le \left\langle a,z\right\rangle -f\left( z\right) ,\text { ~~}z\in {\mathbb {R}}^{n}\text {,\ }a\in \partial f\left( z\right) \right\} . \end{aligned}$$
(25)

First, let us see that we can reduce the index set of system (25) to a certain subset of \(\mathrm {gph}\partial f\), provided that \(\mathcal {L} \left( f\right) \ne \emptyset .\)

Lemma 4.1

Let \(f\in \Gamma \) with \(\mathcal {L}\left( f\right) \ne \emptyset \) and let \(X\subset {\mathbb {R}}^{n}\) be a neighborhood of \(\mathcal {L}\left( f\right) \) (i.e., X contains an open set containing \( \mathcal {L}\left( f\right) \)). Then,

$$\begin{aligned} \mathcal {L}\left( f\right) =\left\{ x\in {\mathbb {R}}^{n}:\left\langle a,x\right\rangle \le \left\langle a,z\right\rangle -f\left( z\right) ,\text { ~~}z\in X,\ a\in \partial f\left( z\right) \right\} . \end{aligned}$$
(26)

Proof

The inclusion ‘\(\subset \)’ is trivial from (25). Let us prove ‘\( \supset \)’ reasoning by contradiction. Assume the existence of \(x_{0}\in {\mathbb {R}}^{n}\) such that

$$\begin{aligned} \left\langle a,x_{0}\right\rangle \le \left\langle a,z\right\rangle -f\left( z\right) ,~\text {whenever }a\in \partial f\left( z\right) ,\text { with }z\in X, \end{aligned}$$
(27)

and \(x_{0}\notin \mathcal {L}\left( f\right) ,\) which entails \(f\left( x_{0}\right) >0.\) First, observe that \(x_{0}\notin X;\) otherwise, taking \( z=x_{0} \) in (27), we would have \(\left\langle a,x_{0}\right\rangle \le \left\langle a,x_{0}\right\rangle -f\left( x_{0}\right) ,\) for any \(a\in \partial f\left( x_{0}\right) \), yielding the contradiction \(f\left( x_{0}\right) \le 0.\) Once we know that \(x_{0}\notin X,\) let \(x_{1}\) be the Euclidean projection of \(x_{0}\) on \(\mathcal {L}\left( f\right) \subset X,\) and define \(x^{\lambda }{:}{=}\left( 1-\lambda \right) x_{1}+\lambda x_{0}\), which does not belong to \(\mathcal {L}\left( f\right) \) for any \(0<\lambda <1. \) Observe that, for each \(0<\lambda <1,\) \(x^{\lambda }\) also satisfies the linear inequalities of the right-hand side in (27), i.e.,

$$\begin{aligned} \left\langle a,x^{\lambda }\right\rangle \le \left\langle a,z\right\rangle -f\left( z\right) ,\quad \text {whenever }a\in \partial f\left( z\right) ,\quad \text {with }z\in X. \end{aligned}$$

Then, arguing as in the previous paragraph, \(x^{\lambda }\notin X\) for \( 0<\lambda <1\), which is a contradiction since, X being a neighborhood of \(\mathcal {L}\left( f\right) \ni x_{1},\) we can choose \( x^{\lambda }\) sufficiently close to \(x_{1}\) to ensure \(x^{\lambda }\in X\). \(\square \)

Remark 4.1

The assumption \(\mathcal {L}\left( f\right) \ne \emptyset \) in the previous lemma is not superfluous. Just consider \(f:\mathbb { R\longrightarrow }{\mathbb {R}}\) given by \(f\left( x\right) {:}{=}e^{x}\) and any nonempty and bounded from below subset \(X\subset {\mathbb {R}}.\) Indeed, if \(\beta {:}{=}\inf X,\) then for each \(z\in X\) the corresponding inequality in (26) reads \(e^{z}x\le e^{z}z-e^{z},\) i.e., \(x\le z-1,\) so the set in the right-hand side of (26) equals \(] -\infty ,\beta -1] \ne \emptyset =\mathcal {L}\left( f\right) .\)

From now on \(f_{0}\in \Gamma \) is our nominal convex function, with \( \mathcal {L}(f_{0})\) assumed to be nonempty, \(\alpha _{0}>0\) is a fixed scalar, and \(E_{0}\subset {\mathbb {R}}^{n}\) is the \(\alpha _{0}\)-enlargement of the nominal feasible set \(\mathcal {L}(f_{0});\) i.e.,

$$\begin{aligned} E_{0}{:}{=}\,\mathcal {L}(f_{0})+\alpha _{0}{\mathbb {B}}. \end{aligned}$$
(28)

Observe that \(E_{0}\) is a closed convex set. As a consequence of the previous lemma, we have

$$\begin{aligned} \mathcal {L}\left( f_{0}\right) =\left\{ x\in {\mathbb {R}}^{n}:\left\langle a,x\right\rangle \le \left\langle a,z\right\rangle -f_{0}\left( z\right) , \text {~~}a\in \partial f_{0}\left( z\right) ,\,z\in E_{0}\right\} . \end{aligned}$$

Note that, in the previous expression, \(E_{0}\) cannot be replaced with \( \mathcal {L}\left( f_{0}\right) ,\) as the reader can easily see by just considering \(f_{0}:\mathbb {R\longrightarrow }{\mathbb {R}}\) given by \( f_{0}\left( x\right) {:}{=}x^{2}\) (in this case \(\mathcal {L}\left( f_{0}\right) =\left\{ 0\right\} \) and indexing only by \(z=0\) yields the trivial inequality \(0\le 0\), whose solution set is the whole of \({\mathbb {R}}\)). Going further, the following result ensures that we can keep the same \(E_{0}\) in the linear representation of \(\mathcal {L}(f)\) provided that \(f\in \Gamma \) is close enough to \(f_{0}\) in relation to the pseudo-distance \(d_{E_{0}}\) defined in (12).

Theorem 4.1

Let \(f_{0}\in \Gamma \) be such that

$$\begin{aligned} m{:}{=}\inf \left\{ f_{0}\left( x\right) :x\in \mathrm {bd}\,E_{0}\right\} >0, \end{aligned}$$

with \(E_{0}\) defined in (28). Then

$$\begin{aligned} \mathcal {L}(f)=\{x\in {\mathbb {R}}^{n}:\left\langle u,x\right\rangle \le \left\langle u,z\right\rangle -f\left( z\right) ,\text {~}z\in E_{0},\, u\in \partial f\left( z\right) \}, \end{aligned}$$

whenever \(f\in \Gamma \) is such that \(d_{E_{0}}\left( f,f_{0}\right) <m/2.\)

Proof

Consider any convex function \(f\in \Gamma \) such that \(d_{E_{0}}\left( f,f_{0}\right) <m/2\). Let us see that

$$\begin{aligned} \mathcal {L}\left( f\right) \subset \mathrm {int}\,E_{0}. \end{aligned}$$

Then, the statement of the theorem follows from the previous lemma.

For any \(x\in \mathrm {bd}\,E_{0}\) we have

$$\begin{aligned} f\left( x\right) =f_{0}\left( x\right) +f\left( x\right) -f_{0}\left( x\right) >m-\frac{m}{2}=\frac{m}{2}, \end{aligned}$$
(29)

while, for \(x\in \mathcal {L}\left( f_{0}\right) \) one has

$$\begin{aligned} f\left( x\right) \le f\left( x\right) -f_{0}\left( x\right) <\frac{m}{2}. \end{aligned}$$

Now, arguing by contradiction, assume that there exists \(x_{1}\in \mathcal {L} \left( f\right) \diagdown \mathrm {int}\,E_{0},\) take any \(x_{0}\in \mathcal {L}\left( f_{0}\right) \subset \mathrm {int}\,E_{0}\), and let \( \lambda \in ] 0,1]\) be such that

$$\begin{aligned} \left( 1-\lambda \right) x_{0}+\lambda x_{1}\in \mathrm {bd}\,E_{0}. \end{aligned}$$

Then we reach a contradiction with (29):

$$\begin{aligned} f\left( \left( 1-\lambda \right) x_{0}+\lambda x_{1}\right) \le \left( 1-\lambda \right) f\left( x_{0}\right) +\lambda f\left( x_{1}\right) \le \left( 1-\lambda \right) \frac{m}{2}<\frac{m}{2}. \end{aligned}$$

\(\square \)

Remark 4.2

With the notation of the previous theorem, \(m>0\) whenever \(\mathcal {L}\left( f_{0}\right) \) is bounded, since in such a case m is the minimum of the continuous function \(f_{0}\) on the compact set \(\mathrm {bd}\,E_{0},\) where the function is positive.
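
For instance, for \(f_{0}\left( x\right) {:}{=}x^{2}-1\) (so that \(\mathcal {L}\left( f_{0}\right) =\left[ -1,1\right] \)) and \(\alpha _{0}{:}{=}1,\) we get \(E_{0}=\left[ -2,2\right] ,\) \(\mathrm {bd}\,E_{0}=\left\{ -2,2\right\} \) and \(m=f_{0}\left( \pm 2\right) =3>0,\) so Theorem 4.1 guarantees that the linear representation of \(\mathcal {L}\left( f\right) \) over \(E_{0}\) remains valid for every \(f\in \Gamma \) with \(d_{E_{0}}\left( f,f_{0}\right) <3/2.\)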

The following example shows that the assertion of the previous theorem may fail when \(m=0.\)

Example 4.1

Let us consider the continuously differentiable convex function \( g(x,y)=\frac{x^{2}}{y}\) defined on \(S{:}{=}\{(x,y):y>|x|\}\subset {\mathbb {R}}^{2}\). Since its gradient is bounded by \(\sqrt{5} \) on S, the Mean Value theorem allows us to see that g is \(\sqrt{5}\)-Lipschitz on S, and Theorem 4.1.7 in [8] establishes that

$$\begin{aligned} f_{0}(x,y){:}{=}\inf _{\left| u\right| <v}\left\{ g(u,v)+\sqrt{5} ||(x-u,y-v)||\right\} \end{aligned}$$

is a convex extension of g to the whole space \({\mathbb {R}} ^{2}\), and the Lipschitz constant \(\sqrt{5}\) is preserved by the extension (see also Theorem 1 in [9]). The function \(f_{0}\) is not everywhere differentiable.

First, let us see that, with the notation of the previous theorem, \( m=0.\) It is easy to see that \(f_{0}\) only takes nonnegative values, \(\mathcal {L}\left( f_{0}\right) =\left\{ 0\right\} \times {\mathbb {R}} _{+},\) and \(\left( \alpha _{0},r\right) \in \mathrm {bd}\,E_{0}\) for all \(r\in \mathbb {N}\). Moreover, for all \(r>\alpha _{0}\) one has \(f_{0}\left( \alpha _{0},r\right) =\frac{\alpha _{0}^{2}}{r} \rightarrow 0\) as \(r\rightarrow \infty ,\) which shows that \(m=0\). Now, for all \(\varepsilon >0\) and all \(\left( x,y\right) \in {\mathbb {R}}^{2}\), define

$$\begin{aligned} f_{\varepsilon }\left( x,y\right) =f_{0}\left( x,y\right) -\varepsilon \text { and }\widetilde{f}_{\varepsilon }\left( x,y\right) =f_{\varepsilon }\left( x,y\right) +\left[ \left| x\right| -\alpha _{0}-\varepsilon \right] _{+}, \end{aligned}$$

where \(\left[ z\right] _{+}{:}{=}\max \left\{ z,0\right\} \) denotes the positive part of \(z\in {\mathbb {R}}\). Observe that \([ \left| x\right| -\alpha _{0}-\varepsilon ] _{+}\) is nothing else but the distance from \(\left( x,y\right) \) to the strip \(\left[ -\alpha _{0}-\varepsilon ,\alpha _{0}+\varepsilon \right] \times {\mathbb {R}}\) which is a neighborhood of \(E_{0}\). Therefore, \(\widetilde{f}_{\varepsilon }\) and \(f_{\varepsilon }\) coincide on \(\left[ -\alpha _{0}-\varepsilon ,\alpha _{0}+\varepsilon \right] \times {\mathbb {R}}\) and their subdifferentials coincide on \(E_{0}\). Then

$$\begin{aligned} d_{E_{0}}\left( \widetilde{f}_{\varepsilon },f_{0}\right) =d_{E_{0}}\left( f_{\varepsilon },f_{0}\right) =\varepsilon . \end{aligned}$$

Reasoning by contradiction, let us suppose that the conclusion of Theorem 4.1 still holds for this \(f_{0}\); i.e., that there exists \(\eta >0\) such that for \(\varepsilon <\eta \) we have

$$\begin{aligned} \mathcal {L}\left( f_{\varepsilon }\right)= & {} \{x\in {\mathbb {R}}^{2}:\left\langle u,x\right\rangle \le \left\langle u,z\right\rangle -f_{\varepsilon }\left( z\right) ,\text {~}u\in \partial f_{\varepsilon }\left( z\right) ,\,z\in E_{0}\} \nonumber \\= & {} \{x\in {\mathbb {R}}^{2}:\left\langle u,x\right\rangle \le \left\langle u,z\right\rangle -\widetilde{f} _{\varepsilon }\left( z\right) ,\text {~}u\in \partial \widetilde{f} _{\varepsilon }\left( z\right) ,\,z\in E_{0}\} \\= & {} \mathcal {L}(\widetilde{f}_{\varepsilon }). \nonumber \end{aligned}$$
(30)

Then, on the one hand, \(\left| x\right| >\alpha _{0}+2\varepsilon \) implies \(\widetilde{f}_{\varepsilon }\left( x,y\right) >-\varepsilon +\varepsilon =0,\) which entails

$$\begin{aligned} \mathcal {L}\left( \widetilde{f}_{\varepsilon }\right) \subset \left[ -\alpha _{0}-2\varepsilon ,\alpha _{0}+2\varepsilon \right] \times {\mathbb {R}}. \end{aligned}$$

On the other hand,

$$\begin{aligned} \mathcal {L}\left( f_{\varepsilon }\right) \supset \left\{ \left( x,y\right) \in {\mathbb {R}}^{2}:y=\frac{x^{2}}{\varepsilon },\,\left| x\right| >\varepsilon \right\} . \end{aligned}$$

In particular, \(\mathcal {L}\left( f_{\varepsilon }\right) \) contains points outside \(\left[ -\alpha _{0}-2\varepsilon ,\alpha _{0}+2\varepsilon \right] \times {\mathbb {R}},\) contradicting (30).

Remark 4.3

In relation to the previous example, we have:

1) Another possible convex extension of g to \({\mathbb {R}}^{2}\) is given by

$$\begin{aligned} \widetilde{f}_{0}\left( x,y\right) =\sup \limits _{\left| u\right| <v}\left\{ g\left( u,v\right) +\left\langle \nabla g\left( u,v\right) ,\left( x-u,y-v\right) \right\rangle \right\} , \end{aligned}$$

yielding \(\widetilde{f}_{0}\left( x,y\right) =2\left| x\right| -y\) if \(\left| x\right| \ge y.\)

2) Note that \(\left( x,y\right) \mapsto \dfrac{x^{2}}{y}\) cannot be extended from \(\left\{ \left( x,y\right) \in {\mathbb {R}} ^{2}\mid y>0\right\} \) (where it is also convex) to the whole plane (see [22]).

3) The values of the extension \(f_{0}\) outside \(S\cup \{(0,0)\}\) are positive, but they are not used in the reasoning above.

The following lemma constitutes a key tool for our purposes. It concerns sets in \({\mathbb {R}}^{n+1}\) of the form

$$\begin{aligned} U(f,K){:}{=}\left\{ \left( a,b\right) :b=\left\langle a,z\right\rangle -f\left( z\right) ,\,a\in \partial f\left( z\right) ,\,z\in K\right\} , \end{aligned}$$

with \(K\subset {\mathbb {R}}^{n}\) and \(f\in \Gamma .\)

Lemma 4.2

Let \(K_{1},K_{2}\subset {\mathbb {R}}^{n}\) be compact sets, \( f_{1},f_{2}\in \Gamma \), and consider the sets \(U(f_{i},K_{i}),\ i=1,2.\) Then,

$$\begin{aligned} d_{H}\left( U(f_{1},K_{1}),U(f_{2},K_{2})\right) \le \rho \, d_{H}\left( \partial f_{1}\left( K_{1}\right) ,\partial f_{2}\left( K_{2}\right) \right) +d_{K_{1}\cup K_{2}}\left( f_{1},f_{2}\right) , \end{aligned}$$
(31)

where \(\rho {:}{=}\max \{1+\left\| x\right\| :x\in K_{1}\cup K_{2}\}.\)

Proof

Take \(K_{i},f_{i},U(f_{i},K_{i}),\) for \(i=1,2,\) and \(\rho ,\) as in the statement of the lemma. Let us establish the inequality

$$\begin{aligned} e\left( U(f_{1},K_{1}),U(f_{2},K_{2})\right) \le \rho \,d_{H}\left( \partial f_{1}\left( K_{1}\right) ,\partial f_{2}\left( K_{2}\right) \right) +d_{K_{1}\cup K_{2}}\left( f_{1},f_{2}\right) , \end{aligned}$$
(32)

which yields, by symmetry, the aimed inequality (31). For simplicity, in this proof we use the notation

$$\begin{aligned} \xi := d_{H}\left( \partial f_{1}\left( K_{1}\right) ,\partial f_{2}\left( K_{2}\right) \right) . \end{aligned}$$

Since \(\partial f_{1}\left( K_{1}\right) \) and \(\partial f_{2}\left( K_{2}\right) \) are compact subsets in \({\mathbb {R}}^{n}\) (see again [19, Theorem 24.7]), \(\xi \) is finite, and in particular we have that

$$\begin{aligned} \partial f_{1}\left( K_{1}\right) \subset \partial f_{2}\left( K_{2}\right) +\xi {\mathbb {B}}. \end{aligned}$$
(33)

Now, take any \(\left( a_{1},b_{1}\right) \in U(f_{1},K_{1}),\) and let us prove the existence of \(\left( a_{2},b_{2}\right) \in U(f_{2},K_{2})\) such that

$$\begin{aligned} \left\| \left( a_{1},b_{1}\right) -\left( a_{2},b_{2}\right) \right\| \le \rho \,\xi +d_{K_{1}\cup K_{2}}\left( f_{1},f_{2}\right) . \end{aligned}$$
(34)

By definition, \(\left( a_{1},b_{1}\right) \in U(f_{1},K_{1})\) entails the existence of \(z_{1}\in K_{1}\) such that

$$\begin{aligned} a_{1}\in \partial f_{1}\left( z_{1}\right) \text { and }b_{1}=\left\langle a_{1},z_{1}\right\rangle -f_{1}\left( z_{1}\right) , \end{aligned}$$

and, by (33), we can write

$$\begin{aligned} a_{1}=a_{2}+\xi w,\text { for some }a_{2}\in \partial f_{2}\left( z_{2}\right) ,\,z_{2}\in K_{2},\text { and }\left\| w\right\| \le 1. \end{aligned}$$

Define

$$\begin{aligned} b_{2}{:}{=}\left\langle a_{2},z_{2}\right\rangle -f_{2}\left( z_{2}\right) , \end{aligned}$$

and let us establish (34) for such an element \(\left( a_{2},b_{2}\right) \). On the one hand,

$$\begin{aligned} \left\langle a_{1},z_{1}\right\rangle -f_{1}\left( z_{1}\right)= & {} \left\langle a_{2},z_{1}\right\rangle +\xi \left\langle w,z_{1}\right\rangle -f_{1}\left( z_{1}\right) \\= & {} \left\langle a_{2},z_{1}\right\rangle -f_{2}\left( z_{1}\right) +\xi \left\langle w,z_{1}\right\rangle +f_{2}\left( z_{1}\right) -f_{1}\left( z_{1}\right) \\\le & {} \left\langle a_{2},z_{2}\right\rangle -f_{2}\left( z_{2}\right) +\xi \left\| z_{1}\right\| +f_{2}\left( z_{1}\right) -f_{1}\left( z_{1}\right) \\\le & {} \left\langle a_{2},z_{2}\right\rangle -f_{2}\left( z_{2}\right) +\xi \rho +d_{K_{1}\cup K_{2}}\left( f_{1},f_{2}\right) , \end{aligned}$$

where for the first inequality we have applied the fact that \(a_{2}\in \partial f_{2}\left( z_{2}\right) .\)

On the other hand, since \(a_{1}\in \partial f_{1}\left( z_{1}\right) ,\) we have

$$\begin{aligned} \left\langle a_{1},z_{1}\right\rangle -f_{1}\left( z_{1}\right)\ge & {} \left\langle a_{1},z_{2}\right\rangle -f_{1}\left( z_{2}\right) \\= & {} \left\langle a_{2},z_{2}\right\rangle +\xi \left\langle w,z_{2}\right\rangle -f_{1}\left( z_{2}\right) \\= & {} \left\langle a_{2},z_{2}\right\rangle -f_{2}\left( z_{2}\right) +\xi \left\langle w,z_{2}\right\rangle +f_{2}\left( z_{2}\right) -f_{1}\left( z_{2}\right) \\\ge & {} \left\langle a_{2},z_{2}\right\rangle -f_{2}\left( z_{2}\right) -\xi \rho -d_{K_{1}\cup K_{2}}\left( f_{1},f_{2}\right) . \end{aligned}$$

So, we have established

$$\begin{aligned} \left| b_{1}-b_{2}\right| \le \xi \rho +d_{K_{1}\cup K_{2}}\left( f_{1},f_{2}\right) . \end{aligned}$$

Finally, we have (recall that \(\rho \ge 1\))

$$\begin{aligned} \left\| \left( a_{1},b_{1}\right) -\left( a_{2},b_{2}\right) \right\|= & {} \max \{\left\| a_{1}-a_{2}\right\| ,\left| b_{1}-b_{2}\right| \} \\\le & {} \max \{\xi ,\rho \xi +d_{K_{1}\cup K_{2}}\left( f_{1},f_{2}\right) \} \\= & {} \rho \xi +d_{K_{1}\cup K_{2}}\left( f_{1},f_{2}\right) , \end{aligned}$$

which yields (34) and the proof is complete. \(\square \)
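As a rough numerical sanity check of (31), one may compare both sides for two simple differentiable functions on a common finite grid in dimension n = 1, where the norm \(\max \{\left\| a_{1}-a_{2}\right\| ,\left| b_{1}-b_{2}\right| \}\) used at the end of the proof coincides with the Chebyshev distance on \({\mathbb {R}}^{2}\). The sketch below works under these simplifying assumptions (finite grids replace the exact sets; all identifiers are ours); it is an illustration, not a proof.

    import numpy as np
    from scipy.spatial.distance import cdist

    def hausdorff(A, B):
        """Hausdorff distance between finite point sets, with the max-norm on each point."""
        D = cdist(A, B, metric='chebyshev')
        return max(D.min(axis=1).max(), D.min(axis=0).max())

    K = np.linspace(-1.0, 1.0, 201).reshape(-1, 1)             # K_1 = K_2 = [-1, 1], sampled
    f1, g1 = (lambda z: float(z @ z)),       (lambda z: 2.0 * z)
    f2, g2 = (lambda z: float(z @ z) + 0.1), (lambda z: 2.0 * z)   # a vertical shift of f1

    def U_points(f, g):   # same construction as in the sketch after the definition of U(f, K)
        return np.array([np.append(g(z), float(g(z) @ z - f(z))) for z in K])

    lhs = hausdorff(U_points(f1, g1), U_points(f2, g2))
    rho = 1.0 + np.abs(K).max()                                # rho = max{1 + |x| : x in K}
    rhs = rho * hausdorff(g1(K), g2(K)) + max(abs(f1(z) - f2(z)) for z in K)
    print(lhs <= rhs + 1e-12, lhs, rhs)                        # True, both sides about 0.1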

In order to present the announced results about the Lipschitzian behavior of the feasible set of convex inequalities, we consider the set \(U(f_{0},E_{0})\) and appeal to the constant

$$\begin{aligned} \kappa _{0}{:}{=}\frac{\left\| x_{0}\right\| +1}{d\left( 0_{n},C_{U(f_{0},E_{0})}\right) }, \end{aligned}$$
(35)

where, as in Corollary 3.2,

$$\begin{aligned} C_{U(f_{0},E_{0})}{:}{=}\left\{ u\in {\mathbb {R}}^{n}:\left( u,\left\langle u,x_{0}\right\rangle \right) \in \mathrm {cl\,conv\,}U(f_{0},E_{0})\right\} . \end{aligned}$$
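Since \(U(f_{0},E_{0})\) enters (35) only through \(C_{U(f_{0},E_{0})}\), a crude numerical approximation of \(\kappa _{0}\) can be obtained from a finite sample of \(U(f_{0},E_{0})\): writing the sampled points as rows \((a_{i},b_{i})\), the distance \(d\left( 0_{n},C_{U(f_{0},E_{0})}\right) \) is the optimal value of the convex program that minimizes \(\left\| \sum _{i}\lambda _{i}a_{i}\right\| \) over the unit simplex subject to \(\sum _{i}\lambda _{i}b_{i}=\left\langle \sum _{i}\lambda _{i}a_{i},x_{0}\right\rangle \). The sketch below is our own illustration of this computation (cvxpy is an assumed dependency and all names are ours); it is not a procedure taken from the paper.

    import numpy as np
    import cvxpy as cp

    def kappa0(U, x0):
        """Approximate (35) from a finite sample U of U(f_0, E_0), given as rows (a_i, b_i)."""
        A, b = U[:, :-1], U[:, -1]
        lam = cp.Variable(U.shape[0], nonneg=True)       # weights of a convex combination
        u = lam @ A                                      # the a-part of the combination
        prob = cp.Problem(cp.Minimize(cp.norm(u, 2)),
                          [cp.sum(lam) == 1, lam @ b == u @ x0])
        prob.solve()
        dist = prob.value if prob.value is not None else float("inf")
        # dist approximates d(0_n, C_U); the degenerate cases (dist = 0, or C_U empty) are
        # mapped to inf and 0 respectively, in line with the formula (35).
        return (np.linalg.norm(x0) + 1.0) / dist if dist > 1e-12 else float("inf")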

Remark 4.4

If \(\mathcal {L}(f_{0})\) is nonempty and bounded, [19, Theorem 24.7] ensures that \(\partial f_{0}\left( E_{0}\right) \) is compact, from which we easily deduce that \(U(f_{0},E_{0})\) is compact as well. In more detail, the b-coordinates of the points in \(U(f_{0},E_{0})\) are clearly bounded because of the compactness of \(\partial f_{0}\left( E_{0}\right) \) and the continuity of \(f_{0}\). As for the closedness of \(U(f_{0},E_{0}),\) take \(U(f_{0},E_{0})\ni \left( a^{r},b^{r}\right) \rightarrow \left( a,b\right) \in {\mathbb {R}}^{n+1}\) with \(a^{r}\in \partial f_{0}\left( z^{r}\right) \), \(b^{r}=\left\langle a^{r},z^{r}\right\rangle -f_{0}\left( z^{r}\right) ,\) \(z^{r}\in E_{0}\) (compact); then, passing to an appropriate subsequence, we may assume that \(z^{r}\rightarrow z\in E_{0},\) whence, by [19, Theorem 24.5], \(a\in \partial f_{0}\left( z\right) \) and \(b=\left\langle a,z\right\rangle -f_{0}\left( z\right) ,\) yielding \(\left( a,b\right) \in U(f_{0},E_{0}).\) Once we know that \(U(f_{0},E_{0})\) is compact, its convex hull is compact too (a well-known fact), so the closure operation in \(\mathrm {cl\,conv\,}U(f_{0},E_{0})\) may be dropped in the definition of \(C_{U(f_{0},E_{0})}.\)

The boundedness assumption on \(\mathcal {L}(f_{0})\) in the previous remark is not superfluous for the compactness of \(\mathrm {conv\,}U(f_{0},E_{0}),\) as we see by considering \(f_{0}:{\mathbb {R}}\longrightarrow {\mathbb {R}}\) given by

$$\begin{aligned} f_{0}\left( x\right) =\left\{ \begin{array}{ll} \arctan x, &{} \quad \text {if }x\le 0, \\ x, &{} \quad \text {if }x>0. \end{array} \right. \end{aligned}$$

Clearly \(E_{0}=] -\infty ,\alpha _{0}] \) and

$$\begin{aligned} U(f_{0},E_{0})=\left\{ \left( \left( 1+z^{2}\right) ^{-1},\left( 1+z^{2}\right) ^{-1}z-\arctan z\right) :z\le 0\right\} , \end{aligned}$$

which satisfies \(\left( 0,\pi /2\right) \in \left( \mathrm {cl\,} U(f_{0},E_{0})\right) \backslash U(f_{0},E_{0}).\) Note that \(\left( 0,\pi /2\right) \notin \mathrm {conv\,}U(f_{0},E_{0}),\) since the projection of \( U(f_{0},E_{0})\) on the first coordinate is ]0, 1] .
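A quick numerical look at this example (a mere illustration on our part) confirms that the sampled points of \(U(f_{0},E_{0})\) approach \(\left( 0,\pi /2\right) \) as \(z\rightarrow -\infty \), while their first coordinate remains strictly positive.

    import numpy as np

    z = -np.logspace(0, 6, 7)            # z = -1, -10, ..., -10^6
    a = 1.0 / (1.0 + z**2)
    b = a * z - np.arctan(z)
    print(np.column_stack([a, b])[-1])   # close to (0, pi/2), yet a > 0
    print(a.min() > 0.0, np.isclose(b[-1], np.pi / 2, atol=1e-5))   # True True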

The next result concerns the Lipschitz behavior of \(\mathcal {L}\) around \( f_{0}\) with respect to the distance in \(\Gamma \) given by

$$\begin{aligned} \mathbf {d}(f_{1},f_{2}){:}{=}\sum _{k=1}^{\infty }2^{-k}\min \Big \{1,\sup _{\left\| x\right\| \le k}|f_{1}(x)-f_{2}(x)|\Big \}. \end{aligned}$$

This metric equips \(\Gamma \) with the topology of uniform convergence on bounded sets of \({\mathbb {R}}^{n}\), so it involves the values of the functions on the whole space \({\mathbb {R}}^{n}\). In contrast, the subsequent Theorem 4.2 appeals to the pseudo-distance \(d_{E}\), where E is an appropriate enlargement of \(\mathcal {L}(f_{0}),\) the latter assumed to be bounded.
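A truncated numerical evaluation of \(\mathbf {d}\) can be sketched as follows (for n = 1; the truncation level and the sampling of the balls \(\left\| x\right\| \le k\) are ad hoc choices of ours, so the code is only indicative).

    import numpy as np

    def d_metric(f1, f2, terms=20, samples=2001):
        """Truncated version of the series defining d(f_1, f_2) for functions of one variable."""
        total = 0.0
        for k in range(1, terms + 1):
            xs = np.linspace(-k, k, samples)              # the ball of radius k in R is [-k, k]
            sup_k = np.max(np.abs(f1(xs) - f2(xs)))
            total += 2.0 ** (-k) * min(1.0, sup_k)
        return total

    # Two convex functions differing by the constant 0.05 are at distance about 0.05.
    print(d_metric(lambda x: x**2, lambda x: x**2 + 0.05))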

Proposition 4.1

Let \((f_{0},x_{0})\in \mathrm {gph}\mathcal {L}.\) The following statements are equivalent:

(i) \(\mathcal {L}:\left( \Gamma ,\mathbf {d}\right) \rightrightarrows {\mathbb {R}}^{n}\) has the Aubin property at \((f_{0},x_{0}),\)

(ii) There exists \(z_{0}\in {\mathbb {R}}^{n}\) such that \(f_{0}\left( z_{0}\right) <0,\)

(iii) \(0_{n+1}\notin \mathrm {cl\ conv\,}U(f_{0},E_{0});\) in other words, \( 0_{n}\notin C_{U(f_{0},E_{0})}.\)

Moreover, if \(\mathcal {L}(f_{0})\) is bounded, \(\left( iii\right) \) reads as

\((iii^{\prime })\) \(0_{n+1}\notin \mathrm {conv\,}U(f_{0},E_{0}).\)

Proof

\((i)\Leftrightarrow (ii).\) This follows from Theorems 1 and 2 in [10], applied to the simplest case in which the abstract constraint set C is the whole space \({\mathbb {R}}^{n}\) and the system consists of a single convex inequality.

\((ii)\Leftrightarrow (iii)\) According to Corollary 3.1 we only have to prove that the existence of \(z_{0}\in {\mathbb {R}}^{n}\) such that \( f_{0}\left( z_{0}\right) <0\) is equivalent to the SSC at \( U(f_{0},E_{0}).\)

Take \(z_{0}\in {\mathbb {R}}^{n}\) such that \(f_{0}\left( z_{0}\right) <0\) and let us see that \(z_{0}\) is an SS point of \(U(f_{0},E_{0}).\) Observe that

$$\begin{aligned} 0>f_{0}\left( z_{0}\right) \ge f_{0}\left( z\right) +\left\langle a,z_{0}-z\right\rangle ,\text { whenever }a\in \partial f_{0}\left( z\right) , \,z\in E_{0}; \end{aligned}$$

yielding

$$\begin{aligned} \left\langle a,z_{0}\right\rangle \le \left\langle a,z\right\rangle -f_{0}\left( z\right) +f_{0}\left( z_{0}\right) ,\text { for all }a\in \partial f_{0}\left( z\right) ,\text { with }z\in E_{0}. \end{aligned}$$

So, \(\sup _{\left( {\begin{array}{c}a\\ v\end{array}}\right) \in U(f_{0},E_{0})}\left( \left\langle a,z_{0}\right\rangle -v\right) \le f_{0}\left( z_{0}\right) <0.\) Conversely, let \(z_{0}\) be an SS point of \(U(f_{0},E_{0})\). Lemma 4.1 yields \(z_{0}\in \mathcal {L}(f_{0})\subset E_{0}\) and, so, taking any \(a\in \partial f_{0}\left( z_{0}\right) ,\) we have that

$$\begin{aligned} \left\langle a,z_{0}\right\rangle <\left\langle a,z_{0}\right\rangle -f_{0}\left( z_{0}\right) , \end{aligned}$$

which entails \(f_{0}\left( z_{0}\right) <0.\)

Finally, if \(\mathcal {L}(f_{0})\) is bounded, \((iii)\Leftrightarrow (iii^{\prime })\) follows trivially from Remark 4.4. \(\square \)

The next result describes the Lipschitz behavior of the feasible set of the convex inequality (2) around a given \(f_{0}\in \Gamma \) and a given solution \(x_{0}\in \mathcal {L}(f_{0})\). Roughly speaking, the variation of feasible points is controlled by the variations of the functions and of their subdifferentials. We point out that the constant \(\kappa _{0}\) therein is (conceptually) computable, since it depends only on the nominal data \(f_{0}\) and \(x_{0}.\)

Theorem 4.2

Assume that \(\mathcal {L}(f_{0})\) is nonempty and bounded, and \(0_{n}\notin C_{U(f_{0},E_{0})}\). Let \(\kappa >\kappa _{0}\) with \(\kappa _{0}\) defined in (35), \(E{:}{=}E_{0}+\alpha {\mathbb {B}}\) with \(\alpha >0,\) \(E_{0}\) defined in (28), and \(\rho {:}{=}\max \{1+\left\| x\right\| :x\in E\}.\) Then, there exists \(\delta _{0}>0\) such that \(0<\delta \le \delta _{0}\) implies

$$\begin{aligned} d\left( x_{1},\mathcal {L}(f_{2})\right) \le \kappa \left( \rho d_{H}\left( \partial f_{1}\left( E_{0}+\sqrt{\delta }{\mathbb {B}}\right) ,\partial f_{2}\left( E_{0}+\sqrt{\delta }{\mathbb {B}}\right) \right) +d_{E}\left( f_{1},f_{2}\right) \right) , \end{aligned}$$

provided that \(f_{1},f_{2}\in \Gamma \), \(x_{1}\in \mathcal {L}(f_{1}),\) \( d_{_{E}}\left( f_{i},f_{0}\right) \le \delta ,\) \(i=1,2,\) and \(\left\| x_{1}-x_{0}\right\| \le \delta .\)

Proof

Take \(\kappa >\kappa _{0}.\) Corollary 3.2 ensures the existence of \(\delta _{1}>0\) such that \(d_{H}\left( U_{i},U(f_{0},E_{0})\right) \le \delta _{1},\) \(U_{i}\in CL\left( {\mathbb {R}}^{n+1}\right) ,\) \(i=1,2,\) and \( \left\| x_{1}-x_{0}\right\| \le \delta _{1},\) \(x_{1}\in \mathcal {F} \left( U_{1}\right) ,\) imply

$$\begin{aligned} d\left( x_{1},\mathcal {F}\left( U_{2}\right) \right) \le \kappa d_{H}\left( U_{1},U_{2}\right) . \end{aligned}$$
(36)

On the other hand, according to Corollary 2.1\(\left( iii\right) ,\) choose \(\delta _{2}>0\) such that \(d_{_{E}}\left( f,f_{0}\right) \le \delta \le \delta _{2}\) implies

$$\begin{aligned} d_{H}\left( \partial f\left( E_{0}+\sqrt{\delta }{\mathbb {B}}\right) ,\partial f_{0}\left( E_{0}\right) \right) \le \dfrac{\delta _{1}}{2\rho }. \end{aligned}$$

Let \(m>0\) be as in Theorem 4.1 (see Remark 4.2) and consider

$$\begin{aligned} \delta _{0}=\min \left\{ \dfrac{\delta _{1}}{2},\delta _{2},\frac{m}{2} ,\alpha ^{2}\right\} . \end{aligned}$$

Now take \(0<\delta \le \delta _{0},\) and \(f_{1},f_{2}\in \Gamma \), with \(d_{_{E}}\left( f_{i},f_{0}\right) \le \delta .\) Consider also the sets \(U\left( f_{i},E_{0}+\sqrt{\delta }{\mathbb {B}}\right) ,\ i=1,2.\) Appealing to Lemma 4.2, and taking into account that \(E_{0}+ \sqrt{\delta }{\mathbb {B}}\subset E\), we have, for \(i=1,2,\)

$$\begin{aligned}&d_{H}\left( U\left( f_{i},E_{0}+\sqrt{\delta }{\mathbb {B}}\right) ,U(f_{0},E_{0})\right) \\&\quad \le \max _{x\in E_{0}+\sqrt{\delta }{\mathbb {B}}}(1+\left\| x\right\| )d_{H}\left( \partial f_{i}\left( E_{0}+\sqrt{\delta }{\mathbb {B}}\right) ,\partial f_{0}\left( E_{0}\right) \right) +d_{E_{0}+\sqrt{\delta }{\mathbb {B}} }\left( f_{i},f_{0}\right) \\&\quad \le \rho \dfrac{\delta _{1}}{2\rho }+d_{E}\left( f_{i},f_{0}\right) \le \dfrac{\delta _{1}}{2}+\dfrac{\delta _{1}}{2}=\delta _{1}. \end{aligned}$$

Moreover, since \(d_{_{E}}\left( f_{i},f_{0}\right) \le \delta \le m/2,\) we have from Theorem 4.1

$$\begin{aligned} \mathcal {F}\left( U\left( f_{i},E_{0}+\sqrt{\delta }{\mathbb {B}}\right) \right) =\mathcal {L}\left( f_{i}\right) ,\,i=1,2. \end{aligned}$$

Consequently, applying Lemma 4.2 to the sets \(U\left( f_{i},E_{0}+\sqrt{\delta }{\mathbb {B}}\right) ,\) \(i=1,2,\) we conclude from (36)

$$\begin{aligned} d\left( x_{1},\mathcal {L}(f_{2})\right)\le & {} \kappa d_{H}\left( U\left( f_{1},E_{0}+\sqrt{\delta }{\mathbb {B}}\right) ,U\left( f_{2},E_{0}+\sqrt{ \delta }{\mathbb {B}}\right) \right) \\\le & {} \kappa \left( \rho d_{H}\left( \partial f_{1}\left( E_{0}+\sqrt{ \delta }{\mathbb {B}}\right) ,\partial f_{2}\left( E_{0}+\sqrt{\delta }\mathbb {B }\right) \right) +d_{E}\left( f_{1},f_{2}\right) \right) . \end{aligned}$$

\(\square \)

Remark 4.5

Observe that the inequality established in Theorem 4.2 is based on (36) and Lemma 4.2, the latter being used to provide an upper bound for \(d_{H}\left( U\left( f_{1},E_{0}+\sqrt{\delta } {\mathbb {B}}\right) , U\left( f_{2},E_{0}+\sqrt{\delta } {\mathbb {B}}\right) \right) \). This upper bound is a weighted sum of the Hausdorff distance between the subdifferential images \(\partial f_{1}(E_{0}+\sqrt{\delta } \mathbb {B})\) and \(\partial f_{2}(E_{0}+\sqrt{\delta } \mathbb {B})\) and the restricted distance \(d_{E}\left( f_{1},f_{2}\right) \). Thus, the distance \(\mathbf {d}(f_{1},f_{2})\), which requires the values of \(f_{1}\) and \(f_{2}\) at every point of the whole space \({\mathbb {R}}^{n}\), is not involved. Both ingredients concern the enlargements \(E_{0}+\sqrt{\delta } {\mathbb {B}}\) and E of the nominal feasible set \(\mathcal {L}(f_{0}),\) which can be taken arbitrarily close to \(\mathcal {L}(f_{0})\) by choosing \(\alpha _{0}\) and \(\alpha \) sufficiently small. The need for two different scalars, \(\alpha _{0}\) and \(\alpha ,\) comes from the proof, which requires the inclusion \(E_{0}+\sqrt{\delta } {\mathbb {B}}\subset E.\)

Our approach in this section strongly relies on the homogeneous linearization of the involved functions by means of sets of subgradients (see Theorem 4.1), as well as on their stability as studied in Sect. 2.2. Therefore, it is a linear approach in its essence.

4.1 The convex differentiable case

Throughout this subsection we assume that our nominal function \(f_{0}\in \Gamma \) is differentiable everywhere, so that we write \(\nabla f_{0}\) instead of \(\partial f_{0}\). The following theorem provides the counterpart of Corollary 2.1\(\left( iii\right) \) under differentiability of \(f_{0}\).

Theorem 4.3

Let \(K_{0}\subset {\mathbb {R}}^{n}\) be a compact set, \(\alpha >0, \) and \(K{:}{=}K_{0}+\alpha {\mathbb {B}}.\) Given \(\varepsilon >0,\) there exists \(\delta >0\) such that, for any \(f\in \Gamma \) with \(d_{K}\left( f,f_{0}\right) \le \delta ,\) one has

$$\begin{aligned} d_{H}\left( \partial f\left( K_{0}\right) ,\nabla f_{0}\left( K_{0}\right) \right) \le \varepsilon . \end{aligned}$$

Proof

From Corollary 2.1(i) there exists \(\delta _{1}>0 \) such that

$$\begin{aligned} \partial f\left( K_{0}+\delta _{1}{\mathbb {B}}\right) \subset \nabla f_{0}\left( K_{0}\right) +\varepsilon {\mathbb {B}}, \end{aligned}$$

provided that \(d_{K}\left( f,f_{0}\right) \le \delta _{1},\) \(f\in \Gamma \). In particular, \(\partial f\left( K_{0}\right) \subset \nabla f_{0}\left( K_{0}\right) +\varepsilon {\mathbb {B}}\), if \(d_{K}\left( f,f_{0}\right) \le \delta _{1},\) \(f\in \Gamma \).

Let us prove the existence of \(\delta _{2}>0\) such that

$$\begin{aligned} \nabla f_{0}\left( K_{0}\right) \subset \partial f\left( K_{0}\right) +\varepsilon {\mathbb {B}},\text { if }d_{K}\left( f,f_{0}\right) \le \delta _{2},\,f\in \Gamma . \end{aligned}$$

Having obtained \(\delta _2\), just take \(\delta {:}{=}\min \{\delta _{1},\delta _{2}\}\) to finish the proof.

Arguing by contradiction, assume the existence of a sequence \(\{f_{r}\}\subset \Gamma ,\) with \(d_{K}\left( f_{r},f_{0}\right) \le \frac{1}{r},\) such that \(\nabla f_{0}\left( K_{0}\right) \not \subset \partial f_{r}\left( K_{0}\right) +\varepsilon {\mathbb {B}}\), for all r. For each r,  let \(x_{r}\in K_{0}\) be such that

$$\begin{aligned} \nabla f_{0}\left( x_{r}\right) \notin \partial f_{r}\left( K_{0}\right) +\varepsilon {\mathbb {B}}. \end{aligned}$$
(37)

The compactness of \(K_{0},\) and consequently of \(\nabla f_{0}\left( K_{0}\right) \) (since \(\nabla f_{0}\) is continuous), allows us to assume, after passing to an appropriate subsequence, that \(\{x_{r}\}\) and \(\{\nabla f_{0}\left( x_{r}\right) \}\) converge to some \(\overline{x}\in K_{0}\) and to \(\nabla f_{0}\left( \overline{x}\right) ,\) respectively (see [19, Theorem 24.4]). This fact, together with (37), yields the existence of \(r_{0}\in \mathbb {N}\) such that

$$\begin{aligned} \left( \nabla f_{0}\left( \overline{x}\right) +\frac{\varepsilon }{2}\mathbb { B}\right) \cap \partial f_{r}\left( K_{0}\right) =\emptyset ,\text { for } r\ge r_{0}. \end{aligned}$$
(38)

On the other hand, [19, Theorem 24.5] guarantees, for r large enough,

$$\begin{aligned} \partial f_{r}\left( \overline{x}\right) \subset \nabla f_{0}\left( \overline{x}\right) +\frac{\varepsilon }{2}{\mathbb {B}}, \end{aligned}$$

which, since \(\emptyset \ne \partial f_{r}\left( \overline{x}\right) \subset \partial f_{r}\left( K_{0}\right) \) (recall that \(\overline{x}\in K_{0}\)), contradicts (38). \(\square \)
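For a concrete one-dimensional illustration of Theorem 4.3 (our own, not taken from the text), let \(f_{0}(x)=x^{2}\) and \(f_{t}(x)=x^{2}+t\left| x\right| \) with \(t>0\): then \(d_{K}\left( f_{t},f_{0}\right) =t\max _{x\in K}\left| x\right| \), while on \(K_{0}=[-1,1]\) one has \(\nabla f_{0}\left( K_{0}\right) =[-2,2]\) and \(\partial f_{t}\left( K_{0}\right) =[-2-t,2+t]\), so their Hausdorff distance equals t and vanishes together with the perturbation. The sketch below checks this numerically on finite grids.

    import numpy as np

    def hausdorff_1d(A, B):
        """Hausdorff distance between finite subsets of the real line."""
        A, B = np.asarray(A), np.asarray(B)
        e = lambda P, Q: float(np.max(np.min(np.abs(P[:, None] - Q[None, :]), axis=1)))
        return max(e(A, B), e(B, A))

    K0 = np.linspace(-1.0, 1.0, 2001)
    grad_f0 = 2.0 * K0                                          # nabla f_0(K_0) = [-2, 2]
    for t in [0.1, 0.01, 0.001]:
        sub_ft = np.concatenate([2.0 * K0 + t * np.sign(K0),    # gradients of f_t off the origin
                                 np.linspace(-t, t, 201)])      # the interval partial f_t(0)
        print(t, hausdorff_1d(sub_ft, grad_f0))                 # about t in each case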

Following the proof of Theorem 4.2 and appealing to the previous theorem instead of Corollary 2.1\(\left( iii\right) ,\) we derive the following corollary. Recall that \(E_{0}\) and \( \kappa _{0}\) are defined in (28) and (35), respectively. Moreover, the differentiability of \(f_{0}\) entails

$$\begin{aligned} U\left( f_{0},E_{0}\right) =\left\{ \left( \nabla f_{0}\left( z\right) ,\left\langle \nabla f_{0}\left( z\right) ,z\right\rangle -f_{0}\left( z\right) \right) :z\in E_{0}\right\} . \end{aligned}$$

Corollary 4.1

Assume that \(\mathcal {L}(f_{0})\) is nonempty and bounded, and \(0_{n}\notin C_{U(f_{0},E_{0})}\). Let \(\kappa >\kappa _{0}\), \(\alpha >0\), \( E{:}{=}E_{0}+\alpha {\mathbb {B}}\), and \(\rho _{0}{:}{=}\max \{1+\left\| x\right\| :x\in E_{0}\}\). Then there exists \(\delta >0\) such that, for any \(f_{1},f_{2}\in \Gamma \) with \(d_{_{E}}\left( f_{i},f_{0}\right) \le \delta \), \(i=1,2\), and any \(x_{1}\in \mathcal {L}(f_{1})\), with \(\left\| x_{1}-x_{0}\right\| \le \delta \), one has

$$\begin{aligned} d\left( x_{1},\mathcal {L}(f_{2})\right) \le \kappa \left( \rho _{0}d_{H}\left( \partial f_{1}\left( E_{0}\right) ,\partial f_{2}\left( E_{0}\right) \right) +d_{E_{0}}\left( f_{1},f_{2}\right) \right) . \end{aligned}$$
(39)

Proof

(sketch) Take the same \(\delta _{1}\) as in the proof of Theorem 4.2. From Theorem 4.3 take \(\delta _{2}>0\) such that

$$\begin{aligned} d_{E}\left( f,f_{0}\right) \le \delta _{2}\Rightarrow d_{H}\left( \partial f\left( E_{0}\right) ,\nabla f_{0}\left( E_{0}\right) \right) \le \frac{ \delta _{1}}{2\rho _{0}}. \end{aligned}$$

Set \(\delta _{0}=\min \left\{ \dfrac{\delta _{1}}{2},\delta _{2},\dfrac{m}{2} \right\} ,\) where m comes from Theorem 4.1. Then, as in the proof of Theorem 4.2, Lemma 4.2 applied with \( K_{1}=K_{2}=E_{0}\) entails \(d_{H}\left( U\left( f_{i},E_{0}\right) ,U\left( f_{0},E_{0}\right) \right) \le \delta _{1},\) whenever \(d_{_{E}}\left( f_{i},f_{0}\right) \le \delta ,\ i=1,2.\) Finally, appealing again to Lemma 4.2 together with (36), we obtain (39). \(\square \)