Abstract
We consider a robust version of the full information best choice problem: there is model uncertainty, represented by a set of priors, about the measure driving the observed process. We propose a general construction of the set of priors that we use to solve the problem in the setting of Riedel (Econometrica 77(3):857–908, 2009). As in the classical case, it is optimal to stop if the current observation is a running maximum that exceeds certain decreasing thresholds. We characterize the history-dependent minimizing measure and perform sensitivity analysis on two examples.

Notes
Thus allowing for the original case of independent and uniformly distributed random variables.
See Levy (2015).
The existence of the prior is established in Theorem 2 in Riedel (2009).
Indeed, if the distribution \(F=F_{X_t}\) were not uniform, a simple transformation would suffice: \(X'_t = F(X_t)\), which is uniformly distributed whenever F is continuous.
Although we use i.i.d. random variables, the arguments that follow can readily be adjusted to the case where the random variables are not identically distributed (but are still independent).
Again, careful reading of the arguments that follow shows that one does not lose generality by fixing an identical set of beliefs at every step.
See Theorem 2 in Riedel (2009).
See Section 4 in Riedel (2009).
For details see Section 4 in Riedel (2009).
All the graphs and data for the tables were produced using Wolfram Mathematica (Wolfram Research 2015).
Lemma 4 in Riedel (2009).
With respect to the \(L^1\) metric.
Note that the first claim could have been formulated for functions on any interval [a, b], and with the total weight of the densities equal to any number (as opposed to 1); we chose not to do so for the sake of readability.
References
Artzner P, Delbaen F, Eber JM, Heath D (1999) Coherent measures of risk. Math Finance 9(3):203–228
Babaioff M, Immorlica N, Kempe D, Kleinberg R (2008) Online auctions and generalized secretary problems. ACM SIGecom Exchanges 7(2):1–11
Bojdecki T (1978) On optimal stopping of a sequence of independent random variables—probability maximizing approach. Stoch Process Appl 6(2):153–163
Campbell G (1982) The maximum of a sequence with prior information. Seq Anal 1(3):177–191
Chudjakow T, Riedel F (2013) The best choice problem under ambiguity. Econ Theory 54(1):77–97
Dehaene S (2003) The neural basis of the Weber–Fechner law: a logarithmic mental number line. Trends Cognit Sci 7(4):145–147
Ferguson TS (1989) Who solved the secretary problem? Stat Sci 4(3):282–289
Föllmer H, Schied A (2011) Stochastic finance: an introduction in discrete time. Walter de Gruyter, Berlin
Freeman P (1983) The secretary problem and its extensions: a review. Int Stat Rev 51(2):189–206
Gilbert JP, Mosteller F (1966) Recognizing the maximum of a sequence. J Am Stat Assoc 61(313):35–73
Gilboa I, Schmeidler D (1989) Maxmin expected utility with non-unique prior. J Math Econ 18(2):141–153
Gnedin A (2007) Recognising the last record of a sequence. Stoch Int J Probab Stoch Process 79(3–4):199–209
Huber PJ (1981) Robust statistics. Wiley, New York
Kuchta M (2017) Iterated full information secretary problem. Math Methods Oper Res 86(2):277–292
Levy H (2015) Stochastic dominance: investment decision making under uncertainty. Springer, Berlin
Maccheroni F, Marinacci M, Rustichini A (2006) Ambiguity aversion, robustness, and the variational representation of preferences. Econometrica 74(6):1447–1498
Parlar M, Perry D, Stadje W (2007) Optimal shopping when the sales are on—a Markovian full-information best-choice problem. Stoch Models 23(3):351–371
Petruccelli JD (1980) On a best choice problem with partial information. Ann Stat 8(5):1171–1174
Porosiński Z (1987) The full-information best choice problem with a random number of observations. Stoch Process Appl 24(2):293–307
Wolfram Research (2015) Mathematica, Version 10.3
Riedel F (2004) Dynamic coherent risk measures. Stoch Process Appl 112(2):185–200
Riedel F (2009) Optimal stopping with multiple priors. Econometrica 77(3):857–908
Samuels S (1982) Exact solutions for the full information best choice problem. Stat Dept Mimeo Ser 82–17
Samuels SM (1981) Minimax stopping rules when the underlying distribution is uniform. J Am Stat Assoc 76(373):188–197
Skarupski M (2019) Full-information best choice game with hint. Math Methods Oper Res 90(2):1–16
Tamaki M (1986) A full-information best-choice problem with finite memory. J Appl Probab 23(3):718–735
Tamaki M (2009) Optimal choice of the best available applicant in full-information models. J Appl Probab 46(4):1086–1099
Financial support by the CRC 1283 “Taming Uncertainty...” as well as the Center for Mathematical Economics at Bielefeld University is gratefully acknowledged. This research was also supported by the “Robust Finance...” research group which took place at Bielefeld’s Center for Interdisciplinary Studies (ZIF) in 2015.
Appendices
Appendix A: Applicability of the theory of optimal stopping under multiple priors
For the theory of optimal stopping under multiple priors (Riedel 2009) to be applied to processes with bounded payoffs, the set of priors \(\mathcal {P}\) has to satisfy three assumptions. It should be weakly compact in \(L^1\), and all the measures within the set \(\mathcal {P}\) should be equivalent. The set \(\mathcal {P}\) should also be time consistent: for any two measures, the measure that allows the agent to “switch” between them at some (possibly random) time must also be in the set \(\mathcal {P}\); see Assumptions A2–A4 in Riedel (2009). The following lemma shows that the set \(\mathcal {P}\) satisfies those assumptions once we impose mild conditions on the set \(\mathcal {V}^A\).
Lemma 2
Assume the set \(\mathcal {V}^A\) satisfies:
1. \(v_0 \in \mathcal {V}^A\);
2. all the densities \(\dfrac{d v_{a}}{d v_0}\), \(a\in A\), are strictly positive and bounded;
3. the set \(\mathcal {V}^A\) is weakly closed in \(L^1(S,\mathcal {S},v_0)\).
Then the set of measures \(\mathcal {P}(\mathcal {V}^A)\) satisfies Assumptions A2, A3 and A4 in Riedel (2009).
Proof
Assumption A2 is satisfied because all the densities in \(\mathcal {V}^A\) are strictly positive and bounded.
For the weak compactness it is sufficient to show that the set \(\mathcal {P}\) is closed and that its densities are dominated by a uniformly integrable random variable. Since all the densities are bounded, the latter is obvious. Closedness is a consequence of the third assumption in the formulation of the lemma: weakly closed sets are also strongly closed, so closedness is inherited, via pasting, from the weak closedness in each period. To see this, it suffices to recall that a sequence of positive functions convergent in \(L^1\) has a subsequence that converges pointwise (almost everywhere); with this, the closedness can be proven by the classical argument (that the limit of a sequence of elements of the set also belongs to the set).
It remains to prove the time consistency. Due to the predictability of each of the functions \(a_k\) this is straightforward: let \(P^a\) and \(P^b\) be two measures with densities \(\left. \frac{dP^a}{dP_0} \right| _{\mathcal {F}_t}= \prod _{s=1}^t \frac{d v_{a_s}}{d v_0}\) and \(\left. \frac{dP^b}{dP_0} \right| _{\mathcal {F}_t}= \prod _{s=1}^t \frac{d v_{b_s}}{d v_0}\) respectively, and let \(\tau \) be a stopping time. Define \(c_t=a_t\) when \(t\le \tau \) and \(c_t=b_t\) when \(t>\tau \). The measure resulting from property A4 coincides with the measure \(P^c\) with density \(\left. \frac{dP^c}{dP_0} \right| _{\mathcal {F}_t}= \prod _{s=1}^t \frac{d v_{c_s}}{d v_0}\), which obviously belongs to \(\mathcal {P}\); this is exactly what we needed to prove (see Footnote 13). \(\square \)
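For concreteness, the pasted density in the proof above can be written out explicitly (our unpacking of the definition of \(c\)):
\[ \left. \frac{dP^c}{dP_0} \right| _{\mathcal {F}_t} = \prod _{s=1}^{t\wedge \tau } \frac{d v_{a_s}}{d v_0} \cdot \prod _{s=(t\wedge \tau )+1}^{t} \frac{d v_{b_s}}{d v_0}, \]
with empty products equal to 1: up to time \(\tau \) the one-step beliefs follow the a-components, and afterwards the b-components.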
The theory of optimal stopping under multiple priors guarantees the existence of a stopping time \(\tau ^* \in \mathcal {T}\) such that:
\[ \inf _{P \in \mathcal {P}} E^{P}\left[ \mathcal {E}_{\tau ^*}\right] = \sup _{\tau \in \mathcal {T}}\, \inf _{P \in \mathcal {P}} E^{P}\left[ \mathcal {E}_{\tau }\right] , \]
where \(\mathcal {E}_t\) is a bounded payoff process adapted to the filtration \(\mathcal {F}_t\). The minimal optimal stopping time \(\tau ^*\) is given by
\[ \tau ^* = \inf \lbrace t \ge 0 : U_t = \mathcal {E}_t \rbrace , \]
where U is the recursively defined multiple priors value process:
\[ U_T = \mathcal {E}_T, \qquad U_t = \max \left( \mathcal {E}_t,\ \mathop {\mathrm {ess\,inf}}_{P\in \mathcal {P}} E^{P}\left[ U_{t+1}\,\middle |\,\mathcal {F}_t\right] \right) . \]
Furthermore, the theory guarantees the existence of a measure \(Q^* \in \mathcal {P}\) such that the multiple priors value process of the optimal stopping problem coincides with the value process of the (single-prior) optimal stopping problem for the process \(\mathcal {E}_t\) under the measure \(Q^*\); this makes it possible to reduce multiple priors problems to classical ones. For further details see Theorems 1 and 2 in Riedel (2009).
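To make the recursion concrete, here is a minimal numerical sketch of this backward induction for a toy finite process; the outcomes, the two candidate one-step laws and the stopping payoff are our own illustrative choices and are not objects from the paper.

```python
# Toy backward induction for optimal stopping under multiple priors:
# U_t = max(payoff, inf over one-step priors of E[U_{t+1}]).

OUTCOMES = (0.2, 0.5, 0.9)
PRIORS = [(1/3, 1/3, 1/3), (0.5, 0.3, 0.2)]  # two candidate one-step laws (assumed)
T = 4                                        # horizon

def robust_value(t, history):
    """Multiple priors value U_t given the observed history."""
    z = (history[-1] - 0.1 * t) if history else 0.0  # illustrative stopping payoff
    if t == T:
        return z
    cont = min(                              # worst case over the one-step priors
        sum(q[i] * robust_value(t + 1, history + (OUTCOMES[i],))
            for i in range(len(OUTCOMES)))
        for q in PRIORS
    )
    return max(z, cont)                      # stop iff payoff beats robust continuation

print(robust_value(0, ()))                   # value of the robust stopping problem
```

The minimal optimal rule stops exactly when the first argument of the maximum attains the value, mirroring the definition of \(\tau ^*\) above.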
Appendix B: Details on extremal measures
It is easy to prove that the inequality
\[ \underline{P}\left( X_t \le x \right) \ge P\left( X_t \le x \right) \]
holds for any \(t>0\), \(x \in \mathbb {R}\) and \(P \in \mathcal {P}\), and a characterization in terms of monotone functions is straightforward along the lines of the classical proofs of theorems on first order stochastic dominance (Levy 2015). Specifically, the measure \(\underline{P} \in \mathcal {P}\) satisfies the inequality
\[ E^{\underline{P}}\left[ h(X_t)\right] \le E^{P}\left[ h(X_t)\right] \tag{9} \]
for each \(t>0\), each \(P \in \mathcal {P}\) and each bounded, increasing real function h.
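For the reader's convenience, here is the standard one-line bridge between the two displays (a sketch; \(F_t\) and \(\underline{F}_t\) denote the distribution functions of \(X_t\) under P and \(\underline{P}\) respectively):
\[ E^{P}\left[ h(X_t)\right] - E^{\underline{P}}\left[ h(X_t)\right] = \int _{\mathbb {R}} \left( \underline{F}_t(x) - F_t(x) \right) dh(x) \ge 0, \]
by Riemann–Stieltjes integration by parts, since \(\underline{F}_t \ge F_t\) pointwise and h is increasing and bounded.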
We note an immediate consequence of the monotone characterization of the extremal measures (9):
Lemma 3
For any function \(g:S^{t+1} \rightarrow \mathbb {R}\), \(t<T\), that is bounded, measurable and increasing in its last argument, the following equality holds:
\[ \mathop {\mathrm {ess\,inf}}_{P\in \mathcal {P}}\, E^{P}\left[ g(X_1,\ldots ,X_{t+1})\,\middle |\,\mathcal {F}_t\right] = E^{\underline{P}}\left[ g(X_1,\ldots ,X_{t+1})\,\middle |\,\mathcal {F}_t\right] . \]
Proof
Since the filtration \(\mathcal {F}\) is generated by \(X_1,\ldots ,X_t\) it suffices to show that, for an arbitrary history \(X_1=x_1\), \(X_2=x_2\), \(\ldots \), \(X_t=x_t\), the following inequality holds for each \(P \in \mathcal {P}\):
\[ E^{\underline{P}}\left[ g(x_1,\ldots ,x_t,X_{t+1})\right] \le E^{P}\left[ g(x_1,\ldots ,x_t,X_{t+1})\right] . \]
This, however, is true because of the monotone characterization of the extremal measures (9). Indeed, once we fix the values of the random variables \(X_1\), \(X_2\), ..., \(X_t\), the function g can be interpreted as a function of the single variable \(X_{t+1}\) and the inequality follows directly from the inequality (9). \(\square \)
An analogous result holds for the decreasing functions.
Appendix C: Proof of Theorem 1
For the sake of convenience, we begin by defining a sequence of functions
\[ i_{t+1}(x_1,\ldots ,x_t,x_{t+1}) = \mathbb {1}_{\lbrace x_{t+1} \le \max (x_1,\ldots ,x_t)\rbrace } \]
for \(t<T\). Note that this allows the random variable \(\mathbb {1}_{X_{t+1}\le M_t}\) to be written in terms of the function \(i_{t+1}\) as follows:
\[ \mathbb {1}_{X_{t+1}\le M_t} = i_{t+1}(X_1,\ldots ,X_t,X_{t+1}). \]
As a preparation for the proof of Theorem 1 we prove a result on the representation of the payoff process \(Z_t\).
Since \(X_1,\ldots ,X_t\) are independent under \(\overline{P}\) we can derive the following representation for the functions \(\overline{r}_t\):
\[ \overline{r}_t(x) = \prod _{s=t+1}^{T} \overline{P}\left( X_s \le x \right) = \overline{v}([0,x])^{T-t}, \]
where the second equality is due to the definition of the measure \(\overline{v}\). It is now obvious that \(\overline{r}_t\) is an increasing function.
The next lemma describes the expected (ambiguous) \(Z_t\) in terms of the function \(\overline{r}_t\):
Lemma 4
For each \(t\in \lbrace 1,\ldots ,T \rbrace \) the following representation holds:
\[ Z_t = \mathbb {1}_{\lbrace X_t=M_t\rbrace }\, \overline{r}_t(M_t). \]
Proof
Note that:
\[ Z_t = \mathop {\mathrm {ess\,inf}}_{P\in \mathcal {P}} E^{P}\left[ \mathbb {1}_{X_t=M_T}\,\middle |\,\mathcal {F}_t\right] = \mathbb {1}_{\lbrace X_t=M_t\rbrace }\,\mathop {\mathrm {ess\,inf}}_{P\in \mathcal {P}} E^{P}\left[ \mathbb {1}_{M_t=M_T}\,\middle |\,\mathcal {F}_t\right] . \]
Define the process:
\[ R_t = \mathop {\mathrm {ess\,inf}}_{P\in \mathcal {P}} E^{P}\left[ \mathbb {1}_{M_t=M_T}\,\middle |\,\mathcal {F}_t\right] \]
and the function:
Clearly, the following equalities hold:
Thus it suffices to show the following:
Claim: For each t the equality \(R_t=\overline{r}_t(M_t)\) holds almost surely.
The claim is proven by backward induction.
Since \(R_T=\overline{r}_T(M_T)=1\) the claim trivially holds in the last period so we turn to the case \(t<T\).
We begin by deriving a recursive expression for \(R_t\) (using the law of iterated expectations for multiple priors, Footnote 14) as follows:
If we denote the realization of \(M_t\) by \(m_t\) (i.e. \(m_t=\max (x_1,\ldots ,x_t)\)) we can rewrite the last equality in terms of the functions \(r_t\) and \(\overline{r}_t\) using (12) and the induction hypothesis as follows:
In the last equality above we used the fact that on the set \(\lbrace X_{t+1}\le m_t \rbrace \) the equality \(M_{t+1}=M_t\) holds.
Since \(\mathbb {1}_{X_{t+1}\le M_t}=i_{t+1}(X_1,\ldots ,X_t,X_{t+1})\) and the function \(i_{t+1}\) is decreasing in its last variable, we can use Lemma 3 (in its analogue for decreasing functions) to identify \(\overline{P}\) as the minimizing measure in the last expression:
the last equality is due to the definition of \(\overline{r}_t\). \(\square \)
We note that Lemma 4 proves that the infimum in the definition of the adapted payoff \(Z_t\) is attained at \(\overline{v}\).
For the sake of convenience we also state a simple result about monotonicity of integral functions in the setting of our problem.
Lemma 5
Let \(g(x_1,\ldots ,x_t,x_{t+1})\), \(t<T\), be a function increasing (decreasing) in each of the first t arguments. For any \(P \in \mathcal {P}\) the function
\[ h^P(x_1,\ldots ,x_t) = E^{P}\left[ g(x_1,\ldots ,x_t,X_{t+1})\right] \]
is increasing (decreasing) in every argument, as is the function
\[ h(x_1,\ldots ,x_t) = \mathop {\mathrm {ess\,inf}}_{P\in \mathcal {P}}\, E^{P}\left[ g(x_1,\ldots ,x_t,X_{t+1})\right] . \]
Proof
The elementary proof of the first part of the lemma is omitted. Once one notices that \(h={\mathrm{ess \,inf}}\,_{P\in \mathcal {P}} h^P\) the second part follows immediately from the first part and the properties of the essential infimum. \(\square \)
We turn to proving the core of the theorem and for that purpose we define the value process U of the RBC optimal stopping problem under multiple priors:
\[ U_T = Z_T, \qquad U_t = \max \left( Z_t,\ \mathop {\mathrm {ess\,inf}}_{P\in \mathcal {P}} E^{P}\left[ U_{t+1}\,\middle |\,\mathcal {F}_t\right] \right) \quad \text{for } t<T. \]
The analysis will focus on the properties of the second argument in the maximum above, so we define:
\[ W_t = \mathop {\mathrm {ess\,inf}}_{P\in \mathcal {P}} E^{P}\left[ U_{t+1}\,\middle |\,\mathcal {F}_t\right] , \quad t<T. \]
As can be seen from the value process, the random variable \(W_t\) describes the expected value (under multiple priors) of the payoff the agent will receive if she does not stop at time t given the available information. The definition above implies:
\[ W_{T-1} = \mathop {\mathrm {ess\,inf}}_{P\in \mathcal {P}} E^{P}\left[ Z_T\,\middle |\,\mathcal {F}_{T-1}\right] , \qquad W_t = \mathop {\mathrm {ess\,inf}}_{P\in \mathcal {P}} E^{P}\left[ \max \left( Z_{t+1}, W_{t+1}\right) \,\middle |\,\mathcal {F}_t\right] \ \text{for } t<T-1. \]
If we introduce the sequence of functions:
the equality \(W_t=w^*_t(X_1,\ldots ,X_t)\) clearly holds. Furthermore:
Lemma 6
For each \(t \in \lbrace 0,1,\ldots ,T-1 \rbrace \) the function \(w^*_t\) is decreasing in every variable.
Proof
The proof is by backward induction.
We first consider \(w^*_{T-1}\). Notice that:
Since \(Z_T = \mathbb {1}_{X_T=M_T} = 1-i_T(X_1,\ldots ,X_T)\) almost surely and \(1-i_T\) is obviously decreasing in the first \(T-1\) variables, we can use Lemma 5 above to conclude that \(w^*_{T-1}\) is decreasing in every variable.
For \(t<T-1\) we have:
The function \((x_1,\ldots ,x_{t+1}) \mapsto \left( 1-i_{t+1}(x_1,\ldots ,x_{t+1})\right) \overline{r}_{t+1}(x_{t+1})\) is decreasing in its first t arguments, and the function \(w^*_{t+1}\) is decreasing in every argument by the induction hypothesis. Thus, the result follows from Lemma 5 and the fact that the maximum of decreasing functions is again a decreasing function. \(\square \)
The last result allows us to formulate a simple representation of the process \(W_t\):
Lemma 7
For each \(t \in \lbrace 0,1,\ldots ,T-1 \rbrace \) there exists a decreasing function \(w_t(m)\) such that \(W_t=w_t(M_t)\).
Proof
We begin the proof by backward induction by noting that, since \(Z_T=\mathbb {1}_{X_T=M_T}\) and
\[ W_{T-1} = \mathop {\mathrm {ess\,inf}}_{P\in \mathcal {P}} E^{P}\left[ Z_T\,\middle |\,\mathcal {F}_{T-1}\right] , \]
we have, due to the definition of \(w^*_{T-1}\),
\[ w^*_{T-1}(x_1,\ldots ,x_{T-1}) = \mathop {\mathrm {ess\,inf}}_{P\in \mathcal {P}} E^{P}\left[ \mathbb {1}_{X_T \ge m_{T-1}}\right] = \underline{P}\left( X_T \ge m_{T-1}\right) , \qquad m_{T-1}=\max (x_1,\ldots ,x_{T-1}), \]
where the second equality is due to Lemma 3. It thus suffices to define \(w_{T-1}(m)=\underline{P}(X_{T} \ge m)\). Indeed, the function \(w_{T-1}\) is clearly decreasing and the equality \(w_{T-1}(M_{T-1})=W_{T-1}\) holds because of the previous considerations.
Suppose that for \(t<T\) there exists a decreasing function \(w_{t+1}\) such that \(w_{t+1}(M_{t+1})=W_{t+1}\). This allows us to rewrite \(W_t\) in terms of \(w_{t+1}\) and \(\overline{r}_{t+1}\):
where the last equality is due to:
Since \(W_t=w^*_t(X_1,\ldots ,X_t)\), and \(M_t=M_{t+1}\) on the set \(\lbrace X_{t+1}\le M_t \rbrace \), we can write:
where \(m_t=\max (x_1,\ldots ,x_t)\) and the last equality is due to the definition of the set \(\mathcal {P}\) in Sect. 3. Thus, by setting
\[ w_t(m) = \inf _{v \in \mathcal {V}^A} \int _S \left[ \mathbb {1}_{\lbrace x \le m \rbrace }\, w_{t+1}(m) + \mathbb {1}_{\lbrace x > m \rbrace }\, \max \left( \overline{r}_{t+1}(x),\ w_{t+1}(x)\right) \right] dv(x) \tag{13} \]
for \(t<T\), we get \(w_t(m_t)=w^*_t(x_1,\ldots ,x_t)\) which, due to the definition of \(w^*_t\), implies \(w_t(M_t)=W_t\).
Finally, since \(w_t(\max (x_1,\ldots ,x_t))=w^*_t(x_1,\ldots ,x_t)\), the function \(w^*_t\) is symmetric; thus, the monotonicity of the function \(w_t\) is a consequence of the monotonicity of the function \(w^*_t\) as described in Lemma 6. \(\square \)
We now turn to proving that the stopping time is of the threshold type.
The proof of the last lemma reveals that the functions \(w_t\) are defined by the recursion \(w_{T-1}(m)=\underline{P}(X_{T} \ge m)\) and, for \(t<T-1\), by Eq. (13). Equivalently, we can extend the definition to include the final period by setting \(w_{T}(m)=0\) and letting \(w_t\) be given by the expression in Eq. (13) for all \(t<T\).
It is clear that, for each \(t<T\), the equalities \(w_t(1)=\overline{r}_t(0)=0\) hold and that the functions \(\overline{r}_t\) are strictly increasing, while the functions \(w_t\) are (weakly) decreasing. Thus, for \(t<T\), there exists a unique \(b_{t}\in [0,1)\) such that \(w_t(b_t)=\overline{r}_t(b_t)\). Additionally, we define \(b_T=0\). We record the previous considerations, along with the proof of the monotonicity of the sequence \((b_t)\), in the following lemma.
Lemma 8
For each \(t < T\) there exists a unique \(b_t \in [0,1)\) such that the equality \(w_t(b_t)=\overline{r}_t(b_t)\) holds. Furthermore, for each \(t<T\) the inequality \(b_t>b_{t+1}\) holds.
Proof
Suppose \(t<T\). Note that, due to the definition of the sequence \((b_t)\) and the fact that the function \(\overline{r}_{t+1}\) is strictly increasing, the following (in)equalities hold
\[ w_{t+1}(x) \le w_{t+1}(b_{t+1}) = \overline{r}_{t+1}(b_{t+1}) < \overline{r}_{t+1}(x) \]
for each \(x\in (b_{t+1},1]\). Hence:
\[ \overline{r}_{t+1}(x) \ge w_{t+1}(x), \quad x\in (b_{t+1},1]. \tag{14} \]
Note, also, that the definition of \(\overline{r}_t\) implies \(\overline{r}_t(x)<\overline{r}_{t+1}(x)\) for any \(x\in (0,1)\). Thus, given the previously obtained inequality (14), we get:
\[ w_t(x) > \overline{r}_t(x), \quad x \in [0,b_{t+1}]. \tag{15} \]
With the inequality (15) proven we can turn to proving the inequality stated in the formulation of the lemma.
Suppose the opposite: \(b_t \le b_{t+1}\). The definition of \(b_t\) and the monotonicity of \(\overline{r}_t\) imply: \(w_t(b_t)=\overline{r}_t(b_t) \le \overline{r}_t(b_{t+1})<w_t(b_{t+1})\), where the last inequality is due to the previously proven inequality (15). This, however, is in contradiction with the monotonicity of \(w_t\).
\(\square \)
To complete the proof of the first two parts of the theorem it remains to prove the equality (4); we do so in the following lemma:
Lemma 9
The equality (4) holds.
Proof
In the context of the RBC problem the optimal stopping time is given by:
\[ \tau ^* = \inf \lbrace t \ge 1 : Z_t \ge W_t \rbrace \]
(with the convention \(W_T=0\), in line with \(w_T \equiv 0\) above). Using the representations for \(Z_t\) and \(W_t\) obtained in Lemmas 4 and 7 respectively, the inequality \(Z_t \ge W_t = w_{t}(M_t)\) can only be satisfied when \(X_t=M_t\) (in which case \(Z_t=\overline{r}_t(M_t)\)), hence:
\[ \tau ^* = \inf \lbrace t \ge 1 : X_t = M_t,\ \overline{r}_t(X_t) \ge w_t(X_t) \rbrace . \]
Finally, due to the monotonicity of \(\overline{r}_t\) and \(w_t\) and Lemma 8, the inequality \(\overline{r}_t(X_t) \ge w_{t}(X_t)\) is satisfied only when \(b_{t}\le M_t=X_t\). \(\square \)
It remains to note that the infimum in (13) is attained (see Lemma 10 in Riedel (2009)). This, together with the definitions of \(W_t\) and \(U_t\), and Lemma 4, proves the third part of the theorem. Indeed, before stopping the minimizing measure is the one attained in (13), and once the agent stops her payoff is \(Z_t\), for which Lemma 4 implies that the minimizing measure is \(\overline{v}\).
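To illustrate Theorem 1 numerically, the following sketch computes the thresholds \(b_t\). It rests on two assumptions that go beyond the text reproduced here: the set of one-step priors is the density band \([1/\lambda ,\lambda ]\) of Appendix D, so that \(\overline{v}\) has the bang-bang density switching at \(\lambda /(1+\lambda )\), and Eq. (13) takes the integral form used below; all function names are ours.

```python
import numpy as np

def min_expectation(f, lam, dx):
    # Minimal expectation of f over laws with density in [1/lam, lam]:
    # the minimizer is bang-bang (cf. Lemma 1) -- density lam where f is
    # smallest, 1/lam elsewhere -- so place the free mass greedily.
    phi = np.full(f.size, 1.0 / lam)
    budget = 1.0 - phi.sum() * dx               # mass still to be placed
    for i in np.argsort(f, kind="stable"):
        add = min((lam - 1.0 / lam) * dx, budget)
        phi[i] += add / dx
        budget -= add
        if budget <= 0.0:
            break
    return float(np.sum(f * phi) * dx)

def thresholds(T=5, lam=1.5, n=300):
    x = (np.arange(n) + 0.5) / n
    dx = 1.0 / n
    c = lam / (1.0 + lam)                       # switch point of the density of v-bar
    F_up = np.where(x <= c, x / lam, c / lam + lam * (x - c))  # CDF of v-bar
    r_next, w_next = np.ones(n), np.zeros(n)    # r-bar_T = 1, w_T = 0
    b = {}
    for t in range(T - 1, 0, -1):
        r_t = F_up ** (T - t)                   # r-bar_t(x) = v-bar([0,x])^(T-t)
        w_t = np.empty(n)
        for j in range(n):                      # w_t(m) on the grid, via Eq. (13)
            f = np.where(x <= x[j], w_next[j], np.maximum(r_next, w_next))
            w_t[j] = min_expectation(f, lam, dx)
        b[t] = x[np.argmax(r_t >= w_t)]         # first m with r-bar_t(m) >= w_t(m)
        w_next, r_next = w_t, r_t
    return b

print(thresholds())                             # b_1 > b_2 > ... > b_{T-1}, as in Lemma 8
```

The inner routine is the same “weight repositioning” principle as in Lemma 1: the minimizing density sits at the upper bound \(\lambda \) where the integrand is smallest and at the lower bound \(1/\lambda \) elsewhere.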
Appendix D: Proof of Lemma 1
Proof of claim 1 of Lemma 1
We define an operator \(G: L^1([0,1]) \rightarrow \mathbb {R}\) with
\[ G\varphi = \int _0^1 g(x)\varphi (x)\,dx \]
and note that it is (Lipschitz) \(L^1\)-continuous. Indeed, using the fact that g is increasing and bounded:
\[ |G\varphi _1 - G\varphi _2| \le \int _0^1 |g(x)|\,|\varphi _1(x)-\varphi _2(x)|\,dx \le C \Vert \varphi _1 - \varphi _2 \Vert _{L^1}, \]
where C is a positive constant that bounds |g(x)|.
Let \(D^\lambda _S\) be the set of all the step functions within the set \(D^\lambda _{CLA}\). We will prove that \(D^\lambda _S\) is dense (Footnote 15) in \(D^\lambda _{CLA}\). For an arbitrary \(\varphi \in D^\lambda _{CLA}\) and an arbitrary \(\varepsilon >0\) one can choose a step function \(\varphi _1\) such that \(\frac{1}{\lambda } \le \varphi _1(x) \le \varphi (x) \le \lambda \) and:
\[ \Vert \varphi - \varphi _1 \Vert _{L^1} \le \varepsilon . \tag{16} \]
If one defines \(I=\int _0^1 \varphi _1(x) \, dx \le 1\) and:
\[ \gamma = 1 + \frac{1-I}{\int _A \varphi _1(x)\,dx} \]
for \(A=\lbrace \varphi _1(x) \le I \rbrace \) and \(B=[0,1] \backslash A\), it is easy to check that, for sufficiently small \(\varepsilon \), the function \(\varphi _S=\gamma \varphi _1 \mathbb {1}_A + \varphi _1 \mathbb {1}_B\) belongs to \(D^\lambda _{CLA}\). Furthermore, direct calculations show that the inequality
\[ \Vert \varphi _S - \varphi _1 \Vert _{L^1} = (\gamma -1)\int _A \varphi _1(x)\,dx = 1-I \le \varepsilon \tag{17} \]
holds (Footnote 16). Combining the inequalities (16) and (17) gives:
\[ \Vert \varphi - \varphi _S \Vert _{L^1} \le 2\varepsilon , \]
which proves the denseness.
As the operator G is continuous and the set \(D^\lambda _S\) (which contains \({\underline{\varphi }}\)) is dense in \(D^\lambda _{CLA}\), for the claim to hold it suffices to show that for any \(\varphi \in D^\lambda _S\) the inequality \(G\varphi \ge G {\underline{\varphi }}\) holds. We do so in the remainder of the proof.
Let us fix \(\varphi \in D^\lambda _S\):
\[ \varphi = \sum _{i=1}^{n} d_i\, \mathbb {1}_{[c_{i-1},c_i)}, \qquad 0=c_0< c_1< \cdots < c_n = 1, \quad d_i \in \left[ \tfrac{1}{\lambda },\lambda \right] . \]
Without loss of generality we can assume that there is an index \(m \in \lbrace 1, \ldots , n \rbrace \) such that \(c_m=1/(1+\lambda )\).
Set \(\varphi _0=\varphi \). The idea is to create a finite sequence of functions \((\varphi _i)\) whose last element is \({\underline{\varphi }}\) and which satisfies \(G\varphi _i \le G\varphi _{i-1}\) for every \(i>0\).
If \(\varphi _0={\underline{\varphi }}\) the proof is done. If not, we choose the step function \(\varphi _1\) such that it differs from \(\varphi _0\) by the value it takes on two appropriately chosen intervals. For that purpose we define:
Note that since \(\varphi _0 \ne {\underline{\varphi }}\) we have \(j<m<j'\). We now focus on the intervals \([c_{j-1},c_j]\) and \([c_{j'-1},c_{j'}]\) and set the value of \(\varphi _1\) to be \(\lambda \) on the former interval or \(1/ \lambda \) on the latter by “repositioning the weight” of \(\varphi _0\).
If \((\lambda - d_j)(c_j-c_{j-1}) \le (d_{j'}-\frac{1}{\lambda })(c_{j'}-c_{j'-1})\) we “reposition the excess weight” from the interval \([c_{j'-1},c_{j'}]\) to the interval \([c_{j-1},c_j]\), that is, we define:
\[ \varphi _1 = \varphi _0 + (\lambda -d_j)\,\mathbb {1}_{[c_{j-1},c_j)} - \frac{(\lambda -d_j)(c_j-c_{j-1})}{c_{j'}-c_{j'-1}}\,\mathbb {1}_{[c_{j'-1},c_{j'})}. \]
The inequality \(G\varphi _1 \le G\varphi _0\) is satisfied. Indeed, direct calculation yields
\[ G\varphi _1 - G\varphi _0 = (\lambda -d_j)\int _{c_{j-1}}^{c_j} g(x)\,dx - \frac{(\lambda -d_j)(c_j-c_{j-1})}{c_{j'}-c_{j'-1}}\int _{c_{j'-1}}^{c_{j'}} g(x)\,dx \]
and one can use the monotonicity of the function g and the inequalities \(j<m<j'\) to make the following estimate:
\[ G\varphi _1 - G\varphi _0 \le (\lambda -d_j)(c_j-c_{j-1})\left( g(c_j) - g(c_{j'-1})\right) \le 0. \]
When the inequality \((\lambda - d_j)(c_j-c_{j-1}) > (d_{j'}-\frac{1}{\lambda })(c_{j'}-c_{j'-1})\) holds one can construct the function \(\varphi _1\) using an analogous “weight repositioning”.
If \(\varphi _1={\underline{\varphi }}\) the proof is done. If not, one can create \(\varphi _2\) from \(\varphi _1\) as above. As the step function \(\varphi \) has finitely many steps the procedure ends after finitely many iterations. \(\square \)
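As a quick numerical sanity check of the claim (a toy instance of ours, with \(g(x)=x^2\) and \(\lambda =2\)), the extremal density \({\underline{\varphi }}\) indeed yields the smallest value of \(G\varphi \) among three feasible densities:

```python
import numpy as np

lam, n = 2.0, 100_000
x = (np.arange(n) + 0.5) / n
dx = 1.0 / n
g = x ** 2                                     # an arbitrary bounded increasing g

def G(phi):
    return float(np.sum(g * phi) * dx)         # G(phi) = integral of g * phi

phi_under = np.where(x <= 1.0 / (1.0 + lam), lam, 1.0 / lam)   # the extremal density
phi_flat = np.ones(n)                                          # uniform density
phi_mirror = np.where(x <= lam / (1.0 + lam), 1.0 / lam, lam)  # its mirror image

for phi in (phi_under, phi_flat, phi_mirror):
    assert abs(np.sum(phi) * dx - 1.0) < 1e-4  # all three have total weight 1
print(G(phi_under), G(phi_flat), G(phi_mirror))  # increasing order, as claimed
```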
Proof of claim 2 of Lemma 1
We begin by fixing \(\varphi \in D^\lambda _{CLA}\) and defining:
\[ \mu _1 = \int _0^k \varphi (x)\,dx, \qquad \mu _2 = \int _k^1 \varphi (x)\,dx. \]
We will identify two functions \(\psi _1\) and \(\psi _2\), defined on [0, k] and [k, 1] respectively, such that the function \(\psi :=\psi _1\mathbb {1}_{[0,k]}+ \psi _2\mathbb {1}_{(k,1]}\) satisfies the claim. These are the functions that “reposition the weights” \(\mu _1\) and \(\mu _2\) within the intervals [0, k] and [k, 1] so that most of the weight sits on the upper part of the former and the lower part of the latter.
First we focus on the interval [0, k]. The first claim (together with its obvious analogue for decreasing functions) showed how to identify the step function \(\overline{\varphi }\) that, for a fixed, decreasing and bounded function g, minimizes the integral on the right hand side of (D) among all the functions \(\varphi \) whose range is within the interval \([1/\lambda ,\lambda ]\) and whose total weight is equal to 1. Note that \(\overline{\varphi }\) was simply the function that put the most weight possible on the upper part of the interval [0, 1]. Focusing on the interval [0, k], where the function h is decreasing, we are in a similar situation: among all the functions with a range within \([1/\lambda ,\lambda ]\) and whose integral is equal to \(\mu _1\) we are looking for a function \(\psi _1\) that minimizes the integral \(\int _0^k h(x) \psi _1(x) \,dx\). Reasoning analogous to that in the proof of the first claim (Footnote 17) leads to the conclusion that \(\psi _1\) has to be the function that puts as much weight as possible on the upper part of the interval [0, k]:
\[ \psi _1 = \frac{1}{\lambda }\, \mathbb {1}_{[0,c_1)} + \lambda \, \mathbb {1}_{[c_1,k]} \]
for an appropriately chosen \(c_1\). Identifying the precise value of \(c_1\) is not difficult: since the inequalities \(\frac{k}{\lambda }\le \mu _1 \le k\lambda \) clearly hold, there exists \(c_1\in [0,k]\) such that:
\[ \frac{c_1}{\lambda } + \lambda (k-c_1) = \mu _1. \]
This proves the inequality:
\[ \int _0^k h(x)\psi _1(x)\,dx \le \int _0^k h(x)\varphi (x)\,dx. \tag{19} \]
Similarly, by focusing on the interval [k, 1] one can identify the function \(\psi _2\) (and the corresponding \(c_2\)) which puts the most weight on the lower part of the interval, such that:
\[ \int _k^1 h(x)\psi _2(x)\,dx \le \int _k^1 h(x)\varphi (x)\,dx. \tag{20} \]
Direct calculations show that \(D^\lambda _{U} \ni \psi :=\psi _1\mathbb {1}_{[0,k]}+ \psi _2\mathbb {1}_{(k,1]}\) and the claim follows by combining (19) and (20). \(\square \)
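An analogous numerical check of the second claim (again a toy instance: h, \(\lambda \), k and the comparison density \(\varphi \) are our own choices) builds \(\psi \) from the two weight equations above and confirms \(\int h\psi \le \int h\varphi \):

```python
import numpy as np

lam, k, n = 2.0, 0.4, 100_000
x = (np.arange(n) + 0.5) / n
dx = 1.0 / n
h = (x - k) ** 2                               # decreasing on [0,k], increasing on [k,1]

phi = np.where(x < 0.5, 1.5, 0.5)              # a density with values in [1/lam, lam]
mu1 = float(np.sum(phi[x <= k]) * dx)          # weight of phi on [0,k]
mu2 = 1.0 - mu1                                # weight of phi on (k,1]

# psi_1 = 1/lam on [0,c1), lam on [c1,k]; c1 solves c1/lam + lam*(k - c1) = mu1.
c1 = (lam * k - mu1) / (lam - 1.0 / lam)
# psi_2 = lam on (k,c2], 1/lam on (c2,1]; c2 solves lam*(c2 - k) + (1 - c2)/lam = mu2.
c2 = (mu2 + lam * k - 1.0 / lam) / (lam - 1.0 / lam)

psi = np.where(x <= k,
               np.where(x < c1, 1.0 / lam, lam),
               np.where(x <= c2, lam, 1.0 / lam))
print(float(np.sum(h * psi) * dx), "<=", float(np.sum(h * phi) * dx))
```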
Obradović, L. Robust best choice problem. Math Meth Oper Res 92, 435–460 (2020). https://doi.org/10.1007/s00186-020-00719-5