
Risk measures based on behavioural economics theory


Abstract

Coherent risk measures (Artzner et al. in Math. Finance 9:203–228, 1999) and convex risk measures (Föllmer and Schied in Finance Stoch. 6:429–447, 2002) are characterized by desirable axioms for risk measures. However, concrete or practical risk measures can also be proposed from different perspectives. In this paper, we propose new risk measures based on behavioural economics theory. We use rank-dependent expected utility (RDEU) theory to formulate an objective function and propose the smallest solution that minimizes the objective function as a risk measure. We also employ cumulative prospect theory (CPT) to introduce a set of acceptable regulatory capitals and define the infimum of the set as a risk measure. We show that the classes of risk measures derived from RDEU theory and CPT are equivalent, and they are all monetary risk measures. We present the properties of the proposed risk measures and give sufficient and necessary conditions for them to be coherent and convex, respectively. The risk measures based on these behavioural economics theories not only cover important risk measures such as distortion risk measures, expectiles and shortfall risk measures, but also produce interesting new coherent risk measures and convex but not coherent risk measures.


Notes

  1. Throughout the paper, for an increasing function \(g: \mathbb{R}\to\mathbb{R}\) and a function \(f: \mathbb {R}\to\mathbb{R}\), the Lebesgue–Stieltjes (L-S) integral \(\int _{\mathbb{R}} f(x) \mathrm{d}g(x)\) is defined as \(\int_{\mathbb{R}}f(x) \mathrm{d} g_{+}(x)\) or \(\int_{\mathbb{R}}f(x) \mu_{g}(\mathrm{d}x)\), where \(g_{+}(x)=g(x+)\) and \(\mu_{g}\) is the measure defined by \(\mu_{g}([a, b]) = g(b+)-g(a-)\) for any \(a \leq b\). See for instance Merkle et al. [17]. A small numerical sketch of this integral is given after these notes.

  2. A set \(\mathcal{X}\) is said to be closed under translation if \(X+c \in\mathcal{X}\) for any \(X \in\mathcal{X}\) and \(c \in\mathbb{R}\). We point out that in some cases, when we say that a risk measure \(\rho\) defined on a set \(\mathcal{X}\) is a coherent/convex/monetary risk measure or satisfies one of the properties (P2)–(P5), to simplify the expressions, we may not specify the required structure of the set \(\mathcal{X}\). However, in these cases, it is implied that \(\mathcal{X}\) has the corresponding structure.
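
For a pure-jump nondecreasing \(g\), the L-S integral of Footnote 1 reduces to a sum of jump sizes times values of \(f\). The following minimal Python sketch illustrates this reduction; the function name and the Heaviside example are illustrative choices, not objects from the paper.

def ls_integral_step(f, jumps):
    # Lebesgue-Stieltjes integral of f against a pure-jump nondecreasing g,
    # where jumps is a list of (x, size) pairs with size = g(x+) - g(x-),
    # so mu_g({x}) = size and the integral becomes a weighted sum.
    return sum(size * f(x) for x, size in jumps)

# Example: g is the Heaviside step at 0, so the integral of f dg equals f(0).
print(ls_integral_step(lambda x: x**2 + 1.0, [(0.0, 1.0)]))  # prints 1.0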

References

  1. Aczél, J.: Lectures on Functional Equations and Their Applications. Academic Press, New York/London (1966)


  2. Allais, M.: Le comportement de l’homme rationnel devant le risque. Econometrica 21, 503–546 (1953)


  3. Arrow, K.J.: Essays in the Theory of Risk-Bearing. North-Holland, New York (1974)


  4. Artzner, P., Delbaen, F., Eber, J.-M., Heath, D.: Coherent measures of risk. Math. Finance 9, 203–228 (1999)


  5. Bäuerle, N., Müller, A.: Stochastic orders and risk measures: consistency and bounds. Insur. Math. Econ. 38, 132–148 (2006)


  6. Bellini, F., Klar, B., Müller, A., Rosazza Gianin, E.: Generalized quantiles as risk measures. Insur. Math. Econ. 54, 41–48 (2014)


  7. Cai, J., Mao, T.: Risk measures derived from a regulator’s perspective on the regulatory capital requirements for insurers. Preprint (2018). Available online at: https://ssrn.com/abstract=3127285

  8. Chew, S.H., Karni, E., Safra, Z.: Risk aversion in the theory of expected utility with rank dependent probabilities. J. Econ. Theory 42, 370–381 (1987)


  9. Dhaene, J., Denuit, M., Goovaerts, M.J., Kaas, R., Vyncke, D.: The concept of comonotonicity in actuarial science and finance: theory. Insur. Math. Econ. 31, 3–33 (2002)


  10. Föllmer, H., Schied, A.: Convex measures of risk and trading constraints. Finance Stoch. 6, 429–447 (2002)


  11. Föllmer, H., Schied, A.: Stochastic Finance: An Introduction in Discrete Time, 3rd edn. de Gruyter, Berlin (2011)


  12. Frittelli, M., Rosazza Gianin, E.: Putting order in risk measures. J. Bank. Finance 26, 1473–1486 (2002)


  13. Jouini, E., Schachermayer, W., Touzi, N.: Law invariant risk measures have the Fatou property. Adv. Math. Econ. 9, 49–71 (2006)


  14. Kahneman, D., Tversky, A.: Prospect theory: an analysis of decision under risk. Econometrica 47, 263–291 (1979)


  15. Kaluszka, M., Krzeszowiec, M.: Mean-value principle under cumulative prospect theory. ASTIN Bull. 42, 103–122 (2012)


  16. Kaluszka, M., Krzeszowiec, M.: Pricing insurance contracts under cumulative prospect theory. Insur. Math. Econ. 50, 159–166 (2012)


  17. Merkle, M., Marinescu, D., Merkle, M.M.R., Monea, M., Stroe, M.: Lebesgue–Stieltjes integral and Young’s inequality. Appl. Anal. Discrete Math. 8, 60–72 (2014)


  18. Newey, W.K., Powell, J.L.: Asymmetric least squares estimation and testing. Econometrica 55, 819–847 (1987)


  19. Niculescu, C., Persson, L.-E.: Convex Functions and Their Applications: A Contemporary Approach. Springer, New York (2006)


  20. Pratt, J.W.: Risk aversion in the small and in the large. Econometrica 32, 122–136 (1964)


  21. Quiggin, J.: A theory of anticipated utility. J. Econ. Behav. Organ. 3, 323–343 (1982)


  22. Quiggin, J.: Generalized Expected Utility Theory: The Rank-Dependent Model. Kluwer Academic, Dordrecht (1993)


  23. Schmeidler, D.: Subjective probability and expected utility without additivity. Econometrica 57, 571–587 (1989)


  24. Schmidt, U., Zank, H.: Risk aversion in cumulative prospect theory. Manag. Sci. 54, 208–216 (2008)


  25. Shaked, M., Shanthikumar, J.G.: Stochastic Orders. Springer Series in Statistics. Springer, New York (2007)


  26. Svindland, G.: Convex Risk Measures Beyond Bounded Risks. Doctoral dissertation, Ludwig Maximilian University of Munich (2009). Available online at https://edoc.ub.uni-muenchen.de/9715/1/Svindland_Gregor.pdf

  27. Tsanakas, A.: To split or not to split: capital allocation with convex risk measures. Insur. Math. Econ. 44, 268–277 (2009)


  28. Tversky, A., Kahneman, D.: Advances in prospect theory: cumulative representation of uncertainty. J. Risk Uncertain. 5, 297–323 (1992)


  29. Yaari, M.E.: The dual theory of choice under risk. Econometrica 55, 95–115 (1987)



Acknowledgements

The authors thank the two anonymous referees, the Associate Editor, and the Editor for insightful suggestions that improved the presentation of the paper. Tiantian Mao is grateful for the support from the National Natural Science Foundation of China (grant Nos. 71671176, 11371340). Jun Cai is grateful for the support from the Natural Sciences and Engineering Research Council (NSERC) of Canada (grant No. RGPIN-2016-03975).


Corresponding author

Correspondence to Jun Cai.

Appendix

Proposition A.1

For \(u_{1},u_{2}\in\mathcal {U}_{\mathrm{icx}}^{+}\), \(v\in\mathcal {U}\) and \(h_{1}, h_{2}\in\mathcal {H}\), define \(\mathcal {X}_{u_{1},u_{2}}^{h_{1},h_{2}}\) and \(\mathcal {X}_{v}^{h_{1},h_{2}}\) by (2.1) and (2.6), respectively, where the sets \(\mathcal{U}_{\mathrm{icx}}^{+}\), \(\mathcal{U}\) and \(\mathcal{H}\) are defined near the end of Sect. 2.1.

(i) If \(h_{1}\) is convex and \(h_{2}\) is concave, then \(\mathcal {X}_{u_{1},u_{2}}^{h_{1},h_{2}}\) is a convex set.

(ii) If \(v\) is convex on both \(\mathbb{R}_{+}\) and \(\mathbb{R}_{-}\) and \(h_{1}\) and \(h_{2}\) are convex, then \(\mathcal {X}_{v}^{h_{1},h_{2}}\) is a convex set.

Proof

We only prove (i) as (ii) follows from (i) and the equation

$$H_{v,h_{1},h_{2}}(X-x) = \mathrm{H}_{v_{+},h_{1}}\big((X-x)^{+}\big) + \mathrm {H}_{v_{-},h_{2}^{*}}\big((X-x)^{-}\big), $$

where \(v_{+}(x)=v(x)\), \(v_{-}(x)=-v(-x)\), \(x\in\mathbb{R}_{+}\), and \(h_{2}^{*}(p)=1-h_{2}(1-p)\), \(p\in[0,1]\).

To prove (i), let \(X,Y\in\mathcal {X}_{u_{1},u_{2}}^{h_{1},h_{2}}\) and \(\lambda\in(0,1)\). Set \(Z=\lambda X+(1-\lambda)Y\). Note that \(u_{i}\) is convex and hence, for any \(x\in\mathbb{R}\),

$$\begin{aligned} u_{1}\big((Z-x)^{+}\big)&\leq u_{1}\big(\lambda(X-x)^{+} +(1-\lambda )(Y-x)^{+}\big)\\ & \leq\lambda u_{1}\big((X-x)^{+}\big)+(1-\lambda )u_{1}\big((Y-x)^{+}\big). \end{aligned}$$

Then noting that \(H_{u_{1},h_{1}}(X) = \rho_{h_{1}^{*}} (u_{1}(X))\) for any random variable \(X\), where \(\rho_{h}\) is the distortion risk measure defined by (3.7), we obtain

$$\begin{aligned} {\mathrm{H}}_{u_{1},h_{1}}\big((Z-x)^{+}\big) & = \rho_{h_{1}^{*}} \Big(u_{1}\big((Z-x)^{+}\big)\Big)\\ & \leq \rho_{h_{1}^{*}}\Big(\lambda u_{1}\big((X-x)^{+}\big)+(1-\lambda )u_{1}\big((Y-x)^{+}\big)\Big)\\ & \leq \lambda\rho_{h_{1}^{*}}\Big( u_{1}\big((X-x)^{+}\big)\Big)+(1-\lambda)\rho_{h_{1}^{*}}\Big(u_{1}\big((Y-x)^{+}\big)\Big)\\ & =\lambda{\mathrm{H}}_{u_{1},h_{1}}\big((X-x)^{+}\big)+(1-\lambda )\mathrm{H}_{u_{1},h_{1}}\big((Y-x)^{+}\big)< \infty, \end{aligned}$$

where the second inequality uses that \(\rho_{h}\) is a convex risk measure if and only if \(h\) is concave. Similarly, one can show that \(\mathrm{H}_{u_{2},h_{2}}((Z-x)^{-})\) is also finite for any \(x\in \mathbb{R}\). □
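
The identity \(\mathrm{H}_{u_{1},h_{1}}(X) = \rho_{h_{1}^{*}} (u_{1}(X))\) used above can be checked numerically. The sketch below assumes, purely for illustration, the discrete RDEU form \(\mathrm{H}_{u,h}(X)=\sum_{i} u(x_{i})(h(F(x_{i}))-h(F(x_{i-1})))\) and the representation \(\rho_{g}(Z)=\int_{0}^{\infty} g(\mathbb{P}[Z>t])\,\mathrm{d}t\) for \(Z\geq0\); these concrete forms, and all names in the code, are assumptions standing in for the paper's definitions (2.1) and (3.7).

import numpy as np

xs = np.array([0.0, 1.0, 3.0])        # support of X, sorted ascending
ps = np.array([0.2, 0.5, 0.3])        # probabilities
u = lambda x: x**2                    # increasing convex on R_+, u(0) = 0
h = lambda p: p**2                    # convex distortion
h_star = lambda p: 1.0 - h(1.0 - p)   # dual distortion h*(p) = 1 - h(1 - p)

# Left-hand side: discrete RDEU value sum_i u(x_i) (h(F_i) - h(F_{i-1})).
F = np.cumsum(ps)
lhs = float(np.sum(u(xs) * np.diff(np.concatenate(([0.0], h(F))))))

# Right-hand side: rho_{h*}(u(X)) via a midpoint Riemann sum of
# int_0^infty h*(P[u(X) > t]) dt.
z, dt = u(xs), 1e-3
ts = np.arange(0.0, z[-1], dt) + dt / 2.0
rhs = float(sum(h_star(ps[z > t].sum()) * dt for t in ts))

print(lhs, rhs)  # both equal 5.04 for these choices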

The following two lemmas are useful in the proof of Proposition 2.2 and Theorem 3.1, respectively.

Lemma A.2

For \(u\in\mathcal {U}_{\mathrm{icx}}^{+}\) and \(h\in\mathcal {H}\), let \(X\) be a random variable with distribution function \(F\) and such that for any \(c\in\mathbb{R}\),

$$g(c):=\mathrm{H}_{u, \, h}\big((X-c)^{+}\big)\textit{ and }f(c):=\mathrm{H}_{u, \, h}\big((X-c)^{-}\big)\textit{ are finite}. $$

Then:

(i) \(g\) and \(f\) have finite left and right derivatives at any \(c \in \mathbb{R}\), and

$$\begin{aligned} g'_{+}(c)& = - \!\int_{\mathbb{R}}u'_{-}\left((x-c)^{+}\right) \mathbf{1}_{\{(x- c)^{+}>0\}} \,\mathrm{d}h\big(F(x)\big),\\ g'_{-}(c) &= -\!\int_{\mathbb{R}} u'_{+}\left((x-c)^{+}\right) \, \mathrm{d}h\big(F(x)\big),\\ f'_{+}(c)& = \int_{\mathbb{R}}u'_{+}\left((x-c)^{-}\right) \,\mathrm {d}h\big(F(x)\big),\\ f'_{-}(c)& = \int_{\mathbb{R}}u'_{-}\left((x-c)^{-}\right) \mathbf {1}_{\{(x-c)^{-} > 0\}} \,\mathrm{d}h\big(F(x)\big). \end{aligned}$$

(ii) If in addition \(u\) is differentiable with \(u'(0)=0\), then \(g\) and \(f\) are differentiable and

$$\begin{aligned} g'(c) = -\mathrm{H}_{u', \, h}\big((X-c)^{+}\big)~~~\textit{and}~~~f'(c) = \mathrm{H}_{u', \, h}\big((X-c)^{-}\big). \end{aligned}$$
(A.1)

Proof

Note that (ii) follows immediately from (i). We only give the proof for \(g'_{+}\) in (i); the other formulas can be proved similarly. For any \(c, x \in\mathbb{R}\) and \(\delta\not= 0\), set

$$\begin{aligned} w_{\delta}(c, x)= \frac{u((x-(c+\delta))^{+})-u((x-c)^{+})}{\delta}. \end{aligned}$$

Due to \(u \in\mathcal {U}_{\mathrm{icx}}^{+}\), we can verify that \(\lim _{\delta\rightarrow0{+}} w_{\delta}(c, x) = - u'_{-}\big((x-c)^{+}\big)\, \mathbf{1}_{\{x> c\}}\) and

$$\begin{aligned} |w_{\delta}(c, x)|\leq-w_{-1}(c, x)\qquad \text{ for all }c,x\in \mathbb{R}\text{ and }\delta> -1, \delta \not=0. \end{aligned}$$
(A.2)

Note that \(- w_{-1}(c, x)=-u((x-c)^{+})+u((x-(c-1))^{+})\) is integrable with respect to \(\mathrm{d}h(F(x))\). Then by (A.2), applying the dominated convergence theorem, we see that \(g'_{+}(c)\) exists and is finite with its expression given in (i). □
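
As a quick numerical companion to (A.1), the sketch below compares a central finite difference of \(g\) with \(-\mathrm{H}_{u',h}((X-c)^{+})\) for a discrete \(X\), using \(u(y)=y^{2}\) (so \(u'(0)=0\)) and \(h(p)=p^{2}\). The discrete form of \(\mathrm{H}_{u,h}\) is the same illustrative assumption as in the sketch after Proposition A.1, not the paper's definition.

import numpy as np

def H_uh(u, h, ys, ps):
    # Illustrative discrete RDEU value sum_i u(y_i) (h(F_i) - h(F_{i-1})),
    # with ys sorted in ascending order.
    F = np.cumsum(ps)
    return float(np.sum(u(ys) * np.diff(np.concatenate(([0.0], h(F))))))

xs = np.array([-2.0, 0.5, 1.0, 4.0])
ps = np.array([0.1, 0.4, 0.3, 0.2])
u  = lambda y: y**2                   # differentiable with u'(0) = 0
du = lambda y: 2.0 * y
h  = lambda p: p**2

g = lambda c: H_uh(u, h, np.maximum(xs - c, 0.0), ps)

c, eps = 0.7, 1e-6
fd = (g(c + eps) - g(c - eps)) / (2.0 * eps)    # central finite difference
an = -H_uh(du, h, np.maximum(xs - c, 0.0), ps)  # right-hand side of (A.1)
print(fd, an)  # both equal -2.61 up to O(eps^2)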

Lemma A.3

For \(h\in\mathcal {H}\) and a random variable \(X\) with distribution function \(F\) and survival function \(S= 1-F\), it holds that for any \(x \in\mathbb{R}\),

$$\begin{aligned} \lim_{y \downarrow x} h \big(F(y)\big) = \lim_{y \downarrow x} h (\mathbb{P}[X < y])\quad \ \ {\mathrm{and}}\quad \ \ \lim_{y \downarrow x} h (\mathbb{P}[X \geq y]) = \lim_{y \downarrow x} h \big(S(y)\big). \end{aligned}$$
(A.3)

Moreover, for any \(-\infty\leq a < b\leq\infty\),

$$\begin{aligned} \int_{a}^{b} u(x)\,\mathrm{d}h \big(F(x)\big) &= \int_{a}^{b} u(x)\, \mathrm{d}h (\mathbb{P}[X < x]), \end{aligned}$$
(A.4)
$$\begin{aligned} \int_{a}^{b} u(x)\,\mathrm{d}h \big(S(x)\big)&=\int_{a}^{b} u(x)\, \mathrm{d}h (\mathbb{P}[X \geq x]). \end{aligned}$$
(A.5)

Proof

We only prove the first equation of (A.3) since the second follows from the first and \(h (\mathbb{P}[X \geq y]) = 1- h^{\ast}(\mathbb{P}[X< y])\) and \(h (\mathbb{P}[X > y]) = 1- h^{\ast}(\mathbb{P}[X\leq y])\), where \(h^{\ast}(p) =1-h(1-p)\) is the dual distortion function of \(h\). For \(y > x\), we have

$$\begin{aligned} F(x) = \mathbb{P}[X \leq x] \leq\mathbb{P}[X < y] \leq\mathbb{P}[X \leq y]=F(y) \end{aligned}$$
(A.6)

and thus \(h(F(x)) \leq h(\mathbb{P}[X < y]) \leq h(\mathbb{P}[X \leq y]) = h(F(y))\). Hence if there exists \(y_{0}> x\) such that \(F(y_{0})=F(x)\), then \(F(y)=F(x)\) for \(y \in[x, y_{0}]\). Then \(h(\mathbb {P}[X < y]) = h(F(y))=h(F(x))\) for \(y \in[x, y_{0}]\) and thus

$$\lim_{y \downarrow x} h (\mathbb{P}[X < y]) = \lim_{y \downarrow x} h (\mathbb{P}[X \leq y]) = h\big(F(x)\big). $$

If \(F(y) > F(x)\) for all \(y > x\), then by (A.6), both \(\mathbb{P}[X < y]\) and \(\mathbb{P}[X \leq y]=F(y)\) converge to \(F(x)\) when \(y \downarrow x\). Hence

$$\lim_{y \downarrow x} h (\mathbb{P}[X < y]) = \lim_{t \, \downarrow \, F(x)} h (t)= \lim_{y \downarrow x} h (\mathbb{P}[X \leq y]) = h\big(F(x)+\big). $$

In addition, (A.4) and (A.5) follow directly from the definition of the L-S integrals; see Footnote 1 after (1.3). □
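
For a small sanity check of (A.3), take \(X\) uniform on \(\{0,1\}\) and \(h(p)=p^{2}\) (a toy example, not from the paper): \(h(F(y))\) and \(h(\mathbb{P}[X<y])\) differ at \(y=0\) itself, yet both one-sided limits as \(y\downarrow0\) equal \(h(F(0)+)=0.25\).

h = lambda p: p * p
F    = lambda y: 0.5 * (y >= 0) + 0.5 * (y >= 1)  # P[X <= y]
F_lt = lambda y: 0.5 * (y > 0) + 0.5 * (y > 1)    # P[X < y]
print(h(F(0.0)), h(F_lt(0.0)))  # 0.25 vs 0.0: the distorted probabilities differ at y = x
for y in (0.1, 0.01, 0.001):    # but both tend to 0.25 as y decreases to 0
    print(h(F(y)), h(F_lt(y)))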

Before proving Theorem 2.7, we present the following lemma (from Schmidt and Zank [24, Theorem 1]), which will be used in the proof of Theorem 2.7.

Lemma A.4

Under the assumptions and notations of Theorem 2.7, if the statement (iii) of Theorem 2.7 holds, then \(\mathrm{H} =H_{v,\,h_{1},\,h_{2}}\) defined by (1.6) preserves stop-loss order on \(\mathcal {X}\).

Proof of Theorem 2.7

We introduce the auxiliary statement

\((\mathrm{ii})'\) \(\rho\) preserves stop-loss order on \(L^{\infty}\).

We prove the theorem by showing the implications

$$(\mathrm{i}) \Rightarrow(\mathrm{ii})' \Rightarrow(\mathrm{iii}) \Rightarrow (\mathrm{i}) \qquad {\mathrm{and}}\qquad (\mathrm{iii}) \Rightarrow(\mathrm{ii}) \Rightarrow(\mathrm{ii})'. $$

(i) ⇒ (ii)′ Since \(L^{\infty}\subseteq\mathcal{X}\), \(\rho\) is convex on \(L^{\infty}\) as well. It follows from Jouini et al. [13] that a law-invariant convex risk measure on \(L^{\infty}\) has the Fatou property. Because \(\rho\) is law-invariant, it has the Fatou property on \(L^{\infty}\). Hence, by Lemma 2.17 of Svindland [26] or Theorem 4.3 of Bäuerle and Müller [5], \(\rho\) is stop-loss order preserving on \(L^{\infty}\).

(ii)′ ⇒ (iii) We show this implication by contradiction in three steps.

Step 1. We claim that \(h_{1}\) and \(h_{2}\) are both convex. To show this, let

$$q_{1}^{*}=\inf\{p\in[0,1]: h_{1}(p)=1\}. $$

We assert that \(q^{*}_{1}=1\) when \((\mathrm{ii})'\) holds. To see this, suppose \(q^{*}_{1}<1\). Then there exists \(p\in (q_{1}^{*},1)\) such that \(h_{2}(p) >0\) since \(h_{2}(1-)=1\). Define \(X\equiv0\) and \(Y\) such that \(\mathbb{P}[Y=-(1-p)x]=p\) and \(\mathbb{P}[Y=px]=1-p\) for some \(x>0\). It is easy to prove that \(X\prec_{\mathrm{sl}} Y\). Moreover, \(\rho(X)=0\) and, since \(h_{1}(p) = 1\) for \(p\in(q_{1}^{*},1)\), we have

$$\mathrm{H}(Y) = h_{2}(p) v\big(-(1-p)x\big) + \big(1-h_{1}(p)\big)v(px)= h_{2}(p) v\big(-(1-p)x\big) < 0, $$

which implies \(\rho(Y)<0\). Hence \(\rho(X) > \rho(Y)\). This contradicts the assumption that \(\rho\) is stop-loss order preserving. Thus \(q^{*}_{1}=1\), that is, \(h_{1}(p)<1\) for \(0 \leq p<1\).
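
To make the contradiction above concrete, the following toy computation uses the hypothetical choices \(p=0.8\), \(x=1\), \(v(y)=y\), \(h_{2}(p)=p\) and \(h_{1}(p)=1\) on \((q_{1}^{*},1)\) (none of which come from the paper): it verifies \(X\prec_{\mathrm{sl}} Y\) on a grid of retention levels and evaluates \(\mathrm{H}(Y)<0\).

import numpy as np

p, x = 0.8, 1.0
ys = np.array([-(1.0 - p) * x, p * x])  # support of Y
qs = np.array([p, 1.0 - p])             # P[Y = -(1-p)x] = p, P[Y = px] = 1-p

for t in np.linspace(-1.0, 1.0, 21):
    sl_X = max(0.0 - t, 0.0)                            # E[(X - t)^+] for X = 0
    sl_Y = float(np.sum(qs * np.maximum(ys - t, 0.0)))  # E[(Y - t)^+]
    assert sl_X <= sl_Y + 1e-12                         # stop-loss order X ≺_sl Y

H_Y = qs[0] * ys[0]  # h_2(p) v(-(1-p)x) with h_2(p) = p, v(y) = y, h_1(p) = 1
print(H_Y)           # -0.16 < 0 = H(X), so rho(Y) < 0 = rho(X) despite X ≺_sl Y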

Note that the monotone function \(h_{2}\) is convex on \([0, 1]\) if and only if it is mid-point convex (Niculescu and Persson [19, Theorem 1.1.4]), that is, for any \(p_{1}, p_{2} \in[0, 1]\), it holds that

$$\begin{aligned} 2 h_{2} \bigg(\frac{p_{1}+p_{2}}{2} \bigg) \leq h_{2}(p_{1})+h_{2}(p_{2}). \end{aligned}$$
(A.7)

Suppose (A.7) does not hold. Then there exist \(0 \leq p_{1} < p_{2} < 1\) such that

$$\begin{aligned} 2 h_{2}(\overline{p}) > h_{2}(p_{1})+h_{2}(p_{2}), \end{aligned}$$
(A.8)

where \(\overline{p}=(p_{1}+p_{2})/2\). Thus

$$\begin{aligned} r:= \frac{h_{2}(\overline{p})-h_{2}(p_{1})}{h_{2}(p_{2})-h_{2}(\overline{p})} \in (1,\infty]. \end{aligned}$$

Since the function \(v\) is strictly increasing on ℝ, it is differentiable almost everywhere on ℝ and hence almost everywhere on any neighbourhood of 0. Thus there exist \(x_{n}\uparrow0\) as \(n\to\infty\) such that \(v\) is differentiable at \(x_{n}\) with \(v'(x_{n})>0\), \(n\in \mathbb{N}\). Hence for each \(n\in\mathbb{N}\),

$$\begin{aligned} \lim_{\delta\to0+} \frac{v(x_{n}+\delta)-v(x_{n})}{v(x_{n})-v(x_{n}-\delta )} = \lim_{\delta \to0+} \frac{(v(x_{n}+\delta)-v(x_{n}))/\delta}{(v(x_{n})-v(x_{n}-\delta ))/\delta} =1, \end{aligned}$$

and hence there exists \(\delta_{n}\in(0,-x_{n})\) such that \(\delta _{n}\downarrow0\) as \(n\to\infty\) and

$$ 0 < \, \frac{v(x_{n}+\delta_{n})-v(x_{n})}{v(x_{n})-v(x_{n}-\delta_{n})} < r, \qquad n\in\mathbb{N}. $$
(A.9)

Take \(y_{n1} = x_{n}-2\delta_{n} <0\) for \(n\in\mathbb{N}\). Then observing that \(x_{n}\to0-\), \(\delta_{n}\to0+\) as \(n\to\infty\), \(1-h_{1}(p_{2})>0\) and \(v\) is continuous, we have that

$$\lim_{n\to\infty} \bigg(-\frac{h_{2}(p_{1})}{1-h_{1}(p_{2})} v(y_{n1}) - \frac{h_{2}(p_{2})-h_{2}(p_{1})}{1-h_{1}(p_{2})} v(x_{n})\bigg)=0. $$

Moreover, (A.8) implies that \(h_{2}(p_{2})>0\) and \(h_{2}(p_{2}) > h_{2}(p_{1})\). Then we can find \(n_{0}\in\mathbb{N}\) such that

$$-\frac{h_{2}(p_{1})}{1-h_{1}(p_{2})} v(y_{1}) - \frac {h_{2}(p_{2})-h_{2}(p_{1})}{1-h_{1}(p_{2})} v(x_{n_{0}})\in \big(0,\max\{v(x):x>0\} \big) $$

with \(y_{1}=y_{n_{0}1}\). Thus because \(v\) is continuous, there exists \(y_{2}>0\) such that

$$v(y_{2})=-\frac{h_{2}(p_{1})}{1-h_{1}(p_{2})} v(y_{1}) - \frac {h_{2}(p_{2})-h_{2}(p_{1})}{1-h_{1}(p_{2})} v(x_{n_{0}}), $$

that is,

$$ h_{2}(p_{1}) v(y_{1}) + \big(h_{2}(p_{2})-h_{2}(p_{1})\big) v(x_{n_{0}}) + \big(1-h_{1}(p_{2})\big)v(y_{2}) = 0. $$
(A.10)

Define two random variables \(X\) and \(Y\) such that

$$\begin{aligned} \mathbb{P}[X=y_{1}] &= p_{1}, \quad \mathbb{P}[X=x_{n_{0}}]=p_{2}-p_{1},\quad \mathbb{P}[X=y_{2}] = 1-p_{2}, \\ \mathbb{P}[Y=y_{1}] &= p_{1}, \quad \mathbb{P}[Y=x_{n_{0}}-\delta _{n_{0}}]=\mathbb{P}[Y=x_{n_{0}}+\delta_{n_{0}}]=p_{2}-\overline{p},\\ \mathbb{P}[Y=y_{2}] &= 1-p_{2}. \end{aligned}$$

It is easy to check that \(X\prec_{\mathrm{sl}} Y\). In fact, (A.10) is equivalent to \(\mathrm{H}(X)=0\), which implies \(\rho(X)= 0\) since \(v\) is strictly increasing. By (A.9), we have \(\mathrm{H}(Y)< \mathrm{H}(X)=0\), which implies \(\rho (Y)<0=\rho(X)\). This contradicts the property of stop-loss order preservation of \(\rho\), and so \(h_{2}\) is convex. Similarly, we can show that \(h_{1}\) is convex.

Step 2. We aim to show that \(v\) is convex on both \(\mathbb{R}_{+}\) and \(\mathbb{R}_{-}\) under the condition that \(h_{1}\) and \(h_{2}\) are convex. To this end, we first assume that \(v\) is not convex on \(\mathbb{R}_{-}\). Then there exist \(x_{1}, x_{2}\in\mathbb{R}_{-}\) such that \(x_{1}< x_{2} <0\) and \(2 \, v(\overline{x}) > v(x_{1})+ v(x_{2})\) with \(\overline{x}=(x_{1}+ x_{2})/2 <0\). Thus

$$\begin{aligned} s:=\frac{v(\overline{x})-v(x_{1})}{v(x_{2})-v(\overline{x})}>1. \end{aligned}$$

Set \(q^{*}_{2}=\inf\{p\in[0,1]: h_{2}(p)=1\}\) and \(p_{2}^{*}=\sup\{p\in[0,1]: h_{2}(p)=0\}\). Note that \(0 \leq p^{*}_{2} < q^{*}_{2} = 1\) when \(h_{2}\) is convex. Moreover, \(h_{2}\) is strictly increasing on \([p^{*}_{2}, 1]\). Hence \(h_{2}\) has a positive derivative almost everywhere on \((p^{*}_{2}, 1)\). Then for each \(p\in (p^{*}_{2}, 1)\) such that \(h_{2}\) is differentiable at \(p\), there exists \(\varepsilon_{p}>0\) such that

$$\begin{aligned} \frac{h_{2}'(p)+\varepsilon_{p}}{h_{2}'(p)-\varepsilon_{p}}< s. \end{aligned}$$

Thus by the definition of derivatives, there exists \(\delta_{p}>0\) such that for all \(0<\delta\leq\delta_{p}\), it holds that \(p+\delta \in (p^{*}_{2}, 1)\), \(p-\delta \in (p^{*}_{2}, 1)\),

$$\begin{aligned} \frac{h_{2}(p+\delta)-h_{2}(p)}{\delta}< h_{2}'(p)+\varepsilon_{p} \qquad \mbox{and} \qquad \frac{h_{2}(p)-h_{2}(p-\delta)}{\delta }>h_{2}'(p)-\varepsilon_{p}, \end{aligned}$$

which implies

$$ \frac{h_{2}(p+\delta)-h_{2}(p)}{h_{2}(p)-h_{2}(p-\delta)} < \frac{h_{2}'(p)+\varepsilon_{p}}{h_{2}'(p)-\varepsilon_{p}}< s. $$

Note that we can choose \(p\in(p_{2}^{*}, 1)\) and \(\delta_{p}>0\) such that \(p+\delta_{p}\) is arbitrarily close to \(p_{2}^{*}\) and the ratio \(h_{2}(p+\delta_{p})/(1-h_{1}(p+\delta_{p}))>0\) can be made arbitrarily small. Then for \(\overline{x}<0\) and any \(y_{1}< x_{1}<0\), since \(v(y_{1})<0\), \(v(\overline{x})<0\), there exist \(p\in(p_{2}^{*}, 1)\) and \(\delta_{p}>0\) such that \(p -\delta_{p} \in(p_{2}^{*}, 1)\), \(p + \delta_{p} \in(p_{2}^{*}, 1)\) and

$$ -\frac{h_{2}(p-\delta_{p})}{1-h_{1}(p+\delta_{p})} v(y_{1}) - \frac {h_{2}(p+\delta_{p})-h_{2}(p-\delta_{p})}{1-h_{1}(p+\delta_{p})} v(\overline {x}) \in\big(0, \max\{v(x):x>0\}\big). $$

Hence by the fact that \(v\) is continuous, one can find \(y_{2}>0\) such that

$$ v(y_{2}) = -\frac{h_{2}(p-\delta_{p})}{1-h_{1}(p+\delta_{p})} v(y_{1}) - \frac {h_{2}(p+\delta_{p})-h_{2}(p-\delta_{p})}{1-h_{1}(p+\delta_{p})} v(\overline{x}), $$

i.e.,

$$ h_{2}(p-\delta_{p}) v(y_{1}) + \big(h_{2}(p+\delta _{p})-h_{2}(p-\delta_{p})\big) v(\overline{x}) + \big(1-h_{1}(p+\delta _{p})\big)v(y_{2}) = 0. $$

Define random variables \(X\) and \(Y\) such that

$$\begin{aligned} & \mathbb{P}[X=y_{1}] = p-\delta_{p}, \quad \mathbb{P}[X=\overline {x}]=2\delta_{p}, \quad \mathbb{P}[X=y_{2}] = 1-p-\delta_{p}, \\ & \mathbb{P}[Y=y_{1}] = p-\delta_{p}, \quad \mathbb{P}[Y=x_{1}]=\mathbb {P}[Y=x_{2}]=\delta_{p},\quad \mathbb{P}[Y=y_{2}] = 1-p-\delta_{p}. \end{aligned}$$

Clearly, \(\rho(X)=0>\rho(Y)\), \(X\prec_{\mathrm{cx}} Y\), and thus \(X\prec_{\mathrm{sl}} Y\), which contradicts the property of stop-loss preservation of \(\rho\). Hence \(v\) is convex on \(\mathbb{R}_{-}\). Similarly, we can show that \(v\) is convex on \(\mathbb{R}_{+}\).

Step 3. We show the inequality (2.8) by way of contradiction. Steps 1 and 2 have proved that \(v\) is convex on both \(\mathbb{R}_{+}\) and \(\mathbb{R}_{-}\) and \(h_{1}\) and \(h_{2}\) are convex. Hence the distortion functions \(h_{1}\) and \(h_{2}\) satisfy \(h_{1}(p) < 1\) and \(h_{2}(p) < 1\) for \(0 \leq p <1\). Assume that (2.8) does not hold. Then there exists \(p\in(0,1)\) such that

$$\begin{aligned} \frac{v'_{+}(0)}{v'_{-}(0)} < \frac{(h_{2})'_{-}(p)}{(h_{1})_{+}'(p)}. \end{aligned}$$
(A.11)

Hence (A.11) implies that \(p \in(p_{2}^{*}, 1)\), \((h_{2})'_{-}(p) > 0\) and

$$\begin{aligned} \frac{(h_{2})'_{-}(p)}{(h_{1})_{+}'(p)} \frac{v'_{-}(0)}{v'_{+}(0)} > 1. \end{aligned}$$

Then there exists \(\varepsilon>0\) such that

$$\begin{aligned} \frac{(1+\varepsilon)^{2}}{(1-\varepsilon)^{2}} < \frac {(h_{2})'_{-}(p)}{(h_{1})_{+}'(p)} \frac{v'_{-}(0)}{v'_{+}(0)}. \end{aligned}$$

Thus there exist \(\delta_{0}>0\) and \(x_{0}>0\) such that for all \(x\in(0, x_{0}]\) and \(\delta\in(0, \delta_{0}]\), we have \(p\pm\delta \in(p_{2}^{*}, 1)\) and

$$\begin{aligned} \frac{v(x)-v(0)}{v(0)-v(-x)} &< \frac{v'_{+}(0)\, x \, (1+\varepsilon)}{v'_{-}(0) \, x \, (1-\varepsilon)} \\ &< \frac{(h_{2})'_{-}(p) \, \delta\, (1-\varepsilon)}{(h_{1})'_{+}(p) \, \delta\, (1+\varepsilon)} < \frac{h_{2}(p)-h_{2}(p-\delta )}{h_{1}(p+\delta)-h_{1}(p)}, \end{aligned}$$
(A.12)

where the first and third inequalities follow from the definitions of left and right derivatives. Indeed, noting that

$$\lim_{\delta\downarrow0} \frac{h_{2}(p)-h_{2}(p-\delta)}{\delta} =(h_{2})'_{-}(p),~~~\lim_{\delta\downarrow0} \frac{h_{1}(p+\delta )-h_{1}(p)}{\delta} =(h_{1})'_{+}(p), $$

we know that there exists \(\delta_{0}>0\) such that for all \(\delta\in (0, \delta_{0}]\),

$$\begin{aligned} h_{2}(p)-h_{2}(p-\delta) &>(h_{2})'_{-}(p)(1-\varepsilon)\delta, \\ h_{1}(p+\delta)-h_{1}(p) &< (h_{1})'_{+}(p)(1+\varepsilon)\delta. \end{aligned}$$

This yields that for \(\delta\in(0,\delta_{0})\),

$$\frac{(h_{2})'_{-}(p) \, \delta\, (1-\varepsilon)}{(h_{1})_{+}'(p) \, \delta\, (1+\varepsilon)} < \frac{h_{2}(p)-h_{2}(p-\delta )}{h_{1}(p+\delta)-h_{1}(p)}. $$

The first inequality in (A.12) follows from the same arguments. Thus for this \(p\) and any \(\delta\in (0,\delta_{0})\), noticing that \(h_{2}(p-\delta)>0\), \(1-h_{1}(p+\delta)>0\) and \(v\) is continuous and strictly increasing with \(v(0)=0\), one can easily find \(x_{1} < 0\) and \(x_{2}>0\) such that

$$ v(x_{1}) h_{2}(p-\delta) + v(x_{2})\big(1-h_{1}(p+\delta)\big) =0. $$
(A.13)

Now define random variables \(X\) and \(Y\) such that

$$\begin{aligned} &\mathbb{P}[X=x_{1}]=p-\delta,\quad \mathbb{P}[X=0]=2\delta,\quad \mathbb{P}[X=x_{2}]=1-p-\delta,\\ &\mathbb{P}[Y=x_{1}]=p-\delta,\quad \mathbb{P}[Y=-x]=\mathbb {P}[Y=x]=\delta,\quad \mathbb{P}[Y=x_{2}]=1-p-\delta. \end{aligned}$$

Then \(X\prec_{\mathrm{cx}} Y\), thus \(X\prec_{\mathrm{sl}} Y\), and (A.13) is reduced to \(\mathrm{H}(X) = 0\), which together with (A.12) implies \(\mathrm{H}(Y) < 0\). It follows that \(\rho(Y)<0=\rho(X)\), which contradicts the property of stop-loss order preservation of \(\rho \). Hence (2.8) holds.

(iii) ⇒ (i) First note that if \(\mathrm{H}\) is convex, then \(\rho\) is convex. To see this, define the acceptance set \(\mathcal{A}_{\mathrm{H}} =\{X\in\mathcal{X}: \mathrm{H}(X)\leq0\}\) of the risk measure \(\mathrm{H}\), which is convex because \(\mathrm{H}\) is convex. Using \(\mathcal{A}_{\mathrm{H}}\) to define a risk measure \(\rho\) as in Föllmer and Schied [10] or in (2.7) then implies that \(\rho\) is convex.

Next, it suffices to show that \(\mathrm{H}\) is convex. Let \(X\), \(Y\), \(X^{c}\), \(Y^{c}\in\mathcal{X}\) and \(\lambda\in(0,1)\) such that \(X^{c}\stackrel{d}{=}X\), \(Y^{c} \stackrel{d}{=}Y\) and \(X^{c}\), \(Y^{c}\) are comonotonic. By Dhaene et al. [9], we have that \(\lambda X + (1-\lambda) Y \prec_{\mathrm{cx}} \lambda X^{c} + (1-\lambda) Y^{c}\). By Lemma A.4, it holds that \(\mathrm{H} ( \lambda X + (1-\lambda) Y ) \leq{\mathrm{H}} ( \lambda X^{c} + (1-\lambda) Y^{c} )\). Thus without loss of generality, we can assume that \(X\) and \(Y\) are comonotonic. Set

$$\begin{aligned} d= \frac{v'_{+}(0)}{v'_{-}(0)} \qquad \mbox{and} \qquad v_{d}(x) = \begin{cases} v(x),&x< 0,\\ v(x) / d, &x\geq0. \end{cases} \end{aligned}$$

It is easy to verify that \(v_{d}\) is convex on ℝ. The rest of the proof involves the following two steps:

Step 1. We first show that \(\mathrm{H}\) is convex for the case when \(X\) and \(Y\) are two comonotonic random variables with finite ranges. Without loss of generality, we assume that \(X\) and \(Y\) are defined on a probability space \((\mathbb{S},2^{\mathbb{S}},\mathbb{P}_{\mathbb{S}})\) with \(\mathbb{S}=\{1,\ldots,n\}\), \(\mathbb{P}_{\mathbb{S}}[\{i\}] = p_{i} > 0\) for \(i \in{\mathbb{S}}\) and \(\sum_{i\in{\mathbb{S}}} p_{i} =1\) such that \(X(i)=x_{i}\) and \(Y(i)=y_{i}\) satisfy \(x_{i}< x_{i+1}\) and \(y_{i}< y_{i+1}\) for \(i=1,\ldots,n-1\).

Let \(k=\inf\{i\in\mathbb{S}: \lambda x_{i} + (1-\lambda) y_{i}\geq0\}\), \(k_{x}=\inf\{i\in\mathbb{S}: x_{i} \geq0\}\) and \(k_{y}=\inf\{i\in\mathbb{S}: y_{i}\geq0\}\), with the convention \(\inf \emptyset=\infty\). We only need to deal with the case \(k_{x}\leq k_{y}\), since the case \(k_{y} < k_{x}\) can be proved similarly.

(a) If \(k_{x}=k_{y}\), then \(k=k_{x}\) and \((\lambda X + (1-\lambda) Y)^{+} = \lambda X^{+} + (1-\lambda) Y^{+}\) and \((\lambda X + (1-\lambda) Y)^{-} = \lambda X^{-} + (1-\lambda) Y^{-}\). Note that the rank-dependent utility \(\mathrm{H}_{u,h}(X)\) is convex if and only if \(u\) is convex and \(h\) is convex; see Chew et al. [8]. It follows that

$$\begin{aligned} &{\mathrm{H}\big(\lambda X + (1-\lambda) Y\big)}\\ &= \mathrm{H}_{v, \, h_{1}}\Big(\big(\lambda X + (1-\lambda) Y\big) \vee0\Big) + \mathrm{H}_{v, \, h_{2}}\Big(\big(\lambda X + (1-\lambda) Y\big) \wedge0\Big) \\ & \leq \lambda\, \mathrm{H}_{v, \, h_{1}}(X^{+}) + (1-\lambda) \, \mathrm{H}_{v, \, h_{1}}(Y^{+}) + \lambda\, \mathrm{H}_{v, \, h_{2}}(X \wedge0) + (1-\lambda) \, \mathrm{H}_{v, \, h_{2}}(Y \wedge0) \\ & = \lambda\, \mathrm{H} (X) + (1-\lambda) \, \mathrm{H}(Y). \end{aligned}$$

(b) If \(k_{x}< k_{y}\), then \(k_{x}\leq k\leq k_{y}\). Set \(\overline{p}_{i} = \sum _{j\leq i, \, j \in{\mathbb{S}}} p_{j}\), \(i\in{\mathbb{S}}\). Then we have

$$\begin{aligned} {\mathrm{H}}\big(\lambda X + (1-\lambda) Y\big) & = \sum_{i\leq k-1, \, i \in{\mathbb{S}}} \! v_{d}\big(\lambda x_{i}+(1-\lambda)y_{i}\big) \big(h_{2}(\overline{p}_{i}) -h_{2}(\overline {p}_{i-1})\big)\\ &\phantom{=:}+ \sum_{i\geq k, \, i \in{\mathbb{S}}} \! d \, v_{d} \big(\lambda x_{i}+(1-\lambda)y_{i}\big) \big(h_{1}(\overline{p}_{i}) -h_{1}(\overline{p}_{i-1})\big)\\ &=:\sum_{i\in{\mathbb{S}}} \xi_{\alpha}(i), \\ {\mathrm{H}}(X) &= \sum_{i\leq k_{x}-1, \, i \in{\mathbb{S}}} v_{d} (x_{i}) \big(h_{2}(\overline{p}_{i}) -h_{2}(\overline{p}_{i-1})\big) \\ &\phantom{=:} + \sum_{i\geq k_{x}, \, i \in{\mathbb{S}}}d \, v_{d} (x_{i}) \big(h_{1}(\overline{p}_{i}) -h_{1}(\overline{p}_{i-1})\big)\\ & =:\sum_{i\in{\mathbb{S}}} \xi_{\alpha,x}(i), \\ {\mathrm{H}}(Y) & = \sum_{i\leq k_{y}-1, \, i \in{\mathbb{S}}} v_{d} (y_{i}) \big(h_{2}(\overline{p}_{i}) -h_{2}(\overline{p}_{i-1})\big) \\ &\phantom{=:} + \sum_{i\geq k_{y}, \, i \in{\mathbb{S}}} d \, v_{d} (y_{i}) \big(h_{1}(\overline{p}_{i}) -h_{1}(\overline{p}_{i-1})\big)\\ &=:\sum_{i\in{\mathbb{S}}} \xi_{\alpha,y}(i). \end{aligned}$$

We assert that for \(i\in{\mathbb{S}}\),

$$ \xi_{\alpha}(i) \leq\lambda\xi_{\alpha,x}(i) + (1-\lambda) \xi _{\alpha,y}(i). $$
(A.14)

Then it follows immediately that \(\mathrm{H}(\lambda X + (1-\lambda) Y) \leq\lambda {\mathrm{H}}(X) + (1-\lambda) \mathrm{H}(Y)\). To prove (A.14), first note that it holds trivially for \(i < k_{x}\) and \(i> k_{y}\) since \(v_{d}\) is convex. For \(k_{x}\leq i\leq k\), we have \(v_{d} (x_{i})\geq0\). Since \(v_{d}\) is convex, we have \(v_{d} (\lambda x_{i}+(1-\lambda)y_{i}) \leq (1-\lambda) v_{d} (y_{i}) + \lambda v_{d} (x_{i})\), i.e.,

$$\begin{aligned} v_{d} \big(\lambda x_{i}+(1-\lambda)y_{i}\big) - (1-\lambda) v_{d} (y_{i})\leq\lambda v_{d} (x_{i}). \end{aligned}$$
(A.15)

On the other hand, by (2.8) and recalling that \(d=v'_{+}(0)/v'_{-}(0)\), we also have \((h_{2})'_{-}(p)\leq d (h_{1})_{+}'(p)\) for all \(p\in(0,1)\). Hence, noticing that \(h_{1}\) and \(h_{2}\) are convex and have derivatives almost everywhere on \([0,1]\), we have for any \(\overline{p}_{i-1}<\overline{p}_{i}\) that

$$\begin{aligned} 0\leq h_{2}(\overline{p}_{i}) - h_{2}(\overline{p}_{i-1}) & = \int _{\overline{p}_{i-1}}^{\overline{p}_{i}} (h_{2})'_{-}(p)\,\mathrm{d}p \leq d\int_{\overline {p}_{i-1}}^{\overline{p}_{i}} (h_{1})'_{+}(p)\,\mathrm{d}p \\ &= d \big(h_{1}(\overline{p}_{i}) -h_{1}(\overline{p}_{i-1})\big), \end{aligned}$$

which, together with (A.15), implies that

$$\begin{aligned} &\Big(v_{d} \big(\lambda x_{i}+(1-\lambda)y_{i}\big) - (1-\lambda) v_{d} (y_{i})\Big)\big(h_{2}(\overline{p}_{i}) - h_{2}(\overline{p}_{i-1})\big) \\ &\leq\lambda \, v_{d} (x_{i}) \, d \, \big(h_{1}(\overline{p}_{i}) -h_{1}(\overline{p}_{i-1})\big), \end{aligned}$$

i.e., (A.14) holds. Similarly, it can be verified that (A.14) holds for \(k< i\leq k_{y}\).
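
Step 1 can also be probed numerically. The sketch below implements the discrete functional displayed in part (b), using \(d\,v_{d}(x)=v(x)\) for \(x\geq0\), with the illustrative choices \(v(t)=t\) for \(t<0\), \(v(t)=3t+t^{2}\) for \(t\geq0\) (so \(d=3\)) and \(h_{1}(p)=h_{2}(p)=p^{2}\); these satisfy the convexity hypotheses and (2.8), and the convexity inequality for \(\mathrm{H}\) then holds on a comonotonic pair, as Step 1 asserts. All concrete choices are assumptions for illustration.

import numpy as np

def H_cpt(v, h1, h2, xs, ps):
    # Discrete form of H from Step 1: h2-increments weight v on (-inf, 0) and,
    # since d * v_d(x) = v(x) for x >= 0, plain h1-increments weight v on [0, inf).
    # xs must be sorted ascending (a comonotonic representation).
    F0 = np.concatenate(([0.0], np.cumsum(ps)))
    w = np.where(xs < 0, np.diff(h2(F0)), np.diff(h1(F0)))
    return float(np.sum(v(xs) * w))

v  = lambda t: np.where(t < 0, t, 3.0 * t + t * t)  # convex on R_- and on R_+, d = 3
h1 = lambda p: p**2
h2 = lambda p: p**2                                 # (2.8) holds since d = 3 >= 1

ps = np.array([0.25, 0.25, 0.25, 0.25])
X  = np.array([-2.0, -1.0, 0.0, 3.0])  # comonotonic with Y: both are increasing
Y  = np.array([-1.0,  0.0, 2.0, 5.0])  # over the same four equally likely states

lam = 0.4
lhs = H_cpt(v, h1, h2, lam * X + (1 - lam) * Y, ps)
rhs = lam * H_cpt(v, h1, h2, X, ps) + (1 - lam) * H_cpt(v, h1, h2, Y, ps)
print(lhs, rhs)  # 14.6425 <= 15.3625
assert lhs <= rhs + 1e-12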

Step 2. We now show that \(\mathrm{H}\) is convex in the general case where \(X\) and \(Y\) are two arbitrary comonotonic random variables. For any nonnegative random variable \(Z\), there exists a sequence of nonnegative discrete random variables \(\{Z_{n},n\in\mathbb{N}\}\) each with finite range such that \(Z_{n}\) increases to \(Z\) almost surely as \(n\to\infty\). As \(X\) and \(Y\) are comonotonic, there exist a random variable \(Z\) and two nondecreasing functions \(f\) and \(g\) such that \(X=f(Z)\) and \(Y=g(Z)\) a.s. Thus we can construct \(X_{n}\) and \(Y_{n}\) as nondecreasing functions of \(Z\) such that \((X_{n}^{+})\), \((X_{n}^{-})\), \((Y_{n}^{+})\) and \((Y_{n}^{-})\) increase to \(X^{+}\), \(X^{-}\), \(Y^{+}\) and \(Y^{-}\), respectively, as \(n\to\infty\), and \(X_{n}\) and \(Y_{n}\) are comonotonic.

$$ {\mathrm{H}}(X_{n}) \longrightarrow {\mathrm{H}} (X)\quad {\mathrm {and}}\quad {\mathrm{H}} (Y_{n})\longrightarrow {\mathrm {H}}(Y)\qquad {\mathrm{as}}~n\to\infty. $$
(A.16)

Note that \(0\leq{\mathrm{H}}_{v, \, h_{1}}((\lambda X_{n} + (1-\lambda) Y_{n})^{+}) \leq{\mathrm{H}}_{v, \, h_{1}}(\lambda(X_{n})^{+} + (1-\lambda) (Y_{n})^{+})\) and \(0\geq {\mathrm{H}}_{v, \, h_{2}}((\lambda X_{n} + (1-\lambda) Y_{n})\wedge0) \geq{\mathrm{H}}_{v, \, h_{2}}(\lambda X_{n}\wedge0 + (1-\lambda)Y_{n}\wedge0)\). Moreover, \(\mathrm{H}_{v, \, h_{1}}(\lambda(X_{n})^{+} + (1-\lambda) (Y_{n})^{+})\) and \(\mathrm{H}_{v, \, h_{2}}(\lambda X_{n}\wedge0 + (1-\lambda)Y_{n}\wedge0)\) are both monotone. Thus by the monotone convergence theorem, they converge to \(\mathrm{H}_{v, \, h_{1}}(\lambda X^{+} + (1-\lambda) Y^{+})\) and \(\mathrm{H}_{v, \, h_{2}}(\lambda X\wedge0 + (1-\lambda) Y\wedge0)\), respectively, as \(n\to\infty\). Then by the dominated convergence theorem, we have that

$$ {\mathrm{H}}\big(\lambda X_{n} + (1-\lambda) Y_{n}\big) \longrightarrow {\mathrm{H}}\big(\lambda X + (1-\lambda) Y\big)\qquad {\mathrm{as}}~n\to\infty. $$
(A.17)

On the other hand, by Step 1, we have

$$\mathrm{H}\big(\lambda X_{n} + (1-\lambda) Y_{n}\big) \leq \lambda {\mathrm{H}}(X_{n}) + (1-\lambda) \mathrm{H}(Y_{n})\qquad {\mathrm {for~all}}~n\in\mathbb{N}. $$

Combining this with (A.16) and (A.17) yields that

$$\mathrm{H}\big(\lambda X + (1-\lambda) Y\big) \leq \lambda {\mathrm{H}}(X) + (1-\lambda) \mathrm{H}(Y). $$

This completes the proof of (iii)⇒(i).

(iii) ⇒ (ii) By Lemma A.4, \(\mathrm{H}\) is stop-loss preserving on \(\mathcal{X}\). Then for any \(X, Y \in\mathcal{X}\) such that \(X\prec_{\mathrm{sl}} Y\), we have \(X-x\prec_{\mathrm{sl}} Y-x\) for all \(x\in\mathbb{R}\). Thus \(\mathrm{H}(X-x)\leq {\mathrm{H}}(Y-x)\) for all \(x\in\mathbb{R}\), which implies that \(\{ x\in\mathbb{R}: \mathrm{H}(Y-x)\leq0\} \subseteq\{x\in\mathbb{R}: \mathrm{H}(X-x)\leq0\}\). Hence \(\rho(X)\leq\rho(Y)\) by the definition of \(\rho\), and so \(\rho\) is stop-loss preserving on \(\mathcal{X}\).

(ii) ⇒ (ii)′ This is obvious since \(L^{\infty}\subseteq \mathcal{X}\).

Combining all the above arguments, we complete the proof. □


Cite this article

Mao, T., Cai, J. Risk measures based on behavioural economics theory. Finance Stoch 22, 367–393 (2018). https://doi.org/10.1007/s00780-018-0358-6
