Abstract
Coherent risk measures (Artzner et al. in Math. Finance 9:203–228, 1999) and convex risk measures (Föllmer and Schied in Finance Stoch. 6:429–447, 2002) are characterized by axioms that are desirable for risk measures. However, concrete and practical risk measures can also be proposed from other perspectives. In this paper, we propose new risk measures based on behavioural economics theory. We use rank-dependent expected utility (RDEU) theory to formulate an objective function and propose the smallest solution that minimizes the objective function as a risk measure. We also employ cumulative prospect theory (CPT) to introduce a set of acceptable regulatory capitals and define the infimum of the set as a risk measure. We show that the classes of risk measures derived from RDEU theory and CPT are equivalent, and that they are all monetary risk measures. We present the properties of the proposed risk measures and give necessary and sufficient conditions for them to be coherent and convex, respectively. The risk measures based on these behavioural economics theories not only cover important risk measures such as distortion risk measures, expectiles and shortfall risk measures, but also produce interesting new coherent risk measures as well as convex but not coherent risk measures.
Notes
Throughout the paper, for an increasing function \(g: \mathbb{R}\to\mathbb{R}\) and a function \(f: \mathbb {R}\to\mathbb{R}\), the Lebesgue–Stieltjes (L-S) integral \(\int _{\mathbb{R}} f(x) \mathrm{d}g(x)\) is defined as \(\int_{\mathbb{R}}f(x) \mathrm{d} g_{+}(x)\) or \(\int_{\mathbb{R}}f(x) \mu_{g}(\mathrm{d}x)\), where \(g_{+}(x)=g(x+)\) and \(\mu_{g}\) is the measure defined by \(\mu_{g}([a, b]) = g(b+)-g(a-)\) for any \(a \leq b\). See for instance Merkle et al. [17].
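The jump convention above can be made concrete with a small numerical sketch (our own illustration, not part of the paper): for a purely discrete increasing \(g\), the L-S integral \(\int_{\mathbb{R}} f(x)\,\mathrm{d}g(x)\) reduces to a sum of \(f\) over the jump points of \(g\), weighted by the jump sizes \(\mu_{g}(\{a\}) = g(a+)-g(a-)\).

```python
def ls_integral_step(f, jumps):
    """Lebesgue-Stieltjes integral of f against a pure-jump increasing g.

    `jumps` maps each jump point a to mu_g({a}) = g(a+) - g(a-), matching
    the convention mu_g([a, b]) = g(b+) - g(a-) from the footnote.
    """
    return sum(f(a) * m for a, m in jumps.items())

# g jumps by 0.5 at x = 0 and by 0.5 at x = 1 (a two-point distribution function)
jumps = {0.0: 0.5, 1.0: 0.5}
print(ls_integral_step(lambda x: x**2, jumps))  # 0.5 * 0 + 0.5 * 1 = 0.5
```

For a general increasing \(g\), one would also integrate against the absolutely continuous part; the sketch covers only the pure-jump case.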
A set \(\mathcal{X}\) is said to be closed under translation if \(X+c \in\mathcal{X}\) for any \(X \in\mathcal{X}\) and \(c \in\mathbb{R}\). We point out that in some cases, when we say that a risk measure \(\rho\) defined on a set \(\mathcal{X}\) is a coherent/convex/monetary risk measure or satisfies one of the properties (P2)–(P5), to simplify the expressions, we may not specify the required structure of the set \(\mathcal{X}\). However, in these cases, it is implied that \(\mathcal{X}\) has the corresponding structure.
References
Aczél, J.: Lectures on Functional Equations and Their Applications. Academic Press, New York/London (1966)
Allais, M.: Le comportement de l’homme rationnel devant le risque. Econometrica 21, 503–546 (1953)
Arrow, K.J.: Essays in the Theory of Risk-Bearing. North-Holland, New York (1974)
Artzner, P., Delbaen, F., Eber, J.-M., Heath, D.: Coherent measures of risk. Math. Finance 9, 203–228 (1999)
Bäuerle, N., Müller, A.: Stochastic orders and risk measures: consistency and bounds. Insur. Math. Econ. 38, 132–148 (2006)
Bellini, F., Klar, B., Müller, A., Rosazza Gianin, E.: Generalized quantiles as risk measures. Insur. Math. Econ. 54, 41–48 (2014)
Cai, J., Mao, T.: Risk measures derived from a regulator’s perspective on the regulatory capital requirements for insurers. Preprint (2018). Available online at: https://ssrn.com/abstract=3127285
Chew, S.H., Karni, E., Safra, Z.: Risk aversion in the theory of expected utility with rank dependent probabilities. J. Econ. Theory 42, 370–381 (1987)
Dhaene, J., Denuit, M., Goovaerts, M.J., Kaas, R., Vyncke, D.: The concept of comonotonicity in actuarial science and finance: theory. Insur. Math. Econ. 31, 3–33 (2002)
Föllmer, H., Schied, A.: Convex measures of risk and trading constraints. Finance Stoch. 6, 429–447 (2002)
Föllmer, H., Schied, A.: Stochastic Finance: An Introduction in Discrete Time, 3rd edn. de Gruyter, Berlin (2011)
Frittelli, M., Rosazza Gianin, E.: Putting order in risk measures. J. Bank. Finance 26, 1473–1486 (2002)
Jouini, E., Schachermayer, W., Touzi, N.: Law invariant risk measures have the Fatou property. Adv. Math. Econ. 9, 49–71 (2006)
Kahneman, D., Tversky, A.: Prospect theory: an analysis of decision under risk. Econometrica 47, 263–291 (1979)
Kaluszka, M., Krzeszowiec, M.: Mean-value principle under cumulative prospect theory. ASTIN Bull. 42, 103–122 (2012)
Kaluszka, M., Krzeszowiec, M.: Pricing insurance contracts under cumulative prospect theory. Insur. Math. Econ. 50, 159–166 (2012)
Merkle, M., Marinescu, D., Merkle, M.M.R., Monea, M., Stroe, M.: Lebesgue–Stieltjes integral and Young’s inequality. Appl. Anal. Discrete Math. 8, 60–72 (2014)
Newey, W.K., Powell, J.L.: Asymmetric least squares estimation and testing. Econometrica 55, 819–847 (1987)
Niculescu, C., Persson, L.-E.: Convex Functions and Their Applications: A Contemporary Approach. Springer, New York (2006)
Pratt, J.W.: Risk aversion in the small and in the large. Econometrica 32, 122–136 (1964)
Quiggin, J.: A theory of anticipated utility. J. Econ. Behav. Organ. 3, 323–343 (1982)
Quiggin, J.: Generalized Expected Utility Theory: The Rank-Dependent Model. Kluwer Academic, Dordrecht (1993)
Schmeidler, D.: Subjective probability and expected utility without additivity. Econometrica 57, 571–587 (1989)
Schmidt, U., Zank, H.: Risk aversion in cumulative prospect theory. Manag. Sci. 54, 208–216 (2008)
Shaked, M., Shanthikumar, J.G.: Stochastic Orders. Springer, New York (2007)
Svindland, G.: Convex Risk Measures Beyond Bounded Risks. Doctoral dissertation, Ludwig Maximilian University (2009). Available online at https://edoc.ub.uni-muenchen.de/9715/1/Svindland_Gregor.pdf
Tsanakas, A.: To split or not to split: capital allocation with convex risk measures. Insur. Math. Econ. 44, 268–277 (2009)
Tversky, A., Kahneman, D.: Advances in prospect theory: cumulative representation of uncertainty. J. Risk Uncertain. 5, 297–323 (1992)
Yaari, M.E.: The dual theory of choice under risk. Econometrica 55, 95–115 (1987)
Acknowledgements
The authors thank the two anonymous referees, the Associate Editor, and the Editor for insightful suggestions that improved the presentation of the paper. Tiantian Mao is grateful for the support from the National Natural Science Foundation of China (grant Nos. 71671176, 11371340). Jun Cai is grateful for the support from the Natural Sciences and Engineering Research Council (NSERC) of Canada (grant No. RGPIN-2016-03975).
Appendix
Proposition A.1
For \(u_{1},u_{2}\in\mathcal {U}_{\mathrm{icx}}^{+}\), \(v\in\mathcal {U}\) and \(h_{1}, h_{2}\in\mathcal {H}\), define \(\mathcal {X}_{u_{1},u_{2}}^{h_{1},h_{2}}\) and \(\mathcal {X}_{v}^{h_{1},h_{2}}\) by (2.1) and (2.6), respectively, where the sets \(\mathcal {U}_{\mathrm{icx}}^{+}\), \(\mathcal {U}\) and ℋ are defined near the end of Sect. 2.1.
(i) If \(h_{1}\) is convex and \(h_{2}\) is concave, then \(\mathcal {X}_{u_{1},u_{2}}^{h_{1},h_{2}}\) is a convex set.
(ii) If \(v\) is convex on both \(\mathbb{R}_{+}\) and \(\mathbb{R}_{-}\) and \(h_{1}\) and \(h_{2}\) are convex, then \(\mathcal {X}_{v}^{h_{1},h_{2}}\) is a convex set.
Proof
We only prove (i) as (ii) follows from (i) and the equation
where \(v_{+}(x)=v(x)\), \(v_{-}(x)=-v(-x)\), \(x\in\mathbb{R}_{+}\), and \(h_{2}^{*}(p)=1-h_{2}(1-p)\), \(p\in[0,1]\).
To prove (i), let \(X,Y\in\mathcal {X}_{u_{1},u_{2}}^{h_{1},h_{2}}\) and \(\lambda\in(0,1)\). Set \(Z=\lambda X+(1-\lambda)Y\). Note that \(u_{i}\) is convex and hence, for any \(x\in\mathbb{R}\),
Then noting that \(\mathrm{H}_{u_{1},h_{1}}(X) = \rho_{h_{1}^{*}} (u_{1}(X))\) for any random variable \(X\), where \(\rho_{h}\) is the distortion risk measure defined by (3.7), we obtain
where the second inequality uses that \(\rho_{h}\) is a convex risk measure if and only if \(h\) is concave. Similarly, one can show that \(\mathrm{H}_{u_{2},h_{2}}((x-Z)^{-})\) is also finite for any \(x\in \mathbb{R}\). □
The following two lemmas are useful in the proof of Proposition 2.2 and Theorem 3.1, respectively.
Lemma A.2
For \(u\in\mathcal {U}_{\mathrm{icx}}^{+}\) and \(h\in\mathcal {H}\), let \(X\) be a random variable with distribution function \(F\) and such that for any \(c\in\mathbb{R}\),
Then:
(i) \(g\) and \(f\) have finite left and right derivatives at any \(c \in \mathbb{R}\), and
(ii) If in addition \(u\) is differentiable with \(u'(0)=0\), then \(g\) and \(f\) are differentiable and
Proof
Note that (ii) follows immediately from (i). We only give the proof for \(g'_{+}\) in (i); the other formulas can be proved similarly. For any \(c, x \in\mathbb{R}\) and \(\delta\not= 0\), set
Since \(u \in\mathcal {U}_{\mathrm{icx}}^{+}\), we can verify that \(\lim_{\delta\rightarrow0+} w_{\delta}(c, x) = - u'_{-}((x-c)^{+})\, \mathbf{1}_{\{x> c\}}\) and
Note that \(- w_{-1}(c, x)=-u((x-c)^{+})+u((x-(c-1))^{+})\) is integrable with respect to \(\mathrm{d}h(F(x))\). Then by (A.2), applying the dominated convergence theorem, we see that \(g'_{+}(c)\) exists and is finite with its expression given in (i). □
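The derivative formula established here can be sanity-checked numerically. Under illustrative choices of our own (\(u(x)=x^{2}\) on \(\mathbb{R}_{+}\), so \(u'(0)=0\); \(h(p)=p^{2}\); \(X\) uniform on \(\{1,2,3\}\)), and writing \(g(c)=\int u((x-c)^{+})\,\mathrm{d}h(F(x))\) consistently with the difference quotients in the proof, the formula \(g'(c)=-\int u'((x-c)^{+})\mathbf{1}_{\{x>c\}}\,\mathrm{d}h(F(x))\) agrees with a central difference quotient:

```python
# Numeric sanity check of the derivative formula in Lemma A.2(ii), under the
# illustrative choices u(x) = x^2 (so u'(0) = 0), h(p) = p^2, and X uniform
# on {1, 2, 3}; these choices are ours, not the paper's.
atoms = [1.0, 2.0, 3.0]
F = [1/3, 2/3, 1.0]
h = lambda p: p**2
# weights of the measure dh(F(x)): jumps of h(F(.)) at the atoms
w = [h(F[0])] + [h(F[i]) - h(F[i - 1]) for i in range(1, 3)]

u = lambda x: max(x, 0.0)**2      # u((x)^+) = (x^+)^2
du = lambda x: 2.0 * max(x, 0.0)  # its derivative on R_+

def g(c):
    return sum(wi * u(xi - c) for wi, xi in zip(w, atoms))

def g_prime(c):  # claimed formula: -sum of u'((x-c)^+) 1{x>c} weighted by dh(F(x))
    return -sum(wi * du(xi - c) for wi, xi in zip(w, atoms) if xi > c)

c, eps = 0.5, 1e-6
numeric = (g(c + eps) - g(c - eps)) / (2 * eps)
print(abs(numeric - g_prime(c)) < 1e-6)  # True
```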
Lemma A.3
For \(h\in\mathcal {H}\) and a random variable \(X\) with distribution function \(F\) and survival function \(S= 1-F\), it holds that for any \(x \in\mathbb{R}\),
Moreover, for any \(-\infty\leq a < b\leq\infty\),
Proof
We only prove the first equation of (A.3) since the second follows from the first and \(h (\mathbb{P}[X \geq y]) = 1- h^{\ast}(\mathbb{P}[X< y])\) and \(h (\mathbb{P}[X > y]) = 1- h^{\ast}(\mathbb{P}[X\leq y])\), where \(h^{\ast}(p) =1-h(1-p)\) is the dual distortion function of \(h\). For \(y > x\), we have
and thus \(h(F(x)) \leq h(\mathbb{P}[X < y]) \leq h(\mathbb{P}[X \leq y]) = h(F(y))\). Hence if there exists \(y_{0}> x\) such that \(F(y_{0})=F(x)\), then \(F(y)=F(x)\) for \(y \in[x, y_{0}]\). Then \(h(\mathbb {P}[X < y]) = h(F(y))=h(F(x))\) for \(y \in[x, y_{0}]\) and thus
If \(F(y) > F(x)\) for all \(y > x\), then by (A.6), both \(\mathbb{P}[X < y]\) and \(\mathbb{P}[X \leq y]=F(y)\) converge to \(F(x)\) when \(y \downarrow x\). Hence
In addition, (A.4) and (A.5) follow directly from the definition of the L-S integrals; see Footnote 1 after (1.3). □
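The limits in (A.3) can be illustrated numerically (a toy example of our own): for a two-point random variable, both \(h(\mathbb{P}[X<y])\) and \(h(F(y))\) tend to \(h(F(x))\) as \(y\downarrow x\), even though \(y\mapsto\mathbb{P}[X<y]\) is only left-continuous.

```python
# Illustration (our own toy example) of the limits in Lemma A.3: for a
# two-point X with P[X=0] = P[X=1] = 1/2 and h(p) = p^2, both h(P[X < y])
# and h(F(y)) converge to h(F(x)) as y decreases to x = 0.
h = lambda p: p**2
F = lambda y: 0.0 if y < 0 else (0.5 if y < 1 else 1.0)           # P[X <= y]
F_strict = lambda y: 0.0 if y <= 0 else (0.5 if y <= 1 else 1.0)  # P[X < y]

x = 0.0
for y in [0.5, 0.1, 0.01, 1e-8]:  # y decreasing to x from the right
    assert h(F_strict(y)) == h(F(y)) == h(F(x)) == 0.25
print("right limits agree with h(F(x)) =", h(F(x)))
```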
Before proving Theorem 2.7, we present the following lemma from Schmidt and Zank [24, Theorem 1], which will be used in that proof.
Lemma A.4
Under the assumptions and notations of Theorem 2.7, if the statement (iii) of Theorem 2.7 holds, then \(\mathrm{H} =H_{v,\,h_{1},\,h_{2}}\) defined by (1.6) preserves stop-loss order on \(\mathcal {X}\).
Proof of Theorem 2.7
We introduce the auxiliary statement
- \((\mathrm{ii})'\): \(\rho\) preserves stop-loss order on \(L^{\infty}\).
We prove the theorem by showing the implications (i) ⇒ (ii)′ ⇒ (iii) ⇒ (i) and (iii) ⇒ (ii) ⇒ (ii)′.
(i) ⇒ (ii)′ Since \(L^{\infty}\subseteq\mathcal{X}\), \(\rho\) is convex on \(L^{\infty}\) as well. It follows from Jouini et al. [13] that a law-invariant convex risk measure on \(L^{\infty}\) has the Fatou property. Because \(\rho\) is law-invariant, it has the Fatou property on \(L^{\infty}\). Hence, by Lemma 2.17 of Svindland [26] or Theorem 4.3 of Bäuerle and Müller [5], \(\rho\) is stop-loss order preserving on \(L^{\infty}\).
(ii)′ ⇒ (iii) We show this implication by contradiction in three steps.
Step 1. We claim that \(h_{1}\) and \(h_{2}\) are both convex. To show this, let
We assert that \(q^{*}_{1}=1\) when (ii)′ holds. To see this, suppose \(q^{*}_{1}<1\). Then there exists \(p\in (q_{1}^{*},1)\) such that \(h_{2}(p) >0\) since \(h_{2}(1-)=1\). Define \(X\equiv0\) and let \(Y\) be such that \(\mathbb{P}[Y=-(1-p)x]=p\) and \(\mathbb{P}[Y=px]=1-p\) for some \(x>0\). It is easy to check that \(X\prec_{\mathrm{sl}} Y\). Moreover, \(\rho(X)=0\), and since \(h_{1}(p) = 1\) for \(p\in(q_{1}^{*},1)\), we have
which implies \(\rho(Y)<0\). Hence \(\rho(X) > \rho(Y)\). This contradicts the assumption that \(\rho\) is stop-loss order preserving. Thus \(q^{*}_{1}=1\), that is, \(h_{1}(p)<1\) for \(0 \leq p<1\).
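The two-point construction above can be spot-checked numerically (the parameter values are our own choice): \(Y\) has mean zero, so \(X\equiv0\prec_{\mathrm{sl}}Y\) follows from Jensen's inequality, \(\mathbb{E}[(Y-t)^{+}]\geq(\mathbb{E}[Y]-t)^{+}=\mathbb{E}[(X-t)^{+}]\).

```python
# The two-point Y in Step 1 has mean zero: E[Y] = p*(-(1-p)x) + (1-p)*(p*x) = 0,
# and X = 0 precedes Y in stop-loss order by Jensen's inequality.
# A numeric spot check with p = 0.7, x = 2.0 (our own values):
p, x = 0.7, 2.0
support = [(-(1 - p) * x, p), (p * x, 1 - p)]       # (value, probability)
assert abs(sum(v * q for v, q in support)) < 1e-12  # E[Y] = 0

def stop_loss(dist, t):  # E[(Z - t)^+] for a discrete Z
    return sum(max(v - t, 0.0) * q for v, q in dist)

for t in [-1.0, -0.3, 0.0, 0.5, 2.0]:
    assert stop_loss(support, t) >= stop_loss([(0.0, 1.0)], t) - 1e-12
print("X = 0 precedes Y in stop-loss order at all tested retentions")
```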
Note that the monotone function \(h_{2}\) is convex on \([0, 1]\) if and only if it is mid-point convex (Niculescu and Persson [19, Theorem 1.1.4]), that is, for any \(p_{1}, p_{2} \in[0, 1]\), it holds that
Suppose (A.7) does not hold. Then there exist \(0 \leq p_{1} < p_{2} < 1\) such that
where \(\overline{p}=(p_{1}+p_{2})/2\). Thus
Since the function \(v\) is strictly increasing on ℝ, it is differentiable almost everywhere on ℝ and hence almost everywhere on any neighbourhood of 0. Thus there exist \(x_{n}\uparrow0\) as \(n\to\infty\) such that \(v\) is differentiable at \(x_{n}\) with \(v'(x_{n})>0\), \(n\in \mathbb{N}\). Hence for each \(n\in\mathbb{N}\),
and hence there exists \(\delta_{n}\in(0,-x_{n})\) such that \(\delta _{n}\downarrow0\) as \(n\to\infty\) and
Take \(y_{n1} = x_{n}-2\delta_{n} <0\) for \(n\in\mathbb{N}\). Then observing that \(x_{n}\to0-\), \(\delta_{n}\to0+\) as \(n\to\infty\), \(1-h_{1}(p_{2})>0\) and \(v\) is continuous, we have that
Moreover, (A.8) implies that \(h_{2}(p_{2})>0\) and \(h_{2}(p_{2}) > h_{2}(p_{1})\). Then we can find \(n_{0}\in\mathbb{N}\) such that
with \(y_{1}=y_{n_{0}1}\). Thus because \(v\) is continuous, there exists \(y_{2}>0\) such that
that is,
Define two random variables \(X\) and \(Y\) such that
It is easy to check that \(X\prec_{\mathrm{sl}} Y\). In fact, (A.10) is equivalent to \(\mathrm{H}(X)=0\), which implies \(\rho(X)= 0\) since \(v\) is strictly increasing. By (A.9), we have \(\mathrm{H}(Y)< \mathrm{H}(X)=0\), which implies \(\rho (Y)<0=\rho(X)\). This contradicts the property of stop-loss order preservation of \(\rho\), and so \(h_{2}\) is convex. Similarly, we can show that \(h_{1}\) is convex.
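The mid-point convexity criterion (A.7) used in Step 1 is easy to test on a grid. The sketch below (the distortion choices are ours, not the paper's) confirms it for the convex distortion \(h(p)=p^{2}\) and detects its failure for the concave distortion \(h(p)=\sqrt{p}\).

```python
# Grid check of the mid-point convexity condition used in Step 1:
#   h((p1 + p2)/2) <= (h(p1) + h(p2)) / 2
# for two illustrative distortions of our own choosing.
import math

def midpoint_convex_on_grid(h, n=50):
    grid = [i / n for i in range(n + 1)]
    return all(h((p + q) / 2) <= (h(p) + h(q)) / 2 + 1e-12
               for p in grid for q in grid)

print(midpoint_convex_on_grid(lambda p: p**2))  # True  (convex distortion)
print(midpoint_convex_on_grid(math.sqrt))       # False (concave distortion)
```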
Step 2. We aim to show that \(v\) is convex on both \(\mathbb{R}_{+}\) and \(\mathbb{R}_{-}\) under the condition that \(h_{1}\) and \(h_{2}\) are convex. To this end, first suppose that \(v\) is not convex on \(\mathbb{R}_{-}\). Then there exist \(x_{1}, x_{2}\in\mathbb{R}_{-}\) with \(x_{1}< x_{2} <0\) such that \(2 \, v(\overline{x}) > v(x_{1})+ v(x_{2})\), where \(\overline{x}=(x_{1}+ x_{2})/2 <0\). Thus
Set \(q^{*}_{2}=\inf\{p\in[0,1]: h_{2}(p)=1\}\) and \(p_{2}^{*}=\sup\{p\in[0,1]: h_{2}(p)=0\}\). Note that \(0 \leq p^{*}_{2} < q^{*}_{2} = 1\) when \(h_{2}\) is convex. Moreover, \(h_{2}\) is then strictly increasing on \([p^{*}_{2}, 1]\). Hence \(h_{2}\) has a positive derivative almost everywhere on \((p^{*}_{2}, 1)\). Then for each \(p\in (p^{*}_{2}, 1)\) such that \(h_{2}\) is differentiable at \(p\), there exists \(\varepsilon_{p}>0\) such that
Thus by the definition of derivatives, there exists \(\delta_{p}>0\) such that for all \(0<\delta\leq\delta_{p}\), it holds that \(\delta+ p \in (p^{*}_{2}, 1)\), \(\delta- p \in (p^{*}_{2}, 1)\),
which implies
Note that we can choose \(p\in(p_{2}^{*}, 1)\) and \(\delta_{p}>0\) such that \(p+\delta_{p}\) is arbitrarily close to \(p_{2}^{*}\), so that the positive ratio \(h_{2}(p+\delta_{p})/(1-h_{1}(p+\delta_{p}))\) can be made arbitrarily small. Then for \(\overline{x}<0\) and any \(y_{1}< x_{1}<0\), since \(v(y_{1})<0\) and \(v(\overline{x})<0\), there exist \(p\in(p_{2}^{*}, 1)\) and \(\delta_{p}>0\) such that \(p -\delta_{p} \in(p_{2}^{*}, 1)\), \(p + \delta_{p} \in(p_{2}^{*}, 1)\) and
Hence by the fact that \(v\) is continuous, one can find \(y_{2}>0\) such that
i.e.,
Define random variables \(X\) and \(Y\) such that
Clearly, \(\rho(X)=0>\rho(Y)\) and \(X\prec_{\mathrm{cx}} Y\), hence \(X\prec_{\mathrm{sl}} Y\), which contradicts the stop-loss order preservation of \(\rho\). Hence \(v\) is convex on \(\mathbb{R}_{-}\). Similarly, we can show that \(v\) is convex on \(\mathbb{R}_{+}\).
Step 3. We show the inequality (2.8) by way of contradiction. Steps 1 and 2 have proved that \(v\) is convex on both \(\mathbb{R}_{+}\) and \(\mathbb{R}_{-}\) and \(h_{1}\) and \(h_{2}\) are convex. Hence the distortion functions \(h_{1}\) and \(h_{2}\) satisfy \(h_{1}(p) < 1\) and \(h_{2}(p) < 1\) for \(0 \leq p <1\). Assume that (2.8) does not hold. Then there exists \(p\in(0,1)\) such that
Hence (A.11) implies that \(p \in(p_{2}^{*}, 1)\), \((h_{2})'_{-}(p) > 0\) and
Then there exists \(\varepsilon>0\) such that
Thus there exist \(\delta_{0}>0\) and \(x_{0}>0\) such that for all \(x\in(0, x_{0}]\) and \(\delta\in(0, \delta_{0}]\), we have \(p\pm\delta \in(p_{2}^{*}, 1)\) and
where the first and third inequalities follow from the definitions of left and right derivatives. Indeed, noting that
we know that there exists \(\delta_{0}>0\) such that for all \(\delta\in (0, \delta_{0}]\),
This yields that for \(\delta\in(0,\delta_{0})\),
The first inequality in (A.12) follows from the same arguments. Thus for this \(p\) and any \(\delta\in (0,\delta_{0})\), noticing that \(h_{2}(p-\delta)>0\), \(1-h_{1}(p+\delta)>0\) and \(v\) is continuous and strictly increasing with \(v(0)=0\), one can easily find \(x_{1} < 0\) and \(x_{2}>0\) such that
Now define random variables \(X\) and \(Y\) such that
Then \(X\prec_{\mathrm{cx}} Y\), thus \(X\prec_{\mathrm{sl}} Y\), and (A.13) is reduced to \(\mathrm{H}(X) = 0\), which together with (A.12) implies \(\mathrm{H}(Y) < 0\). It follows that \(\rho(Y)<0=\rho(X)\), which contradicts the property of stop-loss order preservation of \(\rho \). Hence (2.8) holds.
(iii) ⇒ (i) First note that if \(\mathrm{H}\) is convex, then \(\rho\) is convex. To see this, define the set \(\mathcal{A}_{\mathrm{H}} =\{X\in\mathcal{X}: \mathrm{H}(X)\leq0\}\), called the acceptance set of \(\mathrm{H}\), which is convex because \(\mathrm{H}\) is convex. Using \(\mathcal{A}_{\mathrm{H}}\) to define a risk measure \(\rho\) as in Föllmer and Schied [10] or in (2.7) then implies that \(\rho\) is convex.
Next, it suffices to show that \(\mathrm{H}\) is convex. Let \(X\), \(Y\), \(X^{c}\), \(Y^{c}\in\mathcal{X}\) and \(\lambda\in(0,1)\) be such that \(X^{c}\stackrel{d}{=}X\), \(Y^{c} \stackrel{d}{=}Y\) and \(X^{c}\), \(Y^{c}\) are comonotonic. By Dhaene et al. [9], we have \(\lambda X + (1-\lambda) Y \prec_{\mathrm{cx}} \lambda X^{c} + (1-\lambda) Y^{c}\). By Lemma A.4, it holds that \(\mathrm{H} ( \lambda X + (1-\lambda) Y ) \leq \mathrm{H} ( \lambda X^{c} + (1-\lambda) Y^{c} )\). Thus without loss of generality, we can assume that \(X\) and \(Y\) are comonotonic. Set
It is easy to verify that \(v_{d}\) is convex on ℝ. The rest of the proof involves the following two steps:
Step 1. We first show that \(\mathrm{H}\) is convex in the case where \(X\) and \(Y\) are comonotonic random variables with finite ranges. Without loss of generality, we assume that \(X\) and \(Y\) are defined on a probability space \((\mathbb{S},2^{\mathbb{S}},\mathbb{P}_{\mathbb{S}})\) with \(\mathbb{S}=\{1,\ldots,n\}\), \(\mathbb{P}_{\mathbb{S}}[\{i\}] = p_{i} > 0\) for \(i \in{\mathbb{S}}\) and \(\sum_{i\in{\mathbb{S}}} p_{i} =1\), such that \(X(i)=x_{i}\) and \(Y(i)=y_{i}\) with \(x_{i}< x_{i+1}\) and \(y_{i}< y_{i+1}\) for \(1 \leq i \leq n-1\).
Let \(k=\inf\{i\in\mathbb{S}: \lambda x_{i} + (1-\lambda) y_{i}\geq0\}\), \(k_{x}=\inf\{i\in\mathbb{S}: x_{i} \geq0\}\) and \(k_{y}=\inf\{i\in\mathbb{S}: y_{i}\geq0\}\), with the convention \(\inf \emptyset=\infty\). We only need to deal with the case \(k_{x}\leq k_{y}\), since the case \(k_{y} < k_{x}\) can be proved similarly.
(a) If \(k_{x}=k_{y}\), then \(k=k_{x}\) and \((\lambda X + (1-\lambda) Y)^{+} = \lambda X^{+} + (1-\lambda) Y^{+}\) and \((\lambda X + (1-\lambda) Y)^{-} = \lambda X^{-} + (1-\lambda) Y^{-}\). Note that the rank-dependent utility \(\mathrm{H}_{u,h}(X)\) is convex if and only if \(u\) is convex and \(h\) is convex; see Chew et al. [8]. It follows that
(b) If \(k_{x}< k_{y}\), then \(k_{x}\leq k\leq k_{y}\). Set \(\overline{p}_{i} = \sum _{j\leq i, \, j \in{\mathbb{S}}} p_{j}\), \(i\in{\mathbb{S}}\). Then we have
We assert that for \(i\in{\mathbb{S}}\),
Then it follows immediately that \(\mathrm{H}(\lambda X + (1-\lambda) Y) \leq\lambda {\mathrm{H}}(X) + (1-\lambda) \mathrm{H}(Y)\). To prove (A.14), first note that it holds trivially for \(i < k_{x}\) and \(i> k_{y}\) since \(v_{d}\) is convex. For \(k_{x}\leq i\leq k\), we have \(v_{d} (x_{i})\geq0\). Since \(v_{d}\) is convex, we have \(v_{d} (\lambda x_{i}+(1-\lambda)y_{i}) \leq (1-\lambda) v_{d} (y_{i}) + \lambda v_{d} (x_{i})\), i.e.,
On the other hand, by (2.8) and recalling that \(d=v'_{+}(0)/v'_{-}(0)\), we also have \((h_{2})'_{-}(p)\leq d (h_{1})_{+}'(p)\) for all \(p\in(0,1)\). Hence, noticing that \(h_{1}\) and \(h_{2}\) are convex and have derivatives almost everywhere on \([0,1]\), we have for any \(\overline{p}_{i-1}<\overline{p}_{i}\) that
which, together with (A.15), implies that
i.e., (A.14) holds. Similarly, it can be verified that (A.14) holds for \(k< i\leq k_{y}\).
Step 2. We now show that \(\mathrm{H}\) is convex in the general case where \(X\) and \(Y\) are two arbitrary comonotonic random variables. For any nonnegative random variable \(W\), there exists a sequence of nonnegative discrete random variables \((W_{n})_{n\in\mathbb{N}}\), each with finite range, such that \(W_{n}\) increases to \(W\) almost surely as \(n\to\infty\). Since \(X\) and \(Y\) are comonotonic, there exist a random variable \(Z\) and two nondecreasing functions \(f\) and \(g\) such that \(X=f(Z)\) and \(Y=g(Z)\) a.s. Thus we can construct \(X_{n}\) and \(Y_{n}\) as nondecreasing functions of \(Z\) such that \(X_{n}\) and \(Y_{n}\) are comonotonic and \((X_{n}^{+})\), \((X_{n}^{-})\), \((Y_{n}^{+})\) and \((Y_{n}^{-})\) increase to \(X^{+}\), \(X^{-}\), \(Y^{+}\) and \(Y^{-}\), respectively, as \(n\to\infty\). Then by the monotone convergence theorem, we have
Note that \(0\leq{\mathrm{H}}_{v, \, h_{1}}((\lambda X_{n} + (1-\lambda) Y_{n})^{+}) \leq{\mathrm{H}}_{v, \, h_{1}}(\lambda(X_{n})^{+} + (1-\lambda) (Y_{n})^{+})\) and \(0\geq {\mathrm{H}}_{v, \, h_{2}}((\lambda X_{n} + (1-\lambda) Y_{n})\wedge0) \geq{\mathrm{H}}_{v, \, h_{2}}(\lambda X_{n}\wedge0 + (1-\lambda)Y_{n}\wedge0)\). Moreover, \(\mathrm{H}_{v, \, h_{1}}(\lambda(X_{n})^{+} + (1-\lambda) (Y_{n})^{+})\) and \(\mathrm{H}_{v, \, h_{2}}(\lambda X_{n}\wedge0 + (1-\lambda)Y_{n}\wedge0)\) are both monotone. Thus by the monotone convergence theorem, they converge to \(\mathrm{H}_{v, \, h_{1}}(\lambda X^{+} + (1-\lambda) Y^{+})\) and \(\mathrm{H}_{v, \, h_{2}}(\lambda X\wedge0 + (1-\lambda) Y\wedge0)\), respectively, as \(n\to\infty\). Then by the dominated convergence theorem, we have that
On the other hand, by Step 1, we have
Combining this with (A.16) and (A.17) yields that
This completes the proof of (iii)⇒(i).
(iii) ⇒ (ii) By Lemma A.4, \(\mathrm{H}\) is stop-loss order preserving on \(\mathcal{X}\). Then for any \(X, Y \in\mathcal{X}\) such that \(X\prec_{\mathrm{sl}} Y\), we have \(X-x\prec_{\mathrm{sl}} Y-x\) for all \(x\in\mathbb{R}\). Thus \(\mathrm{H}(X-x)\leq \mathrm{H}(Y-x)\) for all \(x\in\mathbb{R}\), which implies that \(\{ x\in\mathbb{R}: \mathrm{H}(Y-x)\leq0\} \subseteq\{x\in\mathbb{R}: \mathrm{H}(X-x)\leq0\}\). Hence \(\rho(X)\leq\rho(Y)\) by the definition of \(\rho\), and so \(\rho\) is stop-loss order preserving on \(\mathcal{X}\).
(ii) ⇒ (ii)′ This is obvious since \(L^{\infty}\subseteq \mathcal{X}\).
Combining all the above arguments, we complete the proof. □
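The comonotonic reduction at the start of (iii) ⇒ (i) can be illustrated with a small discrete example of our own: replacing \((X,Y)\) by the comonotonic rearrangement \((X^{c},Y^{c})\) with the same marginals can only increase every stop-loss transform of the mixture, in line with \(\lambda X+(1-\lambda)Y\prec_{\mathrm{cx}}\lambda X^{c}+(1-\lambda)Y^{c}\).

```python
# Spot check (our own discrete example) of the comonotonic bound used at the
# start of (iii) => (i): with X^c, Y^c the comonotonic rearrangements of X, Y,
# the mixture lam*X + (1-lam)*Y is dominated in convex order, and hence in
# stop-loss order, by lam*X^c + (1-lam)*Y^c.
X = [0.0, 1.0, 2.0, 3.0]       # four equally likely states
Y = [3.0, 2.0, 1.0, 0.0]       # same marginal law, opposite ordering
Xc, Yc = sorted(X), sorted(Y)  # comonotonic coupling of the two marginals
lam = 0.5

Z = [lam * a + (1 - lam) * b for a, b in zip(X, Y)]
Zc = [lam * a + (1 - lam) * b for a, b in zip(Xc, Yc)]

def stop_loss(vals, t):        # E[(V - t)^+] under the uniform law
    return sum(max(v - t, 0.0) for v in vals) / len(vals)

assert abs(sum(Z) - sum(Zc)) < 1e-12  # equal means, as convex order requires
for t in [-1.0, 0.0, 0.5, 1.5, 2.5, 4.0]:
    assert stop_loss(Z, t) <= stop_loss(Zc, t) + 1e-12
print("mixture is stop-loss dominated by its comonotonic counterpart")
```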
Cite this article
Mao, T., Cai, J. Risk measures based on behavioural economics theory. Finance Stoch 22, 367–393 (2018). https://doi.org/10.1007/s00780-018-0358-6
Keywords
- Distortion risk measure
- Expectile
- Coherent risk measure
- Convex risk measure
- Monetary risk measure
- Stop-loss order preserving
- Rank-dependent expected utility theory
- Cumulative prospect theory