
Utility maximisation in a factor model with constant and proportional transaction costs


Abstract

We study the problem of maximising expected utility of terminal wealth under constant and proportional transaction costs in a multidimensional market with prices driven by a factor process. We show that the value function is the unique viscosity solution of the associated quasi-variational inequalities and construct optimal strategies. While the value function turns out to be truly discontinuous, we are able to establish a comparison principle for discontinuous viscosity solutions which is strong enough to argue that the value function is unique, globally upper semicontinuous, and continuous if restricted to either borrowing or non-borrowing portfolios.


Notes

  1. See http://quantpde.org/. We are grateful to Parsiad Azimzadeh for making the code publicly available.

References

  1. Altarovici, A., Muhle-Karbe, J., Soner, H.M.: Asymptotics for fixed transaction costs. Finance Stoch. 19, 363–414 (2015)

  2. Altarovici, A., Reppen, M., Soner, H.M.: Optimal consumption and investment with fixed and proportional transaction costs. SIAM J. Control Optim. 55, 1673–1710 (2017)

  3. Azimzadeh, P., Forsyth, P.A.: Weakly chained matrices, policy iteration, and impulse control. SIAM J. Numer. Anal. 54, 1341–1364 (2016)

  4. Bayraktar, E., Sîrbu, M.: Stochastic Perron’s method and verification without smoothness using viscosity comparison: the linear case. Proc. Am. Math. Soc. 140, 3645–3654 (2012)

  5. Bayraktar, E., Sîrbu, M.: Stochastic Perron’s method for Hamilton–Jacobi–Bellman equations. SIAM J. Control Optim. 51, 4274–4294 (2013)

  6. Bayraktar, E., Sîrbu, M.: Stochastic Perron’s method and verification without smoothness using viscosity comparison: obstacle problems and Dynkin games. Proc. Am. Math. Soc. 142, 1399–1412 (2014)

  7. Belak, C., Christensen, S., Seifried, F.T.: A general verification result for stochastic impulse control problems. SIAM J. Control Optim. 55, 627–649 (2017)

  8. Belak, C., Sass, J.: Finite-horizon optimal investment with transaction costs: construction of the optimal strategies. Preprint (2018). Available online at: https://ssrn.com/abstract=2636341

  9. Bielecki, T.R., Pliska, S.R.: Risk sensitive asset management with transaction costs. Finance Stoch. 4, 1–33 (2000)

  10. Bouchard, B., Touzi, N.: Weak dynamic programming principle for viscosity solutions. SIAM J. Control Optim. 49, 948–962 (2011)

  11. Christensen, S.: On the solution of general impulse control problems using superharmonic functions. Stoch. Process. Appl. 124, 709–729 (2014)

  12. Crandall, M.G., Ishii, H., Lions, P.-L.: User’s guide to viscosity solutions of second order partial differential equations. Bull. Am. Math. Soc. 27, 1–67 (1992)

  13. Eastham, J.F., Hastings, K.J.: Optimal impulse control of portfolios. Math. Oper. Res. 13, 588–605 (1988)

  14. Feodoria, M.-R.: Optimal investment and utility indifference pricing in the presence of small fixed transaction costs. Ph.D. Thesis, Christian-Albrechts-Universität Kiel (2016). Available online at: https://macau.uni-kiel.de/receive/dissertation_diss_00019558

  15. Ishii, K.: Viscosity solutions of nonlinear second order elliptic PDEs associated with impulse control problems. Funkc. Ekvacioj 36, 123–141 (1993)

  16. Korn, R.: Portfolio optimisation with strictly positive transaction costs and impulse control. Finance Stoch. 2, 85–114 (1998)

  17. Korn, R., Laue, S.: Portfolio optimisation with transaction costs and exponential utility. In: Buckdahn, R., et al. (eds.) Stochastic Processes and Related Topics. Proceedings of the 12th Winter School, Siegmundsburg, Germany, pp. 171–188. Taylor & Francis, London (2002)

  18. Liu, H.: Optimal consumption and investment with transaction costs and multiple risky assets. J. Finance 59, 289–338 (2004)

  19. Øksendal, B., Sulem, A.: Optimal consumption and portfolio with both fixed and proportional transaction costs. SIAM J. Control Optim. 40, 1765–1790 (2002)

  20. Palczewski, J., Stettner, Ł.: Impulsive control of portfolios. Appl. Math. Optim. 56, 67–103 (2007)

  21. Peskir, G., Shiryaev, A.N.: Optimal Stopping and Free-Boundary Problems. Birkhäuser, Basel (2006)

  22. Schäl, M.: A selection theorem for optimization problems. Arch. Math. 25, 219–224 (1974)

  23. Schroder, M.: Optimal portfolio selection with fixed transaction costs: numerical solutions. Preprint (1995). Available online at: https://msu.edu/~schrode7/numerical.pdf

  24. Seydel, R.C.: Existence and uniqueness of viscosity solutions for QVI associated with impulse control of jump-diffusions. Stoch. Process. Appl. 119, 3719–3748 (2009)

  25. Touzi, N.: Optimal Stochastic Control, Stochastic Target Problems, and Backward SDE. Springer, New York (2013)


Author information

Corresponding author

Correspondence to Christoph Belak.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix A: Auxiliary results

Lemma A.1

Let \(x\in {\overline{\mathcal{S}}}\). Then the sets \(\mathcal{D}(x)\) and \(\Xi (x)\) are compact.

Proof

Clearly, it suffices to show that \(\mathcal{D}(x)\) is compact. Using the definitions of \(\Gamma \), \(\operatorname{L}\) and \({\overline{ \mathcal{S}}}\), it is easily seen that

$$ \mathcal{D}(x) = \bigg\{ \Delta \in \times _{i=1}^{n}[-x_{i},\infty ) : x_{0} - \sum _{i=1}^{n}(\Delta _{i} + \gamma _{i}|\Delta _{i}|) - K + \bigg(\sum _{i=1}^{n}(1-\gamma _{i})(x_{i}+\Delta _{i}) - K\bigg)^{+} \geq 0 \bigg\} . $$

Clearly, this set is closed. Moreover, whenever \(\Delta _{i}\to \infty \) for some \(i\in \{1,\ldots ,n\}\), the left-hand side of the above inequality tends to \(-\infty \) since \(\gamma _{i}>0\). Hence this set is also bounded, i.e., compact. □
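For convenience, the divergence can be spelled out; the following short estimate only uses \((a+b)^{+}\leq a^{+}+b\) for \(b\geq 0\), and the constant \(C\) introduced below serves illustration only. If \(\Delta _{i}\to \infty \) while the remaining coordinates of \(\Delta \) are held fixed, then

$$ x_{0} - \sum _{j=1}^{n}(\Delta _{j} + \gamma _{j}|\Delta _{j}|) - K + \bigg(\sum _{j=1}^{n}(1-\gamma _{j})(x_{j}+\Delta _{j}) - K\bigg)^{+} \leq C - 2\gamma _{i}\Delta _{i} \longrightarrow -\infty , $$

since \(\Delta _{i}\) enters with the factor \(-(1+\gamma _{i})\) outside the positive part and with at most \(+(1-\gamma _{i})\) inside, leaving the net contribution \(-2\gamma _{i}\Delta _{i}\); here \(C\) depends only on \(x\) and the fixed coordinates of \(\Delta \).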

Lemma A.2

Let \((x^{k})_{k\in \mathbb{N}}\subseteq {\overline{\mathcal{S}}}\setminus {\mathcal{S}_{\emptyset }}\) converge to some \(x\in {\overline{\mathcal{S}}}\setminus {\mathcal{S}_{\emptyset }}\). Then:

(1) The set \(\bigcup _{k\in \mathbb{N}}\Xi (x^{k})\) is bounded.

(2) Let \(\xi ^{k}\in \Xi (x^{k})\) for each \(k\in \mathbb{N}\). Then there exist \(\xi \in \Xi (x)\) and a subsequence \((\xi ^{k_{j}})_{j\in \mathbb{N}}\) such that \(\xi ^{k_{j}}\to \xi \) as \(j\to \infty \).

(3) Let \(\xi \in \Xi (x)\) with \(\xi \notin {\mathcal{S}_{{\mathrm{b}}}^{\mathrm{liq}}}\). Then there exist a subsequence \((k_{j})_{j\in \mathbb{N}}\) and a corresponding sequence \((\xi ^{k_{j}})_{j\in \mathbb{N}}\) with \(\xi ^{k_{j}}\in \Xi (x^{k_{j}})\) and \(\xi ^{k_{j}}\to \xi \) as \(j\to \infty \). Moreover, if \(\circ \in \{{\mathrm{b}},{\mathrm{nb}}\}\) is chosen such that \(\xi \in {\widehat{\mathcal{S}}_{\circ }}\), then \((\xi ^{k_{j}})_{j\in \mathbb{N}}\) can be chosen such that it is contained in \({\widehat{\mathcal{S}}_{\circ }}\) as well.

Proof

(1) Define \(\bar{x}\in {\overline{\mathcal{S}}}\) by

$$ \bar{x}_{i} \mathrel{:=}\sup _{k\in \mathbb{N}} x_{i}^{k} \qquad \text{for all }i=0,1,\ldots ,n. $$

Since \(x^{k}\leq \bar{x}\), it follows that \(\Gamma (x^{k},\Delta )\leq \Gamma (\bar{x},\Delta )\) for each \(\Delta \in \mathcal{D}(x^{k})\) and \(k\in \mathbb{N}\). In particular, this shows that \(\mathcal{D}(x^{k})\subseteq \mathcal{D}(\bar{x})\) for all \(k\in \mathbb{N}\) and therefore

$$ \xi _{i} \leq \xi _{i}^{\max } \mathrel{:=}\sup \{\bar{\xi }_{i} : \bar{\xi }\in \Xi (\bar{x})\} < \infty \qquad \text{for }i=0,1,\ldots ,n, \xi \in \Xi (x^{k}), k\in \mathbb{N}. $$

The short-selling constraint implies moreover that \(\xi _{i}^{\min } \mathrel{:=}0 \leq \xi _{i}\) for all \({i=1,\ldots ,n}\), \(\xi \in \Xi (x^{k})\) and \(k\in \mathbb{N}\). Hence, if we can show that

$$ -\infty < \xi _{0}^{\min } \mathrel{:=}\inf \{\bar{\xi }_{0} : \bar{ \xi }\in \Xi (\bar{x})\} \leq \xi _{0} \qquad \text{for all }\xi \in \Xi (x^{k})\text{ and }k\in \mathbb{N}, $$
(A.1)

we can conclude that \(\bigcup _{k\in \mathbb{N}} \Xi (x^{k})\) is bounded since then

$$ \bigcup _{k\in \mathbb{N}} \Xi (x^{k}) \subseteq \times _{i=0}^{n} \bigl[\xi _{i}^{\min },\xi _{i}^{\max }\bigr]. $$

To verify (A.1), we fix \(k\in \mathbb{N}\), \(\Delta \in \mathcal{D}(x^{k})\) and set \(\xi \mathrel{:=}\Gamma (x^{k},\Delta )\). Since sell orders only increase the value of \(\xi _{0}\), we may without loss of generality assume that \(\Delta _{i} \geq 0\) for all \(i=1,\ldots ,n\). Now define \(\bar{\Delta }\in \mathbb{R}^{n}\) by

$$ \bar{\Delta }_{i} \mathrel{:=}\Delta _{i} + \frac{\bar{x}_{0} - x^{k} _{0}}{n(1+\gamma _{i})} \geq \Delta _{i} \qquad \text{for all }i=1,\ldots ,n. $$

Setting \(\bar{\xi }\mathrel{:=}\Gamma (\bar{x},\bar{\Delta })\) and using that \(\Delta _{i},\bar{\Delta }_{i}\geq 0\), it follows that

$$ \xi _{0} = x^{k}_{0} - \sum _{i=1}^{n} (1+\gamma _{i})\Delta _{i} - K = \bar{x}_{0} - \sum _{i=1}^{n}(1+\gamma _{i})\bar{\Delta }_{i} - K = \bar{ \xi }_{0}. $$

Now, it is clear that \(\bar{\xi }_{i} \geq \xi _{i}\geq 0\) for all \(i=1,\ldots ,n\), and hence, in order to conclude, we are only left with verifying that \(\operatorname{L}(\bar{\xi })\geq 0\). But using \({\bar{x}_{i}+\bar{\Delta }_{i} \geq x_{i} + \Delta _{i}}\) yields

$$\begin{aligned} \operatorname{L}(\bar{\xi }) & = \bar{x}_{0} - \sum _{i=1}^{n} (1+ \gamma _{i})\bar{\Delta }_{i} - K + \bigg(\sum _{i=1}^{n}(1-\gamma _{i})( \bar{x}_{i} + \bar{\Delta }_{i}) - K\bigg)^{+} \\ & \geq \bar{x}_{0} + x_{0}^{k} - x_{0}^{k} - \sum _{i=1}^{n} (1+\gamma _{i})\bigg(\Delta _{i} + \frac{\bar{x}_{0}-x_{0}^{k}}{n(1+\gamma _{i})} \bigg) - K \\ & \phantom{=:}+ \bigg(\sum _{i=1}^{n}(1-\gamma _{i})(x_{i}^{k} + \Delta _{i}) - K\bigg)^{+} \\ & = \bar{x}_{0} - x_{0}^{k} - \sum _{i=1}^{n} (1+\gamma _{i})\frac{ \bar{x}_{0}-x_{0}^{k}}{n(1+\gamma _{i})} \\ & \phantom{=:} + x_{0}^{k} - \sum _{i=1}^{n} (1+\gamma _{i})\Delta _{i} - K + \bigg( \sum _{i=1}^{n}(1-\gamma _{i})(x_{i}^{k} + \Delta _{i}) - K\bigg)^{+} \\ & = \xi _{0} + \bigg(\sum _{i=1}^{n}(1-\gamma _{i})\xi _{i} - K\bigg)^{+} = \operatorname{L}(\xi ) \geq 0, \end{aligned}$$

thus establishing (A.1).

(2) Let \(\xi ^{k}\in \Xi (x^{k})\) for each \(k\in \mathbb{N}\). Since \(\bigcup _{k\in \mathbb{N}}\Xi (x^{k})\) is bounded and \({\overline{ \mathcal{S}}}\) is closed, it follows that \((\xi ^{k})_{k\in \mathbb{N}}\) is bounded and admits a subsequence (again indexed by \(k\) for simplicity) which converges to some \(\xi \in {\overline{\mathcal{S}}}\). We are left with showing that \(\xi \in \Xi (x)\). For this, we first observe that for each \(k\in \mathbb{N}\), there exists \(\Delta ^{k} \in \mathcal{D}(x^{k})\) such that \(\xi ^{k} = \Gamma (x^{k},\Delta ^{k})\), i.e.,

$$ \xi ^{k}_{0} = x_{0}^{k} - \sum _{i=1}^{n}(\Delta _{i}^{k} + \gamma _{i}| \Delta _{i}^{k}|) - K, \qquad \xi _{i}^{k} = x_{i}^{k} + \Delta _{i}^{k},\quad i=1,\ldots ,n. $$

Now since \(\xi ^{k}\to \xi \) and \(x^{k}\to x\), we immediately find that \(\Delta _{i}^{k} = \xi _{i}^{k} - x^{k}_{i}\) converges to some \(\Delta _{i}\in \mathbb{R}\) satisfying \(\xi _{i}=x_{i}+\Delta _{i}\) for each \(i=1,\ldots ,n\). But then we must have

$$ \xi _{0} = \lim _{k\to \infty } \xi ^{k}_{0} = \lim _{k\to \infty } x_{0} ^{k} - \sum _{i=1}^{n}(\Delta _{i}^{k} + \gamma _{i}|\Delta _{i}^{k}|) - K= x_{0} - \sum _{i=1}^{n}(\Delta _{i} + \gamma _{i}|\Delta _{i}|) - K, $$

which implies that \(\xi =\Gamma (x,\Delta )\), i.e., \(\xi \in \Xi (x)\).

(3) Let \(\xi \in \Xi (x)\) with \(\xi \notin {\mathcal{S}_{{\mathrm{b}}} ^{\mathrm{liq}}}\). Then \(\xi =\Gamma (x,\Delta )\) for some \(\Delta \in \mathcal{D}(x)\). We define a sequence \((\varepsilon _{k})_{k\in \mathbb{N}}\) by

$$ \varepsilon _{k} \mathrel{:=}\sum _{i=1}^{n}(1+\gamma _{i})(x_{i}-x_{i} ^{k})^{+} \qquad \text{for all }k\in \mathbb{N}. $$

Observe that \(\varepsilon _{k}\geq 0\) and \(\varepsilon _{k}\to 0\) as \(k\to \infty \). With this, for each \(k\in \mathbb{N}\), let us now define \(\Delta ^{k}\in \mathbb{R}^{n}\) by

$$ \Delta _{i}^{k} = \textstyle\begin{cases} \max \{\Delta _{i} - ((x_{0}-x_{0}^{k})^{+} + \varepsilon _{k}) / (1+ \gamma _{i}),-x_{i}^{k}\} &\quad \text{if }\Delta _{i} > 0, \\ \max \{\Delta _{i} - ((x_{0}-x_{0}^{k})^{+} + \varepsilon _{k}) / (1- \gamma _{i}),-x_{i}^{k}\} &\quad \text{if }\Delta _{i} \leq 0, \end{cases} $$

for \(i=1,\ldots ,n\), and set \(\xi ^{k}\mathrel{:=}\Gamma (x^{k},\Delta ^{k})\). From the definition of \(\Delta ^{k}\), it is immediately clear that \(\Delta _{i}^{k}\geq -x_{i}^{k}\), implying that \(\xi _{i}^{k} = x _{i}^{k} + \Delta _{i}^{k} \geq 0\). Moreover, as \(k\to \infty \), each \(\Delta _{i}^{k}\) converges to \(\max \{\Delta _{i},-x_{i}\} = \Delta _{i}\) since \(\xi \in {\overline{\mathcal{S}}}\) and hence \(0\leq \xi _{i} = x_{i} + \Delta _{i}\). In particular, using the continuity of \(\Gamma \), this implies that

$$ \lim _{k\to \infty }\xi ^{k} = \lim _{k\to \infty } \Gamma (x^{k},\Delta ^{k}) = \Gamma (x,\Delta ) = \xi . $$

Therefore, in order to conclude, we only have to show that, after possibly passing to a subsequence, \(\operatorname{L}(\xi ^{k})\geq 0\) eventually (so that \(\xi ^{k}\in \Xi (x^{k})\) eventually), and \(\xi ^{k}_{0}\geq 0\) eventually if \(\xi _{0} = 0\) (\(\xi ^{k}_{0}\) and \(\xi _{0}\) have eventually the same sign if \(\xi _{0}\neq 0\), so there is nothing to show in this case). Let us first consider the case \(\xi _{0}\neq 0\). Then we must have \(\operatorname{L}(\xi )>0\). Indeed, if \(\xi \in {\widehat{\mathcal{S}}_{{\mathrm{nb}}}}\), we have \(\operatorname{L}(\xi )\geq \xi _{0} > 0\) and similarly, if \(\xi \in {\mathcal{S}_{{\mathrm{b}}}}\), we directly find \(\operatorname{L}( \xi )>0\) since \(\xi \notin {\mathcal{S}_{{\mathrm{b}}}^{ \mathrm{liq}}}\). But then by the continuity of \(\operatorname{L}\), it follows that \(\lim _{k\to \infty }\operatorname{L}(\xi ^{k}) = \operatorname{L}(\xi ) > 0\), and so we must eventually have \(\operatorname{L}(\xi ^{k})\geq 0\). Let us therefore from now on focus on the case \(\xi _{0} = 0\). In this case, it suffices to show that \(\xi ^{k}_{0}\geq 0\) eventually, since this already implies \(\operatorname{L}(\xi ^{k}) \geq \xi _{0}^{k}\geq 0\) eventually. Let us first suppose that

$$ \Delta _{i}^{k} = - x_{i}^{k} \qquad \text{for all }i=1,\ldots ,n\text{ and eventually all }k\in \mathbb{N}. $$
(A.2)

Then it follows that

$$ \Delta _{i} = \lim _{k\to \infty } \Delta _{i}^{k} = \lim _{k\to \infty } -x_{i}^{k} = -x_{i} \qquad \text{for all }i=1,\ldots ,n. $$

In particular, this implies \(\xi = 0\in {\mathcal{S}_{{\mathrm{nb}}} ^{\mathrm{liq}}}\). But in this case, as shown in Lemma A.3 below, we have \({\Delta ^{k}=(-x_{1}^{k}, \ldots ,-x_{n}^{k})\in \mathcal{D}(x^{k})}\) for all \(k\in \mathbb{N}\) and it follows that \({\xi ^{k} = \Gamma (x^{k},\Delta ^{k})\in {\overline{ \mathcal{S}}}}\). Now since \(\xi ^{k}_{i} = 0\) for all \(i=1,\ldots ,n\), this is only possible if \(\xi ^{k}_{0}\geq 0\) and we are done. Let us now suppose that (A.2) does not hold. Then there exists a subsequence (again indexed by \(k\) for simplicity) such that for each \(k\in \mathbb{N}\), there exists some \(i_{k}\in \{1,\ldots ,n\}\) with \(\Delta _{i_{k}}^{k} > - x_{i_{k}}^{k}\). After passing to another subsequence if necessary, we may furthermore assume that \({i \mathrel{:=}i_{k}}\) does not depend on \(k\). In particular, for this particular \(i\), we have

$$ \Delta _{i}^{k} = \Delta _{i} - \frac{(x_{0}-x_{0}^{k})^{+} + \varepsilon _{k}}{1+\gamma _{i}} \quad \text{if }\Delta _{i}>0, \qquad \Delta _{i}^{k} = \Delta _{i} - \frac{(x_{0}-x_{0}^{k})^{+} + \varepsilon _{k}}{1-\gamma _{i}} \quad \text{if }\Delta _{i}\leq 0, $$

that is, the maximum in the definition of \(\Delta _{i}^{k}\) is attained at its first argument. Now, using that \(\xi _{0} = 0\), we compute

$$ \xi _{0}^{k} = \xi _{0}^{k} - \xi _{0} = x_{0}^{k} - x_{0} - \sum _{j=1} ^{n}\big(\Delta _{j}^{k}-\Delta _{j} + \gamma _{j}(|\Delta _{j}^{k}|-| \Delta _{j}|)\big). $$

As soon as \(k\) is sufficiently large, we observe that \(\Delta _{j} - ((x _{0}-x_{0}^{k})^{+} +\varepsilon _{k})/ (1+\gamma _{j})\) is positive whenever \(\Delta _{j}>0\) (in particular, \(\Delta _{j}^{k} > -x_{j}^{k}\) and thus \(\Delta _{j}^{k}\geq 0\) in this case). Moreover, we have \(\Delta _{j}^{k}\leq 0\) whenever \(\Delta _{j}\leq 0\). Using this, it follows that

$$ \xi _{0}^{k} \geq x_{0}^{k} - x_{0} + (x_{0}-x_{0}^{k})^{+} + \varepsilon _{k} - \sum _{j=1,j\neq i}^{n}\big(\Delta _{j}^{k}-\Delta _{j} + \gamma _{j}(|\Delta _{j}^{k}|-|\Delta _{j}|)\big) $$
(A.3)

for all \(k\in \mathbb{N}\) sufficiently large. Now pick any \(j\in \{1, \ldots ,n\}\) with \(j\neq i\). If \(\Delta _{j}^{k} > -x_{j}^{k}\), then it follows that \(\Delta _{j}^{k} - \Delta _{j} \leq 0\) and hence

$$ \Delta _{j}^{k}-\Delta _{j} + \gamma _{j}\big(|\Delta _{j}^{k}|-|\Delta _{j}|\big) \leq 0 \leq (1+\gamma _{j})(x_{j}-x_{j}^{k})^{+}. $$

On the other hand, if \(\Delta _{j}^{k} = -x_{j}^{k}\), then we can use that \(\Delta _{j} \geq -x_{j}\) to see that \(\Delta _{j}^{k} - \Delta _{j} \leq -x_{j}^{k} + x_{j} \leq (x_{j}-x_{j}^{k})^{+}\) and thus

$$ \Delta _{j}^{k}-\Delta _{j} + \gamma _{j}\big(|\Delta _{j}^{k}|-|\Delta _{j}|\big) \leq (1+\gamma _{j})(x_{j}-x_{j}^{k})^{+} $$

as well. Plugging the latter two estimates into (A.3) therefore gives

$$\begin{aligned} \xi _{0}^{k} & \geq x_{0}^{k} - x_{0} + (x_{0} - x_{0}^{k})^{+} + \varepsilon _{k} - \sum _{j=1,j\neq i}^{n} (1+\gamma _{j})(x_{j}-x_{j} ^{k})^{+} \\ & \geq -(x_{0}-x_{0}^{k})^{+} + (x_{0} - x_{0}^{k})^{+} + \varepsilon _{k} - \sum _{j=1}^{n} (1+\gamma _{j})(x_{j}-x_{j}^{k})^{+} = 0, \end{aligned}$$

where the last equality follows from the choice of \(\varepsilon _{k}\). □

Lemma A.3

Let \(x\in {\overline{\mathcal{S}}}\) and set \(\Delta ^{\mathrm{Liq}}\mathrel{:=}(-x_{1},\ldots ,-x_{n})\). Then \(\mathcal{D}(x)\neq \emptyset \) if and only if \(\Delta ^{\mathrm{Liq}}\in \mathcal{D}(x)\). Moreover, if \(x\) satisfies

$$ x_{0} + \sum _{i=1}^{n} (1-\gamma _{i})x_{i} - K = 0, $$

then \(\mathcal{D}(x) = \{\Delta ^{\mathrm{Liq}}\}\), i.e., \(\Delta ^{\mathrm{Liq}}\) is the only admissible transaction. In particular, we have \(\Xi (x) = \{0\}\) in this case.

Proof

Suppose that \(\mathcal{D}(x)\neq \emptyset \), choose \(\Delta \in \mathcal{D}(x)\) and set \(\xi \mathrel{:=}\Gamma (x,\Delta )\). Moreover, define \(\xi ^{\mathrm{Liq}}\mathrel{:=}\Gamma (x,\Delta ^{\mathrm{Liq}})\). Since \(\xi ^{\mathrm{Liq}}_{i} = x_{i} + \Delta _{i}^{\mathrm{Liq}} = 0\) for all \(i=1,\ldots ,n\), it suffices to show that \(\operatorname{L}( \xi ^{\mathrm{Liq}})\geq 0\) to obtain \(\xi ^{\mathrm{Liq}}\in \Xi (x)\), i.e., \(\Delta ^{\mathrm{Liq}}\in \mathcal{D}(x)\). That is, we have to show that

$$ \operatorname{L}(\xi ^{\mathrm{Liq}}) = x_{0} + \sum _{i=1}^{n} (1- \gamma _{i})x_{i} - K \geq 0. $$

Now, \(\xi \in \Xi (x)\) implies in particular that \(\operatorname{L}( \xi )\geq 0\), i.e.,

$$ 0 \leq x_{0} - \sum _{i=1}^{n}(\Delta _{i} + \gamma _{i}|\Delta _{i}|) - K + \bigg(\sum _{i=1}^{n} (1-\gamma _{i})(x_{i}+\Delta _{i}) - K\bigg)^{+}. $$
(A.4)

Let us first suppose that the positive part on the right-hand side is equal to zero. Using that \(0\leq \xi _{i} = x_{i}+\Delta _{i}\) implies that \(-\Delta _{i}\leq x_{i}\), it follows that

$$ 0 \leq x_{0} - \sum _{i=1}^{n}(\Delta _{i} + \gamma _{i}|\Delta _{i}|) - K \leq x_{0} + \sum _{i=1}^{n}(1-\gamma _{i})x_{i} - K. $$
(A.5)

If, on the other hand, the positive part in (A.4) is not equal to zero, we obtain

$$\begin{aligned} 0 & \leq x_{0} + \sum _{i=1}^{n}\bigl((1-\gamma _{i})(x_{i}+\Delta _{i})- \Delta _{i} - \gamma _{i}|\Delta _{i}|\bigr) - 2K \\ & = x_{0} + \sum _{i=1}^{n}(1-\gamma _{i})x_{i} - K - \sum _{i=1}^{n} \gamma _{i}(\Delta _{i}+|\Delta _{i}|) - K \\ & < x_{0} + \sum _{i=1}^{n}(1-\gamma _{i})x_{i} - K. \end{aligned}$$
(A.6)

We have therefore argued that \(\Delta ^{\mathrm{Liq}}\in \mathcal{D}(x)\) whenever \(\Xi (x)\neq \emptyset \). If we now suppose that \(x\) is such that

$$ x_{0} + \sum _{i=1}^{n} (1-\gamma _{i})x_{i} - K = 0, $$

then it is immediately clear that \(\Delta ^{\mathrm{Liq}}\in \mathcal{D}(x)\). Let us proceed to show that there exists no other \(\Delta \in \mathcal{D}(x)\). By (A.5) and (A.6), we immediately find that

$$ \operatorname{L}\bigl(\Gamma (x,\Delta )\bigr) \leq \operatorname{L} \bigl(\Gamma (x,\Delta ^{\mathrm{Liq}})\bigr) = 0. $$

Since the inequality is strict if we are in the situation of (A.6), this already implies that \(\Delta \notin \mathcal{D}(x)\) in this case. On the other hand, if we are in the situation of (A.5), then we can use that \(\Delta \neq \Delta ^{\mathrm{Liq}}\) implies that \(\Delta _{i}>-x_{i}\) for some \(i\in \{1,\ldots ,n\}\), and hence we find that the inequality in (A.5) must also be strict and \(\Delta \notin \mathcal{D}(x)\). □
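For a concrete instance of Lemma A.3 with a single risky asset, take the purely illustrative parameters \(\gamma _{1} = 0.01\), \(K = 1\) and the position \(x = (x_{0},x_{1}) = (-8.9,10)\), which lies exactly on the liquidation boundary:

$$ x_{0} + (1-\gamma _{1})x_{1} - K = -8.9 + 9.9 - 1 = 0. $$

Full liquidation \(\Delta ^{\mathrm{Liq}} = -10\) then yields

$$ \xi ^{\mathrm{Liq}}_{0} = x_{0} - \big(\Delta ^{\mathrm{Liq}}_{1} + \gamma _{1}|\Delta ^{\mathrm{Liq}}_{1}|\big) - K = -8.9 - (-10 + 0.1) - 1 = 0, \qquad \xi ^{\mathrm{Liq}}_{1} = 0, $$

so that \(\operatorname{L}(\xi ^{\mathrm{Liq}}) = 0\) and \(\Delta ^{\mathrm{Liq}}\in \mathcal{D}(x)\), whereas any other transaction leaves the costs uncovered and violates \(\operatorname{L}\geq 0\), in line with \(\mathcal{D}(x) = \{\Delta ^{\mathrm{Liq}}\}\).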

Lemma A.4

Let \((t,x,y)\in [0,T]\times {\overline{\mathcal{O}}}\) and \(\Lambda = (\tau _{k},\xi _{k})_{k\in \mathbb{N}}\in \mathcal{A}(t,x,y)\). Then \(\mathbb{P}[\lim _{k\to \infty }\tau _{k}>T] = 1\).

Proof

For \(k\in \mathbb{N}\) fixed, we denote by \(X^{k} = (X^{k,0},X^{k,1}, \ldots ,X^{k,n})\) the portfolio process obtained by following the strategy which performs only the first \(k\) trades of \(\Lambda \) and then lets the portfolio run uncontrolled. We write

$$ N^{k,i}_{u} \mathrel{:=}\frac{X^{k,i}_{u}}{P^{i}_{u}}, \qquad u\in [t,T], i=0,1,\ldots ,n, $$

which is the number of shares of the \(i\)th asset \(P^{i}\) held at time \(u\). In the absence of transaction costs, the investor’s wealth is given by

$$ Z^{k}_{u} := \boldsymbol{1}^{\top }x + \sum _{i=0}^{n}\int _{t}^{u} N _{s}^{k,i} \,\mathrm{d}P^{i}_{s}, \qquad u\in [t,T], $$

and hence, since any transaction incurs at least a cost of \(K\), it is clear that

$$ 0\leq \operatorname{L}(X^{k}_{\tau _{k}}) \leq \boldsymbol{1}^{\top }X ^{k}_{\tau _{k}} \leq Z^{k}_{\tau _{k}} - kK \qquad \text{on }A_{k}\mathrel{:=}\{\tau _{k}\leq T\}. $$

Let us now assume by way of contradiction that \(A\mathrel{:=}\bigcap _{k\in \mathbb{N}} A_{k}\) has positive probability. This implies that \(Z^{k}_{\tau _{k}}\to \infty \) on this event. This, however, is impossible since the price process \(P\) satisfies the no-free-lunch-with-vanishing-risk property and hence the family

$$ \bigg\{ \,\sup _{u\in [t,T]}\bigg|\,\sum _{i=0}^{n}\int _{t}^{u} N_{s} ^{k,i} \,\mathrm{d}P^{i}_{s}\,\bigg| : k \in \mathbb{N}\bigg\} $$

is bounded in probability. □
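To spell out the final contradiction: on \(A_{k}\) the displayed estimate gives \(Z^{k}_{\tau _{k}}\geq kK\), and hence

$$ \mathbb{P}\bigg[\,\sup _{u\in [t,T]}\bigg|\,\sum _{i=0}^{n}\int _{t}^{u} N_{s}^{k,i}\,\mathrm{d}P^{i}_{s}\,\bigg| \geq kK - \boldsymbol{1}^{\top }x\bigg] \geq \mathbb{P}[A_{k}] \geq \mathbb{P}[A] > 0 \qquad \text{for all }k\in \mathbb{N}, $$

which is incompatible with boundedness in probability since \(kK - \boldsymbol{1}^{\top }x\to \infty \) as \(k\to \infty \).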

Lemma A.5

For all \((t,x,y)\in [0,T]\times {\overline{\mathcal{O}}}\), there exists \(M>0\) such that

$$ \sup _{\Lambda \in \mathcal{A}(t,x,y)}\mathbb{E}\bigg[\sup _{u\in [t,T]}| \boldsymbol{1}^{\top }X^{t,x,y}_{u}({\Lambda })|^{2}\bigg] \leq M ( 1 + |\boldsymbol{1}^{\top }x|^{2}). $$

Proof

We write \((X,Y) \mathrel{:=}(X^{t,x,y}(\Lambda ),Y^{t,y})\) as shorthand notation. We first note that with \(N \mathrel{:=}\max \{1/\gamma _{i} : i = 1,\ldots ,n\}\), it is not difficult to verify that

$$ \max _{i=0,1,\ldots ,n}x_{i} \leq (1+N)\boldsymbol{1}^{\top }x \qquad \text{for all } x\in {\overline{\mathcal{S}}}. $$
(A.7)
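A sketch of the verification of (A.7), assuming (as elsewhere in this appendix) that every \(x\in {\overline{\mathcal{S}}}\) satisfies \(x_{1},\ldots ,x_{n}\geq 0\) and \(\operatorname{L}(x) = x_{0} + (\sum _{i=1}^{n}(1-\gamma _{i})x_{i}-K)^{+}\geq 0\): the solvency constraint gives \(x_{0}\geq -\sum _{i=1}^{n}(1-\gamma _{i})x_{i}\), hence

$$ \boldsymbol{1}^{\top }x = x_{0} + \sum _{i=1}^{n}x_{i} \geq \sum _{i=1}^{n}\gamma _{i}x_{i} \geq \gamma _{j}x_{j} \qquad \text{for every }j=1,\ldots ,n, $$

so that \(x_{j}\leq N\boldsymbol{1}^{\top }x\) for \(j=1,\ldots ,n\), while \(x_{0} = \boldsymbol{1}^{\top }x - \sum _{i=1}^{n}x_{i}\leq \boldsymbol{1}^{\top }x\); since \(\boldsymbol{1}^{\top }x\geq 0\), both bounds are dominated by \((1+N)\boldsymbol{1}^{\top }x\).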

Now, for every fixed \(u\in [t,T]\), we have the estimate

$$ |\boldsymbol{1}^{\top }X_{u}| = \sum _{i=0}^{n} X^{i}_{u} \leq \boldsymbol{1}^{\top }x + \int _{t}^{u} \sum _{i=0}^{n} \mu _{i}(Y_{s})X ^{i}_{s} \,\mathrm{d}s+ \sum _{j=1}^{d} \int _{t}^{u} \sum _{i=0}^{n} \sigma _{i,j}(Y_{s})X^{i}_{s}\,\mathrm{d}W^{j}_{s}. $$

Let \(L>0\) be an upper bound of \(|\mu |\). Using (A.7), we further estimate

$$ |\boldsymbol{1}^{\top }X_{u}| \leq \boldsymbol{1}^{\top }x + (n+1)(1+N)L \int _{t}^{u} |\boldsymbol{1}^{\top }X_{s}| \,\mathrm{d}s + \sum _{j=1} ^{d} \int _{t}^{u} \sum _{i=0}^{n} \sigma _{i,j}(Y_{s})X^{i}_{s}\, \mathrm{d}W^{j}_{s}. $$

Squaring both sides, estimating the square of sums by the sum of squares and using Jensen’s inequality for the Lebesgue integral then implies the existence of a constant \(C>0\) such that

$$ |\boldsymbol{1}^{\top }X_{u}|^{2} \leq C\bigg(|\boldsymbol{1}^{\top }x|^{2} + \int _{t}^{u} | \boldsymbol{1}^{\top }X_{s}|^{2} \,\mathrm{d}s + \sum _{j=1}^{d}\bigg| \int _{t}^{u} \sum _{i=0}^{n} \sigma _{i,j}(Y_{s})X^{i}_{s}\,\mathrm{d}W ^{j}_{s}\bigg|^{2}\bigg). $$

Standard estimates involving Doob’s inequality then show that there exists a constant \(D>0\) (which still does not depend on \(\Lambda \)) such that

$$ \mathbb{E}\bigg[ \sup _{r\in [t,u]} |\boldsymbol{1}^{\top }X_{r}|^{2} \bigg] \leq D\bigg(|\boldsymbol{1}^{\top }x|^{2}+ \int _{t}^{u} \mathbb{E}\bigg[\sup _{r\in [t,s]}|\boldsymbol{1}^{\top }X_{r}|^{2} \bigg] \,\mathrm{d}s\bigg), $$

and we conclude by Gronwall’s inequality. □
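To spell out the Gronwall step (a brief sketch; the function \(f\) and the constant \(M\mathrel{:=}De^{DT}\) are introduced here only for illustration): with \(f(u)\mathrel{:=}\mathbb{E}[\sup _{r\in [t,u]}|\boldsymbol{1}^{\top }X_{r}|^{2}]\), the last display reads \(f(u)\leq D(|\boldsymbol{1}^{\top }x|^{2} + \int _{t}^{u}f(s)\,\mathrm{d}s)\), and Gronwall's inequality yields

$$ \mathbb{E}\bigg[\sup _{u\in [t,T]}|\boldsymbol{1}^{\top }X_{u}|^{2}\bigg] = f(T) \leq D|\boldsymbol{1}^{\top }x|^{2}e^{D(T-t)} \leq M\big(1 + |\boldsymbol{1}^{\top }x|^{2}\big). $$

Since \(D\) does not depend on \(\Lambda \), neither does \(M\), so the bound survives the supremum over \(\Lambda \in \mathcal{A}(t,x,y)\).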

Lemma A.6

Let \(h_{1},h_{2}:[0,T]\times {\overline{\mathcal{O}}}\to \mathbb{R}\) and \(\lambda _{1},\lambda _{2}\geq 0\). Then

$$ \mathcal{M}(\lambda _{1} h_{1} + \lambda _{2} h_{2}) \leq \lambda _{1} \mathcal{M}h_{1} + \lambda _{2}\mathcal{M}h_{2} \qquad \textit{on }[0,T]\times {\overline{\mathcal{O}}}. $$

Proof

This is an immediate consequence of the definition of ℳ and the sublinearity of the supremum. □
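In detail, at any point \((t,x,y)\) with \(x\notin {\mathcal{S}_{\emptyset }}\), where \(\mathcal{M}h(t,x,y) = \sup _{\xi \in \Xi (x)}h(t,\xi ,y)\) (this is the representation of ℳ used in the proof of Lemma A.8 below), the claim reduces to the elementary estimate

$$ \sup _{\xi \in \Xi (x)}\big(\lambda _{1}h_{1}(t,\xi ,y) + \lambda _{2}h_{2}(t,\xi ,y)\big) \leq \lambda _{1}\sup _{\xi \in \Xi (x)}h_{1}(t,\xi ,y) + \lambda _{2}\sup _{\xi \in \Xi (x)}h_{2}(t,\xi ,y). $$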

Lemma A.7

Let \(h:[0,T]\times {\overline{\mathcal{O}}}\to \mathbb{R}\) be nonnegative and such that the mapping \(x\mapsto h(t,x,y)\) is increasing on \({\overline{\mathcal{S}}}\) for all \((t,y)\in [0,T]\times \mathbb{R}^{m}\). Then the same is true for \(x\mapsto \mathcal{M}h(t,x,y)\).

Proof

Let \((t,y)\in [0,T]\times \mathbb{R}^{m}\) and fix \(x_{1},x_{2}\in {\overline{\mathcal{S}}}\) with \(x_{1} \leq x_{2}\). If \(\Xi (x_{1}) = \emptyset \), there is nothing to show, so let us suppose that \(\Xi (x_{1})\neq \emptyset \). Now fix an arbitrary \(\xi _{1}=\Gamma (x _{1},\Delta )\in \Xi (x_{1})\) for some \(\Delta \in \mathcal{D}(x_{1})\), define \(\xi _{2}\mathrel{:=}\Gamma (x_{2},\Delta )\) and observe that \(x_{1}\leq x_{2}\) implies that \(\xi _{1}\leq \xi _{2}\). In particular, \(\xi _{2}\in \Xi (x_{2})\). Using the monotonicity of \(h\), this shows that \(h(t,\xi _{1},y) \leq h(t,\xi _{2},y) \leq \mathcal{M}h(t,x_{2},y)\), and since \(\xi _{1}\) was chosen arbitrarily, the result follows. □
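The componentwise inequality \(\xi _{1}\leq \xi _{2}\) used above can be spelled out directly from the definition of \(\Gamma \): both portfolios are rebalanced with the same \(\Delta \), so the transaction term cancels in the difference and

$$ (\xi _{2})_{0} - (\xi _{1})_{0} = (x_{2})_{0} - (x_{1})_{0} \geq 0, \qquad (\xi _{2})_{i} - (\xi _{1})_{i} = (x_{2})_{i} - (x_{1})_{i} \geq 0, \quad i=1,\ldots ,n. $$

In particular, \(\operatorname{L}(\xi _{2})\geq \operatorname{L}(\xi _{1})\geq 0\) since \(\operatorname{L}\) is increasing in each component, which is why \(\Delta \in \mathcal{D}(x_{2})\) and hence \(\xi _{2}\in \Xi (x_{2})\).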

Lemma A.8

Let \(h:[0,T]\times {\overline{\mathcal{O}}}\to \mathbb{R}\) be nonnegative and upper semicontinuous. Then \(\mathcal{M}h\) is upper semicontinuous as well.

Proof

Since \({\overline{\mathcal{S}}}\setminus {\mathcal{S}_{\emptyset }}\) is closed, \(\mathcal{M}h(t,x,y) = -1\) whenever \(x\in {\mathcal{S}_{ \emptyset }}\), and \(h\geq 0\), it suffices to show that \(\mathcal{M}h\) is upper semicontinuous on \({[0,T]\times ({\overline{\mathcal{S}}} \setminus {\mathcal{S}_{\emptyset }})\times \mathbb{R}^{m}}\). Let therefore \((t,x,y)\in [0,T]\times ({\overline{\mathcal{S}}}\setminus {\mathcal{S}_{\emptyset }})\times \mathbb{R}^{m}\) and choose an arbitrary sequence \({(t_{k},x_{k},y_{k})_{k\in \mathbb{N}}\subseteq [0,T] \times ({\overline{\mathcal{S}}}\setminus {\mathcal{S}_{\emptyset }}) \times \mathbb{R}^{m}}\) converging to \((t,x,y)\). Since every \(\Xi (x_{k})\) is compact and \(h\) is upper semicontinuous, there exists \(\xi _{k}\in \Xi (x_{k})\) with

$$ \mathcal{M}h(t_{k},x_{k},y_{k}) = h(t_{k},\xi _{k},y_{k}) \qquad \text{for all }k\in \mathbb{N}. $$

Moreover, by Lemma A.2(2) and after possibly passing to a subsequence, there exists \(\xi \in \Xi (x)\) such that \(\xi _{k}\to \xi \). But then, by upper semicontinuity of \(h\),

$$\begin{aligned} \limsup _{k\to \infty } \mathcal{M}h(t_{k},x_{k},y_{k}) &= \lim _{k\to \infty } \mathcal{M}h(t_{k},x_{k},y_{k}) \\ &\leq \limsup _{k\to \infty } h(t_{k},\xi _{k},y_{k}) \leq h(t,\xi ,y) \leq \mathcal{M}h(t,x,y), \end{aligned}$$

showing that \(\mathcal{M}h\) is upper semicontinuous. □

Appendix B: The local viscosity property of the optimal stopping problem

In this section, we show that the value function of the abstract optimal stopping problem studied in Sect. 6.1 is a local viscosity solution of the VIs (6.3). It is readily checked that the abstract optimal stopping problem can be embedded into the setting of [10, Theorem 4.1], hence yielding the following weak dynamic programming principle.

Theorem B.1

(Bouchard and Touzi [10])

Let \((t,x,y)\in [0,T)\times {\mathcal{O}}\) and \(\circ \in \{{\mathrm{b}},{\mathrm{nb}}\}\) be such that \(x\in {\mathcal{S}_{\circ }}\). Let furthermore \(F\subseteq [0,T]\times {\mathcal{S}_{\circ }}\times \mathbb{R}^{m}\) be compact such that \((t,x,y)\) is in the interior of \(F\) relative to \([0,T)\times {\mathcal{O}}\). Next, let \(\theta \in \mathcal{T}^{t}_{t,T}\) be such that the process \((\cdot ,\overline{X}^{t,x,y},Y^{t,y})\) never leaves \(F\) on \([t,\theta ]\). Then

(B.1)

Moreover, it holds that

(B.2)

for any upper semicontinuous function \(\varphi :[0,T)\times {\mathcal{O}_{\circ }^{\star }}\to \mathbb{R}\) with \(\varphi \leq \mathrm{V}\).

Some brief remarks are in order here. First, we observe that the assumptions ensure that \(\theta <\overline{\tau }_{{\mathcal{S}}}^{t,x,y}\), i.e., the state process never reaches the boundary of the state space. One difference between the conclusion of the above theorem and the result in [10] is that we write \(\mathrm{V}^{\circ }\) on the right-hand side of (B.1) instead of the global upper semicontinuous envelope of \(\mathrm{V}\). This is justified since the process \(\overline{X}^{t,x,y}\) never switches from \({\widehat{\mathcal{S}}_{{\mathrm{nb}}}}\) to \({\widehat{\mathcal{S}}_{{\mathrm{b}}}}\) and vice versa. The second difference is that the test functions \(\varphi \) in the second dynamic programming inequality (B.2) are only defined on \([0,T)\times {\mathcal{O}_{\circ }^{\star }}\) instead of the entire state space. This is justified by the assumption that \(x\in {\mathcal{S}_{\circ }}\) and since \((\cdot \wedge \theta ,\overline{X}^{t,x,y}_{\cdot \wedge \theta },Y^{t,y}_{\cdot \wedge \theta })\) never leaves \(F\subseteq [0,T)\times {\mathcal{O}_{\circ }^{\star }}\).

With the weak dynamic programming principle at hand, one can show that \(\mathrm{V}\) is a local viscosity solution of the VIs (6.3).

Theorem B.2

The value function \(\mathrm{V}\) of the abstract optimal stopping problem (6.2) is a local viscosity solution of the VIs (6.3).

Proof

Step 1. We show that \(\mathrm{V}\) is a local viscosity subsolution. For this, let \(\circ \in \{{\mathrm{b}},{\mathrm{nb}}\}\), fix \((\bar{t}, \bar{x},\bar{y})\in [0,T)\times {\mathcal{O}_{\circ }^{\star }}\) and let \(\varphi \in C^{1,2}([0,T)\times {\mathcal{O}_{\circ }^{\star }})\) be such that \(\mathrm{V}^{\circ }-\varphi \) has a strict global maximum at \((\bar{t},\bar{x},\bar{y})\) with \(\mathrm{V}^{\circ }(\bar{t},\bar{x}, \bar{y}) = \varphi (\bar{t},\bar{x},\bar{y})\). Assume by way of contradiction that there exists \(\kappa >0\) such that

$$ \min \{\mathcal{L}\varphi (\bar{t},\bar{x},\bar{y}), \varphi (\bar{t}, \bar{x},\bar{y}) - w^{\circ }(\bar{t},\bar{x},\bar{y})\} \geq 2\kappa > 0. $$

Since \(\varphi \) and \(\mathcal{L}\varphi \) are continuous and \(w^{\circ }\) is upper semicontinuous, there exists some \(\varepsilon >0\) such that \({\overline{\mathcal{D}}^{\varepsilon }_{{\overline{ \mathcal{O}}}_{\circ }}(\bar{t},\bar{x},\bar{y})}\subseteq [0,T) \times {\mathcal{O}_{\circ }^{\star }}\) and

$$ \min \{\mathcal{L}\varphi , \varphi - w^{\circ }\} \geq \kappa > 0 \qquad \text{on }{\overline{\mathcal{D}}^{\varepsilon }_{{\overline{ \mathcal{O}}}_{\circ }}(\bar{t},\bar{x},\bar{y})}. $$
(B.3)

Since \({{\overline{\mathcal{D}}^{\varepsilon }_{{\overline{ \mathcal{O}}}_{\circ }}(\bar{t},\bar{x},\bar{y})}\setminus {\mathcal{D} ^{\varepsilon }_{{\overline{\mathcal{O}}}_{\circ }}(\bar{t},\bar{x}, \bar{y})}}\) is compact and the maximum of \(\mathrm{V}^{\circ }-\varphi \) is strict, we see that

$$ - \gamma \mathrel{:=} \max _{(t,x,y)\in {{\overline{\mathcal{D}}^{\varepsilon }_{{\overline{ \mathcal{O}}}_{\circ }}(\bar{t},\bar{x},\bar{y})}\setminus {\mathcal{D} ^{\varepsilon }_{{\overline{\mathcal{O}}}_{\circ }}(\bar{t},\bar{x}, \bar{y})}}}\bigl(\mathrm{V}^{\circ }(t,x,y)-\varphi (t,x,y)\bigr) < 0. $$
(B.4)

Now choose \((t_{k},x_{k},y_{k})_{k\in \mathbb{N}}\subseteq [0,T) \times {\mathcal{O}_{\circ }}\) converging to \((\bar{t},\bar{x}, \bar{y})\) such that

$$ \mathrm{V}^{\circ }(\bar{t},\bar{x},\bar{y}) = \lim _{k\to \infty } \mathrm{V}(t_{k},x_{k},y_{k}). $$

Clearly, we can assume that \((t_{k},x_{k},y_{k})_{k\in \mathbb{N}} \subseteq {\overline{\mathcal{D}}^{\varepsilon /2}_{ \widehat{{\mathcal{O}}}_{\circ }}(\bar{t},\bar{x},\bar{y})}\). Now define

$$ \theta _{k} \mathrel{:=}\inf \bigl\{ t>t_{k} : (t,X^{k}_{t},Y^{k}_{t}) \notin {\overline{\mathcal{D}}^{\varepsilon }_{{\overline{ \mathcal{O}}}_{\circ }}(\bar{t},\bar{x},\bar{y})}\bigr\} , $$

where we have set \(X^{k} \mathrel{:=}\overline{X}^{t_{k},x_{k},y_{k}}\) and \(Y^{k}\mathrel{:=}Y^{t_{k},y_{k}}\) as shorthand notation. Finally, we write \(\eta _{k}\mathrel{:=}\mathrm{V}(t_{k},x_{k},y_{k}) - \varphi (t_{k},x_{k},y_{k})\leq 0\) for each \(k\in \mathbb{N}\). An application of Itô’s formula, as well as (B.3) and (B.4), now shows that

Now \(\eta _{k}\to 0\) and hence \(\eta _{k} + \min \{\kappa ,\gamma \}>0\) for eventually all \(k\in \mathbb{N}\). Using that \(\tau \) was chosen arbitrarily, this shows that

for \(k\) sufficiently large. But this contradicts the first weak dynamic programming inequality (B.1).

Step 2. We show that \(\mathrm{V}\) is a local viscosity supersolution. For this, let \(\circ \in \{{\mathrm{b}},{\mathrm{nb}}\}\), fix \((\bar{t},\bar{x},\bar{y})\in [0,T)\times {\mathcal{O}_{\circ }^{ \star }}\) and let \(\varphi \in C^{1,2}([0,T)\times {\mathcal{O}_{ \circ }^{\star }})\) be such that \(\mathrm{V}_{\circ }-\varphi \) has a strict global minimum at \((\bar{t},\bar{x},\bar{y})\) with \(\mathrm{V} _{\circ }(\bar{t},\bar{x},\bar{y}) = \varphi (\bar{t},\bar{x},\bar{y})\). Let us first show that

$$ \mathrm{V}_{\circ }(\bar{t},\bar{x},\bar{y}) - w_{\circ }(\bar{t}, \bar{x},\bar{y}) \geq 0. $$
(B.5)

For this, we pick \((t_{k},x_{k},y_{k})_{k\in \mathbb{N}}\subseteq [0,T)\times {\mathcal{O}_{\circ }}\) converging to \((\bar{t},\bar{x},\bar{y})\) and with

$$ \mathrm{V}_{\circ }(\bar{t},\bar{x},\bar{y}) = \lim _{k\to \infty } \mathrm{V}(t_{k},x_{k},y_{k}). $$

By the very definition of the optimal stopping problem, we have \(\mathrm{V}\geq w\) on \([0,T)\times {\mathcal{O}_{\circ }}\). Thus, the lower semicontinuity of \(w_{\circ }\) immediately yields

$$ \mathrm{V}_{\circ }(\bar{t},\bar{x},\bar{y}) = \lim _{k\to \infty } \mathrm{V}(t_{k},x_{k},y_{k}) \geq \liminf _{k\to \infty } w(t_{k},x _{k},y_{k}) \geq w_{\circ }(\bar{t},\bar{x},\bar{y}). $$

Now fix \(\varepsilon >0\) such that \({\overline{\mathcal{D}}^{\varepsilon }_{{\overline{\mathcal{O}}}_{\circ }}(\bar{t},\bar{x},\bar{y})}\subseteq [0,T)\times {\mathcal{O}_{\circ }^{\star }}\) and, after possibly passing to a subsequence, \((t_{k},x_{k},y_{k})_{k\in \mathbb{N}}\subseteq {\overline{\mathcal{D}}^{\varepsilon }_{\widehat{{\mathcal{O}}}_{ \circ }}(\bar{t},\bar{x},\bar{y})}\). Next, define

$$ \eta _{k} \mathrel{:=}\mathrm{V}(t_{k},x_{k},y_{k}) - \varphi (t_{k},x _{k},y_{k}) \geq 0 \qquad \text{for all }k\in \mathbb{N}, $$

as well as

We observe that \(\varepsilon _{k}>0\) for all \(k\in \mathbb{N}\) and

$$ \lim _{k\to \infty } \eta _{k} = \lim _{k\to \infty } \varepsilon _{k} = 0 \qquad \text{and} \qquad \lim _{k\to \infty } { \frac{\eta _{k}}{\varepsilon _{k}}} = 0. $$

Now introduce the stopping times

$$ \theta _{k} \mathrel{:=}\inf \bigl\{ t>t_{k} : (t,X^{k}_{t},Y^{k}_{t}) \notin {\overline{\mathcal{D}}^{\varepsilon }_{{\overline{ \mathcal{O}}}_{\circ }}(\bar{t},\bar{x},\bar{y})}\text{ or }t\notin [\bar{t}-\varepsilon _{k},\bar{t}+\varepsilon _{k}]\bigr\} . $$

It is apparent that \(\theta _{k}\in \mathcal{T}^{t_{k}}_{t_{k},T}\) for \(k\) sufficiently large, and hence the second weak dynamic programming inequality (B.2) with \(\tau = T\) implies

$$ \mathrm{V}(t_{k},x_{k},y_{k}) \geq \mathbb{E}[\varphi (\theta _{k},X ^{k}_{\theta _{k}},Y^{k}_{\theta _{k}})] $$

for eventually all \(k\in \mathbb{N}\). Using the definition of \(\eta _{k}\) and applying Itô’s formula, it follows that

$$\begin{aligned} \frac{\eta _{k}}{\varepsilon _{k}} & = \frac{1}{\varepsilon _{k}}\bigl( \mathrm{V}(t_{k},x_{k},y_{k}) - \varphi (t_{k},x_{k},y_{k})\bigr) \\ & \geq \frac{1}{\varepsilon _{k}}\mathbb{E}[\varphi (\theta _{k},X^{k} _{\theta _{k}},Y^{k}_{\theta _{k}}) - \varphi (t_{k},x_{k},y_{k})] \\ & = \mathbb{E}\bigg[-\frac{1}{\varepsilon _{k}}\int _{t_{k}}^{\theta _{k}}\mathcal{L}\varphi (u,X^{k}_{u},Y^{k}_{u}) \,\mathrm{d}u\bigg]. \end{aligned}$$

Now for each \(\omega \in \Omega \), there exists some \(K(\omega )\in \mathbb{N}\) such that \(\theta _{k}(\omega ) = \bar{t}+\varepsilon _{k}\) for all \(k\geq K(\omega )\). Thus the mean value theorem and dominated convergence imply

$$ 0 = \lim _{k\to \infty } { \frac{\eta _{k}}{\varepsilon _{k}}} \geq \lim _{k\to \infty }\mathbb{E}\bigg[{ -\frac{1}{\varepsilon _{k}} \int _{t_{k}}^{\theta _{k}}\mathcal{L}\varphi (u,X^{k}_{u},Y^{k}_{u}) \,\mathrm{d}u}\bigg] = -\mathcal{L}\varphi (\bar{t},\bar{x},\bar{y}). $$

Combining this with (B.5) yields

$$ \min \{\mathcal{L}\varphi (\bar{t},\bar{x},\bar{y}), \mathrm{V}_{ \circ }(\bar{t},\bar{x},\bar{y}) - w_{\circ }(\bar{t},\bar{x},\bar{y}) \} \geq 0, $$

and the proof is complete. □


Cite this article

Belak, C., Christensen, S. Utility maximisation in a factor model with constant and proportional transaction costs. Finance Stoch 23, 29–96 (2019). https://doi.org/10.1007/s00780-018-00380-1
