
A numerical approach to solve consumption-portfolio problems with predictability in income, stock prices, and house prices

  • Original Article
  • Published: Mathematical Methods of Operations Research

Abstract

In this paper, I establish a numerical method to solve a generic consumption-portfolio choice problem with predictability in stock prices, house prices, and labor income. I generalize the SAMS method introduced by Bick et al. (Manag Sci 59:485–503, 2013) to state-dependent modifiers. I set up artificial markets to derive closed-form solutions for my life-cycle problem and transform the resulting consumption-portfolio strategies into feasible ones in the true market. To obtain transformed, feasible strategies that are close to the true, but unknown, optimal strategies, I introduce state-dependent modifiers. I show that this generalization of the SAMS method reduces the welfare losses from over 10% to less than 2%.

Fig. 1


Notes

  1. See, e.g., Campbell (2006) and Cochrane (2011), Section V.

  2. Notice that my economic model is also studied by Kraft et al. (2019), but these authors focus on the economic implications only. By contrast, my paper provides the numerical background to solve this model and thus complements their results.

  3. Kraft and Munk (2011) show that the losses from infrequently adjusting housing decisions are small, confirming the relevance of my solution in more realistic settings. See Section 5 in their paper.

  4. See also Theorem 3.1 and its proof.

  5. Note that with the modifiers used for the approximate solution, I cannot be sure that the agent is better off in the artificial market. Therefore, I do not use them for determining the upper loss bound. Instead, I only use them to calculate feasible strategies in the true market.

  6. A formal proof is available upon request.

  7. Cvitanic and Karatzas (1992) show that, under certain technical conditions, the solution in the true market is equal to the solution in the worst of all the artificial markets. But in complex models such as mine, it seems impossible to identify this worst market.

  8. Note that I compute the functions F and G derived in Theorem 3.1 by Monte-Carlo simulations based on the Feynman-Kac representations stated in “Appendices A.1” and “B.1”, respectively.

  9. However, I use the general parameterization (46)–(47) to derive the upper utility bound in the artificial market.

  10. I simulate using Euler-Maruyama discretizations with 20 time steps per year. Note that F and G involve numerical integration. Before running the simulations, I evaluate F and G on grids, and when I need values for F and G and their derivatives in the simulations, I use the grid values and linear interpolation.

  11. This is only true for the grid values. Notice additionally that, by running regressions, I linearly approximate the dependence of the portfolio strategies on the predictors.

  12. See also (51)–(54) for notational details.

  13. Notice that the utility index is always negative since risk aversion \(\gamma \) is bigger than one. Therefore, the utility is also bounded from above by zero.
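For concreteness, here is a sketch of the simulation machinery from notes 10 and 11: Euler-Maruyama steps (20 per year), grid evaluation of F with linear interpolation, and a regression capturing the linear dependence of a strategy on a predictor. The predictor dynamics, the placeholder values for F, and the regression target are illustrative assumptions, not the paper's calibration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Note 10: F is pre-evaluated on a grid; inside the simulation, grid values
# are combined by linear interpolation instead of re-solving for F.
x_grid = np.linspace(-3.0, 3.0, 121)
F_grid = 1.0 / (1.0 + x_grid**2)             # placeholder values, not the paper's F
F = lambda z: np.interp(z, x_grid, F_grid)

# Note 10: Euler-Maruyama discretization with 20 time steps per year, here
# for an illustrative Ornstein-Uhlenbeck predictor dx = -kappa*x dt + sigma dB.
kappa, sigma, dt = 0.3, 0.25, 1.0 / 20
x = np.zeros(5_000)                          # 5,000 simulated paths
for _ in range(40 * 20):                     # 40 years of simulated lifetime
    x += -kappa * x * dt + sigma * np.sqrt(dt) * rng.standard_normal(x.size)

# Note 11: linear approximation of a portfolio strategy's dependence on the
# predictor by ordinary least squares (the "true" weights are illustrative).
pi = 0.5 + 0.2 * x + 0.02 * rng.standard_normal(x.size)
X = np.column_stack([np.ones_like(x), x])
coef, *_ = np.linalg.lstsq(X, pi, rcond=None)   # intercept and slope
```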

References

  • Bick B, Kraft H, Munk C (2013) Solving constrained consumption-investment problems by simulation of artificial market strategies. Manag Sci 59:485–503

  • Brandt MW, Goyal A, Santa-Clara P, Stroud JR (2005) A simulation approach to dynamic portfolio choice with an application to learning about return predictability. Rev Financ Stud 18:831–873

  • Campbell JY (2006) Household finance, presidential address to the American Finance Association. J Finance 61:1553–1604

  • Cochrane JH (2011) Presidential address: discount rates. J Finance 66:1047–1108

  • Cvitanic J, Karatzas I (1992) Convex duality in constrained portfolio optimization. Ann Appl Probab 2:767–818

  • Cvitanic J, Goukasian L, Zapatero F (2003) Monte Carlo computation of optimal portfolios in complete markets. J Econ Dyn Control 27:971–986

  • Detemple JB, Garcia R, Rindisbacher M (2003) A Monte Carlo method for optimal portfolios. J Finance 58:401–446

  • Fleming W, Soner M (2006) Controlled Markov processes and viscosity solutions, 2nd edn. Springer, Berlin

  • Heath D, Schweizer M (2000) Martingales versus PDEs in finance: an equivalence result with examples. J Appl Probab 37:947–957

  • Koijen RSJ, Nijman TE, Werker BJM (2007) Appendix describing the numerical methods used in "When can life-cycle investors benefit from time-varying bond risk premia?". Working paper

  • Koijen RSJ, Nijman TE, Werker BJM (2010) When can life cycle investors benefit from time-varying bond risk premia? Rev Financ Stud 23:741–780

  • Kraft H, Munk C (2011) Optimal housing, consumption, and investment decisions over the life-cycle. Manag Sci 57:1025–1041

  • Kraft H, Munk C, Weiss F (2019) Predictors and portfolios over the life cycle. J Bank Finance 100:1–27

  • Liu J (2007) Portfolio selection in stochastic environments. Rev Financ Stud 20:1–39

  • Longstaff FA, Schwartz ES (2001) Valuing American options by simulation: a simple least-squares approach. Rev Financ Stud 14:113–147

  • Merton RC (1969) Lifetime portfolio selection under uncertainty: the continuous-time case. Rev Econ Stat 51:247–257

  • Merton RC (1971) Optimum consumption and portfolio rules in a continuous-time model. J Econ Theory 3:373–413

  • Schroeder P, Schober P, Wittum G (2013) Dimension-wise decompositions and their efficient parallelization. Recent Dev Comput Finance Interdiscip Math Sci 14:445–472

  • Wachter JA (2002) Portfolio and consumption decisions under mean-reverting returns: an exact solution for complete markets. J Financ Quant Anal 37:63–91


Acknowledgements

I thank Holger Kraft, Claus Munk, Christian Schlag, Oliver Stein (the editor) and two anonymous referees for helpful comments and suggestions. All remaining errors are my own. I gratefully acknowledge financial support from the Center of Excellence SAFE, funded by the State of Hessen initiative for research LOEWE.

Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Farina Weiss.


Appendices

Proofs and derivations for human capital

1.1 Proof of Proposition 3.1

In a complete, unconstrained market I can represent human capital by the risk-neutral expectation of the discounted future labor income. In retirement, i.e. for \(t\in ({\widetilde{T}},T]\), the human capital is therefore

$$\begin{aligned} \mathrm{E}_t^{{\mathbb {Q}}} \left[ \int _t^T e^{ -\int _t^s {{\tilde{r}}}(u,x_u,y_u)\,du} L_s \,ds \right] , \end{aligned}$$
(48)

where \({\mathbb {Q}}\) is the unique risk-neutral probability measure in a given artificial market. Before retirement, the human capital is

$$\begin{aligned} \mathrm{E}_t^{{\mathbb {Q}}} \left[ \int _t^{{\widetilde{T}}} e^{ -\int _t^s {{\tilde{r}}}(u,x_u,y_u)\,du} L_s\,ds + \int _{{\widetilde{T}}}^T e^{ -\int _t^s {{\tilde{r}}}(u,x_u,y_u)\,du} L_s\,ds \right] . \end{aligned}$$
(49)

Here, when computing the expectation at time \(t<{\widetilde{T}}\) of the income rate at time \(s>{\widetilde{T}}\), I have to incorporate the drop in income at time \({\widetilde{T}}\).

To derive the income dynamics under the \({\mathbb {Q}}\) measure, I must identify the market prices of risk associated with the Brownian shocks \(B_S,B_H,B_L,B_x,B_y\). While the market prices of risk associated with \(B_L,B_x,B_y\) are \(\lambda _L, \lambda _x, \lambda _y\) by assumption, I identify the market prices of risk \(m_{S},m_{H}\) associated with \(B_S,B_H\) by using the fact that the expected excess return on an asset is the product of its sensitivities towards the shocks and the market prices of risk associated with the shocks. For the stock and the housing investment this means

$$\begin{aligned} m_{St}= & {} \frac{{\tilde{\mu }}_{S}'(t,x_t,y_t)+\chi _S x_t}{\sigma _{S}}, \quad m_{Ht} = \frac{{\tilde{\mu }}_{H}'(t,x_t,y_t)+ \chi _{Hx} x_t+ \chi _{Hy} y_t}{\sigma _{H}{\hat{\rho }}_{H}} \\&- \frac{\rho _{HS}\left[ {\tilde{\mu }}_{S}'(t,x_t,y_t)+ \chi _S x_t\right] }{{\hat{\rho }}_{H}\sigma _{S}}. \end{aligned}$$

The risk-neutral income dynamics under the \({\mathbb {Q}}\) measure is therefore

$$\begin{aligned} \frac{dL_t}{L_t}&= \Bigl [- \left\{ M_{LS}(t) \chi _S+ M_{LH}(t) \chi _{Hx}\right\} x_t - \left\{ M_{LH}(t) \chi _{Hy} - \chi _L(t)\right\} y_t + \ell (t,x_t,y_t) \Bigr ]\,dt\\&\quad + \sigma _{L}\Bigl [ \rho _{LS} \,dB^{{\mathbb {Q}}}_{St} + {\hat{\rho }}_{LH} \,dB^{{\mathbb {Q}}}_{Ht} + {\hat{\rho }}_{L}\,dB^{{\mathbb {Q}}}_{Lt}\Bigr ], \end{aligned}$$

where

$$\begin{aligned} M_{LS}(t)= & {} \frac{\sigma _{L}(t)}{\sigma _{S}}\left[ \rho _{LS} - \frac{\rho _{HS}{\hat{\rho }}_{LH}}{{\hat{\rho }}_{H}}\right] , \quad M_{LH}(t) = \frac{\sigma _{L}(t){\hat{\rho }}_{LH}}{\sigma _{H}{\hat{\rho }}_{H}}, \\ \ell (t,x,y)= & {} \mu _L(t) - M_{LS}(t) {\tilde{\mu }}_{S}'(t,x,y) - M_{LH}(t) {\tilde{\mu }}_{H}'(t,x,y) - \sigma _{L}(t) {\hat{\rho }}_{L} \lambda _L(t,x,y). \end{aligned}$$

As the drift involves x and y, I have to derive the risk-neutral dynamics of these processes as well. For x, I obtain

$$\begin{aligned} dx_t&= \Bigl [ - \left\{ \kappa _x+ M_{xS} \chi _S + M_{xH} \chi _{Hx}\right\} x_t - M_{xH} \chi _{Hy} y_t - M_x(t,x_t,y_t) \Bigr ]\,dt \\&\quad + \sigma _{x}\Bigl [ \rho _{xS}\,dB^{{\mathbb {Q}}}_{St} + {\hat{\rho }}_{xH}\,dB^{{\mathbb {Q}}}_{Ht} + {\hat{\rho }}_{xL}\,dB^{{\mathbb {Q}}}_{Lt} + {\hat{\rho }}_{x}\,dB^{{\mathbb {Q}}}_{xt}\Bigr ], \end{aligned}$$

where

$$\begin{aligned} M_{xS}= & {} \frac{\sigma _{x}}{\sigma _{S}} \left[ \rho _{xS} - \frac{\rho _{HS}{\hat{\rho }}_{xH}}{{\hat{\rho }}_{H}}\right] ,\quad M_{xH} = \frac{\sigma _{x}{\hat{\rho }}_{xH}}{\sigma _{H}{\hat{\rho }}_{H}},\\ M_x(t,x,y)= & {} M_{xS} {\tilde{\mu }}_{S}'(t,x,y) + M_{xH} {\tilde{\mu }}_{H}'(t,x,y) + \sigma _{x} {\hat{\rho }}_{xL} \lambda _L(t,x,y) + \sigma _{x} {\hat{\rho }}_{x} \lambda _x(t,x,y) . \end{aligned}$$

For y, I obtain

$$\begin{aligned} dy_t&= \Bigl [ - \left\{ M_{yS}\chi _S + M_{yH}\chi _{Hx}\right\} x_t - \left\{ \kappa _y+ M_{yH} \chi _{Hy}\right\} y_t - M_y(t,x_t,y_t) \Bigr ]\,dt \\&\quad + \sigma _{y}\Bigl [ \rho _{yS}\,dB^{{\mathbb {Q}}}_{St} + {\hat{\rho }}_{yH}\,dB^{{\mathbb {Q}}}_{Ht} + {\hat{\rho }}_{yL}\,dB^{{\mathbb {Q}}}_{Lt} + {\hat{\rho }}_{yx}\,dB^{{\mathbb {Q}}}_{xt}+ {\hat{\rho }}_{y}\,dB^{{\mathbb {Q}}}_{yt}\Bigr ], \end{aligned}$$

where

$$\begin{aligned} M_{yS}= & {} \frac{\sigma _{y}}{\sigma _{S}} \left[ \rho _{yS} - \frac{\rho _{HS}{\hat{\rho }}_{yH}}{{\hat{\rho }}_{H}}\right] ,\quad M_{yH} = \frac{\sigma _{y}{\hat{\rho }}_{yH}}{\sigma _{H}{\hat{\rho }}_{H}},\\ M_y(t,x,y)= & {} M_{yS}{\tilde{\mu }}_{S}'(t,x,y) + M_{yH}{\tilde{\mu }}_{H}'(t,x,y) \\&+\sigma _{y} {\hat{\rho }}_{yL} \lambda _L(t,x,y) +\sigma _{y} {\hat{\rho }}_{yx} \lambda _x(t,x,y) +\,\sigma _{y} {\hat{\rho }}_{y} \lambda _y(t,x,y). \end{aligned}$$

Hence, F satisfies the partial differential equation (PDE)

$$\begin{aligned} 0= & {} 1+ \frac{\partial F}{\partial t} + \frac{1}{2} \sigma _x^2 F_{xx} + \frac{1}{2} \sigma _y^2 F_{yy} + \sigma _{xy} F_{xy} \\&+ \biggl [- \left\{ M_{LS}(t) \chi _S + M_{LH}(t) \chi _{Hx}\right\} x + \left\{ \chi _{L} - M_{LH}(t) \chi _{Hy}\right\} y + \mu _L(t) \\&- M_{LS}(t) {\tilde{\mu }}_{S}'(t,x,y)\\&- M_{LH}(t) {\tilde{\mu }}_{H}'(t,x,y) - \sigma _{L}(t) {\hat{\rho }}_{L} \lambda _L(t,x,y) - {{\tilde{r}}}(t,x,y) \biggr ] F \\&+ \biggl [- \left\{ \kappa _x+ M_{xS} \chi _S + M_{xH} \chi _{Hx}\right\} x - M_{xH} \chi _{Hy} y + \sigma _{xL} \\&- M_{xS} {\tilde{\mu }}_{S}'(t,x,y) - M_{xH} {\tilde{\mu }}_{H}'(t,x,y) \\&- \sigma _{x} {\hat{\rho }}_{xL} \lambda _L(t,x,y) - \sigma _{x} {\hat{\rho }}_{x} \lambda _x(t,x,y) \biggr ] F_x \\&+ \biggl [ - \left\{ M_{yS}\chi _S + M_{yH}\chi _{Hx}\right\} x - \left\{ \kappa _y+ M_{yH} \chi _{Hy}\right\} y + \sigma _{yL} \\&- M_{yS}{\tilde{\mu }}_{S}'(t,x,y) - M_{yH}{\tilde{\mu }}_{H}'(t,x,y) \\&-\sigma _{y} {\hat{\rho }}_{yL} \lambda _L(t,x,y) -\sigma _{y} {\hat{\rho }}_{yx} \lambda _x(t,x,y) -\sigma _{y} {\hat{\rho }}_{y} \lambda _y(t,x,y) \biggr ] F_y \end{aligned}$$

with terminal condition \(F(T,x,y)= 0\), where I omit the dependence of F and its derivatives on t, x, y. This PDE can be verified directly by substituting the relevant derivatives of F. Alternatively, it follows from the fact that \(\mathcal {P}(t,L,x,y)=LF(t,x,y)\) can be seen as the price of a stream of dividends, and it is well-known from derivatives pricing that \(\mathcal {P}\) satisfies a certain PDE. Substituting \(\mathcal {P}=LF\) into that PDE leads to (18). If, in addition, the coefficients of the previous PDE satisfy the conditions of Heath and Schweizer (2000), then the solution F admits the Feynman-Kac representation

$$\begin{aligned} F(t,x_t,y_t)&= \mathrm{E}^{{\mathbb {Q}}}_{t} \left[ \int _t^T e^{-\int _t^s r_F(u,x_u,y_u) du} ds \right] , \end{aligned}$$

where

$$\begin{aligned} -r_F(t,x,y)&= \left\{ M_{LS}(t) \chi _S + M_{LH}(t) \chi _{Hx}\right\} x + \left\{ M_{LH}(t) \chi _{Hy} - \chi _{L} \right\} y - \mu _L(t) + {{\tilde{r}}}(t,x,y) \\&\quad + M_{LS}(t) {\tilde{\mu }}_{S}'(t,x,y) + M_{LH}(t) {\tilde{\mu }}_{H}'(t,x,y) + \sigma _{L}(t) {\hat{\rho }}_{L} \lambda _L(t,x,y) . \\ dx&= \biggl [- \left\{ \kappa _x+ M_{xS} \chi _S + M_{xH} \chi _{Hx}\right\} x \\&\quad - M_{xH} \chi _{Hy} y + \sigma _{xL} - M_{xS} {\tilde{\mu }}_{S}'(t,x,y) - M_{xH} {\tilde{\mu }}_{H}'(t,x,y) \\&\quad - \sigma _{x} {\hat{\rho }}_{xL} \lambda _L(t,x,y) - \sigma _{x} {\hat{\rho }}_{x} \lambda _x(t,x,y) \biggr ] dt + \sigma _x dB^{{\mathbb {Q}}}_{xt}, \\ dy&= \biggl [ - \left\{ M_{yS}\chi _S + M_{yH}\chi _{Hx}\right\} x - \left\{ \kappa _y+ M_{yH} \chi _{Hy}\right\} y \\&\quad + \sigma _{yL} - M_{yS}{\tilde{\mu }}_{S}'(t,x,y) - M_{yH}{\tilde{\mu }}_{H}'(t,x,y) \\&\quad -\sigma _{y} {\hat{\rho }}_{yL} \lambda _L(t,x,y) -\sigma _{y} {\hat{\rho }}_{yx} \lambda _x(t,x,y) -\sigma _{y} {\hat{\rho }}_{y} \lambda _y(t,x,y) \biggr ] dt \\&\quad + \sigma _y \rho _{xy} dB^{{\mathbb {Q}}}_{xt}+ \sigma _y \sqrt{1-\rho _{xy}^2} dB^{{\mathbb {Q}}}_{yt}. \end{aligned}$$

I use this representation only to obtain an upper bound \({\bar{J}}\) by minimizing the indirect utility in the artificial market for a set of modifiers. Notice that any feasible specification of the modifiers leads to an upper bound. If there were integrability issues with a particular specification of the modifiers, it is always possible to truncate this specification (at a high and thus numerically irrelevant level) without compromising the fact that the resulting value is still an upper bound.
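As a rough illustration of how such a Feynman-Kac representation is evaluated by Monte-Carlo simulation, the sketch below estimates F for a hypothetical one-factor discount rate \(r_F(t,x)=r_0+r_1 x\) with Ornstein-Uhlenbeck dynamics for x under \({\mathbb {Q}}\); all parameters and functional forms are illustrative assumptions, not the paper's specification.

```python
import numpy as np

# Monte-Carlo evaluation of a Feynman-Kac representation of the form
#   F(t, x_t) = E^Q[ \int_t^T exp(-\int_t^s r_F(u, x_u) du) ds ],
# assuming a hypothetical one-factor rate r_F(t, x) = r0 + r1*x and
# OU dynamics dx = -kappa*x dt + sigma dB^Q; all parameters illustrative.
def estimate_F(x0, T=10.0, dt=0.05, n_paths=20_000,
               r0=0.02, r1=0.1, kappa=0.3, sigma=0.2, seed=0):
    rng = np.random.default_rng(seed)
    x = np.full(n_paths, x0)
    disc = np.ones(n_paths)        # running exp(-\int_t^s r_F du) per path
    F = np.zeros(n_paths)          # running value of the outer ds-integral
    for _ in range(int(T / dt)):
        disc *= np.exp(-(r0 + r1 * x) * dt)
        F += disc * dt             # rectangle rule for the ds-integral
        x += -kappa * x * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
    return F.mean()
```

For \(r_1=0\) the estimate collapses to the deterministic annuity \((1-e^{-r_0 T})/r_0\), which provides a simple consistency check of the simulation.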

1.2 Corollary 3.1: Solving for F with time-dependent modifiers

With time-dependent modifiers, the adjusted asset drifts and the interest rate are now \({\tilde{\mu }}_i'(t,x,y) \equiv {\tilde{\mu }}_i'(t)\) with \(i\in \{S,H\}\), and \(\lambda _j(t,x,y) \equiv \lambda _j(t)\) with \(j\in \{L,x,y\}\), as well as \({{\tilde{r}}}(t,x,y) \equiv {{\tilde{r}}}(t)\). For this special case, I can solve for F in closed form.

First, assume that either \(t< s < {{\tilde{T}}}\) or \({{\tilde{T}}}< t < s\) so that there is no one-time drop of income between t and s. I get that

$$\begin{aligned} \mathrm{E}_t^{{\mathbb {Q}}}\left[ L_s \right] = L_t \exp \{{\tilde{\alpha }}(t,s) + \beta (t,s) x_t + \psi (t,s) y_t\} , \end{aligned}$$

where

$$\begin{aligned} \frac{\partial \beta }{\partial t} \left( t,s\right)= & {} \left[ \kappa _x + M_{xS} \chi _S + M_{xH} \chi _{Hx} \right] \beta \left( t,s\right) + \left[ M_{yS} \chi _S + M_{yH} \chi _{Hx} \right] \psi \left( t,s\right) \\&+ M_{LS}(t) \chi _S + M_{LH}(t) \chi _{Hx} , \\ \frac{\partial \psi }{\partial t} \left( t,s\right)= & {} \left[ M_{xH} \chi _{Hy} \right] \beta \left( t,s\right) + \left[ \kappa _y + M_{yH} \chi _{Hy} \right] \psi \left( t,s\right) + M_{LH}(t) \chi _{Hy} - \chi _{L} , \end{aligned}$$

with boundary condition \(\beta \left( s,s\right) = \psi \left( s,s\right) =0\). \({\tilde{\alpha }}(t,s)\) is determined by

$$\begin{aligned} {\tilde{\alpha }}(t,s)= & {} \int _t^s \ell (u) - \left[ M_x(u) - \sigma _{xL}(u) \right] \beta \left( u,s\right) - \left[ M_y(u) - \sigma _{yL}(u)\right] \psi \left( u,s\right) \\&+ \frac{1}{2} \sigma _x^2 \beta \left( u,s\right) ^2 + \frac{1}{2} \sigma _y^2 \psi \left( u,s\right) ^2 + \sigma _{xy} \beta \left( u,s\right) \psi \left( u,s\right) du, \end{aligned}$$

with boundary condition \(\alpha \left( s,s\right) =0\), where I denote \({\tilde{\alpha }}(t,s) = \alpha (t,s)+\int _t^s {{\tilde{r}}}(u) du \). Furthermore, I have introduced the notation \(\sigma _{xL}(t)= \rho _{xL} \sigma _x \sigma _L(t)\) and \(\sigma _{yL}(t)= \rho _{yL} \sigma _y \sigma _L(t)\). With deterministic modifiers, the functions \(\ell \), \(M_x\), and \(M_y\) depend only on time (age)

$$\begin{aligned} \ell (t)&= \mu _L(t) - M_{LS}(t) {\tilde{\mu }}_{S}'(t) - M_{LH}(t) {\tilde{\mu }}_{H}'(t) - \sigma _{L}(t) {\hat{\rho }}_{L} \lambda _L(t),\\ M_x(t)&= M_{xS} {\tilde{\mu }}_{S}'(t) + M_{xH} {\tilde{\mu }}_{H}'(t) + \sigma _{x} {\hat{\rho }}_{xL} \lambda _L(t) + \sigma _{x} {\hat{\rho }}_{x} \lambda _x(t) ,\\ M_y(t)&= M_{yS}{\tilde{\mu }}_{S}'(t) + M_{yH} {\tilde{\mu }}_{H}'(t) +\sigma _{y} {\hat{\rho }}_{yL} \lambda _L(t) +\sigma _{y} {\hat{\rho }}_{yx} \lambda _x(t) +\sigma _{y} {\hat{\rho }}_{y} \lambda _y(t). \end{aligned}$$

Next, for \(t< {{\tilde{T}}} < s\)

$$\begin{aligned} \mathrm{E}_t^{{\mathbb {Q}}}\left[ L_s \right] = \Upsilon L_t \exp \{{\tilde{\alpha }}(t,s) + \beta (t,s) x_t + \psi (t,s) y_t\} . \end{aligned}$$

Now, the human capital is obtained by substituting the equations for \(\mathrm{E}_t^{{\mathbb {Q}}}[L_s]\) into the equations for human capital, which implies the statement in Corollary 3.1.
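A minimal sketch of this substitution step: given affine coefficient functions \({\tilde{\alpha }}\), \(\beta \), \(\psi \) (the ones below are placeholders, not the paper's solutions), human capital per unit of current income is a time integral of \(\mathrm{E}_t^{{\mathbb {Q}}}[L_s]/L_t\), with the drop \(\Upsilon \) applied to income received after retirement.

```python
import numpy as np

# Substitution step: human capital per unit of current income is
#   F(t,x,y) = \int_t^T E_t^Q[L_s]/L_t ds,
# with the one-time drop Upsilon applied for s beyond retirement T_ret.
# The alpha~, beta, psi functions below are illustrative placeholders.
def F_human_capital(t, x, y, T_ret=40.0, T=60.0, Upsilon=0.6, ds=0.1,
                    alpha=lambda t, s: -0.02 * (s - t),
                    beta=lambda t, s: 0.1 * (1.0 - np.exp(-(s - t))),
                    psi=lambda t, s: 0.05 * (1.0 - np.exp(-(s - t)))):
    s_grid = np.arange(t + ds / 2, T, ds)     # midpoint rule over [t, T]
    vals = np.exp(alpha(t, s_grid) + beta(t, s_grid) * x + psi(t, s_grid) * y)
    return np.where(s_grid > T_ret, Upsilon * vals, vals).sum() * ds
```

For \(\Upsilon =1\) and \(\beta =\psi =0\) this reduces to the annuity \(\int _t^T e^{{\tilde{\alpha }}(t,s)}\,ds\), a convenient check.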

1.3 Corollary 3.2: Solving for F with state-dependent modifiers

In the following, I assume that the modifiers are defined as in (22)–(24). Moreover, I denote

$$\begin{aligned} {\tilde{\chi }}_S =\chi _S + \nu _{S,2}, \qquad {\tilde{\chi }}_{Sy} = \nu _{S,3}, \qquad {\tilde{\chi }}_{Hx} =\chi _{Hx} + \nu _{H,2}, \qquad {\tilde{\chi }}_{Hy} = \chi _{Hy} + \nu _{H,3}, \end{aligned}$$

and the deterministic parts of the asset drifts

$$\begin{aligned} {\tilde{\mu }}_{S}'(t) = \mu _{S} + {\mathcal {D}}+ \nu _{S,0} + \nu _{S,1} \, t, \qquad {\tilde{\mu }}_{H}'(t) = \mu _{H} + R - m +\nu _{H,0} + \nu _{H,1} \, t, \end{aligned}$$

and

$$\begin{aligned} \lambda _{L}(t) = \lambda _{L,0} + \lambda _{L,1} \, t, \qquad \lambda _{x}(t) = \lambda _{x,0} + \lambda _{x,1} \, t, \qquad \lambda _{y}(t) = \lambda _{y,0} + \lambda _{y,1} \, t. \end{aligned}$$

With these assumptions the risk-neutral income, x, and y dynamics under the measure \({\mathbb {Q}}\) are

$$\begin{aligned} \frac{dL_t}{L_t}&= \Bigl [- \zeta _{Lx}(t) x_t - \zeta _{Ly}(t) y_t + {\tilde{\ell }}(t)\Bigr ]\,dt + \sigma _{L}(t) \Bigl [ \rho _{LS} \,dB^{{\mathbb {Q}}}_{St} + {\hat{\rho }}_{LH} \,dB^{{\mathbb {Q}}}_{Ht} + {\hat{\rho }}_{L}\,dB^{{\mathbb {Q}}}_{Lt}\Bigr ], \\ dx_t&= \Bigl [ - \left\{ \kappa _x+\zeta _{xx}\right\} x_t - \zeta _{xy} y_t - {{\tilde{M}}}_x(t)\Bigr ]\,dt + \sigma _{x}\Bigl [ \rho _{xS}\,dB^{{\mathbb {Q}}}_{St} \\&\quad + {\hat{\rho }}_{xH}\,dB^{{\mathbb {Q}}}_{Ht} + {\hat{\rho }}_{xL}\,dB^{{\mathbb {Q}}}_{Lt} + {\hat{\rho }}_{x}\,dB^{{\mathbb {Q}}}_{xt}\Bigr ],\\ dy_t&= \Bigl [-\zeta _{yx} x_t - \left\{ \kappa _y+ \zeta _{yy}\right\} y_t - {{\tilde{M}}}_y(t)\Bigr ]\,dt + \sigma _{y}\Bigl [\rho _{yS}\,dB^{{\mathbb {Q}}}_{St} + {\hat{\rho }}_{yH}\,dB^{{\mathbb {Q}}}_{Ht} + {\hat{\rho }}_{yL}\,dB^{{\mathbb {Q}}}_{Lt}\\&\quad + {\hat{\rho }}_{yx}\,dB^{{\mathbb {Q}}}_{xt}+ {\hat{\rho }}_{y}\,dB^{{\mathbb {Q}}}_{yt}\Bigr ], \end{aligned}$$

where I have introduced

$$\begin{aligned} {\tilde{\ell }}(t)&= \mu _L(t) - M_{LS}(t) {\tilde{\mu }}_{S}'(t) - M_{LH}(t) {\tilde{\mu }}_{H}'(t) - \sigma _{L}(t) {\hat{\rho }}_{L} \lambda _L(t), \\ \zeta _{Lx}(t)&= M_{LS}(t) {\tilde{\chi }}_S+ M_{LH}(t) {\tilde{\chi }}_{Hx}+ \sigma _{L}(t)\hat{\rho }_{L} \lambda _{L,2}, \\ \zeta _{Ly}(t)&= M_{LS}(t) {\tilde{\chi }}_{Sy} + M_{LH}(t) {\tilde{\chi }}_{Hy} + \sigma _{L}(t)\hat{\rho }_{L} \lambda _{L,3} - \chi _L ,\\ {{\tilde{M}}}_x(t)&= M_{xS} {\tilde{\mu }}_{S}'(t) + M_{xH} {\tilde{\mu }}_{H}'(t) + \sigma _{x} {\hat{\rho }}_{xL} \lambda _L(t) + \sigma _{x} {\hat{\rho }}_{x} \lambda _x(t), \\ \zeta _{xx}&= M_{xS} {\tilde{\chi }}_S + M_{xH} {\tilde{\chi }}_{Hx}+ \sigma _{x} \hat{\rho }_{xL} \lambda _{L,2} + \sigma _{x} \hat{\rho }_{x} \lambda _{x,2}, \\ \zeta _{xy}&= M_{xS} {\tilde{\chi }}_{Sy} + M_{xH} {\tilde{\chi }}_{Hy}+ \sigma _{x} \hat{\rho }_{xL} \lambda _{L,3} + \sigma _{x} \hat{\rho }_{x} \lambda _{x,3},\\ {{\tilde{M}}}_y(t)&= M_{yS} {\tilde{\mu }}_{S}'(t) + M_{yH} {\tilde{\mu }}_{H}'(t) + \sigma _{y} {\hat{\rho }}_{yL} \lambda _L(t) + \sigma _{y} {\hat{\rho }}_{yx} \lambda _x(t) + \sigma _{y} {\hat{\rho }}_{y} \lambda _y(t), \\ \zeta _{yx}&= M_{yS} {\tilde{\chi }}_S + M_{yH} {\tilde{\chi }}_{Hx} + \sigma _{y} \hat{\rho }_{yL} \lambda _{L,2}+ \sigma _{y} \hat{\rho }_{yx} \lambda _{x,2}+ \sigma _{y} \hat{\rho }_{y} \lambda _{y,2},\\ \zeta _{yy}&= M_{yS} {\tilde{\chi }}_{Sy} + M_{yH} {\tilde{\chi }}_{Hy}+ \sigma _{y} \hat{\rho }_{yL}\lambda _{L,3}+ \sigma _{y} \hat{\rho }_{yx}\lambda _{x,3}+ \sigma _{y} \hat{\rho }_{y}\lambda _{y,3}. \end{aligned}$$

To derive F, I first assume that either \(t< s < {{\tilde{T}}}\) or \({{\tilde{T}}}< t < s\) so that there is no one-time drop of income between t and s. I get

$$\begin{aligned} \mathrm{E}_t^{{\mathbb {Q}}}\left[ L_s \right] = L_t \exp \{{\tilde{\alpha }}(t,s) + \beta (t,s) x_t + \psi (t,s) y_t\} , \end{aligned}$$

where

$$\begin{aligned} \frac{\partial \beta }{\partial t} \left( t,s\right)= & {} \left[ \kappa _x+\zeta _{xx} \right] \beta \left( t,s\right) + \zeta _{yx} \psi \left( t,s\right) + \zeta _{Lx}(t) , \\ \frac{\partial \psi }{\partial t} \left( t,s\right)= & {} \zeta _{xy} \beta \left( t,s\right) + \left[ \kappa _y+ \zeta _{yy}\right] \psi \left( t,s\right) + \zeta _{Ly}(t) \, , \end{aligned}$$

with boundary condition \(\beta \left( s,s\right) = \psi \left( s,s\right) =0\). \({\tilde{\alpha }}(t,s)\) is determined by

$$\begin{aligned} {\tilde{\alpha }}(t,s)= & {} \int _t^s {\tilde{\ell }}(u) - \Bigl [ {{\tilde{M}}}_x(u) - \sigma _{xL}(u) \Bigr ] \beta \left( u,s\right) - \Bigl [ {{\tilde{M}}}_y(u) - \sigma _{yL}(u)\Bigr ] \psi \left( u,s\right) \\&+ \frac{1}{2} \sigma _x^2 \beta \left( u,s\right) ^2 + \frac{1}{2} \sigma _y^2 \psi \left( u,s\right) ^2 + \sigma _{xy} \beta \left( u,s\right) \psi \left( u,s\right) du, \end{aligned}$$

with boundary condition \(\alpha \left( s,s\right) =0\), where I denote \({\tilde{\alpha }}(t,s) = \alpha (t,s)+\int _t^s {{\tilde{r}}}(u) du \).
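Numerically, \(\beta \) and \(\psi \) solve a linear ODE system with constant coefficients, which can be integrated backward from the boundary condition at \(u=s\). A minimal sketch, with the \(\kappa \) and \(\zeta \) coefficients collected into a matrix A and a vector b supplied by the caller (any concrete values below are illustrative, not the paper's calibration):

```python
import numpy as np

# Backward RK4 integration of the linear ODE system for beta and psi,
#   d/dt (beta, psi) = A (beta, psi) + b,   beta(s,s) = psi(s,s) = 0,
# where A and b collect the kappa and zeta coefficients of the model.
def beta_psi(t, s, A, b, n_steps=1000):
    h = (s - t) / n_steps
    f = lambda v: A @ v + b               # right-hand side of the ODE system
    v = np.zeros(2)                       # terminal condition at u = s
    for _ in range(n_steps):
        # classical Runge-Kutta step of size -h (moving from s down to t)
        k1 = f(v)
        k2 = f(v - 0.5 * h * k1)
        k3 = f(v - 0.5 * h * k2)
        k4 = f(v - h * k3)
        v -= (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    return v                              # (beta(t,s), psi(t,s))
```

The remaining integral for \({\tilde{\alpha }}(t,s)\) can then be computed by quadrature on the same time grid. In the decoupled scalar case \(\partial \beta /\partial t = a\beta + c\), the exact solution \(\beta (t,s)=(c/a)(e^{-a(s-t)}-1)\) provides a check.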

Next, for \(t< {{\tilde{T}}} < s\) I get

$$\begin{aligned} \mathrm{E}_t^{{\mathbb {Q}}}\left[ L_s \right] = \Upsilon L_t \exp \{{\tilde{\alpha }}(t,s) + \beta (t,s) x_t + \psi (t,s) y_t\} . \end{aligned}$$

Now, the human capital is obtained by substituting the equations for \(\mathrm{E}_t^{{\mathbb {Q}}}[L_s]\) into the equations for human capital, which implies the statement in Corollary 3.2.

Proofs and derivations for Theorem 3.1

1.1 Proof of Theorem 3.1

1.1.1 Setting up the HJB equation

The wealth dynamics in the artificial market is similar to (6), but adjusted for the possibility of investing in the artificial assets with price dynamics (14)–(16) and for the modified \({{\tilde{r}}}\), \({\tilde{\mu }}_{S}'\), and \({\tilde{\mu }}_{H}'\):

$$\begin{aligned} dW_t&= W_t \Bigg \{ \Bigl [ {{\tilde{r}}}(t,x_t,y_t) + \Pi _{St} ({\tilde{\mu }}_{S}'(t,x_t,y_t) + \chi _{S} x_t) + \Pi _{Ht} ({\tilde{\mu }}_{H}'(t,x_t,y_t) + \chi _{Hx} x_t + \chi _{Hy} y_t) \nonumber \\&\quad + \Pi _{Lt} \lambda _L(t,x_t,y_t) + \Pi _{xt} \lambda _x(t,x_t,y_t) + \Pi _{yt} \lambda _y(t,x_t,y_t) \Bigr ]\,dt \nonumber \\&\quad + \left( \Pi _{St}\sigma _{S} + \Pi _{Ht}\sigma _{H}\rho _{HS}\right) \,dB_{St} \nonumber \\&\quad + \Pi _{Ht}\sigma _{H}{\hat{\rho }}_{H}\,dB_{Ht} +\Pi _{Lt}\,dB_{Lt} +\Pi _{xt}\,dB_{xt} +\Pi _{yt}\,dB_{yt} \Bigg \} + \Bigl [ L_t- c_t - \phi _{Ct} RH_t\Bigr ]\,dt \nonumber \\&= \Big [{{\tilde{r}}}(t,x_t,y_t) W_t + \alpha _t^{\scriptscriptstyle \varvec{\top }}\lambda _t + L_t- \phi _{Ct} RH_t - c_t \Big ]\,dt + \alpha _t^{\scriptscriptstyle \varvec{\top }}\Sigma \,dB_t, \end{aligned}$$
(50)

where

$$\begin{aligned} \alpha _t= & {} \begin{pmatrix} \alpha _{St}\\ \alpha _{Ht} \\ \alpha _{Lt} \\ \alpha _{xt} \\ \alpha _{yt} \end{pmatrix} = \begin{pmatrix} \Pi _{St}\sigma _{S}W_t \\ \Pi _{Ht}\sigma _{H} W_t \\ \Pi _{Lt} W_t\\ \Pi _{xt} W_t\\ \Pi _{yt} W_t \end{pmatrix}, \quad \lambda _t = \begin{pmatrix} \Big [{\tilde{\mu }}_{S}'(t,x_t,y_t)+\chi _S x_t\Big ]/\sigma _{S} \\ \Big [{\tilde{\mu }}_{H}'(t,x_t,y_t)+ \chi _{Hx} x_t+ \chi _{Hy} y_t\Big ]/\sigma _{H} \\ \lambda _L(t,x_t,y_t)\\ \lambda _x(t,x_t,y_t)\\ \lambda _y(t,x_t,y_t) \end{pmatrix}, \nonumber \\ B_t= & {} \begin{pmatrix} B_{St}\\ B_{Ht} \\ B_{Lt}\\ B_{xt}\\ B_{yt} \end{pmatrix}, \end{aligned}$$
(51)
$$\begin{aligned} \Sigma= & {} \begin{pmatrix} 1 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0\\ \rho _{HS} &{}\quad {\hat{\rho }}_{H} &{}\quad 0&{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 1&{}\quad 0 &{}\quad 0\\ 0 &{}\quad 0 &{}\quad 0&{}\quad 1 &{}\quad 0\\ 0 &{}\quad 0 &{}\quad 0&{}\quad 0 &{}\quad 1 \end{pmatrix}. \end{aligned}$$
(52)

Let \(Z=(H,L,x,y)^{\scriptscriptstyle \varvec{\top }}\) be the vector of state variables, which has the dynamics

$$\begin{aligned} dZ_t = \mu _{Z}(t,Z_t)\,dt + \Sigma _Z(t,Z_t)\,dB_t, \end{aligned}$$

where

$$\begin{aligned} \mu _{Z}(t,Z_t)= & {} \begin{pmatrix} H_t [ {{\tilde{r}}}(t,x_t,y_t) +{\tilde{\mu }}_{H}(t,x_t,y_t)+ \chi _{Hx} x_t+ \chi _{Hy} y_t]\\ L_t [\mu _{L}(t)+ \chi _L y_t] \\ -\kappa _x x_t \\ -\kappa _y y_t \end{pmatrix}, \end{aligned}$$
(53)
$$\begin{aligned} \Sigma _Z(Z_t)= & {} \begin{pmatrix} H_t\sigma _{H}\rho _{HS} &{} H_t \sigma _{H}{\hat{\rho }}_{H} &{} 0 &{} 0&{} 0 \\ L_t\sigma _{L}(t)\rho _{LS} &{} L_t\sigma _{L}(t){\hat{\rho }}_{LH} &{} L_t\sigma _{L}(t){\hat{\rho }}_{L}&{} 0&{} 0 \\ \sigma _{x}\rho _{xS} &{} \sigma _{x}{\hat{\rho }}_{xH} &{} \sigma _{x}{\hat{\rho }}_{xL} &{} \sigma _{x}{\hat{\rho }}_{x}&{} 0\\ \sigma _{y}\rho _{yS} &{} \sigma _{y}{\hat{\rho }}_{yH} &{} \sigma _{y}{\hat{\rho }}_{yL}&{} \sigma _{y}{\hat{\rho }}_{yx}&{} \sigma _{y}{\hat{\rho }}_{y} \end{pmatrix}. \end{aligned}$$
(54)

The Hamilton-Jacobi-Bellman equation (HJB) associated with the problem can be written as

$$\begin{aligned} \delta J = \mathcal {L}_1 J + \mathcal {L}_2 J + \mathcal {L}_3 J, \end{aligned}$$
(55)

where

$$\begin{aligned} \mathcal {L}_1 J&= \max _{c,\phi _C} \left\{ U(c,\phi _C) - J_W (c+ HR\phi _C) \right\} , \\ \mathcal {L}_2 J&= \max _\alpha \left\{ J_W\alpha ^{\scriptscriptstyle \varvec{\top }}\lambda + \frac{1}{2}J_{WW} \alpha ^{\scriptscriptstyle \varvec{\top }}\Sigma \Sigma ^{\scriptscriptstyle \varvec{\top }}\alpha + \alpha ^{\scriptscriptstyle \varvec{\top }}\Sigma \Sigma _Z^{\scriptscriptstyle \varvec{\top }}J_{WZ}\right\} , \\ \mathcal {L}_3 J&= \frac{\partial J}{\partial t} + J_W ({{\tilde{r}}}W+L) + J_Z^{\scriptscriptstyle \varvec{\top }}\mu _{Z} + \frac{1}{2}{\text {tr}}\left( J_{ZZ} \Sigma _Z \Sigma _Z ^{\scriptscriptstyle \varvec{\top }}\right) . \end{aligned}$$

Recall that \(J=J(t,W,H,L,x,y)=J(t,W,Z)\) so that

$$\begin{aligned} J_Z = \begin{pmatrix} J_H\\ J_L \\ J_x \\ J_y \end{pmatrix}, \quad J_{ZZ} = \begin{pmatrix} J_{HH} &{}\quad J_{HL} &{}\quad J_{Hx} &{}\quad J_{Hy} \\ J_{HL} &{}\quad J_{LL} &{}\quad J_{Lx} &{}\quad J_{Ly} \\ J_{Hx} &{}\quad J_{Lx} &{}\quad J_{xx} &{}\quad J_{xy} \\ J_{Hy} &{}\quad J_{Ly} &{}\quad J_{xy} &{}\quad J_{yy} \end{pmatrix}, \quad J_{WZ} = \begin{pmatrix} J_{WH}\\ J_{WL} \\ J_{Wx} \\ J_{Wy} \end{pmatrix}. \end{aligned}$$

First, consider \(\mathcal {L}_1J\). The first-order conditions are \(U_c(c^*,\phi _C^*)=J_W\) and \(U_\phi (c^*,\phi _C^*) = RH J_W\). These imply \(U_\phi (c^*,\phi _C^*)/U_c(c^*,\phi _C^*) = RH\) so that

$$\begin{aligned} \phi _C^*=c^*\left( \frac{aRH}{1-a}\right) ^{-1}. \end{aligned}$$

I substitute that relation into \(U_c=J_W\) and find

$$\begin{aligned} c^*=J_W^{-1/\gamma } a^{1/\gamma } \left( \frac{aRH}{1-a}\right) ^{k_1}, \end{aligned}$$
(56)

and hence

$$\begin{aligned} \phi _C^* = J_W^{-1/\gamma } a^{1/\gamma } \left( \frac{aRH}{1-a}\right) ^{k_1-1}, \end{aligned}$$
(57)

where \(k_1=(1-a)(\gamma -1)/\gamma \). These maximizers lead to

$$\begin{aligned} \mathcal {L}_1 J = \frac{\gamma }{1-\gamma } J_W^{\frac{\gamma -1}{\gamma } } a^{\frac{1-\gamma }{\gamma }} \left( \frac{aRH}{1-a}\right) ^{k_1}. \end{aligned}$$
(58)
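As a quick numerical sanity check of the maximizers (56)–(57), the sketch below assumes the Cobb-Douglas CRRA felicity \(U(c,\phi _C)=(c^a \phi _C^{1-a})^{1-\gamma }/(1-\gamma )\) consistent with the first-order conditions above, and verifies \(U_c=J_W\) and \(U_\phi =RHJ_W\) by finite differences; all parameter values are illustrative.

```python
import numpy as np

# Numerical sanity check of the maximizers (56)-(57), assuming the
# Cobb-Douglas CRRA felicity U(c, phi) = (c^a phi^(1-a))^(1-gamma)/(1-gamma)
# implied by the first-order conditions; all parameter values illustrative.
a, gamma, R, H, J_W = 0.8, 3.0, 0.05, 4.0, 0.7

def U(c, phi):
    return (c**a * phi**(1.0 - a))**(1.0 - gamma) / (1.0 - gamma)

k1 = (1.0 - a) * (gamma - 1.0) / gamma
c_star = J_W**(-1.0 / gamma) * a**(1.0 / gamma) * (a * R * H / (1.0 - a))**k1
phi_star = c_star * (a * R * H / (1.0 - a))**(-1.0)

# The first-order conditions U_c = J_W and U_phi = R*H*J_W,
# checked by central finite differences around (c*, phi_C*).
h = 1e-6
U_c = (U(c_star + h, phi_star) - U(c_star - h, phi_star)) / (2.0 * h)
U_phi = (U(c_star, phi_star + h) - U(c_star, phi_star - h)) / (2.0 * h)
```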

Next, consider \(\mathcal {L}_2J\). The first-order condition for \(\alpha \) reads \(J_W\lambda + J_{WW}\Sigma \Sigma ^{\scriptscriptstyle \varvec{\top }}\alpha + \Sigma \Sigma _Z^{\scriptscriptstyle \varvec{\top }}J_{WZ}=0\) or

$$\begin{aligned} \alpha= & {} -\frac{J_W}{J_{WW}}(\Sigma \Sigma ^{\scriptscriptstyle \varvec{\top }})^{-1}\lambda -\frac{1}{J_{WW}}(\Sigma \Sigma ^{\scriptscriptstyle \varvec{\top }})^{-1}\Sigma \Sigma _Z^{\scriptscriptstyle \varvec{\top }}J_{WZ} \nonumber \\= & {} -\frac{J_W}{J_{WW}}(\Sigma \Sigma ^{\scriptscriptstyle \varvec{\top }})^{-1}\lambda -\frac{1}{J_{WW}}(\Sigma _Z\Sigma ^{-1})^{\scriptscriptstyle \varvec{\top }}J_{WZ}. \end{aligned}$$
(59)

Substituting the optimal \(\alpha \) back into \(\mathcal {L}_2 J\) leads to

$$\begin{aligned} \mathcal {L}_2 J = -\frac{1}{2}\frac{J_W^2}{J_{WW}} \lambda ^{\scriptscriptstyle \varvec{\top }}(\Sigma \Sigma ^{\scriptscriptstyle \varvec{\top }})^{-1}\lambda -\frac{J_W}{J_{WW}} J_{WZ}^{\scriptscriptstyle \varvec{\top }}\Sigma _Z\Sigma ^{-1}\lambda -\frac{1}{2}\frac{1}{J_{WW}}J_{WZ}^{\scriptscriptstyle \varvec{\top }}\Sigma _Z\Sigma _Z^{\scriptscriptstyle \varvec{\top }}J_{WZ}. \end{aligned}$$
(60)

The matrix products are

$$\begin{aligned} \Sigma \Sigma ^{\scriptscriptstyle \varvec{\top }}= & {} \begin{pmatrix} 1 &{}\quad \rho _{HS} &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ \rho _{HS} &{}\quad 1 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 1&{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0&{}\quad 1 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0&{}\quad 0 &{}\quad 1 \end{pmatrix} \quad \\ \quad \Rightarrow (\Sigma \Sigma ^{\scriptscriptstyle \varvec{\top }})^{-1}= & {} \frac{1}{{\hat{\rho }}_{H}^2} \begin{pmatrix} 1 &{}\quad -\rho _{HS} &{}\quad 0 &{}\quad 0 &{}\quad 0\\ -\rho _{HS} &{}\quad 1 &{}\quad 0 &{}\quad 0 &{}\quad 0\\ 0 &{}\quad 0 &{}\quad {\hat{\rho }}_{H}^2 &{}\quad 0 &{}\quad 0\\ 0 &{}\quad 0 &{}\quad 0 &{}\quad {\hat{\rho }}_{H}^2 &{}\quad 0\\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad {\hat{\rho }}_{H}^2 \end{pmatrix}, \\ \Sigma _Z \Sigma ^{-1}= & {} \begin{pmatrix} 0 &{}\quad H \sigma _{H} &{}\quad 0 &{}\quad 0 &{}\quad 0\\ L M_{LS} \sigma _{S} &{}\quad L M_{LH} \sigma _{H} &{}\quad L {\hat{\rho }}_{L} \sigma _{L} &{}\quad 0 &{}\quad 0 \\ M_{xS}\sigma _{S} &{}\quad M_{xH}\sigma _{H} &{}\quad {\hat{\rho }}_{xL}\sigma _{x} &{}\quad {\hat{\rho }}_{x}\sigma _{x} &{}\quad 0\\ M_{yS}\sigma _{S} &{}\quad M_{yH}\sigma _{H} &{}\quad {\hat{\rho }}_{yL}\sigma _{y} &{}\quad {\hat{\rho }}_{yx}\sigma _{y} &{}\quad {\hat{\rho }}_{y}\sigma _{y} \end{pmatrix}, \\ \Sigma _Z \Sigma _Z^{\scriptscriptstyle \varvec{\top }}= & {} \begin{pmatrix} H^2\sigma _H^2 &{}\quad HL\sigma _{HL} &{}\quad H\sigma _{Hx}&{}\quad H\sigma _{Hy}\\ HL\sigma _{HL} &{}\quad L^2\sigma _{L}^2&{}\quad L\sigma _{Lx}&{}\quad L\sigma _{Ly}\\ H\sigma _{Hx} &{}\quad L\sigma _{Lx}&{}\quad \sigma _{x}^2&{}\quad \sigma _{xy}\\ H\sigma _{Hy} &{}\quad L\sigma _{Ly}&{}\quad \sigma _{xy}&{}\quad \sigma _{y}^2 \end{pmatrix}, \end{aligned}$$

where I have applied the covariance notation \(\sigma _{ab} = \rho _{ab}\sigma _{a}\sigma _{b}\) and constants defined in “Appendix A.1”.

Substitution of these matrix products into (59) gives

$$\begin{aligned} \alpha _S&= - \frac{J_W}{J_{WW}} \frac{1}{\sigma _{S}{\hat{\rho }}_{H}^2} \Big [ {\tilde{\mu }}_{S}'+ \chi _S x - \frac{\rho _{HS}\sigma _{S}}{\sigma _{H}}\left( {\tilde{\mu }}_{H}'+\chi _{Hx} x +\chi _{Hy} y\right) \Big ] - \frac{J_{WL}}{J_{WW}} LM_{LS} \sigma _{S} \nonumber \\&\quad - \frac{J_{Wx}}{J_{WW}} M_{xS}\sigma _{S} - \frac{J_{Wy}}{J_{WW}} M_{yS}\sigma _{S}, \end{aligned}$$
(61)
$$\begin{aligned} \alpha _H&= - \frac{J_W}{J_{WW}} \frac{1}{\sigma _{H}{\hat{\rho }}_{H}^2} \Big [ {\tilde{\mu }}_{H}'+\chi _{Hx} x +\chi _{Hy} y - \frac{\rho _{HS}\sigma _{H}}{\sigma _{S}} \left( {\tilde{\mu }}_{S}'+ \chi _S x\right) \Big ] - \frac{J_{WH}}{J_{WW}} H \sigma _{H}\nonumber \\&\quad - \frac{J_{WL}}{J_{WW}} LM_{LH} \sigma _{H} - \frac{J_{Wx}}{J_{WW}} M_{xH}\sigma _{H} - \frac{J_{Wy}}{J_{WW}} M_{yH}\sigma _{H}, \end{aligned}$$
(62)
$$\begin{aligned} \alpha _L&= - \frac{J_W}{J_{WW}} \lambda _L - \frac{J_{WL}}{J_{WW}} L {\hat{\rho }}_{L}\sigma _{L} - \frac{J_{Wx}}{J_{WW}} {\hat{\rho }}_{xL}\sigma _{x} - \frac{J_{Wy}}{J_{WW}} {\hat{\rho }}_{yL}\sigma _{y}, \end{aligned}$$
(63)
$$\begin{aligned} \alpha _x&= - \frac{J_W}{J_{WW}} \lambda _x - \frac{J_{Wx}}{J_{WW}} {\hat{\rho }}_{x}\sigma _{x} - \frac{J_{Wy}}{J_{WW}} {\hat{\rho }}_{yx}\sigma _{y}, \end{aligned}$$
(64)
$$\begin{aligned} \alpha _y&= - \frac{J_W}{J_{WW}} \lambda _y - \frac{J_{Wy}}{J_{WW}} {\hat{\rho }}_{y}\sigma _{y}, \end{aligned}$$
(65)

where I omit the dependence on \(t\), \(x\), and \(y\).

1.1.2 Conjecture of the solution to the HJB equation

I conjecture

$$\begin{aligned} J(t,W,H,L,x,y) = \frac{1}{1-\gamma } G(t,H,x,y)^\gamma \left( W+L F(t,x,y)\right) ^{1-\gamma }. \end{aligned}$$
(66)

Now, I rewrite the terms \(\mathcal {L}_1 J\), \(\mathcal {L}_2 J\), \(\mathcal {L}_3 J\) exploiting the conjecture for J. First, since

$$\begin{aligned} J_W^{\tfrac{\gamma -1}{\gamma }} = J_W J_W^{-1/\gamma } = \frac{(1-\gamma ) J}{W+LF} \left\{ G^\gamma [W+LF]^{-\gamma } \right\} ^{-1/\gamma } = (1-\gamma ) J \frac{1}{G}, \end{aligned}$$

I get from (58) that

$$\begin{aligned} \mathcal {L}_1 J = \gamma J \frac{1}{G} a^{\frac{1-\gamma }{\gamma }} \left( \frac{aRH}{1-a}\right) ^{k_1}. \end{aligned}$$

Next, I have from (60) that

$$\begin{aligned} \mathcal {L}_2 J&= \mathcal {L}_{2,1}J + \mathcal {L}_{2,2}J + \mathcal {L}_{2,3}J, \end{aligned}$$

where

$$\begin{aligned} \mathcal {L}_{2,1}J&= -\frac{1}{2}\frac{J_W^2}{J_{WW}} \lambda ^{\scriptscriptstyle \varvec{\top }}(\Sigma \Sigma ^{\scriptscriptstyle \varvec{\top }})^{-1}\lambda = \frac{1-\gamma }{2\gamma } J \lambda ^{\scriptscriptstyle \varvec{\top }}(\Sigma \Sigma ^{\scriptscriptstyle \varvec{\top }})^{-1}\lambda \\&= \frac{1-\gamma }{2\gamma } J \Biggl \{ \left( \frac{{\tilde{\mu }}_{S}'+\chi _S x}{\sigma _{S}{\hat{\rho }}_{H}}\right) ^2 + \left( \frac{{\tilde{\mu }}_{H}'+\chi _{Hx} x+\chi _{Hy} y}{\sigma _{H}{\hat{\rho }}_{H}}\right) ^2 \\&\quad - 2\rho _{HS} \frac{{\tilde{\mu }}_{S}'+\chi _S x}{\sigma _{S}{\hat{\rho }}_{H}} \, \frac{{\tilde{\mu }}_{H}'+\chi _{Hx} x+\chi _{Hy} y}{\sigma _{H}{\hat{\rho }}_{H}} + \lambda _L^2 + \lambda _x^2 + \lambda _y^2 \Biggr \}, \\ \mathcal {L}_{2,2}J&= -\frac{J_W}{J_{WW}} J_{WZ}^{\scriptscriptstyle \varvec{\top }}\Sigma _Z\Sigma ^{-1}\lambda = \frac{1}{\gamma }(W+LF) J_{WZ}^{\scriptscriptstyle \varvec{\top }}\Sigma _Z\Sigma ^{-1}\lambda \\&= (1-\gamma ) J \Biggl \{ \frac{HG_H}{G}({\tilde{\mu }}_{H}'+\chi _{Hx}x+\chi _{Hy}y) \\&\quad + \frac{G_x}{G} \Bigl ( - \left( M_{xS} \chi _S + M_{xH} \chi _{Hx}\right) x - M_{xH} \chi _{Hy} y - M_{xS} {\tilde{\mu }}_{S}'- M_{xH} {\tilde{\mu }}_{H}'\\&\quad - \sigma _{x} \left( {\hat{\rho }}_{xL} \lambda _L+ {\hat{\rho }}_{x} \lambda _x\right) \Bigr ) \\&\quad + \frac{G_y}{G} \Bigl ( - \left( M_{yS}\chi _S + M_{yH}\chi _{Hx}\right) x- M_{yH} \chi _{Hy} y - M_{yS}{\tilde{\mu }}_{S}'- M_{yH}{\tilde{\mu }}_{H}' \\&\quad -\sigma _{y} \left( {\hat{\rho }}_{yL} \lambda _L+ {\hat{\rho }}_{yx} \lambda _x+ {\hat{\rho }}_{y} \lambda _y \right) \Bigr ) \\&\quad - \frac{L}{W+LF} \bigg [ F \Bigl (- \left( M_{LS}\chi _S + M_{LH}\chi _{Hx}\right) x + \left( \chi _{L}- M_{LH}\chi _{Hy}\right) y + \mu _L- {{\tilde{r}}} \\&\quad - M_{LS}{\tilde{\mu }}_{S}' + M_{LH} {\tilde{\mu }}_{H}' - \sigma _{L} {\hat{\rho }}_{L} \lambda _L \Bigr ) \\&\quad + F_x \Bigl ( - \left( M_{xS} \chi _S + M_{xH} \chi 
_{Hx}\right) x- M_{xH} \chi _{Hy} y - M_{xS} {\tilde{\mu }}_{S}' - M_{xH} {\tilde{\mu }}_{H}'\\&\quad - \sigma _{x} \left( {\hat{\rho }}_{xL} \lambda _L + {\hat{\rho }}_{x} \lambda _x\right) \Bigr ) \\&\quad + F_y \Bigl ( - \left( M_{yS} \chi _S + M_{yH} \chi _{Hx}\right) x- M_{yH} \chi _{Hy} y - M_{yS}{\tilde{\mu }}_{S}' - M_{yH}{\tilde{\mu }}_{H}' \\&\quad -\sigma _{y} \left( {\hat{\rho }}_{yL} \lambda _L + {\hat{\rho }}_{yx} \lambda _x + {\hat{\rho }}_{y} \lambda _y\right) \Bigr ) \bigg ] \Biggr \}, \end{aligned}$$

and

$$\begin{aligned} \mathcal {L}_{2,3}J&= -\frac{1}{2}\frac{1}{J_{WW}}J_{WZ}^{\scriptscriptstyle \varvec{\top }}\Sigma _Z\Sigma _Z^{\scriptscriptstyle \varvec{\top }}J_{WZ}\\&= \gamma (1-\gamma ) J \Biggl \{ \frac{1}{2}\sigma _{H}^2 \frac{H^2 G_H^2}{G^2} + \frac{1}{2}\sigma _{x}^2 \frac{G_x^2}{G^2} + \frac{1}{2}\sigma _{y}^2 \frac{G_y^2}{G^2} \\&\quad + \sigma _{Hx} \frac{H G_H G_x}{G^2} + \sigma _{Hy} \frac{H G_H G_y}{G^2} + \sigma _{xy} \frac{G_xG_y}{G^2} \\&\quad - \frac{L}{W+LF} \Bigg [ \frac{H G_H}{G} \left( \sigma _{HL} F + \sigma _{Hx} F_x + \sigma _{Hy} F_y \right) + \frac{G_x}{G} \left( \sigma _{Lx} F + \sigma _{x}^2 F_x + \sigma _{xy} F_y \right) \\&\quad + \frac{G_y}{G} \left( \sigma _{Ly} F + \sigma _{xy} F_x + \sigma _{y}^2 F_y \right) \Bigg ] \\&\quad +\frac{L^2}{(W+LF)^2} \Bigg [ \frac{1}{2}\sigma _{L}^2 F^2 + \frac{1}{2}\sigma _{x}^2 F_x^2 + \frac{1}{2}\sigma _{y}^2 F_y^2 + \sigma _{Lx} F F_x + \sigma _{Ly} F F_y + \sigma _{xy} F_x F_y \Bigg ] \Biggr \}. \end{aligned}$$

Finally, I can rewrite \(\mathcal {L}_3 J\) as

$$\begin{aligned} \mathcal {L}_3 J&= \mathcal {L}_{3,1}J + \mathcal {L}_{3,2}J, \end{aligned}$$

where

$$\begin{aligned} \mathcal {L}_{3,1}J&= \frac{\partial J}{\partial t} + J_W ({{\tilde{r}}} W+L) + J_Z^{\scriptscriptstyle \varvec{\top }}\mu _{Z} \\&=(1-\gamma ) J \Biggl \{{{\tilde{r}}}+ \frac{\gamma }{1-\gamma } \Bigg [ \frac{1}{G}\frac{\partial G}{\partial t} \\&\quad + \frac{HG_H}{G}\left( {{\tilde{r}}}+{\tilde{\mu }}_{H}+\chi _{Hx}x+\chi _{Hy}y\right) - \kappa _x x \frac{G_x}{G} - \kappa _y y \frac{G_y}{G} \Bigg ] \\&\quad + \frac{L}{W+LF} \Bigg [ \frac{\partial F}{\partial t} +1 + \left( \mu _{L}+ \chi _L y-{{\tilde{r}}}\right) F - \kappa _x x F_x - \kappa _y y F_y \Bigg ] \Biggr \}, \end{aligned}$$

and

$$\begin{aligned} \mathcal {L}_{3,2}J&= \frac{1}{2}{\text {tr}}\left( J_{ZZ} \Sigma _Z \Sigma _Z ^{\scriptscriptstyle \varvec{\top }}\right) \\&= (1-\gamma ) J \Bigg \{ \frac{\gamma }{1-\gamma } \Bigg [ \frac{1}{2}\sigma _{H}^2 H^2 \frac{G_{HH}}{G} +\frac{1}{2}\sigma _{x}^2 \frac{G_{xx}}{G} + \frac{1}{2}\sigma _{y}^2 \frac{G_{yy}}{G} -\frac{1-\gamma }{2} \sigma _{H}^2 H^2 \frac{G_H^2}{G^2}\\&\quad -\frac{1-\gamma }{2} \sigma _{x}^2 \frac{G_x^2}{G^2} -\frac{1-\gamma }{2} \sigma _{y}^2 \frac{G_y^2}{G^2} + \sigma _{Hx} H \frac{G_{Hx}}{G} + \sigma _{Hy} H \frac{G_{Hy}}{G} + \sigma _{xy} \frac{G_{xy}}{G} \\&\quad -(1-\gamma ) \sigma _{Hx}H \frac{G_H G_x}{G^2} -(1-\gamma )\sigma _{Hy}H \frac{G_H G_y}{G^2} -(1-\gamma ) \sigma _{xy} \frac{G_x G_y}{G^2} \Bigg ] \\&\quad +\frac{L}{W+LF} \Bigg [ \frac{1}{2}\sigma _{x}^2 F_{xx} + \frac{1}{2}\sigma _{y}^2 F_{yy} + \sigma _{xy} F_{xy} + \sigma _{Lx} F_x + \sigma _{Ly} F_y \\&\quad +\gamma \bigg [ \frac{HG_H}{G} \left( \sigma _{HL}F + \sigma _{Hx}F_x + \sigma _{Hy}F_y \right) + \frac{G_x}{G} \left( \sigma _{Lx} F + \sigma _{x}^2 F_x + \sigma _{xy} F_y \right) \\&\quad + \frac{G_y}{G} \left( \sigma _{Ly} F + \sigma _{xy} F_x + \sigma _{y}^2 F_y \right) \bigg ] \Bigg ]\\&\quad -\frac{L^2}{(W+LF)^2} \gamma \bigg [ \frac{1}{2}\sigma _{L}^2 F^2 + \frac{1}{2}\sigma _{x}^2 F_x^2 + \frac{1}{2}\sigma _{y}^2 F_y^2 + \sigma _{Lx} F F_x + \sigma _{Ly} F F_y + \sigma _{xy} F_x F_y \bigg ] \Bigg \}. \end{aligned}$$

Adding \(\mathcal {L}_{2,3}J\) and \(\mathcal {L}_{3,2}J\), numerous terms cancel, so that

$$\begin{aligned} \mathcal {L}_{2,3}J + \mathcal {L}_{3,2}J&= (1-\gamma ) J \Bigg \{ \frac{\gamma }{1-\gamma } \Bigg [ \frac{1}{2}\sigma _{H}^2 H^2 \frac{G_{HH}}{G} +\frac{1}{2}\sigma _{x}^2 \frac{G_{xx}}{G} + \frac{1}{2}\sigma _{y}^2 \frac{G_{yy}}{G} \\&\quad + \sigma _{Hx} H \frac{G_{Hx}}{G} + \sigma _{Hy} H \frac{G_{Hy}}{G} + \sigma _{xy} \frac{G_{xy}}{G} \Bigg ] \\&\quad +\frac{L}{W+LF} \Bigg [ \frac{1}{2}\sigma _{x}^2 F_{xx} + \frac{1}{2}\sigma _{y}^2 F_{yy} + \sigma _{xy} F_{xy} +\sigma _{Lx} F_x + \sigma _{Ly} F_y \Bigg ] \Bigg \}. \end{aligned}$$

If I further add \(\mathcal {L}_{2,2}J\) and \(\mathcal {L}_{3,1}J\) to this, all the terms multiplying \(L/(W+LF)\) cancel because F satisfies the PDE (18). In sum, I get

$$\begin{aligned} \delta J&= \mathcal {L}_1 J + \mathcal {L}_{2,1} J + \mathcal {L}_{2,2}J + \mathcal {L}_{2,3}J + \mathcal {L}_{3,1} J + \mathcal {L}_{3,2}J \\&= \gamma J \frac{1}{G} a^{\frac{1-\gamma }{\gamma }} \left( \frac{aRH}{1-a}\right) ^{k_1} \\&\quad +(1-\gamma ) J \frac{1}{G} \Bigg \{ \frac{1}{2\gamma } \bigg [ \left( \frac{{\tilde{\mu }}_{S}'+\chi _S x}{\sigma _{S}{\hat{\rho }}_{H}}\right) ^2 + \left( \frac{{\tilde{\mu }}_{H}'+\chi _{Hx}x+\chi _{Hy}y}{\sigma _{H}{\hat{\rho }}_{H}}\right) ^2\\&\quad - 2\rho _{HS} \frac{{\tilde{\mu }}_{S}'+\chi _S x}{\sigma _{S}{\hat{\rho }}_{H}} \, \frac{{\tilde{\mu }}_{H}'+\chi _{Hx}x+\chi _{Hy}y}{\sigma _{H}{\hat{\rho }}_{H}} \\&\quad + \lambda _L^2+ \lambda _x^2+ \lambda _y^2 \bigg ] +{{\tilde{r}}} + HG_H \left( {\tilde{\mu }}_{H}'+ \chi _{Hx}x+ \chi _{Hy}y\right) \\&\quad - G_x \Bigl (\left( M_{xS} \chi _S + M_{xH} \chi _{Hx}\right) x + M_{xH} \chi _{Hy} y +M_{xS} {\tilde{\mu }}_{S}'+ M_{xH} {\tilde{\mu }}_{H}'+ \sigma _{x} \left( {\hat{\rho }}_{xL} \lambda _L + {\hat{\rho }}_{x} \lambda _x\right) \Bigr ) \\&\quad - G_y \Bigl ( \left( M_{yS}\chi _S + M_{yH}\chi _{Hx}\right) x + M_{yH} \chi _{Hy} y + M_{yS}{\tilde{\mu }}_{S}'+ M_{yH}{\tilde{\mu }}_{H}'\\&\quad +\sigma _{y} \left( {\hat{\rho }}_{yL} \lambda _L+ {\hat{\rho }}_{yx} \lambda _x + {\hat{\rho }}_{y} \lambda _y\right) \Bigr )\\&\quad + \frac{\gamma }{1-\gamma } \bigg [ \frac{1}{2}\sigma _{H}^2H^2 G_{HH} + \frac{1}{2}\sigma _{x}^2 G_{xx} + \frac{1}{2}\sigma _{y}^2 G_{yy} + \sigma _{Hx} H G_{Hx} + \sigma _{Hy} H G_{Hy} + \sigma _{xy} G_{xy} \\&\quad + \frac{\partial G}{\partial t} + HG_H\left( {{\tilde{r}}}+{\tilde{\mu }}_{H}+\chi _{Hx}x+\chi _{Hy}y\right) - \kappa _x x G_x - \kappa _y y G_y \bigg ] \bigg \}. \end{aligned}$$

Therefore, it follows that the HJB equation is satisfied if the G function solves the PDE

$$\begin{aligned} 0&= k_2 H^{k_1} + \frac{\partial G}{\partial t} + \frac{1}{2}\sigma _{H}^2 H^2 G_{HH} + \frac{1}{2}\sigma _{x}^2 G_{xx} + \frac{1}{2}\sigma _{y}^2 G_{yy} \nonumber \\&\quad +\, \sigma _{Hx} H G_{Hx} + \sigma _{Hy} H G_{Hy} + \sigma _{xy} G_{xy} \nonumber \\&\quad +\, H G_H \left( {{\tilde{r}}} - \tfrac{\gamma -1}{\gamma }[R-m] + \frac{1}{\gamma }\left( {\tilde{\mu }}_{H}+\chi _{Hx}x+\chi _{Hy}y\right) \right) \nonumber \nonumber \\&\quad +\,\frac{\gamma -1}{\gamma } G_x \Bigl ( \left( \kappa _x + M_{xS} \chi _S +\, M_{xH} \chi _{Hx}\right) x \nonumber \\&\quad +\, M_{xH} \chi _{Hy} y + M_{xS} {\tilde{\mu }}_{S}' + M_{xH} {\tilde{\mu }}_{H}' + \sigma _{x} \left( {\hat{\rho }}_{xL} \lambda _L + {\hat{\rho }}_{x} \lambda _x\right) \Bigr ) \nonumber \\&\quad +\,\frac{\gamma -1}{\gamma } G_y\Bigl ( \left( M_{yS}\chi _S + M_{yH}\chi _{Hx}\right) x + \left( \kappa _y + M_{yH} \chi _{Hy}\right) y + M_{yS}{\tilde{\mu }}_{S}'+ M_{yH}{\tilde{\mu }}_{H}' \nonumber \\&\quad +\,\sigma _{y} \left( {\hat{\rho }}_{yL} \lambda _L + {\hat{\rho }}_{yx} \lambda _x+ {\hat{\rho }}_{y} \lambda _y\right) \Bigr ) \nonumber \\&\quad -\, G\Biggl ( \frac{\gamma -1}{\gamma }{{\tilde{r}}}+ \frac{\delta }{\gamma } + \frac{\gamma -1}{2 \gamma ^2} \bigg [ \left( \frac{{\tilde{\mu }}_{S}'+\chi _S x}{\sigma _{S}{\hat{\rho }}_{H}}\right) ^2 \nonumber \\&\quad +\, \left( \frac{{\tilde{\mu }}_{H}'+\chi _{Hx}x+\chi _{Hy}y}{\sigma _{H}{\hat{\rho }}_{H}}\right) ^2 - 2\rho _{HS} \frac{{\tilde{\mu }}_{S}'+\chi _S x}{\sigma _{S}{\hat{\rho }}_{H}} \, \frac{{\tilde{\mu }}_{H}'+\chi _{Hx}x+\chi _{Hy}y}{\sigma _{H}{\hat{\rho }}_{H}} \nonumber \\&\quad +\, \lambda _L^2+ \lambda _x^2+ \lambda _y^2 \bigg ] \Biggr ). \end{aligned}$$
(67)

Coming back to the optimal investment strategy, first note that

$$\begin{aligned} - \frac{J_W}{J_{WW}}&= \frac{1}{\gamma } (W+LF),&\qquad - \frac{J_{Wx}}{J_{WW}}&= \frac{G_x}{G}(W+LF) - LF_x, \\ - \frac{J_{WL}}{J_{WW}}&= - F,&\qquad - \frac{J_{Wy}}{J_{WW}}&= \frac{G_y}{G}(W+LF) - LF_y, \\ - \frac{J_{WH}}{J_{WW}}&= \frac{G_H}{G} (W+LF). \end{aligned}$$
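
These ratios are easy to sanity-check numerically. The sketch below verifies the first identity, \(-J_W/J_{WW}=(W+LF)/\gamma \), for the conjectured form (66) with \(G\), \(F\), and \(L\) frozen at arbitrary illustrative constants; it is a quick finite-difference check for the reader, not part of the solution method itself.

```python
# Finite-difference check of -J_W/J_WW = (W + L*F)/gamma for the
# conjecture J = G^gamma * (W + L*F)^(1-gamma) / (1-gamma).
# G, F, L, gamma are frozen at arbitrary illustrative values.

gamma, G, F, L = 4.0, 1.3, 11.0, 0.8

def J(W):
    return G**gamma * (W + L * F)**(1.0 - gamma) / (1.0 - gamma)

def check_ratio(W, h=1e-5):
    J_W = (J(W + h) - J(W - h)) / (2.0 * h)           # first derivative
    J_WW = (J(W + h) - 2.0 * J(W) + J(W - h)) / h**2  # second derivative
    return -J_W / J_WW

W = 5.0
print(check_ratio(W), (W + L * F) / gamma)  # both ≈ 3.45
```

The same finite-difference pattern verifies the remaining ratios once \(G\) and \(F\) are treated as functions of the corresponding state variable.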

By substituting these expressions into (61)-(65), I obtain

$$\begin{aligned} \alpha _S&= \frac{1}{\gamma } (W+LF) \frac{1}{\sigma _{S}{\hat{\rho }}_{H}^2} \left( {\tilde{\mu }}_{S}'+\chi _S x - \frac{\rho _{HS}\sigma _{S}}{\sigma _{H}}[{\tilde{\mu }}_{H}'+\chi _{Hx}x+\chi _{Hy}y]\right) - F LM_{LS} \sigma _{S} \nonumber \\&\quad +\left[ \frac{G_x}{G}(W+LF) - LF_x\right] M_{xS}\sigma _{S} +\left[ \frac{G_y}{G}(W+LF) - LF_y\right] M_{yS}\sigma _{S}, \end{aligned}$$
(68)
$$\begin{aligned} \alpha _H&= \frac{1}{\gamma } (W+LF) \frac{1}{\sigma _{H}{\hat{\rho }}_{H}^2} \left( {\tilde{\mu }}_{H}'+\chi _{Hx}x+\chi _{Hy}y - \frac{\rho _{HS}\sigma _{H}}{\sigma _{S}} [{\tilde{\mu }}_{S}' + \chi _Sx]\right) \nonumber \\&\quad +\frac{G_H}{G} (W+LF) H \sigma _{H}\nonumber \\&\quad - F LM_{LH} \sigma _{H} +\left[ \frac{G_x}{G}(W+LF) - LF_x\right] M_{xH}\sigma _{H} \nonumber \\&\quad +\left[ \frac{G_y}{G}(W+LF) - LF_y\right] M_{yH}\sigma _{H} , \end{aligned}$$
(69)
$$\begin{aligned} \alpha _L&= \frac{1}{\gamma } (W+LF) \lambda _L - F L {\hat{\rho }}_{L}\sigma _{L} +\left[ \frac{G_x}{G}(W+LF) - LF_x\right] {\hat{\rho }}_{xL}\sigma _{x} \nonumber \\&\quad +\left[ \frac{G_y}{G}(W+LF) - LF_y\right] {\hat{\rho }}_{yL}\sigma _{y}, \end{aligned}$$
(70)
$$\begin{aligned} \alpha _x&= \frac{1}{\gamma } (W+LF) \lambda _x +\left[ \frac{G_x}{G}(W+LF) - LF_x\right] {\hat{\rho }}_{x}\sigma _{x} +\left[ \frac{G_y}{G}(W+LF) - LF_y\right] {\hat{\rho }}_{yx}\sigma _{y}, \end{aligned}$$
(71)
$$\begin{aligned} \alpha _y&= \frac{1}{\gamma } (W+LF) \lambda _y +\left[ \frac{G_y}{G}(W+LF) - LF_y\right] {\hat{\rho }}_{y}\sigma _{y}. \end{aligned}$$
(72)

Now (31)–(35) in the theorem follow easily since \(\Pi _S=\alpha _S/(\sigma _{S}W)\), \(\Pi _H = \alpha _H/(\sigma _{H}W)\), \(\Pi _L = \alpha _L/W\), \(\Pi _x = \alpha _x/W\), and \(\Pi _y = \alpha _y/W\).
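
The rescaling from the exposures \(\alpha \) to the portfolio weights \(\Pi \) can be sketched as follows; all numerical inputs are arbitrary placeholders, not calibrated model values.

```python
# Rescale the optimal exposures alpha into portfolio weights Pi, using
# Pi_S = alpha_S/(sigma_S*W), Pi_H = alpha_H/(sigma_H*W), and
# Pi_L = alpha_L/W, Pi_x = alpha_x/W, Pi_y = alpha_y/W.
# All numbers below are arbitrary placeholders.

def exposures_to_weights(alpha, W, sigma_S, sigma_H):
    a_S, a_H, a_L, a_x, a_y = alpha
    return (a_S / (sigma_S * W), a_H / (sigma_H * W),
            a_L / W, a_x / W, a_y / W)

weights = exposures_to_weights((0.12, 0.30, 0.05, 0.02, 0.01),
                               W=1.0, sigma_S=0.2, sigma_H=0.12)
print(weights)  # ≈ (0.6, 2.5, 0.05, 0.02, 0.01)
```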

1.1.3 Solving for the G function

I rewrite (67) as

$$\begin{aligned} 0= & {} k_2 H^{k_1} + \frac{\partial G}{\partial t}+ \frac{1}{2}\sigma _H^2 H^2 G_{HH}+ \frac{1}{2}\sigma _x^2 G_{xx} + \frac{1}{2}\sigma _y^2 G_{yy} \nonumber \\&+\, \sigma _{Hx} H G_{Hx} + \sigma _{Hy} H G_{Hy} + \sigma _{xy} G_{xy} \nonumber \\&+\, H G_H \bar{\mu }_H(t,x,y) + G_x \bar{\mu }_x(t,x,y) + G_y \bar{\mu }_y(t,x,y) - {\bar{r}}_G(t,x,y) G, \end{aligned}$$
(73)

with terminal condition \(G(T,H,x,y)= \varepsilon ^{\frac{1}{\gamma }}\), where

$$\begin{aligned} - {\bar{r}}_G(t,x,y)= & {} \frac{\gamma -1}{\gamma }{{\tilde{r}}}(t,x,y) + \frac{\delta }{\gamma } + \frac{\gamma -1}{2 \gamma ^2} \bigg [ \left( \frac{{\tilde{\mu }}_{S}'(t,x,y)+\chi _S x}{\sigma _{S}{\hat{\rho }}_{H}}\right) ^2 \\&+ \left( \frac{{\tilde{\mu }}_{H}'(t,x,y)+\chi _{Hx}x+\chi _{Hy}y}{\sigma _{H}{\hat{\rho }}_{H}}\right) ^2\\&- 2\rho _{HS} \frac{{\tilde{\mu }}_{S}'(t,x,y)+\chi _S x}{\sigma _{S}{\hat{\rho }}_{H}} \, \frac{{\tilde{\mu }}_{H}'(t,x,y)+\chi _{Hx}x+\chi _{Hy}y}{\sigma _{H}{\hat{\rho }}_{H}}\\&+ \lambda _L(t,x,y)^2+ \lambda _x(t,x,y)^2+ \lambda _y(t,x,y)^2 \bigg ] \, ,\\ \bar{\mu }_H(t,x,y)= & {} {{\tilde{r}}}(t,x,y) - \tfrac{\gamma -1}{\gamma }[R-m] + \frac{1}{\gamma }\left( {\tilde{\mu }}_{H}(t,x,y)+\chi _{Hx}x+\chi _{Hy}y\right) ,\\ \bar{\mu }_x(t,x,y)= & {} - \left( \kappa _x + M_{xS} \chi _S + M_{xH} \chi _{Hx}\right) x - M_{xH} \chi _{Hy} y - M_{xS} {\tilde{\mu }}_{S}'(t,x,y) \\&- M_{xH} {\tilde{\mu }}_{H}'(t,x,y) \\&- \sigma _{x} \left( {\hat{\rho }}_{xL} \lambda _L(t,x,y) + {\hat{\rho }}_{x} \lambda _x(t,x,y)\right) \, ,\\ \bar{\mu }_y(t,x,y)= & {} - \left( M_{yS}\chi _S + M_{yH}\chi _{Hx}\right) x - \left( \kappa _y + M_{yH} \chi _{Hy}\right) y - M_{yS}{\tilde{\mu }}_{S}'(t,x,y) \\&- M_{yH}{\tilde{\mu }}_{H}'(t,x,y) \\&- \sigma _{y} \left( {\hat{\rho }}_{yL} \lambda _L(t,x,y) + {\hat{\rho }}_{yx} \lambda _x(t,x,y) + {\hat{\rho }}_{y} \lambda _y(t,x,y)\right) \, . \end{aligned}$$

I conjecture that the solution takes the form

$$\begin{aligned} G(t,H,x,y)&= \varepsilon ^{\frac{1}{\gamma }} g_1(t,x,y)+ k_2 H^{k_1} g_2(t,x,y) \, . \end{aligned}$$

Solving for the \(g_1\) function. Substituting the conjecture into the PDE (73) and collecting the terms that multiply the first part of G yields

$$\begin{aligned} 0= & {} \frac{\partial g_1}{\partial t} + \frac{1}{2}\sigma _x^2 \frac{\partial ^2 g_1}{\partial x^2} + \frac{1}{2}\sigma _y^2 \frac{\partial ^2 g_1}{\partial y^2} + \sigma _{xy} \frac{\partial ^2 g_1}{\partial x \partial y} + \bar{\mu }_x(t,x,y) \frac{\partial g_1}{\partial x} + \bar{\mu }_y(t,x,y) \frac{\partial g_1}{\partial y} \nonumber \\&- \,{\bar{r}}_G(t,x,y) g_1 , \end{aligned}$$
(74)

with terminal condition \(g_1(T,x,y) = 1\). If in addition the coefficients of the previous PDE satisfy the conditions of Heath and Schweizer (2000), I can apply the Feynman-Kac theorem and obtain

$$\begin{aligned} g_1(t,x_t,y_t)&= \mathrm{E}^{{{\mathcal {B}}}}_{t} \left[ e^{-\int _t^T {\bar{r}}_G(u,x_u,y_u) du} \right] , \end{aligned}$$

where, under the measure \({{\mathcal {B}}}\), the dynamics of \(x\) and \(y\) are

$$\begin{aligned} dx_t= & {} \bar{\mu }_x(t,x_t,y_t) dt + \sigma _x dB^{{{\mathcal {B}}}}_{xt}, \\ dy_t= & {} \bar{\mu }_y(t,x_t,y_t) dt + \sigma _y \rho _{xy} dB^{{{\mathcal {B}}}}_{xt}+ \sigma _y \sqrt{1-\rho _{xy}^2} dB^{{{\mathcal {B}}}}_{yt}. \end{aligned}$$

Solving for the \(g_2\) function. Next, I substitute the conjecture into the PDE (73) and divide by \(k_2 H^{k_1}\), which yields (for the second part of G)

$$\begin{aligned} 0= & {} 1+ \frac{\partial g_2}{\partial t} + \frac{1}{2}\sigma _x^2 \frac{\partial ^2 g_2}{\partial x^2} + \frac{1}{2}\sigma _y^2 \frac{\partial ^2 g_2}{\partial y^2} + \sigma _{xy} \frac{\partial ^2 g_2}{\partial x \partial y} \nonumber \\&+\bar{\mu }_x(t,x,y)\frac{\partial g_2}{\partial x}+\bar{\mu }_y(t,x,y) \frac{\partial g_2}{\partial y}- {\bar{r}}_G(t,x,y) g_2, \end{aligned}$$
(75)

with terminal condition \(g_2(T,x,y)= 0\). If in addition the coefficients of the previous PDE satisfy the conditions of Heath and Schweizer (2000), I can apply the Feynman-Kac theorem and obtain

$$\begin{aligned} g_2(t,x_t,y_t) = \mathrm{E}^{{{\mathcal {B}}}}_{t} \left[ \int _t^T e^{-\int _t^s {\bar{r}}_G(u,x_u,y_u) du} ds \right] , \end{aligned}$$

with the same measure \({{\mathcal {B}}}\) and the same dynamics of \(x\) and \(y\) as for \(g_1\).
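
Both Feynman-Kac representations lend themselves to plain Monte Carlo evaluation: simulate \((x,y)\) under \({{\mathcal {B}}}\) with an Euler-Maruyama scheme and average the discount factors. The sketch below uses simple constant-parameter stand-ins for \(\bar{\mu }_x\), \(\bar{\mu }_y\), and \({\bar{r}}_G\) purely for illustration; in the paper these are the state- and time-dependent functions defined above.

```python
import math, random

# Monte Carlo evaluation of the Feynman-Kac representations
#   g1(t,x,y) = E^B[ exp(-int_t^T rbar_G du) ]
#   g2(t,x,y) = E^B[ int_t^T exp(-int_t^s rbar_G du) ds ]
# via an Euler-Maruyama scheme for (x, y) under the measure B.
# The three coefficient functions below are arbitrary toy stand-ins.

def mu_x(t, x, y): return -0.5 * x           # toy \bar{mu}_x
def mu_y(t, x, y): return -0.3 * y           # toy \bar{mu}_y
def r_G(t, x, y):  return 0.03 + 0.1 * x**2  # toy \bar{r}_G

def g1_g2_mc(x0, y0, t=0.0, T=1.0, sx=0.2, sy=0.15, rho=0.4,
             n_steps=50, n_paths=5_000, seed=7):
    rng = random.Random(seed)
    dt = (T - t) / n_steps
    sqdt = math.sqrt(dt)
    g1_sum = g2_sum = 0.0
    for _ in range(n_paths):
        x, y, u, disc, g2_path = x0, y0, t, 1.0, 0.0
        for _ in range(n_steps):
            disc *= math.exp(-r_G(u, x, y) * dt)  # running discount factor
            g2_path += disc * dt                  # inner time integral of g2
            dBx, dBy = rng.gauss(0, sqdt), rng.gauss(0, sqdt)
            x += mu_x(u, x, y) * dt + sx * dBx
            y += mu_y(u, x, y) * dt + sy * (rho * dBx
                 + math.sqrt(1 - rho**2) * dBy)   # correlated shocks
            u += dt
        g1_sum += disc
        g2_sum += g2_path
    return g1_sum / n_paths, g2_sum / n_paths

g1, g2 = g1_g2_mc(0.0, 0.0)
print(g1, g2)  # both between 0.9 and 1.0 for these toy inputs
```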

1.1.4 Verification

In this part of the appendix, I summarize a classical verification result. More details can be found in Fleming and Soner (2006), among others.

Since I apply ideas from weak duality theory, I will verify the previous calculations for analytically tractable artificial markets only. Notice that by disregarding artificial markets that are not analytically tractable, I might only increase the welfare loss, which formally means that I increase the duality gap. Whether this is numerically significant can be checked by using the upper bound. It turns out that the losses are still small (see Sect. 4.3).

First, to simplify notation, I define a vector-valued process that contains wealth and all state variables

$$\begin{aligned} {\hat{Z}}=(W,Z)^{\scriptscriptstyle \varvec{\top }}= (W,H,L,x,y)^{\scriptscriptstyle \varvec{\top }}. \end{aligned}$$

Besides, set

$$\begin{aligned} \mu _{{\hat{Z}}} = \begin{pmatrix} \mu _W \\ \mu _Z \end{pmatrix} = \begin{pmatrix} {{\tilde{r}}} W + \alpha ^{\scriptscriptstyle \varvec{\top }}\lambda + L- \phi _{C} RH - c \\ \mu _Z \end{pmatrix} \qquad \hbox {and}\qquad \Sigma _{{\hat{Z}}} = \begin{pmatrix} \alpha ^{\scriptscriptstyle \varvec{\top }}\Sigma \\ \Sigma _Z \end{pmatrix}. \end{aligned}$$

I start by fixing a given artificial market \(\mathcal {M}_\theta \) that is parameterized by the choice of \(\theta =\left( \nu _{S}, \nu _{H}, \lambda _{L}, \lambda _{x}, \lambda _{y}\right) \). I formally define the admissibility of a strategy as follows:

Definition B.1

(Admissibility) A strategy \(\Lambda =(\Pi , c, \phi _C)\) is said to be admissible if

(a) it is progressively measurable w.r.t. the filtration generated by the multi-dimensional Brownian motion B,

(b) the corresponding wealth equation (50) has a pathwise unique solution,

(c) for all \(n\in {\mathbb {N}}\)

$$\begin{aligned} \mathrm{E}\Big [ \int _0^T\big |\big (J_{{\hat{Z}}}(t,{{\hat{Z}}}_t)\big )^{\scriptscriptstyle \varvec{\top }}\Sigma _{{\hat{Z}}}(t,{{\hat{Z}}}_t)\big |^n \,dt\Big ]<\infty , \end{aligned}$$

where \(|\cdot |\) is the Euclidean norm,

(d) the utility index of the strategy \(\Lambda \) satisfies

$$\begin{aligned} \mathrm{E}\left[ \left| \int _0^T e^{-\delta t} U(c_t, \phi _{Ct} )\,dt + \varepsilon e^{-\delta T} {\bar{U}}(W_T)\right| ^- \right] <\infty , \end{aligned}$$

i.e. I exclude strategies that lead to a utility of minus infinity.

In the following, I only consider artificial markets \(\mathcal {M}_\theta \) that have a classical solution, i.e. I impose the following requirement: The value function J of the artificial market is a \(C^{1,2}\) function.

Proof of the verification result. Due to the previous requirement, I can apply Itô's formula and obtain

$$\begin{aligned} d J=\Big \{J_t+(\mu _{{\hat{Z}}})^{\scriptscriptstyle \varvec{\top }}J_{{\hat{Z}}}+0.5{\text {tr}}\left[ (\Sigma _{{\hat{Z}}})^{\scriptscriptstyle \varvec{\top }}J_{{{\hat{Z}}}{{\hat{Z}}}} \Sigma _{{\hat{Z}}}\right] \Big \}d t + (J_{{\hat{Z}}})^{\scriptscriptstyle \varvec{\top }}\Sigma _{{\hat{Z}}} d B. \end{aligned}$$

Integrating this equation and adding intermediate consumption yields

$$\begin{aligned}&J(T,{\hat{Z}}_T)+\int _0^T e^{-\delta t} U(c_t, \phi _{Ct} )\,dt \\&\quad =J(0,{\hat{Z}}_0) + \int _0^T\left( \Big \{J_t+(\mu _{{\hat{Z}}})^{\scriptscriptstyle \varvec{\top }}J_{{\hat{Z}}}+0.5{\text {tr}}\left[ (\Sigma _{{\hat{Z}}})^{\scriptscriptstyle \varvec{\top }}J_{{{\hat{Z}}}{{\hat{Z}}}} \Sigma _{{\hat{Z}}}\right] \Big \}(t,{\hat{Z}}_t) + e^{-\delta t} U(c_t, \phi _{Ct})\right) \,dt \\&\qquad + \int _0^T (J_{{\hat{Z}}})^{\scriptscriptstyle \varvec{\top }}\Sigma _{{\hat{Z}}} d B_t, \end{aligned}$$

where the notation \(\{\dots \}(t,{\hat{Z}}_t)\) means that all terms inside of \(\{\dots \}\) depend on t and \({\hat{Z}}_t\). Using the terminal condition of J and taking expectations on both sides yields

$$\begin{aligned} \mathrm{E}\left[ \int _0^T e^{-\delta t} U(c_t, \phi _{Ct} )\,dt + \varepsilon e^{-\delta T} {\bar{U}}(W_T)\right] = J(0,{\hat{Z}}_0) + \mathrm{E}\left[ \int _0^T \underbrace{\Big ( \Big \{J_t+(\mu _{{\hat{Z}}})^{\scriptscriptstyle \varvec{\top }}J_{{\hat{Z}}}+0.5{\text {tr}}\left[ (\Sigma _{{\hat{Z}}})^{\scriptscriptstyle \varvec{\top }}J_{{{\hat{Z}}}{{\hat{Z}}}} \Sigma _{{\hat{Z}}}\right] \Big \}(t,{\hat{Z}}_t) + e^{-\delta t} U(c_t, \phi _{Ct})\Big )}_{\text {HJB}}\,dt\right] , \end{aligned}$$

since, by (c), the Itô integral is a true martingale and thus its expected value is zero. Notice that the term labeled "HJB" is at most zero, since it is equivalent to the arguments of the HJB equation (without the maximum operator). If we consider the candidate strategy \(\Lambda ^*\) given by (31)–(37) that follows from solving the HJB equation (55), then the term is exactly zero. For all other admissible strategies \(\Lambda \) it is at most zero. Therefore, \(\Lambda ^*\) is the optimal strategy and J given by (29) is the value function of the problem.

1.2 Corollary 3.3: Solving for G with time-dependent modifiers

With time-dependent modifiers, I can solve (67) in closed form. I conjecture a solution of the form

$$\begin{aligned}&G(t,H,x,y) = \varepsilon ^{1/\gamma } \exp \left\{ D_0(t) + D_x(t) x + D_y(t)y + \frac{1}{2}D_{xx}(t) x^2 + \frac{1}{2}D_{yy}(t) y^2 + D_{xy}(t) x y \right\} \\&\quad + k_2 H^{k_1} \int _t^T \exp \left\{ {\bar{D}}_0(t,s) + {\bar{D}}_x(t,s) x + {\bar{D}}_y(t,s)y \right. \\&\quad \left. + \frac{1}{2}{\bar{D}}_{xx}(t,s) x^2 + \frac{1}{2}{\bar{D}}_{yy}(t,s) y^2 + {\bar{D}}_{xy}(t,s) x y \right\} \,ds. \end{aligned}$$

Solutions of this form are known from Wachter (2002), Liu (2007), and Kraft and Munk (2011), among others. I substitute the relevant derivatives into (67), equate the sum of all the terms without integrals to zero, and equate the sum of the integrands in all the integral terms to zero. I find that the PDE is indeed satisfied under the following conditions on the D and \({\bar{D}}\) functions.

The functions \(D_0\), \(D_x\), \(D_y\), \(D_{xx}\), \(D_{yy}\), and \(D_{xy}\) must satisfy the following system of ODEs

$$\begin{aligned} 0&= \frac{1}{2}D_{xx}(t)' + \frac{1}{2}\sigma _{x}^2 D_{xx}(t)^2 + \frac{1}{2}\sigma _{y}^2 D_{xy}(t)^2 + \sigma _{xy} D_{xy}(t) D_{xx}(t) - \bar{\kappa }_x D_{xx}(t) \\&\quad - \tfrac{\gamma -1}{\gamma } [M_{yS} \chi _S + M_{yH} \chi _{Hx}] D_{xy}(t) - K_{xx}, \\ 0&= \frac{1}{2}D_{yy}(t)'+ \frac{1}{2}\sigma _{x}^2 D_{xy}(t)^2 + \frac{1}{2}\sigma _{y}^2 D_{yy}(t)^2 + \sigma _{xy} D_{xy}(t) D_{yy}(t)\\&\quad - \bar{\kappa }_y D_{yy}(t) - \tfrac{\gamma -1}{\gamma } M_{xH} \chi _{Hy} D_{xy}(t)- K_{yy}, \\ 0&= D_{xy}(t)'+ \sigma _{x}^2 D_{xx}(t) D_{xy}(t) + \sigma _{y}^2 D_{yy}(t) D_{xy}(t) + \sigma _{xy} \left( D_{xx}(t) D_{yy}(t) \right. \\&\quad \left. + D_{xy}(t)^2 \right) - (\bar{\kappa }_x+\bar{\kappa }_y) D_{xy}(t) \\&\quad - \tfrac{\gamma -1}{\gamma } M_{xH} \chi _{Hy} D_{xx}(t) - \tfrac{\gamma -1}{\gamma } [M_{yS} \chi _S + M_{yH} \chi _{Hx}] D_{yy}(t) - K_{xy}, \\ 0&= D_x(t)' + \left( \sigma _{x}^2 D_{xx}(t) + \sigma _{xy} D_{xy}(t) - \bar{\kappa }_x \right) D_x(t) + \left( \sigma _{y}^2 D_{xy}(t) + \sigma _{xy} D_{xx}(t) \right. \\&\quad \left. - \tfrac{\gamma -1}{\gamma } [M_{yS} \chi _S + M_{yH} \chi _{Hx}] \right) D_y(t) - K_x(t) ,\\ 0&= D_y(t)' + \left( \sigma _{x}^2 D_{xy}(t) + \sigma _{xy} D_{yy}(t) - \tfrac{\gamma -1}{\gamma } M_{xH} \chi _{Hy} \right) D_x(t) \\&\quad + \left( \sigma _{y}^2 D_{yy}(t) + \sigma _{xy} D_{xy}(t) - \bar{\kappa }_y \right) D_y(t) - K_y(t) ,\\ 0&= D_0(t)' + \frac{1}{2}\sigma _{x}^2 \left( D_x(t)^2 + D_{xx}(t) \right) \\&\quad +\frac{1}{2}\sigma _{y}^2 \left( D_y(t)^2 + D_{yy}(t) \right) + \sigma _{xy} \left( D_x(t) D_y(t) + D_{xy}(t) \right) \\&\quad - \tfrac{\gamma -1}{\gamma } \left[ M_x(t) D_x(t) +M_y(t) D_y \right] - K_0(t), \end{aligned}$$

with terminal conditions \(D_0(T)=D_x(T)=D_y(T)=D_{xx}(T)=D_{yy}(T)=D_{xy}(T)=0\), where I have introduced

$$\begin{aligned} \bar{\kappa }_x&= \kappa _x+\frac{\gamma -1}{\gamma } \Bigl [M_{xS} \chi _S + M_{xH} \chi _{Hx}\Bigr ], \qquad \bar{\kappa }_y = \kappa _y + \frac{\gamma -1}{\gamma } M_{xH} \chi _{Hy}, \\ K_0(t)&= \frac{\delta }{\gamma } + \frac{\gamma -1}{2\gamma ^2} \bigg [ \left( \frac{{\tilde{\mu }}_{S}'(t)}{\sigma _{S}{\hat{\rho }}_{H}}\right) ^2 \\&\quad + \left( \frac{{\tilde{\mu }}_{H}'(t)}{\sigma _{H}{\hat{\rho }}_{H}}\right) ^2 - 2 \frac{\rho _{HS} {\tilde{\mu }}_{S}'(t) {\tilde{\mu }}_{H}'(t)}{\sigma _{S}\sigma _{H}{\hat{\rho }}_{H}^2} + \lambda _L(t)^2 + \lambda _x(t)^2 + \lambda _y(t)^2 \bigg ] , \\ K_x(t)&= \frac{\gamma -1}{\gamma ^2 } \Bigl [ \left( {\tilde{\mu }}_{S}'(t) - \frac{\rho _{HS} \sigma _S}{\sigma _H} {\tilde{\mu }}_{H}'(t) \right) \frac{\chi _S}{\sigma _S^2 {\hat{\rho }}_H^2} + \left( {\tilde{\mu }}_{H}'(t) - \frac{\rho _{HS} \sigma _H}{\sigma _S} {\tilde{\mu }}_{S}'(t) \right) \frac{\chi _{xH}}{\sigma _H^2 {\hat{\rho }}_H^2} \Bigr ],\\ K_y(t)&= \frac{\gamma -1}{\gamma ^2 } \Bigl [\left( {\tilde{\mu }}_{H}'(t) - \frac{\rho _{HS} \sigma _H}{\sigma _S} {\tilde{\mu }}_{S}'(t) \right) \frac{\chi _{yH}}{\sigma _H^2 {\hat{\rho }}_H^2} \Bigr ], \end{aligned}$$

and

$$\begin{aligned} K_{xx}&= \frac{\gamma -1}{2 \gamma ^2} \left[ \left( \frac{\chi _S}{\sigma _S {\hat{\rho }}_H}\right) ^2 + \left( \frac{\chi _{xH}}{\sigma _H {\hat{\rho }}_H}\right) ^2 - 2 \frac{\rho _{HS} \chi _S \chi _{xH} }{\sigma _S \sigma _H {\hat{\rho }}_H^2} \right] ,\\ K_{yy}&= \frac{\gamma -1}{2 \gamma ^2} \left[ \left( \frac{\chi _{yH}}{\sigma _H {\hat{\rho }}_H}\right) ^2 - 2 \frac{\rho _{HS} \chi _{Sy}\chi _{yH} }{\sigma _S \sigma _H {\hat{\rho }}_H^2}\right] ,\\ K_{xy}&= \frac{\gamma -1}{ \gamma ^2} \left[ \frac{\chi _{xH}\chi _{yH}}{\left( \sigma _H {\hat{\rho }}_H\right) ^2} - \rho _{HS}\frac{\chi _{Sy}\chi _{xH} +\chi _{S}\chi _{yH} }{\sigma _S \sigma _H {\hat{\rho }}_H^2}\right] . \end{aligned}$$

Secondly, the functions \({\bar{D}}_0\), \({\bar{D}}_x\), \({\bar{D}}_y\), \({\bar{D}}_{xx}\), \({\bar{D}}_{yy}\), and \({\bar{D}}_{xy}\) must satisfy the following system of equations:

$$\begin{aligned} 0&= \frac{1}{2}\frac{\partial {\bar{D}}_{xx}(t,s)}{\partial t} + \frac{1}{2}\sigma _{x}^2 {\bar{D}}_{xx}(t,s)^2 + \frac{1}{2}\sigma _{y}^2 {\bar{D}}_{xy}(t,s)^2 + \sigma _{xy} {\bar{D}}_{xy}(t,s) {\bar{D}}_{xx}(t,s) - \bar{\kappa }_x {\bar{D}}_{xx}(t,s) \\&\quad - \tfrac{\gamma -1}{\gamma } [M_{yS} \chi _S + M_{yH} \chi _{Hx}] {\bar{D}}_{xy}(t,s) - K_{xx}, \\ 0&= \frac{1}{2}\frac{\partial {\bar{D}}_{yy}(t,s)}{\partial t} + \frac{1}{2}\sigma _{x}^2 {\bar{D}}_{xy}(t,s)^2 + \frac{1}{2}\sigma _{y}^2 {\bar{D}}_{yy}(t,s)^2 + \sigma _{xy} {\bar{D}}_{xy}(t,s) {\bar{D}}_{yy}(t,s) - \bar{\kappa }_y {\bar{D}}_{yy}(t,s) \\&\quad - \tfrac{\gamma -1}{\gamma } M_{xH} \chi _{Hy} {\bar{D}}_{xy}(t,s) - K_{yy}, \\ 0&= \frac{\partial {\bar{D}}_{xy}(t,s)}{\partial t} + \sigma _{x}^2 {\bar{D}}_{xx}(t,s) {\bar{D}}_{xy}(t,s) + \sigma _{y}^2 {\bar{D}}_{yy}(t,s) {\bar{D}}_{xy}(t,s) \\&\quad + \sigma _{xy} \left( {\bar{D}}_{xx}(t,s) {\bar{D}}_{yy}(t,s) + {\bar{D}}_{xy}(t,s)^2 \right) \\&\quad - (\bar{\kappa }_x+\bar{\kappa }_y) {\bar{D}}_{xy}(t,s) - \tfrac{\gamma -1}{\gamma } M_{xH} \chi _{Hy} {\bar{D}}_{xx}(t,s) - \tfrac{\gamma -1}{\gamma } [M_{yS} \chi _S \\&\quad + M_{yH} \chi _{Hx}] {\bar{D}}_{yy}(t,s) - K_{xy}, \\ 0&= \frac{\partial {\bar{D}}_x(t,s)}{\partial t} + \left( \sigma _{x}^2 {\bar{D}}_{xx}(t,s) + \sigma _{xy} {\bar{D}}_{xy}(t,s) - \bar{\kappa }_x \right) {\bar{D}}_x(t,s) \\&\quad + \left( \sigma _{y}^2 {\bar{D}}_{xy}(t,s) + \sigma _{xy} {\bar{D}}_{xx}(t,s) - \tfrac{\gamma -1}{\gamma } [M_{yS} \chi _S + M_{yH} \chi _{Hx}] \right) {\bar{D}}_y(t,s) \\&\quad + k_1 \sigma _{Hx} {\bar{D}}_{xx}(t,s) + k_1\sigma _{Hy} {\bar{D}}_{xy}(t,s) - K_x(t),\\ 0&= \frac{\partial {\bar{D}}_y(t,s)}{\partial t} + \left( \sigma _{x}^2 {\bar{D}}_{xy}(t,s) + \sigma _{xy} {\bar{D}}_{yy}(t,s) - \tfrac{\gamma -1}{\gamma } M_{xH} \chi _{Hy} \right) {\bar{D}}_x(t,s) \\&\quad + \left( \sigma _{y}^2 {\bar{D}}_{yy}(t,s) + \sigma _{xy} {\bar{D}}_{xy}(t,s) - \bar{\kappa }_y \right) {\bar{D}}_y(t,s) \\&\quad + k_1 \sigma _{Hx} {\bar{D}}_{xy}(t,s) + k_1\sigma _{Hy} {\bar{D}}_{yy}(t,s) - K_y(t),\\ 0&= \frac{\partial {\bar{D}}_0(t,s)}{\partial t} + \frac{1}{2}\sigma _{x}^2 \left( {\bar{D}}_x(t,s)^2 + {\bar{D}}_{xx}(t,s)\right) +\frac{1}{2}\sigma _{y}^2 \left( {\bar{D}}_y(t,s)^2 + {\bar{D}}_{yy}(t,s) \right) \\&\quad + \sigma _{xy} \left( {\bar{D}}_x(t,s) {\bar{D}}_y(t,s) + {\bar{D}}_{xy}(t,s) \right) \\&\quad +\left( k_1\sigma _{Hx}- \tfrac{\gamma -1}{\gamma } M_x(t)\right) {\bar{D}}_x(t,s) + \left( k_1\sigma _{Hy}- \tfrac{\gamma -1}{\gamma } M_y(t)\right) {\bar{D}}_y(t,s) \\&\quad + \frac{1}{2}k_1 (k_1-1) \sigma _{H}^2 + k_1 \left( {{\tilde{r}}}- \tfrac{\gamma -1}{\gamma } [R-m] + \frac{1}{\gamma } {\tilde{\mu }}_{H} \right) - K_0(t), \end{aligned}$$

with terminal condition \({\bar{D}}_0(s,s)={\bar{D}}_x(s,s)={\bar{D}}_y(s,s)={\bar{D}}_{xx}(s,s)={\bar{D}}_{yy}(s,s)={\bar{D}}_{xy}(s,s)=0\).
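
Each of these equations is a Riccati-type ODE integrated backward from its terminal condition. A minimal sketch of such a backward integration, shown for a generic scalar prototype \(0 = D' + a D^2 + b D - c\) with \(D(T)=0\), where \(a\), \(b\), and \(c\) are placeholders for the combinations of model parameters above:

```python
# Backward RK4 integration of the scalar Riccati prototype
#   D'(t) = -(a*D(t)**2 + b*D(t) - c),  D(T) = 0,
# mirroring how each D / Dbar equation is solved from its terminal
# condition. a, b, c are generic placeholder coefficients.

def solve_riccati_backward(a, b, c, T, n_steps=1000):
    def f(D):  # right-hand side of D' (time-homogeneous prototype)
        return -(a * D * D + b * D - c)
    dt = T / n_steps
    D = 0.0                      # terminal condition D(T) = 0
    for _ in range(n_steps):     # step backward from T to 0
        k1 = f(D)
        k2 = f(D - 0.5 * dt * k1)
        k3 = f(D - 0.5 * dt * k2)
        k4 = f(D - dt * k3)
        D -= dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return D                     # value D(0)

# With a = b = 0 the ODE reduces to D' = c, so D(0) = -c*T exactly.
print(solve_riccati_backward(0.0, 0.0, 0.5, T=2.0))  # ≈ -1.0
```

The coupled systems above are handled the same way, stepping the six functions backward simultaneously.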

1.3 Corollary 3.4: Solving for G with state-dependent modifiers

In the following, I assume that the modifiers are defined as in (22)–(24). Moreover, I denote

$$\begin{aligned} {\tilde{\chi }}_S =\chi _S + \nu _{S,2}, \qquad {\tilde{\chi }}_{yS} = \nu _{S,3}, \qquad {\tilde{\chi }}_{xH} =\chi _{Hx} + \nu _{H,2}, \qquad {\tilde{\chi }}_{yH} = \chi _{Hy} + \nu _{H,3}, \end{aligned}$$

and the deterministic parts of the asset drifts

$$\begin{aligned} {\tilde{\mu }}_{S}'(t) = \mu _{S} + {\mathcal {D}}+ \nu _{S,0} + \nu _{S,1} \, t, \qquad {\tilde{\mu }}_{H}'(t) = \mu _{H} + R - m +\nu _{H,0} + \nu _{H,1} \, t, \end{aligned}$$

and

$$\begin{aligned} \lambda _{L}(t) = \lambda _{L,0} + \lambda _{L,1} \, t, \qquad \lambda _{x}(t) = \lambda _{x,0} + \lambda _{x,1} \, t, \qquad \lambda _{y}(t) = \lambda _{y,0} + \lambda _{y,1} \, t. \end{aligned}$$

I rewrite (67) as

$$\begin{aligned} 0= & {} k_2 H^{k_1} + \frac{\partial G}{\partial t}+ \frac{1}{2}\sigma _H^2 H^2 G_{HH}+ \frac{1}{2}\sigma _x^2 G_{xx} + \frac{1}{2}\sigma _y^2 G_{yy} + \sigma _{Hx} H G_{Hx} \nonumber \\&+ \sigma _{Hy} H G_{Hy} + \sigma _{xy} G_{xy} \nonumber \\&+\bar{\mu }_H(t,x,y) H G_H +\bar{\mu }_x(t,x,y) G_x +\bar{\mu }_y(t,x,y) G_y - {\bar{r}}_G(t,x,y) G , \end{aligned}$$
(76)

with terminal condition \(G(T,H,x,y)= \varepsilon ^{\frac{1}{\gamma }}\) and

$$\begin{aligned} - {\bar{r}}_G(t,x,y)= & {} \frac{\gamma -1}{\gamma } {{\tilde{r}}}(t) + K_0(t) + K_x(t) x+ K_y(t) y + K_{xx} x^2 + K_{yy} y^2+ K_{xy} xy \, ,\\ \bar{\mu }_H(t,x,y)= & {} {{\tilde{r}}}(t) + {\tilde{\mu }}_H(t,x,y) + \chi _{Hx} x + \chi _{Hy} y - \frac{\gamma -1}{\gamma } \left( {\tilde{\mu }}_{H}'(t)+ \chi _{Hx} x+ \chi _{Hy} y \right) \, ,\\ \bar{\mu }_x(t,x,y)= & {} - {\bar{\kappa }}_x x - \frac{\gamma -1}{\gamma } \zeta _{xy} y - \frac{\gamma -1}{\gamma } M_x(t) \, ,\\ \bar{\mu }_y(t,x,y)= & {} - \frac{\gamma -1}{\gamma } \zeta _{yx} x -{\bar{\kappa }}_y y - \frac{\gamma -1}{\gamma } M_y(t) \, , \end{aligned}$$

where

$$\begin{aligned} \bar{\kappa }_x&= \kappa _x+\frac{\gamma -1}{\gamma } \zeta _{xx}, \qquad \bar{\kappa }_y = \kappa _y + \frac{\gamma -1}{\gamma } \zeta _{yy}, \\ K_0(t)&= \frac{\delta }{\gamma } + \frac{\gamma -1}{2\gamma ^2} \bigg [ \left( \frac{{\tilde{\mu }}_{S}'(t)}{\sigma _{S}{\hat{\rho }}_{H}}\right) ^2 + \left( \frac{{\tilde{\mu }}_{H}'(t)}{\sigma _{H}{\hat{\rho }}_{H}}\right) ^2\\&\quad - 2 \frac{\rho _{HS} {\tilde{\mu }}_{S}'(t) {\tilde{\mu }}_H'(t)}{\sigma _{S}\sigma _{H}{\hat{\rho }}_{H}^2} + \lambda _L(t)^2 + \lambda _x(t)^2 + \lambda _y(t)^2 \bigg ], \\ K_x(t)&= \frac{\gamma -1}{\gamma ^2 } \Bigl [ \left( {\tilde{\mu }}_{S}'(t) - \frac{\rho _{HS} \sigma _S}{\sigma _H} {\tilde{\mu }}_{H}'(t) \right) \frac{{\tilde{\chi }}_S}{\sigma _S^2 {\hat{\rho }}_H^2} + \left( {\tilde{\mu }}_{H}'(t) - \frac{\rho _{HS} \sigma _H}{\sigma _S} {\tilde{\mu }}_{S}'(t) \right) \frac{{\tilde{\chi }}_{xH}}{\sigma _H^2 {\hat{\rho }}_H^2} \\&\quad + \lambda _L(t) \lambda _{L,2} + \lambda _x(t) \lambda _{x,2} + \lambda _y(t) \lambda _{y,2} \Bigr ],\\ K_y(t)&= \frac{\gamma -1}{\gamma ^2}\Bigl [\left( {\tilde{\mu }}_{S}'(t) - \frac{\rho _{HS} \sigma _S}{\sigma _H} {\tilde{\mu }}_{H}'(t) \right) \frac{{\tilde{\chi }}_{Sy}}{\sigma _S^2 {\hat{\rho }}_H^2} + \left( {\tilde{\mu }}_{H}'(t) - \frac{\rho _{HS} \sigma _H}{\sigma _S} {\tilde{\mu }}_{S}'(t) \right) \frac{{\tilde{\chi }}_{yH}}{\sigma _H^2 {\hat{\rho }}_H^2} \\&\quad + \lambda _L(t) \lambda _{L,3} + \lambda _x(t) \lambda _{x,3} + \lambda _y(t) \lambda _{y,3}\Bigr ], \end{aligned}$$

and

$$\begin{aligned} K_{xx}&= \frac{\gamma -1}{2 \gamma ^2} \left[ \left( \frac{{\tilde{\chi }}_S}{\sigma _S {\hat{\rho }}_H}\right) ^2 + \left( \frac{{\tilde{\chi }}_{xH}}{\sigma _H {\hat{\rho }}_H}\right) ^2 - 2 \frac{\rho _{HS} {\tilde{\chi }}_S{\tilde{\chi }}_{xH} }{\sigma _S \sigma _H {\hat{\rho }}_H^2} + \lambda _{L,2}^2 + \lambda _{x,2}^2 + \lambda _{y,2}^2 \right] , \\ K_{yy}&= \frac{\gamma -1}{2 \gamma ^2} \left[ \left( \frac{{\tilde{\chi }}_{yS}}{\sigma _S {\hat{\rho }}_H}\right) ^2 + \left( \frac{{\tilde{\chi }}_{yH}}{\sigma _H {\hat{\rho }}_H}\right) ^2 - 2 \frac{\rho _{HS} {\tilde{\chi }}_{yS}{\tilde{\chi }}_{yH} }{\sigma _S \sigma _H {\hat{\rho }}_H^2} + \lambda _{L,3}^2 + \lambda _{x,3}^2 + \lambda _{y,3}^2\right] , \\ K_{xy}&= \frac{\gamma -1}{\gamma ^2} \Bigl [ \left( 1 - \frac{\rho _{HS} \sigma _S}{\sigma _H}\right) \frac{{\tilde{\chi }}_S {\tilde{\chi }}_{yS}}{\sigma _S^2 {\hat{\rho }}_H^2} + \left( 1 - \frac{\rho _{HS} \sigma _H}{\sigma _S} \right) \frac{{\tilde{\chi }}_{xH} {\tilde{\chi }}_{yH}}{\sigma _H^2 {\hat{\rho }}_H^2} \\&\quad + \lambda _{L,2} \lambda _{L,3} + \lambda _{x,2} \lambda _{x,3} + \lambda _{y,2} \lambda _{y,3} \Bigr ] . \end{aligned}$$

I conjecture that the solution of (76) takes the form

$$\begin{aligned}&G(t,H,x,y) = \varepsilon ^{1/\gamma } \exp \left\{ D_0(t) + D_x(t) x + D_y(t)y + \frac{1}{2}D_{xx}(t) x^2 + \frac{1}{2}D_{yy}(t) y^2 + D_{xy}(t) x y \right\} \\&\quad + k_2 H^{k_1} \int _t^T \exp \left\{ {\bar{D}}_0(t,s) + {\bar{D}}_x(t,s) x + {\bar{D}}_y(t,s)y + \frac{1}{2}{\bar{D}}_{xx}(t,s) x^2 \right. \\&\quad \left. + \frac{1}{2}{\bar{D}}_{yy}(t,s) y^2 + {\bar{D}}_{xy}(t,s) x y \right\} \,ds. \end{aligned}$$

Just as with time-dependent modifiers, I substitute the relevant derivatives of the conjecture into (76). I then set the sum of all terms outside the integral equal to zero and, separately, set the integrand of the remaining terms equal to zero. Hence, the partial differential equation (76) is satisfied under the following conditions on D and \({\bar{D}}\):

The functions \(D_0\), \(D_x\), \(D_y\), \(D_{xx}\), \(D_{yy}\), and \(D_{xy}\) must satisfy the following system of ODEs

$$\begin{aligned} 0&= \frac{1}{2}D_{xx}'(t) + \frac{1}{2}\sigma _{x}^2 D_{xx}(t)^2 + \frac{1}{2}\sigma _{y}^2 D_{xy}(t)^2 + \sigma _{xy} D_{xy}(t) D_{xx}(t) - \bar{\kappa }_x D_{xx}(t) \\&\quad - \tfrac{\gamma -1}{\gamma } \zeta _{yx} D_{xy}(t) - K_{xx}, \\ 0&= \frac{1}{2}D_{yy}'(t) + \frac{1}{2}\sigma _{x}^2 D_{xy}(t)^2 + \frac{1}{2}\sigma _{y}^2 D_{yy}(t)^2 + \sigma _{xy} D_{xy}(t) D_{yy}(t)\\&\quad - \bar{\kappa }_y D_{yy}(t) - \tfrac{\gamma -1}{\gamma } \zeta _{xy} D_{xy}(t) - K_{yy}, \\ 0&= D_{xy}'(t) + \sigma _{x}^2 D_{xx}(t) D_{xy}(t) + \sigma _{y}^2 D_{yy}(t) D_{xy}(t) + \sigma _{xy} \left( D_{xx}(t) D_{yy}(t) + D_{xy}(t)^2 \right) \\&\quad - (\bar{\kappa }_x+\bar{\kappa }_y) D_{xy}(t) \\&\quad - \tfrac{\gamma -1}{\gamma } \zeta _{xy} D_{xx}(t) - \tfrac{\gamma -1}{\gamma } \zeta _{yx} D_{yy}(t) - K_{xy}, \\ 0&= D_x'(t) + \left( \sigma _{x}^2 D_{xx}(t) + \sigma _{xy} D_{xy}(t) - \bar{\kappa }_x \right) D_x(t)\\&\quad + \left( \sigma _{y}^2 D_{xy}(t) + \sigma _{xy} D_{xx}(t) - \tfrac{\gamma -1}{\gamma } \zeta _{yx} \right) D_y(t) - K_x(t),\\ 0&= D_y'(t) + \left( \sigma _{x}^2 D_{xy}(t) + \sigma _{xy} D_{yy}(t) - \tfrac{\gamma -1}{\gamma } \zeta _{xy} \right) D_x(t) \\&\quad + \left( \sigma _{y}^2 D_{yy}(t) + \sigma _{xy} D_{xy}(t) - \bar{\kappa }_y \right) D_y(t) - K_y(t),\\ 0&= D_0'(t) + \frac{1}{2}\sigma _{x}^2 \left( D_x(t)^2 + D_{xx}(t)\right) +\frac{1}{2}\sigma _{y}^2 \left( D_y(t)^2 + D_{yy}(t) \right) \\&\quad + \sigma _{xy} \left( D_x(t) D_y(t) + D_{xy}(t) \right) \\&\quad - \tfrac{\gamma -1}{\gamma } \left[ M_x(t) D_x(t)+ M_y(t) D_y(t)\right] - K_0(t), \end{aligned}$$

with terminal condition \(D_0(T)=D_x(T)=D_y(T)=D_{xx}(T)=D_{yy}(T)=D_{xy}(T)=0\). The constants \(\zeta \) are defined in “Appendix A.3”.
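Since the terminal condition pins down all six functions at \(t=T\), the system above can be integrated backward in time. The following is a minimal, self-contained sketch of such a backward solve with a hand-rolled Runge–Kutta step; every numerical value (the \(\sigma\), \(\bar{\kappa }\), \(\zeta \), \(K\), and \(M\) coefficients, the linear-in-\(t\) forms, and the horizon) is an illustrative placeholder, not the paper's calibration.

```python
import math

# Illustrative (assumed) parameter values -- NOT the paper's calibration.
gamma = 4.0
a = (gamma - 1.0) / gamma                # the factor (gamma-1)/gamma
sx2, sy2, sxy = 0.015, 0.010, 0.004      # sigma_x^2, sigma_y^2, sigma_xy
kx_bar, ky_bar = 0.30, 0.25              # kappa_bar_x, kappa_bar_y
zxy, zyx = 0.05, 0.03                    # zeta_xy, zeta_yx
Kxx, Kyy, Kxy = 0.02, 0.015, 0.005      # constant K coefficients

def Kx(t): return 0.010 + 0.0010 * t     # K_x(t), illustrative linear form
def Ky(t): return 0.008 + 0.0005 * t
def K0(t): return 0.020 + 0.0020 * t
def Mx(t): return 0.05                   # M_x(t), M_y(t): placeholder drifts
def My(t): return 0.04

def rhs(t, D):
    """Time derivatives D'(t) implied by the ODE system above."""
    Dxx, Dyy, Dxy, Dx, Dy, D0 = D
    dDxx = -(sx2*Dxx**2 + sy2*Dxy**2 + 2*sxy*Dxy*Dxx
             - 2*kx_bar*Dxx - 2*a*zyx*Dxy - 2*Kxx)
    dDyy = -(sx2*Dxy**2 + sy2*Dyy**2 + 2*sxy*Dxy*Dyy
             - 2*ky_bar*Dyy - 2*a*zxy*Dxy - 2*Kyy)
    dDxy = -(sx2*Dxx*Dxy + sy2*Dyy*Dxy + sxy*(Dxx*Dyy + Dxy**2)
             - (kx_bar + ky_bar)*Dxy - a*zxy*Dxx - a*zyx*Dyy - Kxy)
    dDx = -((sx2*Dxx + sxy*Dxy - kx_bar)*Dx
            + (sy2*Dxy + sxy*Dxx - a*zyx)*Dy) + Kx(t)
    dDy = -((sx2*Dxy + sxy*Dyy - a*zxy)*Dx
            + (sy2*Dyy + sxy*Dxy - ky_bar)*Dy) + Ky(t)
    dD0 = -(0.5*sx2*(Dx**2 + Dxx) + 0.5*sy2*(Dy**2 + Dyy)
            + sxy*(Dx*Dy + Dxy)
            - a*(Mx(t)*Dx + My(t)*Dy)) + K0(t)
    return [dDxx, dDyy, dDxy, dDx, dDy, dD0]

def solve_backward(T, n_steps=2000):
    """Classical RK4 from t = T (terminal condition D(T) = 0) back to t = 0."""
    h = -T / n_steps                     # negative step: backward in time
    t, D = T, [0.0] * 6
    for _ in range(n_steps):
        k1 = rhs(t, D)
        k2 = rhs(t + h/2, [d + h/2*k for d, k in zip(D, k1)])
        k3 = rhs(t + h/2, [d + h/2*k for d, k in zip(D, k2)])
        k4 = rhs(t + h,   [d + h*k   for d, k in zip(D, k3)])
        D = [d + h/6*(c1 + 2*c2 + 2*c3 + c4)
             for d, c1, c2, c3, c4 in zip(D, k1, k2, k3, k4)]
        t += h
    return D                             # (Dxx, Dyy, Dxy, Dx, Dy, D0) at t = 0

D_at_0 = solve_backward(T=20.0)
```

In practice one would replace the hand-rolled RK4 loop with an adaptive solver; the point of the sketch is only the backward orientation and the ordering of the Riccati (quadratic), linear, and scalar equations.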

Secondly, for every fixed \(s\), the functions \({\bar{D}}_0\), \({\bar{D}}_x\), \({\bar{D}}_y\), \({\bar{D}}_{xx}\), \({\bar{D}}_{yy}\), and \({\bar{D}}_{xy}\) must satisfy the following system of equations:

$$\begin{aligned} 0&= \frac{1}{2}\frac{\partial {\bar{D}}_{xx}(t,s)}{\partial t} + \frac{1}{2}\sigma _{x}^2 {\bar{D}}_{xx}(t,s)^2 + \frac{1}{2}\sigma _{y}^2 {\bar{D}}_{xy}(t,s)^2 + \sigma _{xy} {\bar{D}}_{xy}(t,s) {\bar{D}}_{xx}(t,s) \\&\quad - \bar{\kappa }_x {\bar{D}}_{xx}(t,s) - \tfrac{\gamma -1}{\gamma } \zeta _{yx} {\bar{D}}_{xy}(t,s) - K_{xx}, \\ 0&= \frac{1}{2}\frac{\partial {\bar{D}}_{yy}(t,s)}{\partial t} + \frac{1}{2}\sigma _{x}^2 {\bar{D}}_{xy}(t,s)^2 + \frac{1}{2}\sigma _{y}^2 {\bar{D}}_{yy}(t,s)^2 + \sigma _{xy} {\bar{D}}_{xy}(t,s) {\bar{D}}_{yy}(t,s) \\&\quad - \bar{\kappa }_y {\bar{D}}_{yy}(t,s) - \tfrac{\gamma -1}{\gamma } \zeta _{xy} {\bar{D}}_{xy}(t,s) - K_{yy}, \\ 0&= \frac{\partial {\bar{D}}_{xy}(t,s)}{\partial t} + \sigma _{x}^2 {\bar{D}}_{xx}(t,s) {\bar{D}}_{xy}(t,s) + \sigma _{y}^2 {\bar{D}}_{yy}(t,s) {\bar{D}}_{xy}(t,s) \\&\quad + \sigma _{xy} \left( {\bar{D}}_{xx}(t,s) {\bar{D}}_{yy}(t,s) + {\bar{D}}_{xy}(t,s)^2 \right) \\&\quad - (\bar{\kappa }_x+\bar{\kappa }_y) {\bar{D}}_{xy}(t,s) - \tfrac{\gamma -1}{\gamma } \zeta _{xy} {\bar{D}}_{xx}(t,s) - \tfrac{\gamma -1}{\gamma } \zeta _{yx} {\bar{D}}_{yy}(t,s) - K_{xy}, \\ 0&= \frac{\partial {\bar{D}}_x(t,s)}{\partial t} + \left( \sigma _{x}^2 {\bar{D}}_{xx}(t,s) + \sigma _{xy} {\bar{D}}_{xy}(t,s) - \bar{\kappa }_x \right) {\bar{D}}_x(t,s) \\&\quad + \left( \sigma _{y}^2 {\bar{D}}_{xy}(t,s) + \sigma _{xy} {\bar{D}}_{xx}(t,s) - \tfrac{\gamma -1}{\gamma } \zeta _{yx} \right) {\bar{D}}_y(t,s) \\&\quad + k_1 \sigma _{Hx} {\bar{D}}_{xx}(t,s) + k_1\sigma _{Hy} {\bar{D}}_{xy}(t,s) - K_x(t),\\ 0&= \frac{\partial {\bar{D}}_y(t,s)}{\partial t} + \left( \sigma _{x}^2 {\bar{D}}_{xy}(t,s) + \sigma _{xy} {\bar{D}}_{yy}(t,s) - \tfrac{\gamma -1}{\gamma } \zeta _{xy}\right) {\bar{D}}_x(t,s) \\&\quad + \left( \sigma _{y}^2 {\bar{D}}_{yy}(t,s) + \sigma _{xy} {\bar{D}}_{xy}(t,s) - \bar{\kappa }_y \right) {\bar{D}}_y(t,s) \\&\quad + k_1 \sigma _{Hx} {\bar{D}}_{xy}(t,s) + k_1\sigma _{Hy} {\bar{D}}_{yy}(t,s) - K_y(t),\\ 0&= \frac{\partial {\bar{D}}_0(t,s)}{\partial t} + \frac{1}{2}\sigma _{x}^2 \left( {\bar{D}}_x(t,s)^2 + {\bar{D}}_{xx}(t,s)\right) +\frac{1}{2}\sigma _{y}^2 \left( {\bar{D}}_y(t,s)^2 + {\bar{D}}_{yy}(t,s) \right) \\&\quad + \sigma _{xy} \left( {\bar{D}}_x(t,s) {\bar{D}}_y(t,s) + {\bar{D}}_{xy}(t,s) \right) \\&\quad +\left( k_1\sigma _{Hx}- \tfrac{\gamma -1}{\gamma } {{\tilde{M}}}_x(t)\right) {\bar{D}}_x(t,s) + \left( k_1\sigma _{Hy}- \tfrac{\gamma -1}{\gamma } {{\tilde{M}}}_y(t)\right) {\bar{D}}_y(t,s) + \frac{1}{2}k_1 (k_1-1) \sigma _{H}^2 \\&\quad + k_1 \left( {{\tilde{r}}}- \tfrac{\gamma -1}{\gamma } [R-m] + \frac{1}{\gamma } {\tilde{\mu }}_{H} \right) - K_0(t), \end{aligned}$$

with terminal condition \({\bar{D}}_0(s,s)={\bar{D}}_x(s,s)={\bar{D}}_y(s,s)={\bar{D}}_{xx}(s,s)={\bar{D}}_{yy}(s,s)={\bar{D}}_{xy}(s,s)=0\).
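Given the coefficient functions, the conjectured form of G is evaluated by a quadrature in \(s\): the first exponential uses \(D(t)\), and the integral term sums \(\exp \{\cdot \}\) over a grid of \(s\in [t,T]\). The sketch below shows only that assembly step; `dbar` and `D` are hypothetical stand-ins (simple linear-in-horizon placeholders) for the solutions of the two systems above, and \(k_1\), \(k_2\), \(\varepsilon ^{1/\gamma }\) carry made-up values.

```python
import math

k1, k2, eps_pow = 0.8, 0.5, 1.0   # k_1, k_2, epsilon^{1/gamma}: illustrative

T_HORIZON = 20.0                  # assumed terminal date

def dbar(t, s):
    """Stand-in for (Dbar_0, Dbar_x, Dbar_y, Dbar_xx, Dbar_yy, Dbar_xy)(t, s).
    In practice these come from a backward ODE solve on [t, s] with
    Dbar(s, s) = 0; here simple placeholders keep the sketch runnable."""
    tau = s - t
    return (0.010*tau, -0.020*tau, 0.015*tau,
            -0.005*tau, -0.004*tau, 0.002*tau)

def D(t):
    """Stand-in for (D_0, D_x, D_y, D_xx, D_yy, D_xy)(t), zero at t = T."""
    tau = T_HORIZON - t
    return (0.050*tau, -0.010*tau, 0.010*tau,
            -0.004*tau, -0.003*tau, 0.001*tau)

def quad_form(c, x, y):
    """Exponent c_0 + c_x x + c_y y + (1/2)c_xx x^2 + (1/2)c_yy y^2 + c_xy xy."""
    c0, cx, cy, cxx, cyy, cxy = c
    return c0 + cx*x + cy*y + 0.5*cxx*x*x + 0.5*cyy*y*y + cxy*x*y

def G(t, H, x, y, n=200):
    """Conjectured G(t, H, x, y), with the s-integral done by trapezoids."""
    first = eps_pow * math.exp(quad_form(D(t), x, y))
    h = (T_HORIZON - t) / n
    svals = [t + i*h for i in range(n + 1)]
    fvals = [math.exp(quad_form(dbar(t, s), x, y)) for s in svals]
    integral = h * (0.5*fvals[0] + sum(fvals[1:-1]) + 0.5*fvals[-1])
    return first + k2 * H**k1 * integral

g = G(t=0.0, H=1.0, x=0.1, y=-0.05)
```

At \(t=T\) the integral vanishes and the exponents are zero, so the sketch reproduces the terminal value \(\varepsilon ^{1/\gamma }\) by construction; a finer \(s\)-grid (larger `n`) trades run time for quadrature accuracy.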

Weiss, F. A numerical approach to solve consumption-portfolio problems with predictability in income, stock prices, and house prices. Math Meth Oper Res 93, 33–81 (2021). https://doi.org/10.1007/s00186-020-00727-5
