
Optimal Stopping Time for Geometric Random Walks with Power Payoff Function



An Erratum to this article was published on 01 December 2020


Abstract

Two optimal stopping problems for geometric random walks with the observer’s power payoff function are solved, on the finite and on the infinite horizon. For these problems, an explicit form of the cut value is established, as well as the optimal stopping rules. It is proved that the optimal stopping rules are nonrandomized threshold rules and thus describe the corresponding free boundary. An explicit form of the free boundary is presented.


References

  1. Rozov, A.K., Optimal'nye pravila ostanovki i ikh primeneniya (Optimal Stopping Rules and Their Applications), St. Petersburg: Politekhnika, 2009.

  2. Shiryaev, A.N., Statisticheskii posledovatel'nyi analiz (Statistical Sequential Analysis), Moscow: Nauka, 1969. Translated under the title Statistical Sequential Analysis, American Mathematical Society, 1973.

  3. Shiryaev, A.N., Osnovy stokhasticheskoi finansovoi matematiki, tom 2: Teoriya, Moscow: Fazis, 1998. Translated under the title Essentials of Stochastic Finance: Facts, Models, Theory, Singapore: World Scientific, 1999.

  4. Arkin, V.I., Slastnikov, A.D., and Arkina, S.V., Stimulation of Investment Projects Using Amortization Mechanism, in Konsortsium ekonomicheskikh issledovanii i obrazovaniya. Nauchnye doklady (Consortium of Economic Studies and Education. Scientific Reports), Scientific Report no. 02/05, Moscow: EERC, 2002.

  5. Ermakov, S.M. and Zhiglyavskii, A.A., Matematicheskaya teoriya optimal'nogo eksperimenta (The Mathematical Theory of Optimal Experiment), Moscow: Nauka, 1987.

  6. Shiryaev, A.N., Veroyatnost'-1, Moscow: Mosk. Tsentr Neprer. Mat. Obraz., 2004. Translated under the title Probability-1, New York: Springer-Verlag, 2016.

  7. Wald, A., Sequential Analysis, New York: Wiley, 1947.

  8. Ferguson, T.S., Optimal Stopping and Applications, unpublished manuscript, 2000. www.math.ucla.edu/~tom/Stopping/Contents.html (Accessed July 10, 2014).

  9. Föllmer, H. and Schied, A., Stochastic Finance. An Introduction in Discrete Time, 2nd ed., Berlin: De Gruyter, 2004. Translated under the title Vvedenie v stokhasticheskie finansy. Diskretnoe vremya, Moscow: Mosk. Tsentr Neprer. Mat. Obraz., 2008.

  10. Jönsson, H., Kukush, A.G., and Silvestrov, D.S., Threshold Structure of Optimal Stopping Strategies for American Type Option. I, Theory Probab. Math. Statist., 2005, no. 71, pp. 93–103.

  11. Jönsson, H., Kukush, A.G., and Silvestrov, D.S., Threshold Structure of Optimal Stopping Strategies for American Type Option. II, Theory Probab. Math. Statist., 2006, no. 72, pp. 47–58.

  12. Kukush, A.G. and Silvestrov, D.S., Optimal Pricing of American Type Options with Discrete Time, Theory Stoch. Proces., 2004, vol. 10(26), no. 1–2, pp. 72–96.

  13. Novikov, A.A. and Shiryaev, A.N., On an Effective Solution of the Optimal Stopping Problem for Random Walks, Theory Probab. Appl., 2005, vol. 49, no. 2, pp. 344–354.

  14. Silaeva, M.V. and Silaev, A.M., Spros i predlozhenie (Demand and Supply), Nizhny Novgorod: Vyssh. Shk. Ekon., 2006.

  15. Rockafellar, R.T., Convex Analysis, Princeton: Princeton Univ. Press, 1970. Translated under the title Vypuklyi analiz, Moscow: Mir, 1973.



Appendices

Appendix 1

The proofs of the main assertions below involve the existence conditions and properties of the generating function and corresponding distribution function \({F}_{{\rho }_{1}}(x)\) of the random variable ρ1. First, introduce necessary definitions and auxiliary results.

Let a Borel function \(\varphi :{{\mathbb{R}}}^{1}\to {{\mathbb{R}}}^{+}\), further denoted by \(\varphi \left(\sigma \right)\), be defined by the equality

$$\varphi \left(\sigma \right)={\mathtt{E}}{\lambda }^{\sigma {\rho }_{1}},$$
(A.1.1)

where \(1<\lambda <\infty \) is a parameter and \(\sigma \in {{\mathbb{R}}}^{1}\) is a variable. The function \(\varphi \left(\sigma \right)\) is called the generating function for the moments of the random variable ρ1 [6]. Also, introduce the notations

$${M}^{+}\triangleq \sup \left\{x\in {{\mathbb{R}}}^{1}:{F}_{{\rho }_{1}}\left(x\right)<1\right\},\qquad {M}^{-}\triangleq \inf \left\{x\in {{\mathbb{R}}}^{1}:{F}_{{\rho }_{1}}\left(x\right)>0\right\}.$$

The value \({M}^{+}\left({M}^{-}\right)\) is the essential upper (lower) bound of the random variable ρ1 [6], and the set \(\left[{M}^{-},{M}^{+}\right]\) is its support [6].

The next result is simple and apparently known; however, we could not find it formulated and proved in the literature.

Proposition 1.

Let \({M}^{-}>-\infty \) and \({M}^{+}<\infty \). Then the following assertions are true:

1) For any \(\sigma \in {{\mathbb{R}}}^{1}\), (21) holds.

2) For any \(\sigma \in {{\mathbb{R}}}^{1}\) there exist the finite derivatives \(\frac{{d}^{l}}{d{\sigma }^{l}}\varphi \left(\sigma \right)\), where \(l\in {\mathbb{N}}\) is arbitrary; moreover, \(\frac{{d}^{2}}{d{\sigma }^{2}}\varphi \left(\sigma \right)>0\), meaning that \(\varphi \left(\sigma \right)\) is a strictly convex function.

3) If \(m\left(\sigma \right)\triangleq \frac{d}{d\sigma }\mathrm{ln}\,\varphi \left(\sigma \right)\), then

$${M}^{+}=\mathop{{\rm{lim}}}\limits_{\sigma \to \infty }\frac{m\left(\sigma \right)}{\mathrm{ln}\,\lambda },$$
(A.1.2)
$${M}^{-}=\mathop{{\rm{lim}}}\limits_{\sigma \to -\infty }\frac{m\left(\sigma \right)}{\mathrm{ln}\,\lambda }.$$
(A.1.3)

Proof of Proposition 1. By the hypotheses of Proposition 1, M− and M+ are finite. Therefore, without loss of generality, assume that \({\rho }_{1}\geqslant c\) P-a.s., where c > 0 is a constant.

1. Due to this assumption and the definition of M+, it follows that \(c\leqslant {\rho }_{1}\leqslant {M}^{+}<\infty \) P-a.s. Hence, for any σ ⩾ 0,

$$0<{\lambda }^{-\left|\sigma \right|c}\leqslant {\lambda }^{-\left|\sigma \right|{\rho }_{1}}\leqslant {\lambda }^{\sigma {\rho }_{1}}\leqslant {\lambda }^{\left|\sigma \right|{\rho }_{1}}\leqslant {\lambda }^{\left|\sigma \right|{M}^{+}}<\infty $$
(A.1.4)

P-a.s. Relations (A.1.4) give the requisite inequalities \(0<\varphi \left(\sigma \right)<\infty \).

2. Let \(l\in {\mathbb{N}}\). Then, for any σ ⩾ 0, inequality (A.1.4) yields

$$0<{c}^{l}{\lambda }^{\sigma c}\leqslant {\left|{\rho }_{1}\right|}^{l}{\lambda }^{\sigma {\rho }_{1}}\leqslant {\left({M}^{+}\right)}^{l}{\lambda }^{\left|\sigma \right|{M}^{+}}<\infty $$
(A.1.5)

P-a.s., and consequently

$$0<{\mathtt{E}}{\left|{\rho }_{1}\right|}^{l}{\lambda }^{\sigma {\rho }_{1}}<\infty .$$

This means that for any \(l\in {\mathbb{N}}\) there exists the lth derivative

$$\frac{{d}^{l}\varphi }{d{\sigma }^{l}}\left(\sigma \right)={\left(\mathrm{ln}\,\lambda \right)}^{l}{\mathtt{E}}{\rho }_{1}^{l}{\lambda }^{\sigma {\rho }_{1}}$$

of the generating function, and also

$$\left|\frac{{d}^{l}\varphi }{d{\sigma }^{l}}\left(\sigma \right)\right|<\infty .$$
(A.1.6)

In particular, for any σ ⩾ 0,

$$\frac{d}{d\sigma }\varphi \left(\sigma \right)=\left(\mathrm{ln}\,\lambda \right){\mathtt{E}}{\rho }_{1}{\lambda }^{\sigma {\rho }_{1}},$$
(A.1.7)
$$\frac{{d}^{2}\varphi }{d{\sigma }^{2}}\left(\sigma \right)={\left(\mathrm{ln}\,\lambda \right)}^{2}{\mathtt{E}}{\rho }_{1}^{2}{\lambda }^{\sigma {\rho }_{1}}>0,$$
(A.1.8)
$$m\left(\sigma \right)=\left(\mathrm{ln}\,\lambda \right)\frac{{\mathtt{E}}{\rho }_{1}{\lambda }^{\sigma {\rho }_{1}}}{{\mathtt{E}}{\lambda }^{\sigma {\rho }_{1}}}.$$
(A.1.9)

From (A.1.6)–(A.1.8) it follows that \(\varphi \left(\sigma \right)\) is a strictly convex function.

3. Now establish equalities (A.1.2), (A.1.3). Let \({{\mathtt{P}}}^{\sigma ^{\prime} }\left(A\right)\) be a probability measure defined using the Esscher transform (e.g., see [3]) of the probability distribution of the random variable ρ1:

$${{\mathtt{P}}}^{\sigma ^{\prime} }\left(A\right)\triangleq {\mathtt{E}}\frac{{\lambda }^{\sigma ^{\prime} {\rho }_{1}}}{{\mathtt{E}}{\lambda }^{\sigma ^{\prime} {\rho }_{1}}}{1}_{A}\left(\omega \right),$$
(A.1.10)

where \(A\in {\mathcal{F}}\) is arbitrary and \(\sigma ^{\prime} \geqslant 0\). As is known, \({{\mathtt{P}}}^{\sigma ^{\prime} }\) is equivalent to P; see [3]. Then (A.1.9) and (A.1.10) lead to the representation

$$\frac{m\left(\sigma ^{\prime} \right)}{\mathrm{ln}\,\lambda }={{\mathtt{E}}}^{{{\mathtt{P}}}^{\sigma ^{\prime} }}{\rho }_{1},$$
(A.1.11)

where \({{\mathtt{E}}}^{{{\mathtt{P}}}^{\sigma ^{\prime} }}{\rho }_{1}\) denotes the expected value of the random variable ρ1 with respect to the measure \({{\mathtt{P}}}^{\sigma ^{\prime} }\). Since \({\rho }_{1}\leqslant {M}^{+}<\infty \) P-a.s., from (A.1.11) it follows that

$$\frac{m\left(\sigma ^{\prime} \right)}{\mathrm{ln}\,\lambda }={{\mathtt{E}}}^{{{\mathtt{P}}}^{\sigma ^{\prime} }}{\rho }_{1}\leqslant {M}^{+}$$
(A.1.12)

for \(\sigma ^{\prime} \geqslant 0\).
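
For a discrete step, the Esscher transform (A.1.10) reduces to reweighting the probabilities of ρ1 by \({\lambda }^{\sigma ^{\prime} v}/\varphi \left(\sigma ^{\prime} \right)\), and (A.1.11) can then be checked directly. A minimal sketch (the distribution of ρ1 and λ are the same illustrative assumptions as above):

```python
import numpy as np

lam = 2.0
vals = np.array([-1.0, 1.0])
probs = np.array([0.6, 0.4])

def esscher(sigma):
    """Tilted probabilities of rho_1 under P^{sigma'} from (A.1.10)."""
    w = probs * lam ** (sigma * vals)
    return w / w.sum()                  # division by phi(sigma') = E lambda^{sigma' rho_1}

def m(sigma):
    num = np.log(lam) * np.sum(probs * vals * lam ** (sigma * vals))
    return num / np.sum(probs * lam ** (sigma * vals))

for s in (0.0, 1.0, 5.0):
    lhs = m(s) / np.log(lam)            # left-hand side of (A.1.11)
    rhs = np.sum(esscher(s) * vals)     # E^{P^{sigma'}} rho_1
    assert abs(lhs - rhs) < 1e-12       # (A.1.11)
    assert rhs <= 1.0                   # the bound (A.1.12), here with M+ = 1
    print(s, rhs)
```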

Let \({\left\{{M}_{n}\right\}}_{n\geqslant 1}\) be a number sequence such that 0 < Mn < M+ and \(\mathop{{\rm{lim}}}\limits_{n\to \infty }{M}_{n}={M}^{+}\). Then

$$\frac{m\left(\sigma ^{\prime} \right)}{\mathrm{ln}\,\lambda }={{\mathtt{E}}}^{{{\mathtt{P}}}^{\sigma ^{\prime} }}{\rho }_{1}\geqslant {{\mathtt{E}}}^{{{\mathtt{P}}}^{\sigma ^{\prime} }}{\rho }_{1}{1}_{\left[{M}_{n},\infty \right)}\left({\rho }_{1}\right)\geqslant {M}_{n},$$

where

$${1}_{\left[{M}_{n},\infty \right)}\left(x\right)\triangleq \left\{\begin{array}{ll}1,&x\in \left[{M}_{n},\infty \right)\\ 0,&x\notin \left[{M}_{n},\infty \right).\end{array}\right.$$

Hence,

$$\mathop{{\rm{lim}}}\limits_{\sigma ^{\prime} \to \infty }\frac{m\left(\sigma ^{\prime} \right)}{\mathrm{ln}\,\lambda }\geqslant {M}_{n}\mathop{\to }\limits_{n\to \infty }{M}^{+}.$$

This formula, in combination with (A.1.12), gives the requisite equality \(\mathop{{\rm{lim}}}\limits_{\sigma ^{\prime} \to \infty }\frac{m\left(\sigma ^{\prime} \right)}{\mathrm{ln}\,\lambda }={M}^{+}\).

The equality \(\mathop{{\rm{lim}}}\limits_{\sigma ^{\prime} \to -\infty }\frac{m\left(\sigma ^{\prime} \right)}{\mathrm{ln}\,\lambda }={M}^{-}\) is established by analogy. The proof of Proposition 1 is complete.

Corollary 1.

Let the hypotheses of Proposition 1 hold. Then the generating function \(\varphi \left(\sigma \right)\), where σ ⩾ 0, has the following properties.

1. If M+ < 0, then the function φ(σ) is monotonically decreasing from value 1 to value 0.

2. If M+ > 0 and Eρ1 < 0, then there exist \(0<{\sigma }_{0}<{\sigma }_{1}<{\sigma }_{\frac{1}{\beta }}<\infty \) such that:

(a) \(0<\varphi \left({\sigma }_{0}\right)=\mathop{\min }\limits_{\sigma \in {{\mathbb{R}}}^{+}}\varphi \left(\sigma \right)<1\), i.e., σ0 is a unique nonnegative root of the equation

$$\frac{d}{d\sigma }\varphi \left({\sigma }_{0}\right)=0;$$

(b) \(\varphi \left(0\right)=\varphi \left({\sigma }_{1}\right)=1\), where σ1 ≠ 0 is a unique nontrivial root of the equation \(\varphi \left(\sigma \right)=1\), and also

\(0<\varphi \left(\sigma \right)<1\) for any \(\sigma \in \left(0,{\sigma }_{1}\right)\),

\(\varphi \left(\sigma \right)\geqslant 1\) for any \(\sigma \geqslant {\sigma }_{1}\);

(c) for any \(\beta \in \left(0,1\right)\) there exists a unique root \({\sigma }_{\frac{1}{\beta }}\) of the equation \(\varphi \left(\sigma \right)=\frac{1}{\beta }\), and also

—if \(\sigma <{\sigma }_{\frac{1}{\beta }}\), then \(\varphi \left(\sigma \right)<\frac{1}{\beta }\),

—if \(\sigma \geqslant {\sigma }_{\frac{1}{\beta }}\), then \(\varphi \left(\sigma \right)\geqslant \frac{1}{\beta }\).

3. If Eρ1 ⩾ 0, then for any \(\beta \in \left(0,1\right]\) there exists a unique nontrivial root \({\sigma }_{\frac{1}{\beta }}\) of the equation \(\varphi \left({\sigma }_{\frac{1}{\beta }}\right)=\frac{1}{\beta }\), and also \(\varphi \left(\sigma \right)\geqslant \frac{1}{\beta }\) for \(\sigma \geqslant {\sigma }_{\frac{1}{\beta }}\).
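
Numerically, the roots σ1 and \({\sigma }_{\frac{1}{\beta }}\) from items 2–3 can be bracketed and found by bisection, since φ is continuous, equals 1 at zero, and tends to infinity when M+ > 0. A minimal sketch follows (Python with SciPy; the distribution of ρ1, λ, and β are illustrative assumptions satisfying Eρ1 < 0 and M+ > 0, i.e., case 2):

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative assumptions for item 2 of Corollary 1: E rho_1 = -0.4 < 0, M+ = 1 > 0.
lam, beta = 2.0, 0.9
vals, probs = np.array([-1.0, 1.0]), np.array([0.7, 0.3])

def phi(sigma):
    return np.sum(probs * lam ** (sigma * vals))

def root_of(level, lo=1e-9):
    """Unique nontrivial root of phi(sigma) = level on (0, infinity)."""
    hi = 1.0
    while phi(hi) < level:          # phi -> infinity as sigma -> infinity since M+ > 0
        hi *= 2.0
    return brentq(lambda s: phi(s) - level, lo, hi)

sigma_1 = root_of(1.0)              # phi(sigma_1) = 1, item 2(b)
sigma_1_beta = root_of(1.0 / beta)  # phi(sigma_{1/beta}) = 1/beta, item 2(c)
print(sigma_1, sigma_1_beta)        # and indeed sigma_1 < sigma_{1/beta}
```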

Proof of Corollary 1. In accordance with item 2 of Proposition 1 (also, see (A.1.8)), the function \(\varphi \left(\sigma \right)\) is strictly convex. As is known [15], the derivative of a strictly convex function (here, \(\frac{d}{d\sigma }\varphi \left(\sigma \right)\)) is continuous and monotonically increasing. In addition, the derivatives \(\frac{d\varphi }{d\sigma }\left(0\right)\) and \(\frac{{d}^{2}\varphi }{d{\sigma }^{2}}\left(0\right)\) are obviously well-defined as the right derivatives at the zero point and have the form

$$\frac{d\varphi }{d\sigma }\left(0\right)=\mathrm{ln}\,\lambda {\mathtt{E}}{\rho }_{1},\quad \frac{{d}^{2}\varphi }{d{\sigma }^{2}}\left(0\right)={\left(\mathrm{ln}\,\lambda \right)}^{2}{\mathtt{E}}{\rho }_{1}^{2}.$$

By the definition of the generating function (see formula (A.1.1)), the parameter λ satisfies the inequality \(\mathrm{ln}\,\lambda >0;\) therefore, the sign of \(\frac{d\varphi }{d\sigma }\left(0\right)\) coincides with that of Eρ1. Consequently, \(\varphi \left(\sigma \right)\) depends on σ in one of the following possible ways.

Case 1. M+ < 0. Then Eρ1 < 0, \(\frac{d\varphi }{d\sigma }\left(0\right)<0\), and \(\frac{d\varphi }{d\sigma }\left(\sigma \right)\uparrow 0\) as \(\sigma \to \infty \). In other words, for σ ⩾ 0 the function φ(σ) is monotonically decreasing from value 1 (\(\varphi \left(0\right)=1\)) to value 0.

Case 2. Let M+ > 0, but Eρ1 < 0 (meaning that \(\frac{d\varphi }{d\sigma }\left(0\right)<0\)). In addition, in this case

$$\mathop{{\rm{lim}}}\limits_{\sigma \to \infty }\frac{d\varphi }{d\sigma }\left(\sigma \right)\geqslant \mathrm{ln}\,\lambda \mathop{{\rm{lim}}}\limits_{\sigma \to \infty }\left\{{\mathtt{E}}{\rho }_{1}{\lambda }^{\sigma {\rho }_{1}}{1}_{\left\{{\rho }_{1}\in {O}_{{M}^{+}}(\varepsilon )\cap (0,\infty )\right\}}\right\}=\infty ,$$

where \({O}_{{M}^{+}}(\varepsilon )\) denotes the ε-neighborhood of the point M+ and ε > 0 is arbitrary. As a result, there exist values σ0, σ1, and \({\sigma }_{\frac{1}{\beta }}\in {{\mathbb{R}}}^{+}\) such that:

2.1. σ0 is a unique root of the equation \(\frac{d\varphi }{d\sigma }\left(\sigma \right)=0\) (by the Cauchy theorem, since \(\frac{d\varphi }{d\sigma }\left(0\right)<0\), \(\mathop{{\rm{lim}}}\limits_{\sigma \to \infty }\frac{d\varphi }{d\sigma }\left(\sigma \right)=+\infty \), and \(\frac{d\varphi }{d\sigma }\left(\sigma \right)\) is monotonically increasing in σ). Moreover, due to the Fermat theorem, \(0<\varphi \left({\sigma }_{0}\right)=\mathop{\min }\limits_{\sigma \in {{\mathbb{R}}}^{1}}\varphi \left(\sigma \right)<\varphi \left(0\right)=1\);

2.2. σ1 > 0 is a unique nontrivial solution of the equation \(\varphi \left(\sigma \right)=1\left(=\varphi \left(0\right)\right)\). Moreover, if: (a) \(\sigma \in \left(0,{\sigma }_{1}\right]\), then \(\varphi \left(\sigma \right)\leqslant 1\), (b) σ > σ1, then \(\varphi \left(\sigma \right)>1\);

2.3. \({\sigma }_{\frac{1}{\beta }}>0\) is the root of the equation \(\varphi \left(\sigma \right)=\frac{1}{\beta }\), where \(\beta \in \left(0,1\right)\). Moreover, \({\sigma }_{1}<{\sigma }_{\frac{1}{\beta }}\) and if \(\sigma >{\sigma }_{\frac{1}{\beta }}\), then \(\varphi \left(\sigma \right)>\varphi \left({\sigma }_{\frac{1}{\beta }}\right)\).

Case 3. Let Eρ1 ⩾ 0. Then \(\frac{d\varphi }{d\sigma }\left(0\right)\geqslant 0\) and, since \(\frac{d\varphi }{d\sigma }\left(\sigma \right)\) is monotonically increasing in σ, the function \(\varphi \left(\sigma \right)\) is monotonically increasing in σ for σ ⩾ 0, with \(\varphi \left(\sigma \right)\geqslant \varphi \left(0\right)=1\). Therefore, for any \(\beta \in \left(0,1\right]\) there exists a unique root \({\sigma }_{\frac{1}{\beta }}\) of the equation \(\varphi \left({\sigma }_{\frac{1}{\beta }}\right)=\frac{1}{\beta }\). Moreover, \(\varphi \left(\sigma \right)\geqslant \frac{1}{\beta }\) for \(\sigma \geqslant {\sigma }_{\frac{1}{\beta }}\).

The proof of Corollary 1 is complete.

Appendix 2

This appendix presents the proofs of Theorems 1 and 2.

For proving Theorem 1, use an auxiliary assertion on the solution of the following recursive relation in \({w}^{k}:{{\mathbb{R}}}^{+}\to {{\mathbb{R}}}^{+}:\)

$$\left\{\begin{array}{l}{w}^{k}\left(x\right)=\beta {\mathtt{E}}{w}^{k+1}\left(x{\lambda }^{{\rho }_{1}}\right)\\ {w}^{k}\left(x\right){| }_{k = N}=A{x}^{\sigma }+B.\end{array}\right.$$
(A.2.1)

Lemma 1.

Let the hypotheses of Theorem 1 hold. The family \({\{{w}^{k}(x)\}}_{k\in {N}_{0}}\) is the unique solution of (A.2.1) if and only if

$${w}^{k}\left(x\right)={A}_{k}{x}^{\sigma }+{B}_{k},\quad k\in {N}_{0},$$
(A.2.2)

where \({\{{A}_{k}\}}_{k\in {N}_{0}}\) and \({\{{B}_{k}\}}_{k\in {N}_{0}}\) are given by (19).

Proof of Lemma 1. 1. Necessity. Demonstrate that the family of functions (A.2.2) is the solution of system (A.2.1). Employ the method of mathematical induction. For k = N, from (A.2.1) it follows that \({w}^{k}\left(x\right){| }_{k = N}=A{x}^{\sigma }+B,x\in {{\mathbb{R}}}^{+}\).

Let

$${w}^{k+1}\left(x\right)={A}_{k+1}{x}^{\sigma }+{B}_{k+1},$$

where

$${A}_{k+1}=A{\left(\beta {\mathtt{E}}{\lambda }^{\sigma {\rho }_{1}}\right)}^{N-k-1},\quad {B}_{k+1}={\beta }^{N-k-1}B.$$

Establish the equality \({w}^{k}\left(x\right)={A}_{k}{x}^{\sigma }+{B}_{k}\). Indeed, (A.2.1) and the inductive hypothesis considered jointly imply

$$\begin{array}{c}{w}^{k}\left(x\right)=\beta {\mathtt{E}}{w}^{k+1}\left(x{\lambda }^{{\rho }_{1}}\right)=\beta {\mathtt{E}}\left({A}_{k+1}{x}^{\sigma }{\lambda }^{\sigma {\rho }_{1}}+{B}_{k+1}\right)\\ =\beta {A}_{k+1}{x}^{\sigma }{\mathtt{E}}{\lambda }^{\sigma {\rho }_{1}}+\beta {B}_{k+1}=A{\left(\beta {\mathtt{E}}{\lambda }^{\sigma {\rho }_{1}}\right)}^{N-k}{x}^{\sigma }+{\beta }^{N-k}B={A}_{k}{x}^{\sigma }+{B}_{k}\end{array}$$

for any \(x\in {{\mathbb{R}}}^{+}\). Thus, the requisite result is obtained, and the necessity part of Lemma 1 is proved.

2. Sufficiency. Show that the family of functions (A.2.2) satisfies (A.2.1). From (A.2.2) it follows that

$${w}^{k+1}\left(x{\lambda }^{{\rho }_{1}}\right)\triangleq {A}_{k+1}{x}^{\sigma }{\lambda }^{\sigma {\rho }_{1}}+{B}_{k+1}.$$

Next, calculate the expectation of the right- and left-hand sides of the last equality and multiply the resulting expression by β. In view of (20), this gives

$$\begin{array}{c}\beta {\mathtt{E}}{w}^{k+1}\left(x{\lambda }^{{\rho }_{1}}\right)=\beta {\mathtt{E}}\left({A}_{k+1}{x}^{\sigma }{\lambda }^{\sigma {\rho }_{1}}+{B}_{k+1}\right)\\ =\left(\beta {A}_{k+1}{\mathtt{E}}{\lambda }^{\sigma {\rho }_{1}}\right){x}^{\sigma }+\beta {B}_{k+1}={A}_{k}{x}^{\sigma }+{B}_{k}={w}^{k}\left(x\right).\end{array}$$

This finally establishes the sufficiency part of Lemma 1. Note that uniqueness is obvious. The proof of Lemma 1 is complete.
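
The closed form (A.2.2) is also easy to confirm against a direct backward evaluation of (A.2.1). A minimal sketch (all constants and the two-point distribution of ρ1 are illustrative assumptions, not values from the paper):

```python
import numpy as np

# Illustrative assumptions.
lam, beta, A, B, sigma, N = 1.5, 0.95, 2.0, 1.0, 0.7, 10
vals, probs = np.array([-1.0, 1.0]), np.array([0.5, 0.5])
phi_sigma = np.sum(probs * lam ** (sigma * vals))   # E lambda^{sigma rho_1}

def w(k, x):
    """Backward recursion (A.2.1): w^N(x) = A x^sigma + B, w^k = beta E w^{k+1}(x lam^rho)."""
    if k == N:
        return A * x ** sigma + B
    return beta * np.sum(probs * np.array([w(k + 1, x * lam ** v) for v in vals]))

x = 3.0
for k in (0, 5, 10):
    Ak = A * (beta * phi_sigma) ** (N - k)          # coefficients of the closed form (A.2.2)
    Bk = beta ** (N - k) * B
    assert abs(w(k, x) - (Ak * x ** sigma + Bk)) < 1e-9
print("closed form (A.2.2) matches the recursion (A.2.1)")
```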

Proof of Theorem 1. As is known [3], the solution of the recursive relation (9) admits of representation (18). In turn, from (18) it follows that for any \(k\in {N}_{0}\) there are three mutually exclusive cases: 1) \({\Gamma }_{k}=\varnothing \) \(\left({C}_{k}={{\mathbb{R}}}^{+}\right)\), 2) \({C}_{k}=\varnothing \) \(\left({\Gamma }_{k}={{\mathbb{R}}}^{+}\right)\), and 3) \({C}_{k}\ne \varnothing ,{\Gamma }_{k}\ne \varnothing \), where \({C}_{k}\cup {\Gamma }_{k}={{\mathbb{R}}}^{+}\). Consider each in detail.

Case 1. From (9) it follows that for any \(x\in {{\mathbb{R}}}^{+}\) the cut value \({v}^{k}\left(x\right)\) satisfies the recursive relation

$$\left\{\begin{array}{l}{v}^{k}\left(x\right)=\beta {\mathtt{E}}{v}^{k+1}\left(x{\lambda }^{{\rho }_{1}}\right)\\ {v}^{k}\left(x\right){| }_{k = N}=A{x}^{\sigma }+B.\end{array}\right.$$
(A.2.3)

Due to Lemma 1, the unique solution of (A.2.3) has the form

$${v}^{k}\left(x\right)={A}_{k}{x}^{\sigma }+{B}_{k},$$
(A.2.4)

where Ak and Bk satisfy the recursive relations (20).

Case 2. From (18) it follows that for any \(x\in {{\mathbb{R}}}^{+}\) the solution of (9) has the form

$${v}^{k}\left(x\right)=A{x}^{\sigma }+B.$$
(A.2.5)

Case 3. Since \({C}_{k}\ne \varnothing \), from (15) it follows that for any \(0\leqslant n\leqslant k\) the set \({C}_{n}\) is non-empty, \({C}_{n}\ne \varnothing \). Therefore, for any \(x\in {C}_{n}\) the cut value \({v}^{n}\left(x\right)\) satisfies the recursive relation (A.2.3), which has solution (A.2.4). Hence, in view of (15), for any \(n\leqslant k\) and \(x\in {{\mathbb{R}}}^{+}\) expression (18) can be written as

$$\begin{array}{l}{v}^{n}\left(x\right)=\left(A{x}^{\sigma }+B\right){1}_{\left\{x\in {\Gamma }_{n}\right\}}+\left({A}_{n}{x}^{\sigma }+{B}_{n}\right){1}_{\left\{x\in {C}_{n}\right\}}\\ =\left(A{x}^{\sigma }+B\right)+\left[{A}_{n}{x}^{\sigma }+{B}_{n}-A{x}^{\sigma }-B\right]{1}_{\left\{x\in {C}_{n}\right\}}.\end{array}$$
(A.2.6)

The last equality here is immediate from the definition of the sets Cn and Γn.

From (A.2.6) and (12) it obviously follows that, for any \(x\in {{\mathbb{R}}}^{+}\),

$${v}^{n}\left(x\right)-A{x}^{\sigma }-B=\left[{A}_{n}{x}^{\sigma }+{B}_{n}-A{x}^{\sigma }-B\right]{1}_{\left\{x\in {C}_{n}\right\}}\geqslant 0.$$
(A.2.7)

Recall that \({C}_{n}\ne \varnothing \). Then from (A.2.7) it follows that the set \({C}_{n}\left(n\leqslant k\right)\) can be represented as

$$\begin{array}{c}{C}_{n}\,=\,\left\{x\in {{\mathbb{R}}}^{+}:{v}^{n}\left(x\right)-A{x}^{\sigma }-B>0\right\}\\ \,=\,\left\{x\in {{\mathbb{R}}}^{+}:\left[{A}_{n}{x}^{\sigma }+{B}_{n}-A{x}^{\sigma }-B\right]{1}_{\left\{x\in {C}_{n}\right\}}>0\right\}\\ \,=\,\left\{x\in {{\mathbb{R}}}^{+}:{A}_{n}{x}^{\sigma }+{B}_{n}>A{x}^{\sigma }+B\right\},\end{array}$$
(A.2.8)

which proves (22). Consequently, by (A.2.8) the set \({\Gamma }_{n}={{\mathbb{R}}}^{+}\backslash {C}_{n}\) admits of representation (23).

Consider the right-hand side of equality (A.2.7). Due to (A.2.8), for any \(x\in {{\mathbb{R}}}^{+}\),

$$\begin{array}{l}\left[{A}_{n}{x}^{\sigma }+{B}_{n}-A{x}^{\sigma }-B\right]{1}_{\left\{x\in {{\mathbb{R}}}^{+}:{A}_{n}{x}^{\sigma }+{B}_{n} \,{>}\,A{x}^{\sigma }+B\right\}}\\ =\max \left\{{A}_{n}{x}^{\sigma }+{B}_{n}-A{x}^{\sigma }-B,0\right\}.\end{array}$$

This equality, in combination with (23) and (A.2.5), finally establishes the requisite equality (24). The proof of Theorem 1 is complete.

Proof of Theorem 2. In accordance with Theorem 1, for any \(k\in {N}_{0}\) the stopping domain Γk has form (23), hence being closed. Therefore, its interior \(\left({\rm{int}}{\Gamma }_{k}\right)\) admits of representation (25). Clearly, the set ∂Γk given by (26) consists of all boundary points of the set Γk, and \(\partial {\Gamma }_{k}\ne \varnothing \) for any \(k\in {N}_{0}\). Therefore, the elements \(x\left(k\right)\in \partial {\Gamma }_{k}\) satisfy the equation

$${A}_{k}{x}^{\sigma }+{B}_{k}=A{x}^{\sigma }+B.$$
(A.2.9)

If the hypotheses of Theorem 1 and at least one of conditions I–IV of Theorem 2 are satisfied, then in each of the cases considered there exists a unique nonnegative solution \(x\left(k\right)\) of (A.2.9). Consequently, ∂Γk is a singleton, i.e., \(\partial {\Gamma }_{k}=\left\{x\left(k\right)\right\}\). Assertions I–IV of Theorem 2 are verified directly. The proof of Theorem 2 is complete.
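
Since (A.2.9) is a scalar equation in xσ, the free-boundary point resolves explicitly as \(x\left(k\right)={\left(\left(B-{B}_{k}\right)/\left({A}_{k}-A\right)\right)}^{1/\sigma }\) whenever the ratio is positive. A minimal sketch under illustrative assumptions chosen so that Ak > A and Bk < B (these are our own numbers; we do not reproduce the paper's conditions I–IV here):

```python
import numpy as np

# Illustrative assumptions: two-point rho_1 with positive drift so that beta*phi(sigma) > 1.
lam, beta, A, B, sigma, N = 1.5, 0.95, 2.0, 1.0, 2.0, 10
vals, probs = np.array([-1.0, 1.0]), np.array([0.3, 0.7])
phi_sigma = np.sum(probs * lam ** (sigma * vals))

def x_k(k):
    """Unique nonnegative root of (A.2.9): A_k x^sigma + B_k = A x^sigma + B."""
    Ak = A * (beta * phi_sigma) ** (N - k)
    Bk = beta ** (N - k) * B
    return ((B - Bk) / (Ak - A)) ** (1.0 / sigma)

print([round(x_k(k), 4) for k in range(N)])   # the free boundary points x(0), ..., x(N-1)
```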

Appendix 3

The proof of all assertions of Theorem 3 is based on the following auxiliary result.

Lemma 2.

Let the hypotheses of Theorem 3 hold, and let \(w:{{\mathbb{R}}}^{+}\to {{\mathbb{R}}}^{1}\) be a Borel function satisfying, for \(x\in {{\mathbb{R}}}^{+}\), the equation

$$w\left(x\right)=\beta {\mathtt{E}}w\left(x{\lambda }^{{\rho }_{1}}\right).$$
(A.3.1)

Then for any \(x\in {{\mathbb{R}}}^{+}\) Eq. (A.3.1) has the unique nontrivial solution

$$w\left(x\right)={A}^{* }{x}^{{\sigma }_{\frac{1}{\beta }}},$$
(A.3.2)

where A* > 0 is some constant and \({\sigma }_{\frac{1}{\beta }}\) is a unique root of the equation \(\beta \varphi \left(\sigma \right)=1\).

Remark 5. Lemma 2 establishes the structure of the solution of Eq. (A.3.1). However, the value of the constant A* is still unknown.

Proof of Lemma 2. Use the method of mathematical induction. Consider the recursive relation

$$\left\{\begin{array}{l}{w}^{k}\left(x\right)=\beta {\mathtt{E}}{w}^{k-1}\left(x{\lambda }^{{\rho }_{1}}\right),\quad k\geqslant 1,\\ {w}^{0}\left(x\right)={A}^{* }{x}^{{\sigma }_{\frac{1}{\beta }}},\end{array}\right.$$
(A.3.3)

where \(x\in {{\mathbb{R}}}^{+}\) and A* is some positive constant.

From (A.3.3) it follows that

$${w}^{1}\left(x\right)=\beta {\mathtt{E}}{w}^{0}\left(x{\lambda }^{{\rho }_{1}}\right)=\beta {\mathtt{E}}{A}^{* }{\left(x{\lambda }^{{\rho }_{1}}\right)}^{{\sigma }_{\frac{1}{\beta }}}=\beta {A}^{* }{x}^{{\sigma }_{\frac{1}{\beta }}}\varphi \left({\sigma }_{\frac{1}{\beta }}\right).$$

The condition \(\beta \varphi \left({\sigma }_{\frac{1}{\beta }}\right)=1\) implies \({w}^{1}\left(x\right)={A}^{* }{x}^{{\sigma }_{\frac{1}{\beta }}}\). Now let \({w}^{k-1}(x)={A}^{* }{x}^{{\sigma }_{\frac{1}{\beta }}}\). Show that (A.3.3) leads to the equality \({w}^{k}(x)={A}^{* }{x}^{{\sigma }_{\frac{1}{\beta }}}\). Indeed, due to (A.3.3),

$${w}^{k}\left(x\right)=\beta {\mathtt{E}}{A}^{* }{\left(x{\lambda }^{{\rho }_{1}}\right)}^{{\sigma }_{\frac{1}{\beta }}}=\beta {A}^{* }{x}^{{\sigma }_{\frac{1}{\beta }}}\varphi \left({\sigma }_{\frac{1}{\beta }}\right)={A}^{* }{x}^{{\sigma }_{\frac{1}{\beta }}}.$$

The inductive step is proved. Hence, for any k ⩾ 1, \(x\in {{\mathbb{R}}}^{+}\), and \(\beta \in \left(0,1\right]\), the equality \({w}^{k}\left(x\right)={A}^{* }{x}^{{\sigma }_{\frac{1}{\beta }}}\) holds. Consequently,

$$w\left(x\right)\triangleq \mathop{{\rm{lim}}}\limits_{k\to \infty }{w}^{k}\left(x\right)={A}^{* }{x}^{{\sigma }_{\frac{1}{\beta }}}.$$

The uniqueness of the solution of Eq. (A.3.1) follows from the uniqueness of the corresponding limit. The proof of Lemma 2 is complete.
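
That \(w\left(x\right)={A}^{* }{x}^{{\sigma }_{\frac{1}{\beta }}}\) is a fixed point of (A.3.1) can also be seen in one line: \(\beta {\mathtt{E}}w\left(x{\lambda }^{{\rho }_{1}}\right)=\beta {A}^{* }{x}^{{\sigma }_{\frac{1}{\beta }}}\varphi \left({\sigma }_{\frac{1}{\beta }}\right)=w\left(x\right)\) once \(\beta \varphi \left({\sigma }_{\frac{1}{\beta }}\right)=1\). A minimal numeric check (the distribution of ρ1, λ, β, and the constant A* are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import brentq

lam, beta, A_star = 2.0, 0.9, 1.0
vals, probs = np.array([-1.0, 1.0]), np.array([0.7, 0.3])

phi = lambda s: np.sum(probs * lam ** (s * vals))
sigma_b = brentq(lambda s: beta * phi(s) - 1.0, 1e-9, 50.0)   # root of beta * phi = 1

w = lambda x: A_star * x ** sigma_b
for x in (0.5, 1.0, 4.0):
    # Fixed-point property (A.3.1): beta * E w(x lam^{rho_1}) = w(x).
    assert abs(beta * np.sum(probs * w(x * lam ** vals)) - w(x)) < 1e-9
print("w(x) = A* x^{sigma_{1/beta}} solves (A.3.1)")
```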

Proof of Theorem 3. Since Eq. (A.3.1) has the unique solution (A.3.2) for any \(x\in {{\mathbb{R}}}^{+}\), it also has the same solution for any \(x\in C\subseteq {{\mathbb{R}}}^{+}\). Therefore, due to (35) and Lemma 2, the equality

$$v\left(x\right)=w\left(x\right)={A}^{* }{x}^{{\sigma }_{\frac{1}{\beta }}}$$
(A.3.4)

holds for any xC. Hence, by (A.3.4) from (34) and (35) it follows that, for any \(x\in {{\mathbb{R}}}^{+}\),

$$\begin{array}{l}v\left(x\right)=\left(A{x}^{\sigma }+B\right){1}_{\left\{x\in \Gamma \right\}}+\beta {\mathtt{E}}v\left(x{\lambda }^{{\rho }_{1}}\right){1}_{\left\{x\in C\right\}}\\ =(A{x}^{\sigma }+B){1}_{\{x\in \Gamma \}}+v(x){1}_{\{x\in C\}}\\ =(A{x}^{\sigma }+B){1}_{\{x\in \Gamma \}}+{A}^{* }{x}^{{\sigma }_{\frac{1}{\beta }}}{1}_{\{x\in C\}}.\end{array}$$
(A.3.5)

As \({1}_{\left\{x\in \Gamma \right\}}=1-{1}_{\left\{x\in C\right\}}\), for any \(x\in {{\mathbb{R}}}^{+}\) relation (A.3.5) leads to

$$v\left(x\right)-A{x}^{\sigma }-B=\left({A}^{* }{x}^{{\sigma }_{\frac{1}{\beta }}}-A{x}^{\sigma }-B\right){1}_{\left\{x\in C\right\}}\geqslant 0.$$
(A.3.6)

The inequality in (A.3.6) is immediate from inequalities (30). A direct check shows that, due to (A.3.6),

$$\left({A}^{* }{x}^{{\sigma }_{\frac{1}{\beta }}}-A{x}^{\sigma }-B\right){1}_{\left\{x\in C\right\}}=\max \left[{A}^{* }{x}^{{\sigma }_{\frac{1}{\beta }}}-A{x}^{\sigma }-B,0\right]$$
(A.3.7)

for any \(x\in {{\mathbb{R}}}^{+}\). In turn, (A.3.6) and (A.3.7) considered jointly imply (40).

To proceed, derive the requisite representation of the sets C and Γ. Indeed, from (40) and the definition of the set C it follows that

$$C=\left\{x\in {{\mathbb{R}}}^{+}:v\left(x\right)-A{x}^{\sigma }-B>0\right\}=\left\{x\in {{\mathbb{R}}}^{+}:{A}^{* }{x}^{{\sigma }_{\frac{1}{\beta }}}>A{x}^{\sigma }+B\right\}.$$

In view of the definition of the set \(\Gamma ={{\mathbb{R}}}^{+}\backslash C\), this gives

$$\Gamma =\left\{x\in {{\mathbb{R}}}^{+}:{A}^{* }{x}^{{\sigma }_{\frac{1}{\beta }}}\leqslant A{x}^{\sigma }+B\right\}.$$
(A.3.8)

Now find the value of the constant A*. For this purpose, first note that inequality (30) and (A.3.7) lead to the optimization problem

$$\left(v\left(x\right)-A{x}^{\sigma }-B\right)\to {\rm inf}_{x\in {{\mathbb{R}}}^{+}}.$$
(A.3.9)

From equality (A.3.6) it follows that problem (A.3.9) is equivalent to

$$\max \left[{A}^{* }{x}^{{\sigma }_{\frac{1}{\beta }}}-A{x}^{\sigma }-B,0\right]\to {\rm inf}_{x\in {{\mathbb{R}}}^{+}}.$$
(A.3.10)

On the other hand, (A.3.6) and (A.3.7) considered jointly imply

$${\rm inf}_{x\in {{\mathbb{R}}}^{+}}\max \left[{A}^{* }{x}^{{\sigma }_{\frac{1}{\beta }}}-A{x}^{\sigma }-B,0\right]=0.$$
(A.3.11)

The natural question is whether the greatest lower bound in (A.3.11) is achieved or not. In other words, does there exist a value xΓ > 0 such that

$${A}^{* }{x}_{\Gamma }^{{\sigma }_{\frac{1}{\beta }}}=A{x}_{\Gamma }^{\sigma }+B?$$
(A.3.12)

In accordance with the Fermat theorem, such a value xΓ > 0 exists if the equation

$$d\left({A}^{* }{x}^{{\sigma }_{\frac{1}{\beta }}}-A{x}^{\sigma }-B\right)/dx=0$$

is solvable. Hence,

$${\sigma }_{\frac{1}{\beta }}{A}^{* }{x}_{\Gamma }^{{\sigma }_{\frac{1}{\beta }}-1}=\sigma A{x}_{\Gamma }^{\sigma -1}.$$
(A.3.13)

Thus, (A.3.12) and (A.3.13) considered jointly yield the system of nonlinear algebraic equations in A* and xΓ of the form

$$\left\{\begin{array}{l}{A}^{* }{x}_{\Gamma }^{{\sigma }_{\frac{1}{\beta }}}=A{x}_{\Gamma }^{\sigma }+B\\ {\sigma }_{\frac{1}{\beta }}{A}^{* }{x}_{\Gamma }^{{\sigma }_{\frac{1}{\beta }}}=\sigma A{x}_{\Gamma }^{\sigma }.\end{array}\right.$$
(A.3.14)

Under the hypotheses of Theorem 3 (i.e., under (37) and (38) or (39)), system (A.3.14) has the unique solution (41), (42), which is easy to establish.
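
The solution follows by eliminating A*: dividing the second equation of (A.3.14) by the first gives \({\sigma }_{\frac{1}{\beta }}=\sigma A{x}_{\Gamma }^{\sigma }/\left(A{x}_{\Gamma }^{\sigma }+B\right)\), whence \(A{x}_{\Gamma }^{\sigma }\left(\sigma -{\sigma }_{\frac{1}{\beta }}\right)=B{\sigma }_{\frac{1}{\beta }}\), and then A* is recovered from either equation. A minimal sketch of this elimination (the numerical values of A, B, σ, and \({\sigma }_{\frac{1}{\beta }}\) are illustrative assumptions with signs making the root positive; in practice \({\sigma }_{\frac{1}{\beta }}\) is the root of βφ(σ) = 1 from Corollary 1):

```python
# Illustrative assumptions; sigma_b stands for sigma_{1/beta}, taken here as a given
# number, and the signs are chosen so the expression under the root is positive.
A, B, sigma, sigma_b = 2.0, 1.0, 3.0, 1.5

x_gamma = (B * sigma_b / (A * (sigma - sigma_b))) ** (1.0 / sigma)   # cf. (42)
A_star = (sigma / sigma_b) * A * x_gamma ** (sigma - sigma_b)        # cf. (41)

# Verify both equations of (A.3.14).
assert abs(A_star * x_gamma ** sigma_b - (A * x_gamma ** sigma + B)) < 1e-12
assert abs(sigma_b * A_star * x_gamma ** sigma_b - sigma * A * x_gamma ** sigma) < 1e-12
print(x_gamma, A_star)
```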

The next step is to demonstrate that xΓ > 0 given by (42) is the requisite free boundary. First, observe that by (A.3.8) the set Γ can be represented as Γ = ∂Γ ∪ intΓ, where

$${\rm{int}}\Gamma \triangleq \left\{x>0:{A}^{* }{x}^{{\sigma }_{\frac{1}{\beta }}}<A{x}^{\sigma }+B\right\}$$

denotes the interior of the set Γ and

$$\partial \Gamma =\left\{x>0:{A}^{* }{x}^{{\sigma }_{\frac{1}{\beta }}}=A{x}^{\sigma }+B\right\}$$

is the free boundary separating the set C from intΓ.

Obviously, \(\Gamma \ne \varnothing \) if \({\rm{int}}\Gamma \ne \varnothing \), i.e., if there exists at least one x > 0 such that

$${A}^{* }{x}^{{\sigma }_{\frac{1}{\beta }}}<A{x}^{\sigma }+B;$$

or if \(\partial \Gamma \ne \varnothing \), i.e., if Eq. (A.3.12) has at least one solution \(x\in {{\mathbb{R}}}^{+}\). Hence, due to (A.3.9), there exists a value xΓ > 0 such that

$${x}_{\Gamma }=\arg \left\{x>0:{A}^{* }{x}^{{\sigma }_{\frac{1}{\beta }}}=A{x}^{\sigma }+B\right\},$$

i.e., xΓ is a unique extreme point of the set Γ. Therefore, Γ admits of representation (44).

To finish the proof, note that (45) is immediate from the definition of the optimal stopping time τ0 and the equality

$${\tau }^{0}={\rm inf}\left\{n\geqslant 0:{S}_{n}\in \Gamma \right\}=\left\{\begin{array}{ll}\min \left\{n\geqslant 0:{S}_{n}\in \Gamma \right\},&{\rm{if}}\ {S}_{n}\in \Gamma \ {\rm{for}}\ {\rm{some}}\ n,\\ \infty ,&{\rm{if}}\ {S}_{n}\notin \Gamma \ {\rm{for}}\ {\rm{all}}\ n.\end{array}\right.$$

The proof of Theorem 3 is complete.
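
Finally, the threshold rule (45) is straightforward to simulate. The sketch below is illustrative only: it assumes a two-point ρ1 with positive drift and a one-sided stopping set Γ = [xΓ, ∞), with all numeric values our own; the horizon is capped to emulate τ0 = ∞ when the walk never enters Γ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumptions: S_n = x0 * lambda^{rho_1 + ... + rho_n}, positive drift,
# stopping set Gamma = [x_gamma, infinity), horizon capped at n_max.
lam, x0, x_gamma, n_max = 1.5, 1.0, 3.0, 10_000
vals, probs = np.array([-1.0, 1.0]), np.array([0.45, 0.55])

def tau0():
    """First entry time of S_n into Gamma; np.inf stands for 'never' as in (45)."""
    s = x0
    for n in range(1, n_max + 1):
        s *= lam ** rng.choice(vals, p=probs)
        if s >= x_gamma:
            return n
    return np.inf

print([tau0() for _ in range(10)])
```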



Cite this article

Zverev, O., Khametov, V. & Shelemekh, E. Optimal Stopping Time for Geometric Random Walks with Power Payoff Function. Autom Remote Control 81, 1192–1210 (2020). https://doi.org/10.1134/S0005117920070036
