Abstract
Two optimal stopping problems for geometric random walks with a power payoff function of the observer are solved, on the finite and on the infinite horizon. For both problems, an explicit form of the cut value is established, together with the optimal stopping rules. It is proved that the optimal stopping rules are nonrandomized threshold rules, and that they describe the corresponding free boundary, for which an explicit form is also presented.
References
Rozov, A.K., Optimal'nye pravila ostanovki i ikh primeneniya (Optimal Stopping Rules and Their Applications), St. Petersburg: Politekhnika, 2009.
Shiryaev, A.N., Statisticheskii posledovatel'nyi analiz, Moscow: Nauka, 1969. Translated under the title Statistical Sequential Analysis, Providence: American Mathematical Society, 1973.
Shiryaev, A.N., Osnovy stokhasticheskoi finansovoi matematiki, tom 2: Teoriya, Moscow: Fazis, 1998. Translated under the title Essentials of Stochastic Finance: Facts, Models, Theory, Singapore: World Scientific, 1999.
Arkin, V.I., Slastnikov, A.D., and Arkina, S.V., Stimulation of Investment Projects Using Amortization Mechanism, in Konsortsium ekonomicheskikh issledovanii i obrazovaniya. Nauchnye doklady (Consortium of Economic Studies and Education. Scientific Reports), Scientific Report no. 02/05, Moscow: EERC, 2002.
Ermakov, S.M. and Zhiglyavskii, A.A., Matematicheskaya teoriya optimal'nogo eksperimenta (The Mathematical Theory of Optimal Experiment), Moscow: Nauka, 1987.
Shiryaev, A.N., Veroyatnost'-1, Moscow: Mosk. Tsentr Neprer. Mat. Obraz., 2004. Translated under the title Probability-1, New York: Springer-Verlag, 2016.
Wald, A., Sequential Analysis, New York: Wiley, 1947.
Ferguson, T.S., Optimal Stopping and Applications, unpublished manuscript, 2000. www.math.ucla.edu/~tom/Stopping/Contents.html (Accessed July 10, 2014).
Föllmer, H. and Schied, A., Stochastic Finance: An Introduction in Discrete Time, 2nd ed., Berlin: De Gruyter, 2004. Translated under the title Vvedenie v stokhasticheskie finansy. Diskretnoe vremya, Moscow: Mosk. Tsentr Neprer. Mat. Obraz., 2008.
Jönsson, H., Kukush, A.G., and Silvestrov, D.S., Threshold Structure of Optimal Stopping Strategies for American Type Option. I, Theory Probab. Math. Statist., 2005, no. 71, pp. 93–103.
Jönsson, H., Kukush, A.G., and Silvestrov, D.S., Threshold Structure of Optimal Stopping Strategies for American Type Option. II, Theory Probab. Math. Statist., 2006, no. 72, pp. 47–58.
Kukush, A.G. and Silvestrov, D.S., Optimal Pricing of American Type Options with Discrete Time, Theory Stoch. Process., 2004, vol. 10(26), no. 1–2, pp. 72–96.
Novikov, A.A. and Shiryaev, A.N., On an Effective Solution of the Optimal Stopping Problem for Random Walks, Theory Probab. Appl., 2005, vol. 49, no. 2, pp. 344–354.
Silaeva, M.V. and Silaev, A.M., Spros i predlozhenie (Demand and Supply), Nizhny Novgorod: Vyssh. Shk. Ekon., 2006.
Rockafellar, R.T., Convex Analysis, Princeton: Princeton Univ. Press, 1970. Translated under the title Vypuklyi analiz, Moscow: Mir, 1973.
Appendices
Appendix 1
The proofs of the main assertions below involve the existence conditions and properties of the generating function and corresponding distribution function \({F}_{{\rho }_{1}}(x)\) of the random variable ρ1. First, introduce necessary definitions and auxiliary results.
Let a Borel function \(\varphi :{{\mathbb{R}}}^{1}\to {{\mathbb{R}}}^{+}\), further denoted by \(\varphi \left(\sigma \right)\), be defined by the equality
$$\varphi \left(\sigma \right)\triangleq {\mathtt{E}}\,{\lambda }^{\sigma {\rho }_{1}},\qquad ({\rm{A}}.1.1)$$
where 1 < λ < ∞ is a parameter and \(\sigma \in {{\mathbb{R}}}^{1}\) is a variable. The function \(\varphi \left(\sigma \right)\) is called the generating function for the moments of the random variable ρ1 [6]. Also, introduce the notations
$${M}^{+}\triangleq {\rm{ess}}\,{\rm{sup}}\,{\rho }_{1},\qquad {M}^{-}\triangleq {\rm{ess}}\,{\rm{inf}}\,{\rho }_{1}.$$
The value \({M}^{+}\) \(\left({M}^{-}\right)\) is the essential upper (lower) bound of the random variable ρ1 [6], and the set \(\left[{M}^{-},{M}^{+}\right]\) is its support [6].
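As a concrete illustration of (A.1.1) and of item 3 of Proposition 1 below, here is a minimal Python sketch; the two-point distribution of ρ1 and the value of λ are assumptions chosen purely for illustration, not data from the paper.

```python
import math

# Assumed example data (not from the paper): a two-point law for rho_1.
lam = 2.0
support, probs = [-1.0, 2.0], [0.8, 0.2]   # values of rho_1 and probabilities

def phi(sigma):
    """Generating function phi(sigma) = E[lambda**(sigma*rho_1)], cf. (A.1.1)."""
    return sum(p * lam ** (sigma * r) for r, p in zip(support, probs))

def m(sigma, h=1e-6):
    """m(sigma) = d/d sigma of ln phi(sigma), via a central difference."""
    return (math.log(phi(sigma + h)) - math.log(phi(sigma - h))) / (2 * h)

M_minus, M_plus = min(support), max(support)   # essential bounds of rho_1

# Item 3 of Proposition 1: m(sigma)/ln(lambda) -> M+ (resp. M-) as
# sigma -> +inf (resp. -inf).
for s in (5.0, 20.0, 50.0):
    print(s, m(s) / math.log(lam), m(-s) / math.log(lam))
# the second column approaches M_plus = 2, the third M_minus = -1
```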
The next simple (and apparently known) result has, however, not been explicitly formulated and proved in the literature.
Proposition 1.
Let M− > − ∞ and M+ < ∞. Then the following assertions are true:
1) For any \(\sigma \in {{\mathbb{R}}}^{1}\), (21) holds.
2) For any \(\sigma \in {{\mathbb{R}}}^{1}\) the derivatives \(\frac{{d}^{l}}{d{\sigma }^{l}}\varphi \left(\sigma \right)\) exist and are finite for any \(l\in {\mathbb{N}}\); moreover, \(\frac{{d}^{2}}{d{\sigma }^{2}}\varphi \left(\sigma \right)>0\), meaning that \(\varphi \left(\sigma \right)\) is a strictly convex function.
3) If \(m\left(\sigma \right)\triangleq \frac{d}{d\sigma }\mathrm{ln}\,\varphi \left(\sigma \right)\), then
$$\mathop{{\rm{lim}}}\limits_{\sigma \to \infty }\frac{m\left(\sigma \right)}{\mathrm{ln}\,\lambda }={M}^{+},\qquad ({\rm{A}}.1.2)$$
$$\mathop{{\rm{lim}}}\limits_{\sigma \to -\infty }\frac{m\left(\sigma \right)}{\mathrm{ln}\,\lambda }={M}^{-}.\qquad ({\rm{A}}.1.3)$$
Proof of Proposition 1. By the hypotheses of Proposition 1, M− and M+ are finite. Therefore, without loss of generality, assume that ρ1 ⩾ c P-a.s., where c > 0 is a constant.
1. Due to this assumption and the definition of M+, it follows that c ⩽ ρ1 ⩽ M+ < ∞ P-a.s. Hence, for any σ ⩾ 0,
$$0<{\lambda }^{\sigma c}\leqslant {\lambda }^{\sigma {\rho }_{1}}\leqslant {\lambda }^{\sigma {M}^{+}}<\infty \qquad ({\rm{A}}.1.4)$$
P-a.s. Relations (A.1.4) give the requisite inequalities \(0<\varphi \left(\sigma \right)<\infty \).
2. Let \(l\in {\mathbb{N}}\). Then, for any σ ⩾ 0, inequality (A.1.4) yields
$$0<{\left({\rho }_{1}\,\mathrm{ln}\,\lambda \right)}^{l}{\lambda }^{\sigma {\rho }_{1}}\leqslant {\left({M}^{+}\mathrm{ln}\,\lambda \right)}^{l}{\lambda }^{\sigma {M}^{+}}<\infty \qquad ({\rm{A}}.1.5)$$
P-a.s., and consequently
$${\mathtt{E}}\,{\left({\rho }_{1}\,\mathrm{ln}\,\lambda \right)}^{l}{\lambda }^{\sigma {\rho }_{1}}<\infty .\qquad ({\rm{A}}.1.6)$$
This means that for any \(l\in {\mathbb{N}}\) the lth derivative \(\frac{{d}^{l}}{d{\sigma }^{l}}\varphi \left(\sigma \right)\) of the generating function exists, and also
$$\frac{{d}^{l}}{d{\sigma }^{l}}\varphi \left(\sigma \right)={\mathtt{E}}\,{\left({\rho }_{1}\,\mathrm{ln}\,\lambda \right)}^{l}{\lambda }^{\sigma {\rho }_{1}}.\qquad ({\rm{A}}.1.7)$$
In particular, for any σ ⩾ 0,
$$\frac{{d}^{2}}{d{\sigma }^{2}}\varphi \left(\sigma \right)={\mathtt{E}}\,{\left({\rho }_{1}\,\mathrm{ln}\,\lambda \right)}^{2}{\lambda }^{\sigma {\rho }_{1}}>0.\qquad ({\rm{A}}.1.8)$$
From (A.1.6)–(A.1.8) it follows that \(\varphi \left(\sigma \right)\) is a strictly convex function.
3. Now establish equalities (A.1.2), (A.1.3). Let \({{\mathtt{P}}}^{\sigma ^{\prime} }\left(A\right)\) be a probability measure defined using the Esscher transform (e.g., see [3]) of the probability distribution of the random variable ρ1:
$${z}^{\sigma ^{\prime} }\triangleq \frac{{\lambda }^{\sigma ^{\prime} {\rho }_{1}}}{\varphi \left(\sigma ^{\prime} \right)},\qquad ({\rm{A}}.1.9)$$
$${{\mathtt{P}}}^{\sigma ^{\prime} }\left(A\right)\triangleq {\mathtt{E}}\left[{1}_{A}\,{z}^{\sigma ^{\prime} }\right],\qquad ({\rm{A}}.1.10)$$
where \(A\in {\mathcal{F}}\) is arbitrary and \(\sigma ^{\prime} \geqslant 0\). As is known, \({{\mathtt{P}}}^{\sigma ^{\prime} }\) is equivalent to P; see [3]. Then (A.1.9) and (A.1.10) lead to the representation
$$\frac{m\left(\sigma ^{\prime} \right)}{\mathrm{ln}\,\lambda }={{\mathtt{E}}}^{{{\mathtt{P}}}^{\sigma ^{\prime} }}{\rho }_{1},\qquad ({\rm{A}}.1.11)$$
where \({{\mathtt{E}}}^{{{\mathtt{P}}}^{\sigma ^{\prime} }}{\rho }_{1}\) denotes the expected value of the random variable ρ1 with respect to the measure \({{\mathtt{P}}}^{\sigma ^{\prime} }\). Since ρ1 ⩽ M+ < ∞ P-a.s., from (A.1.11) it follows that
$$\frac{m\left(\sigma ^{\prime} \right)}{\mathrm{ln}\,\lambda }\leqslant {M}^{+}\qquad ({\rm{A}}.1.12)$$
for σ′ ⩾ 0.
Let \({\left\{{M}_{n}\right\}}_{n\geqslant 1}\) be a number sequence such that 0 < Mn < M+ and \(\mathop{{\rm{lim}}}\limits_{n\to \infty }{M}_{n}={M}^{+}\). Then
where
Hence,
$$\mathop{\mathrm{lim}\,\mathrm{inf}}\limits_{\sigma ^{\prime} \to \infty }\frac{m\left(\sigma ^{\prime} \right)}{\mathrm{ln}\,\lambda }\geqslant {M}_{n}\quad {\rm{for}}\ {\rm{each}}\ n\geqslant 1,\qquad {M}_{n}\to {M}^{+}\ \left(n\to \infty \right).$$
This formula, in combination with (A.1.12), gives the requisite equality \(\mathop{{\rm{lim}}}\limits_{\sigma ^{\prime} \to \infty }\frac{m\left(\sigma ^{\prime} \right)}{\mathrm{ln}\,\lambda }={M}^{+}\).
The equality \(\mathop{{\rm{lim}}}\limits_{\sigma ^{\prime} \to -\infty }\frac{m\left(\sigma ^{\prime} \right)}{\mathrm{ln}\,\lambda }={M}^{-}\) is established by analogy. The proof of Proposition 1 is complete.
Corollary 1.
Let the hypotheses of Proposition 1 hold. Then the generating function \(\varphi \left(\sigma \right)\), where σ ⩾ 0, has the following properties.
1. If M+ < 0, then the function φ(σ) decreases monotonically from 1 to 0.
2. If M+ > 0 and Eρ1 < 0, then there exist \(0<{\sigma }_{0}<{\sigma }_{1}<{\sigma }_{\frac{1}{\beta }}<\infty \) such that:
(a) \(0<\varphi \left({\sigma }_{0}\right)=\mathop{\min }\limits_{\sigma \in {{\mathbb{R}}}^{+}}\varphi \left(\sigma \right)<1\), i.e., σ0 is the unique nonnegative root of the equation \(\frac{d\varphi }{d\sigma }\left(\sigma \right)=0\);
(b) \(\varphi \left(0\right)=\varphi \left({\sigma }_{1}\right)=1\), where σ1 ≠ 0 is the unique nontrivial root of the equation \(\varphi \left(\sigma \right)=1\), and also
—\(0<\varphi \left(\sigma \right)<1\) for any \(\sigma \in \left(0,{\sigma }_{1}\right)\),
—\(\varphi \left(\sigma \right)\geqslant 1\) for any σ⩾σ1;
(c) for any \(\beta \in \left(0,1\right)\) there exists a unique root \({\sigma }_{\frac{1}{\beta }}\) of the equation \(\varphi \left(\sigma \right)=\frac{1}{\beta }\), and also
—if \(\sigma <{\sigma }_{\frac{1}{\beta }}\), then \(\varphi \left(\sigma \right)<\frac{1}{\beta }\),
—if \(\sigma \geqslant {\sigma }_{\frac{1}{\beta }}\), then \(\varphi \left(\sigma \right)\geqslant \frac{1}{\beta }\).
3. If Eρ1 ⩾ 0, then for any \(\beta \in \left(0,1\right]\) there exists a unique nontrivial root \({\sigma }_{\frac{1}{\beta }}\) of the equation \(\varphi \left({\sigma }_{\frac{1}{\beta }}\right)=\frac{1}{\beta }\), and also \(\varphi \left(\sigma \right)\geqslant \frac{1}{\beta }\) for \(\sigma \geqslant {\sigma }_{\frac{1}{\beta }}\).
Proof of Corollary 1. In accordance with item 2 of Proposition 1 (see also (A.1.8)), the function \(\varphi \left(\sigma \right)\) is strictly convex. As is known [15], the derivative of a strictly convex differentiable function (in particular, \(\frac{d}{d\sigma }\varphi \left(\sigma \right)\)) is continuous and monotonically increasing. In addition, the derivatives \(\frac{d\varphi }{d\sigma }\left(0\right)\) and \(\frac{{d}^{2}\varphi }{d{\sigma }^{2}}\left(0\right)\) are well-defined as the right derivatives at the zero point and, by (A.1.7), have the form
$$\frac{d\varphi }{d\sigma }\left(0\right)={\mathtt{E}}\,{\rho }_{1}\,\mathrm{ln}\,\lambda ,\qquad \frac{{d}^{2}\varphi }{d{\sigma }^{2}}\left(0\right)={\mathtt{E}}\,{\left({\rho }_{1}\,\mathrm{ln}\,\lambda \right)}^{2}.$$
By the definition of the generating function (see formula (A.1.1)), the parameter λ satisfies the inequality \(\mathrm{ln}\,\lambda >0;\) therefore, the sign of \(\frac{d\varphi }{d\sigma }\left(0\right)\) coincides with that of Eρ1. Consequently, \(\varphi \left(\sigma \right)\) depends on σ in one of the following possible ways.
Case 1: M+ < 0. Then Eρ1 < 0, \(\frac{d\varphi }{d\sigma }\left(0\right)<0\), and \(\frac{d\varphi }{d\sigma }\left(\sigma \right)\uparrow 0\) as σ → ∞. In other words, for σ ⩾ 0 the function φ(σ) decreases monotonically from 1 (\(\varphi \left(0\right)=1\)) to 0.
Case 2. Let M+ > 0, but Eρ1 < 0 (meaning that \(\frac{d\varphi }{d\sigma }\left(0\right)<0\)). In addition, in this case
where \({O}_{{M}^{+}}(\varepsilon )\) denotes the ε-neighborhood of the point M+ and ε > 0 is arbitrary. As a result, there exist values σ0, σ1, and \({\sigma }_{\frac{1}{\beta }}\in {{\mathbb{R}}}^{+}\) such that:
2.1. σ0 is the unique root of the equation \(\frac{d\varphi }{d\sigma }\left(\sigma \right)=0\) (by the Bolzano–Cauchy intermediate value theorem, since \(\frac{d\varphi }{d\sigma }\left(0\right)<0\), \(\mathop{{\rm{lim}}}\limits_{\sigma \to \infty }\frac{d\varphi }{d\sigma }\left(\sigma \right)=+\infty \), and \(\frac{d\varphi }{d\sigma }\left(\sigma \right)\) is monotonically increasing in σ). Moreover, due to the Fermat theorem, \(0<\varphi \left({\sigma }_{0}\right)=\mathop{\min }\limits_{\sigma \in {{\mathbb{R}}}^{+}}\varphi \left(\sigma \right)<\varphi \left(0\right)=1\);
2.2. σ1 > 0 is the unique nontrivial solution of the equation \(\varphi \left(\sigma \right)=1\left(=\varphi \left(0\right)\right)\). Moreover: (a) if \(\sigma \in \left(0,{\sigma }_{1}\right]\), then \(\varphi \left(\sigma \right)\leqslant 1\); (b) if σ > σ1, then \(\varphi \left(\sigma \right)>1\);
2.3. \({\sigma }_{\frac{1}{\beta }}>0\) is the root of the equation \(\varphi \left(\sigma \right)=\frac{1}{\beta }\), where \(\beta \in \left(0,1\right)\). Moreover, \({\sigma }_{1}<{\sigma }_{\frac{1}{\beta }}\) and if \(\sigma >{\sigma }_{\frac{1}{\beta }}\), then \(\varphi \left(\sigma \right)>\varphi \left({\sigma }_{\frac{1}{\beta }}\right)\).
Case 3. Let Eρ1 ⩾ 0. Then \(\frac{d\varphi }{d\sigma }\left(0\right)\geqslant 0\) and, since \(\frac{d\varphi }{d\sigma }\left(\sigma \right)\) is monotonically increasing in σ, the function \(\varphi \left(\sigma \right)\) is monotonically increasing for σ ⩾ 0, with \(\varphi \left(\sigma \right)\geqslant \varphi \left(0\right)=1\). Therefore, for any \(\beta \in \left(0,1\right]\) there exists a unique root \({\sigma }_{\frac{1}{\beta }}\) of the equation \(\varphi \left({\sigma }_{\frac{1}{\beta }}\right)=\frac{1}{\beta }\). Moreover, \(\varphi \left(\sigma \right)\geqslant \frac{1}{\beta }\) for \(\sigma \geqslant {\sigma }_{\frac{1}{\beta }}\).
The proof of Corollary 1 is complete.
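The root structure described in Corollary 1 is easy to observe numerically. The following sketch uses the same assumed two-point law as above (here M+ > 0 and Eρ1 < 0, i.e., Case 2) and locates σ0, σ1, and \({\sigma }_{\frac{1}{\beta }}\) by bisection; all numerical inputs are illustrative assumptions.

```python
import math

lam, beta = 2.0, 0.9
support, probs = [-1.0, 2.0], [0.8, 0.2]   # assumed law: E[rho_1] < 0 < M+

def phi(sigma):
    return sum(p * lam ** (sigma * r) for r, p in zip(support, probs))

def bisect(f, a, b, tol=1e-12):
    # plain bisection; f must change sign on [a, b]
    fa = f(a)
    while b - a > tol:
        mid = 0.5 * (a + b)
        if fa * f(mid) <= 0.0:
            b = mid
        else:
            a, fa = mid, f(mid)
    return 0.5 * (a + b)

dphi = lambda s, h=1e-7: (phi(s + h) - phi(s - h)) / (2.0 * h)

sigma_0 = bisect(dphi, 0.0, 10.0)                               # phi'(sigma_0) = 0
sigma_1 = bisect(lambda s: phi(s) - 1.0, sigma_0, 10.0)         # phi(sigma_1) = 1
sigma_b = bisect(lambda s: beta * phi(s) - 1.0, sigma_0, 10.0)  # phi = 1/beta

print(sigma_0, phi(sigma_0))        # 0 < phi(sigma_0) < 1
print(sigma_1, sigma_b)             # 0 < sigma_0 < sigma_1 < sigma_{1/beta}
```

Searching to the right of σ0 excludes the trivial root σ = 0 of \(\varphi \left(\sigma \right)=1\), exactly as in item 2(b).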
Appendix 2
This appendix presents the proofs of Theorems 1 and 2.
To prove Theorem 1, we use an auxiliary assertion on the solution of the following recursive relation in \({w}^{k}:{{\mathbb{R}}}^{+}\to {{\mathbb{R}}}^{+}:\)
Lemma 1.
Let the hypotheses of Theorem 1 hold. A family \({\{{w}^{k}(x)\}}_{k\in {N}_{0}}\) is the unique solution of (A.2.1) if and only if
$${w}^{k}\left(x\right)={A}_{k}{x}^{\sigma }+{B}_{k},\quad x\in {{\mathbb{R}}}^{+},\qquad ({\rm{A}}.2.2)$$
where \({\{{A}_{k}\}}_{k\in {N}_{0}}\) and \({\{{B}_{k}\}}_{k\in {N}_{0}}\) are given by (19).
Proof of Lemma 1. 1. Necessity. Demonstrate that the family of functions (A.2.2) is the solution of system (A.2.1). Employ the method of mathematical induction. For k = N, from (A.2.1) it follows that \({w}^{k}\left(x\right){| }_{k = N}=A{x}^{\sigma }+B,x\in {{\mathbb{R}}}^{+}\).
Let
where
Establish the equality \({w}^{k}\left(x\right)={A}_{k}{x}^{\sigma }+{B}_{k}\). Indeed, (A.2.1) and the inductive hypothesis considered jointly imply
for any \(x\in {{\mathbb{R}}}^{+}\). Thus, the requisite result is obtained, and the necessity part of Lemma 1 is proved.
2. Sufficiency. Show that the family of functions (A.2.2) satisfies (A.2.1). From (A.2.2) it follows that
Next, calculate the expectation of the right- and left-hand sides of the last equality and multiply the resulting expression by β. In view of (20), this gives
which finally establishes the sufficiency part of Lemma 1. Note that uniqueness is obvious. The proof of Lemma 1 is complete.
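Formulas (19) and (20) are not reproduced in this excerpt. The structure of the proof (a terminal value \({w}^{N}(x)=A{x}^{\sigma }+B\) and a linear backward recursion, which we read as \({w}^{k}(x)=\beta {\mathtt{E}}\,{w}^{k+1}(x{\lambda }^{{\rho }_{1}})\)) suggests the coefficient recursions in the sketch below; this is a hypothetical reconstruction under that reading, not the paper's formulas.

```python
# Hypothetical reconstruction (formulas (19), (20) are not reproduced above;
# the recursion w^k(x) = beta*E[w^{k+1}(x*lam**rho_1)], w^N(x) = A*x**sigma + B
# is our assumption based on the structure of the proof of Lemma 1).
def coefficients(A, B, sigma, beta, phi_sigma, N):
    """A_k, B_k for k = 0..N such that w^k(x) = A_k * x**sigma + B_k."""
    As, Bs = [0.0] * (N + 1), [0.0] * (N + 1)
    As[N], Bs[N] = A, B
    for k in range(N - 1, -1, -1):
        As[k] = beta * phi_sigma * As[k + 1]  # since E[(x*lam**rho)**s] = phi(s)*x**s
        Bs[k] = beta * Bs[k + 1]
    return As, Bs

# closed form under this reading: A_k = A*(beta*phi(sigma))**(N-k), B_k = B*beta**(N-k)
```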
Proof of Theorem 1. As is known [3], the solution of the recursive relation (9) admits representation (18). In turn, from (18) it follows that for any k ∈ N0 there are three mutually exclusive cases: 1) \({\Gamma }_{k}=\varnothing \) \(\left({C}_{k}={{\mathbb{R}}}^{+}\right)\); 2) \({C}_{k}=\varnothing \) \(\left({\Gamma }_{k}={{\mathbb{R}}}^{+}\right)\); 3) \({C}_{k}\ne \varnothing \) and \({\Gamma }_{k}\ne \varnothing \), where \({C}_{k}\cup {\Gamma }_{k}={{\mathbb{R}}}^{+}\). Consider each case in detail.
Case 1. From (9) it follows that for any \(x\in {{\mathbb{R}}}^{+}\) the cut value \({v}^{k}\left(x\right)\) satisfies the recursive relation
Due to Lemma 1, the unique solution of (A.2.3) has the form
$${v}^{k}\left(x\right)={A}_{k}{x}^{\sigma }+{B}_{k},\quad x\in {{\mathbb{R}}}^{+},\qquad ({\rm{A}}.2.4)$$
where Ak and Bk satisfy the recursive relations (20).
Case 2. From (18) it follows that for any \(x\in {{\mathbb{R}}}^{+}\) the solution of (9) has the form
Case 3. Since \({C}_{k}\ne \varnothing \), from (15) it follows that the set Cn is non-empty for any 0 ⩽ n ⩽ k. Therefore, for any x ∈ Cn the cut value \({v}^{n}\left(x\right)\) satisfies the recursive relation (A.2.3), which has solution (A.2.4). Hence, in view of (15), for any n ⩽ k and \(x\in {{\mathbb{R}}}^{+}\), expression (18) can be written as
The last equality here is immediate from the definition of the sets Cn and Γn.
From (A.2.6) and (12) it obviously follows that, for any \(x\in {{\mathbb{R}}}^{+}\),
Recall that \({C}_{n}\ne \varnothing \). Then from (A.2.7) it follows that the set \({C}_{n}\left(n\leqslant k\right)\) can be represented as
which proves (22). Consequently, by (A.2.8) the set \({\Gamma }_{n}={{\mathbb{R}}}^{+}\backslash {C}_{n}\) admits of representation (23).
Consider the right-hand side of equality (A.2.7). Due to (A.2.8), for any \(x\in {{\mathbb{R}}}^{+}\),
This equality, in combination with (23) and (A.2.5), finally establishes the requisite equality (24). The proof of Theorem 1 is complete.
Proof of Theorem 2. In accordance with Theorem 1, for any k ∈ N0 the stopping domain Γk has form (23) and is therefore closed. Hence its interior \(\left({\rm{int}}{\Gamma }_{k}\right)\) admits of representation (25). Clearly, the set ∂Γk given by (26) consists of all boundary points of the set Γk, and \(\partial {\Gamma }_{k}\ne \varnothing \) for any k ∈ N0. Therefore, the elements \(x\left(k\right)\in \partial {\Gamma }_{k}\) satisfy the equation
If the hypotheses of Theorem 1 and at least one of conditions I–IV of Theorem 2 are satisfied, then in each of the cases considered there exists a unique nonnegative solution \(x\left(k\right)\) of (A.2.9). Consequently, ∂Γk is a singleton, i.e., \(\partial {\Gamma }_{k}=\left\{x\left(k\right)\right\}\). Assertions I–IV of Theorem 2 are verified directly. The proof of Theorem 2 is complete.
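To make the threshold structure established in Theorems 1 and 2 tangible, here is a numerical backward-induction sketch. Every concrete ingredient in it is an assumption for illustration only (the payoff \(g(x)=\sqrt{x}-1\), the dynamics \({X}_{n+1}={X}_{n}{\lambda }^{{\rho }_{1}}\), the Bellman recursion \({v}^{k}(x)=\max \{g(x),\beta {\mathtt{E}}\,{v}^{k+1}(x{\lambda }^{{\rho }_{1}})\}\), and the grid); none of it reproduces the paper's equations (9), (18), or (A.2.9).

```python
import numpy as np

# Assumed inputs, chosen only for illustration.
lam, beta, N = 2.0, 0.9, 20
support, probs = np.array([-1.0, 2.0]), np.array([0.8, 0.2])

grid = np.linspace(1e-3, 50.0, 4000)   # state grid on R+
g = grid ** 0.5 - 1.0                  # assumed payoff g(x) = sqrt(x) - 1
v = g.copy()                           # terminal value v^N = g

thresholds = []
for k in range(N - 1, -1, -1):
    # continuation value beta * sum_i p_i * v^{k+1}(x * lam**rho_i);
    # linear interpolation, with values beyond the grid clamped (a boundary
    # error that is negligible near the threshold in this example)
    cont = beta * sum(p * np.interp(grid * lam ** r, grid, v)
                      for r, p in zip(support, probs))
    v = np.maximum(g, cont)
    stop = g >= cont                   # stopping set Gamma_k on the grid
    thresholds.append(grid[stop][0] if stop.any() else np.inf)

print(thresholds[::-1])                # x(0), ..., x(N-1): Gamma_k = [x(k), inf)
```

In this run the thresholds x(k) grow with the number of remaining steps, which matches the nesting of the continuation sets used in Case 3 above.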
Appendix 3
The proof of all assertions of Theorem 3 is based on the following auxiliary result.
Lemma 2.
Let the hypotheses of Theorem 3 hold for \(x\in {{\mathbb{R}}}^{+}\) and a Borel function \(w:{{\mathbb{R}}}^{+}\to {{\mathbb{R}}}^{1}\) that satisfies the equation
$$w\left(x\right)=\beta {\mathtt{E}}\,w\left(x{\lambda }^{{\rho }_{1}}\right).\qquad ({\rm{A}}.3.1)$$
Then for any \(x\in {{\mathbb{R}}}^{+}\) Eq. (A.3.1) has the unique nontrivial solution
$$w\left(x\right)={A}^{* }{x}^{{\sigma }_{\frac{1}{\beta }}},\qquad ({\rm{A}}.3.2)$$
where A* > 0 is some constant and \({\sigma }_{\frac{1}{\beta }}\) is the unique root of the equation \(\beta \varphi \left(\sigma \right)=1\).
Remark 5. Lemma 2 establishes the structure of the solution of Eq. (A.3.1). However, the value of the constant A* is still unknown.
Proof of Lemma 2. Use the method of mathematical induction. Consider the recursive relation
$${w}^{k}\left(x\right)=\beta {\mathtt{E}}\,{w}^{k-1}\left(x{\lambda }^{{\rho }_{1}}\right),\quad k\geqslant 1,\qquad {w}^{0}\left(x\right)={A}^{* }{x}^{{\sigma }_{\frac{1}{\beta }}},\qquad ({\rm{A}}.3.3)$$
where \(x\in {{\mathbb{R}}}^{+}\) and A* is some positive constant.
From (A.3.3) it follows that
$${w}^{1}\left(x\right)=\beta {\mathtt{E}}\,{w}^{0}\left(x{\lambda }^{{\rho }_{1}}\right)=\beta {A}^{* }{x}^{{\sigma }_{\frac{1}{\beta }}}{\mathtt{E}}\,{\lambda }^{{\sigma }_{\frac{1}{\beta }}{\rho }_{1}}=\beta \varphi \left({\sigma }_{\frac{1}{\beta }}\right){A}^{* }{x}^{{\sigma }_{\frac{1}{\beta }}}.$$
The condition \(\beta \varphi \left({\sigma }_{\frac{1}{\beta }}\right)=1\) implies \({w}^{1}\left(x\right)={A}^{* }{x}^{{\sigma }_{\frac{1}{\beta }}}\). Now let \({w}^{k-1}(x)={A}^{* }{x}^{{\sigma }_{\frac{1}{\beta }}}\). Show that (A.3.3) leads to the equality \({w}^{k}(x)={A}^{* }{x}^{{\sigma }_{\frac{1}{\beta }}}\). Indeed, due to (A.3.3),
The inductive step is proved. Hence, for any k ⩾ 1, \(x\in {{\mathbb{R}}}^{+}\), and \(\beta \in \left(0,1\right]\), the equality \({w}^{k}\left(x\right)={A}^{* }{x}^{{\sigma }_{\frac{1}{\beta }}}\) holds. Consequently,
The uniqueness of the solution of Eq. (A.3.1) follows from the uniqueness of the corresponding limit. The proof of Lemma 2 is complete.
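The fixed-point property proved in Lemma 2 can be checked directly. In the sketch below, the two-point law of ρ1 and the value of A* are illustrative assumptions; the point is that once \(\beta \varphi ({\sigma }_{\frac{1}{\beta }})=1\), the function \(w(x)={A}^{* }{x}^{{\sigma }_{\frac{1}{\beta }}}\) reproduces itself under the map \(w\mapsto \beta {\mathtt{E}}\,w(x{\lambda }^{{\rho }_{1}})\), for any positive A*.

```python
import math

lam, beta = 2.0, 0.9
support, probs = [-1.0, 2.0], [0.8, 0.2]   # assumed law of rho_1

def phi(sigma):
    return sum(p * lam ** (sigma * r) for r, p in zip(support, probs))

# sigma_b: the root of beta*phi(sigma) = 1, located by bisection on [0.7, 2.0]
a, b = 0.7, 2.0
for _ in range(200):
    mid = 0.5 * (a + b)
    a, b = (mid, b) if beta * phi(mid) < 1.0 else (a, mid)
sigma_b = 0.5 * (a + b)

A_star = 1.3                               # any positive constant works here
w = lambda x: A_star * x ** sigma_b
for x in (0.5, 1.0, 7.0):
    rhs = beta * sum(p * w(x * lam ** r) for r, p in zip(support, probs))
    print(x, w(x), rhs)                    # the last two columns coincide
```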
Proof of Theorem 3. Since Eq. (A.3.1) has the unique solution (A.3.2) for any \(x\in {{\mathbb{R}}}^{+}\), it also has the same solution for any \(x\in C\subseteq {{\mathbb{R}}}^{+}\). Therefore, due to (35) and Lemma 2, the equality
holds for any x ∈ C. Hence, by (A.3.4) from (34) and (35) it follows that, for any \(x\in {{\mathbb{R}}}^{+}\),
As \({1}_{\left\{x\in \Gamma \right\}}=1-{1}_{\left\{x\in C\right\}}\), for any \(x\in {{\mathbb{R}}}^{+}\) relation (A.3.5) leads to
The inequality in (A.3.6) is immediate from inequalities (30). A direct check shows that, due to (A.3.6),
for any \(x\in {{\mathbb{R}}}^{+}\). In turn, (A.3.6) and (A.3.7) considered jointly imply (40).
To proceed, derive the requisite representation of the sets C and Γ. Indeed, from (40) and the definition of the set C it follows that
In view of the definition of the set \(\Gamma ={{\mathbb{R}}}^{+}\backslash C\), this gives
Now find the value of the constant A*. For this purpose, first note that inequalities (30) and (A.3.7) lead to the optimization problem
From equality (A.3.6) it follows that problem (A.3.9) is equivalent to
On the other hand, (A.3.6) and (A.3.7) considered jointly imply
The natural question is whether the greatest lower bound in (A.3.11) is achieved or not. In other words, does there exist a value xΓ > 0 such that
In accordance with the Fermat theorem, such a value xΓ > 0 exists if the equation
is solvable. Hence,
Thus, (A.3.12) and (A.3.13) considered jointly yield the system of nonlinear algebraic equations in A* and xΓ of the form
Under the hypotheses of Theorem 3 (i.e., under (37) and (38) or (39)), system (A.3.14) has the unique solution (41), (42), which is easy to establish.
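Since system (A.3.14) and formulas (41), (42) are not reproduced in this excerpt, the following sketch only illustrates how a system of this kind is typically solved. It assumes a payoff \(g(x)={x}^{\gamma }-K\) and imposes continuous and smooth pasting of g with \({A}^{* }{x}^{{\sigma }_{\frac{1}{\beta }}}\) at xΓ; both the payoff and the pasting conditions are our assumptions, not the paper's formulas.

```python
import math

# Illustration only: the paper's system (A.3.14) is not reproduced above.
lam, beta, K, gamma = 2.0, 0.9, 1.0, 0.5
support, probs = [-1.0, 2.0], [0.8, 0.2]

def phi(s):
    return sum(p * lam ** (s * r) for r, p in zip(support, probs))

a, b = 0.7, 2.0                      # bracket for the root of beta*phi = 1
for _ in range(200):
    mid = 0.5 * (a + b)
    a, b = (mid, b) if beta * phi(mid) < 1.0 else (a, mid)
sigma_b = 0.5 * (a + b)              # approx 0.88 > gamma, as required below

# assumed pasting conditions at x_Gamma:
#   x**gamma - K = A* x**sigma_b  and  gamma*x**(gamma-1) = A* sigma_b x**(sigma_b-1),
# which solve in closed form:
x_Gamma = (K * sigma_b / (sigma_b - gamma)) ** (1.0 / gamma)
A_star = (gamma / sigma_b) * x_Gamma ** (gamma - sigma_b)
print(sigma_b, x_Gamma, A_star)
```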
The next step is to demonstrate that xΓ > 0 given by (42) is the requisite free boundary. First, observe that by (A.3.8) the set Γ can be represented as Γ = ∂Γ ∪ intΓ, where
denotes the interior of the set Γ and
is the free boundary separating the set C from intΓ.
Obviously, \(\Gamma \ne \varnothing \) if \({\rm{int}}\Gamma \ne \varnothing \), i.e., if there exists at least one x > 0 such that
or, if \(\partial \Gamma \ne \varnothing \), then there exists at least one solution of Eq. (A.3.12) in \(x\in {{\mathbb{R}}}^{+}\). Hence, due to (A.3.9), there exists a value xΓ > 0 such that
i.e., xΓ is a unique extreme point of the set Γ. Therefore, Γ admits of representation (44).
To finish the proof, note that (45) is immediate from the definition of the optimal stopping time τ0 and the equality
The proof of Theorem 3 is complete.