Optimal control for uncertain stochastic dynamic systems with jump and application to an advertising model
Introduction
Optimal control theory has been a significant branch of modern control theory since the 1950s. It plays an important role in various fields such as market management, aerospace, and the chemical industry. The study of optimal control problems remains an active topic of practical significance.
Optimal control problems are generally classified into two categories: problems with complete information and problems with incomplete information. In the complete-information case, the system's parameters are fully known and the dynamic system is characterized by a deterministic differential or difference equation. In the incomplete-information case, the system states or outputs cannot be accurately captured because of various indeterminate factors in the dynamic system. To deal with optimal control problems under these two kinds of incomplete information, stochastic optimal control theory and uncertain optimal control theory have been developed. They are suitable for solving optimal control problems with objective indeterminacy and subjective indeterminacy, respectively, and the type of indeterminacy can be judged by checking whether the distribution function is sufficiently close to the long-run cumulative frequency.
When the sample data on the indeterminate factors are large enough, their statistical distribution functions are close enough to the long-run cumulative frequencies, and the indeterminate factors are considered objective indeterminacy. Objective indeterminacy in a dynamic system is described by a random variable or a stochastic process whose probability distribution can be estimated from a large amount of historical data. Stochastic optimal control theory combines optimal control theory with probability theory, and the optimal control problem with objective indeterminacy is regarded as a stochastic optimal control problem [1]. Stochastic differential equations are thus used to describe continuous-time systems with objective indeterminacy, and many works on optimal control in random environments build on them [2], [3], [4], [5], [6]. As an important stochastic differential equation, the Ito stochastic differential equation driven by the Wiener process was proposed by Ito [7]. Merton [8] applied stochastic optimal control to the option pricing problem via the Ito stochastic differential equation. Moreover, Zhou and Li [9] showed that the stochastic LQ control model is an effective framework for investigating the portfolio selection problem.
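To make the role of a Wiener-driven stochastic differential equation concrete, the following sketch (our own illustration, not code from the paper; the function names, drift, and diffusion coefficients are hypothetical) simulates a scalar equation dX_t = f(t, X_t) dt + g(t, X_t) dW_t with the Euler-Maruyama scheme:

```python
import math
import random

def euler_maruyama(f, g, x0, T, n, seed=0):
    """Simulate dX_t = f(t, X_t) dt + g(t, X_t) dW_t on [0, T]
    with n Euler-Maruyama steps from the initial state x0."""
    random.seed(seed)
    dt = T / n
    x, t = x0, 0.0
    path = [x0]
    for _ in range(n):
        dW = random.gauss(0.0, math.sqrt(dt))  # Wiener increment ~ N(0, dt)
        x = x + f(t, x) * dt + g(t, x) * dW
        t += dt
        path.append(x)
    return path

# Illustrative geometric Brownian motion: dX = 0.05 X dt + 0.2 X dW, X_0 = 1
path = euler_maruyama(lambda t, x: 0.05 * x, lambda t, x: 0.2 * x, 1.0, 1.0, 1000)
```

Each simulated path is one realization of the objective randomness; the probability distribution of the state can be estimated by repeating the simulation many times, mirroring how distributions are estimated from historical data.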
From another aspect, the sample data on the indeterminate factors may not be enough to estimate their distributions, owing to limitations of capital and technology. The indeterminate factors are then considered subjective indeterminacy because of the lack of empirical data. In uncertainty theory [10], [11], [12], subjective indeterminacy in a dynamic system is regarded as an uncertain variable or an uncertain process, and its belief degree (uncertain measure) can be evaluated by domain experts. The belief degree depends heavily on personal knowledge and preference concerning events; when these change, the belief degree changes too. The main difference between probability theory and uncertainty theory lies in the measure adopted: the former uses the probability measure, while the latter applies the uncertain measure. Uncertain optimal control theory combines optimal control theory with uncertainty theory and is well suited to optimal control problems with subjective indeterminacy; such a problem is regarded as an uncertain optimal control problem [13]. Uncertain differential equations are then used to depict continuous-time systems with subjective indeterminacy. An uncertain differential equation driven by the Liu process was presented by Liu [14]; for further results on uncertain differential equations, see [15], [16], [17], [18], [19], [20]. Based on the uncertain differential equation, Zhu [13] studied an uncertain optimal control problem and, applying dynamic programming, provided an equation of optimality as a counterpart of the Hamilton-Jacobi-Bellman equation [13]. Subsequently, Deng and Zhu [21] introduced an uncertain differential equation driven by the Liu process and a jump process, and optimal control problems with jump were solved in [21], [22].
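For contrast with the Wiener process, the Liu process C_t is a normal uncertain variable N(0, t) whose uncertainty distribution has a closed-form inverse, which is what makes quantile ("alpha-path") computations for uncertain differential equations tractable. A small sketch of that quantile function (our own illustration; the names are hypothetical, the formula is Liu's standard one):

```python
import math

def normal_uncertain_inv(alpha, e=0.0, sigma=1.0):
    """Inverse uncertainty distribution of Liu's normal uncertain variable
    N(e, sigma): Phi^{-1}(alpha) = e + (sqrt(3)*sigma/pi) * ln(alpha/(1-alpha))."""
    return e + math.sqrt(3.0) * sigma / math.pi * math.log(alpha / (1.0 - alpha))

def liu_process_quantile(t, alpha):
    """alpha-quantile of the Liu process C_t ~ N(0, t) under the uncertain measure."""
    return normal_uncertain_inv(alpha, 0.0, t)
```

Unlike a simulated Wiener path, this quantile is deterministic in alpha: the whole family of quantiles over alpha in (0, 1) summarizes the expert-assessed belief degree rather than a sampled trajectory.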
Later, many works on optimal control problems were carried out in uncertain environments, for instance uncertain optimal control with indefinite control weight costs [23], [24] and uncertain switched optimal control [25], [26]. Uncertain optimal control theory has been well developed within uncertainty theory and has achieved notable results in both theory and applications [27].
Randomness is an objective indeterminacy, while uncertainty is a subjective indeterminacy. Readers can distinguish between the two via the bridge load-bearing example and the uncertain urn example in [12] and [28], respectively. Randomness and uncertainty may appear simultaneously in a dynamic system, and obviously such a complex system cannot be handled by probability theory or uncertainty theory alone. Liu [29], [30] therefore combined the two and proposed chance theory, a mathematical methodology for modeling complex systems involving both uncertainty and randomness. Uncertain random phenomena occur in a variety of settings, such as uncertain random programming [30], [31], [32] and uncertain random reliability analysis [33], [34]. Chance theory provides an effective tool for solving optimal control problems [35], [36]: references [35] and [36] respectively studied the linear and linear quadratic optimal control problems of discrete-time systems in uncertain random environments. Moreover, Gao and Yao [37] proposed the uncertain random process to model the uncertain stochastic dynamic system.
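For reference, chance theory works on the product of an uncertainty space $(\Gamma, \mathcal{L}, \mathcal{M})$ and a probability space $(\Omega, \mathcal{A}, \Pr)$; the chance measure of an event $\Theta \subset \Gamma \times \Omega$, as defined by Liu [29], averages the uncertain measure over the probability measure:

```latex
\mathrm{Ch}\{\Theta\} \;=\; \int_{0}^{1} \Pr\bigl\{\omega \in \Omega :
\mathcal{M}\{\gamma \in \Gamma : (\gamma, \omega) \in \Theta\} \ge x \bigr\}\,\mathrm{d}x .
```

The chance measure reduces to the probability measure when $\Theta$ depends only on $\omega$, and to the uncertain measure when it depends only on $\gamma$, which is why it can handle systems mixing both kinds of indeterminacy.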
A stochastic differential equation driven by the standard Wiener process depicts a dynamic system with objective randomness, while an uncertain differential equation driven by the Liu process portrays a dynamic system with subjective uncertainty. To describe extreme events that strongly influence uncertain dynamic systems in practice, Deng and Zhu [21] introduced an uncertain differential equation driven by the Liu process and a jump process. In recent years, much research has addressed objective randomness [6], subjective uncertainty [27], and extreme events [22]. However, all three may be present in a dynamic system at the same time, and few studies focus on the continuous-time uncertain stochastic optimal control problem under the framework of chance theory proposed by Liu [29], [30]. Inspired by previous work, we introduce a continuous-time uncertain stochastic dynamic system described by both a stochastic differential equation driven by the standard Wiener process and an uncertain differential equation driven by the Liu process and a jump process. As a further study of the optimal control problem with incomplete information, an uncertain stochastic optimal control problem with jump is established. It differs from the stochastic optimal control problem [5], the uncertain optimal control problem [27], and the discrete-time uncertain stochastic optimal control problem [36]. The optimal control inputs and the corresponding optimal value of such a problem are discussed. Our paper contributes to the literature in the following ways. First, compared with the discrete-time optimal control problems in uncertain random environments [35], [36], a continuous-time uncertain stochastic optimal control problem with jump is introduced.
Second, as an extension of [13], [22], the principle of optimality in uncertain random environments is given, and the equation of optimality for the proposed model is obtained. Third, optimal control problems with linear and quadratic objective functions, as well as an advertising problem, are discussed via the acquired equation. Compared with [13], [21], [25], the main conclusions for the linear and quadratic objective functions are extended from uncertain dynamic systems to uncertain stochastic dynamic systems. Different from [39], [40], [41], [42], the advertising problem, which involves both objective randomness and subjective uncertainty, is treated as an uncertain stochastic optimal control problem based on chance theory.
The rest of this paper is organized as follows. In Section 2, some basic concepts and theorems of probability theory, uncertainty theory, and chance theory are reviewed. In Section 3, the uncertain stochastic dynamic system with jump is introduced and the uncertain stochastic optimal control problem is established; the principle of optimality and the equation of optimality are then obtained to derive the optimal results for the proposed problem. Based on the equation of optimality, the optimal control inputs and corresponding optimal values of the linear optimal control model and the linear quadratic optimal control model are discussed in Sections 4 and 5, respectively. Finally, in Section 6, we apply the uncertain stochastic optimal control model to analyze an advertising problem in uncertain random environments and obtain the optimal pricing policies and advertising strategies.
Throughout this paper, $\mathbb{R}^n$ denotes the $n$-dimensional real Euclidean space, $\mathbb{R}^{m \times n}$ denotes the set of all $m \times n$ matrices, and $A^{\mathrm{T}}$ and $\mathrm{tr}(A)$ denote the transpose and trace of a matrix $A$. The symbol $E[\cdot]$ denotes the expected value of a random variable in the sense of the probability measure, of an uncertain variable in the sense of the uncertain measure, and of an uncertain random variable in the sense of the chance measure. The product $\Gamma \times \Omega$ denotes the Cartesian product of the uncertainty space and the probability space.
Preliminary
Probability theory is a branch of mathematics concerned with the analysis of frequencies and is used to study the behavior of random phenomena. To deal with random phenomena, a probability measure is defined as a set function satisfying three axioms: (i) the normality axiom, (ii) the nonnegativity axiom, and (iii) the countable additivity axiom.
Uncertainty theory is a branch of mathematics concerned with the analysis of belief degree, and it is used to study uncertain phenomena. In order to
Optimal control model
Randomness is anything that follows the laws of probability theory. A stochastic differential equation driven by the standard Wiener process is used to describe dynamic phenomena with objective randomness. It can be written as
$$\mathrm{d}X_s = f(s, X_s, u_s)\,\mathrm{d}s + \sigma(s, X_s, u_s)\,\mathrm{d}W_s,$$
where $X_s$ is the state vector of the system at time $s$ with the initial condition $X_0 = x_0$, and $u_s$ is the control input of the system at time $s$. Here $f$ is a vector-valued function,
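For context, the uncertain counterpart with jump (our notation, following the formulation of Deng and Zhu [21]; the paper's exact coefficient symbols may differ) pairs a Liu-process term with a jump term:

```latex
\mathrm{d}Y_s = g(s, Y_s, u_s)\,\mathrm{d}s
             + h(s, Y_s, u_s)\,\mathrm{d}C_s
             + k(s, Y_s, u_s)\,\mathrm{d}V_s,
\qquad Y_0 = y_0,
```

where $C_s$ is the Liu process and $V_s$ is an uncertain jump process. The uncertain stochastic dynamic system of this paper couples an equation of this type with the Wiener-driven equation above.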
Linear optimal control model
Consider an optimal control model with a linear objective function subject to an uncertain stochastic dynamic system with additive indeterminacy, where $X_s$ and $Y_s$ are the state vectors of the system at time $s$ with the initial conditions $X_0 = x_0$ and $Y_0 = y_0$.
Linear quadratic optimal control model
The linear quadratic optimal control problem is one of the most significant and fundamental classes in optimal control theory. In this part, linear quadratic optimal control for uncertain stochastic dynamic systems is discussed. First, consider an optimal control model with a linear quadratic objective function subject to a linear uncertain stochastic dynamic system with additive indeterminacy:
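For orientation, a generic linear quadratic objective of the kind such models minimize is (the weighting matrices $Q$, $R$, $S$ below are our own labels, assumed symmetric, and not necessarily the paper's):

```latex
J(u) \;=\; E\!\left[\int_{0}^{T} \bigl(X_s^{\mathrm{T}} Q X_s
      + u_s^{\mathrm{T}} R u_s\bigr)\,\mathrm{d}s
      \;+\; X_T^{\mathrm{T}} S X_T \right],
```

with the expectation taken in the sense of the chance measure for uncertain random systems. The quadratic structure is what allows the equation of optimality to be reduced to Riccati-type differential equations for the value function.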
An advertising model
Over the years, optimal control theory has been widely used in the field of marketing. Many of these applications deal with the problem of finding the best advertising strategy over time [35], [39], [40], [41]. Following the Nerlove-Arrow advertising model and chance theory, we consider an advertising model in uncertain random environments. The problem is to determine the advertising and pricing policies for a product over a finite period. Here, we consider a company that wants to make
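As background, the classical Nerlove-Arrow model treats advertising effort $u(t)$ as building a stock of goodwill $G(t)$ that decays at a rate $\delta > 0$:

```latex
\frac{\mathrm{d}G(t)}{\mathrm{d}t} = u(t) - \delta G(t), \qquad G(0) = G_0.
```

The uncertain random version considered in this section perturbs dynamics of this kind with Wiener- and Liu-process disturbances, so that the optimal pricing and advertising policies are sought under the chance measure.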
Conclusions
Randomness is an objective indeterminacy, while uncertainty is a subjective indeterminacy. To deal with an optimal control problem involving both uncertainty and randomness, this paper considered optimal control for uncertain random continuous-time systems with jump via chance theory. The continuous-time systems are described by both a stochastic differential equation driven by the standard Wiener process and an uncertain differential equation driven by the Liu process and a jump process.
CRediT authorship contribution statement
Xin Chen: Conceptualization, Writing - original draft, Writing - review & editing, Formal analysis. Yuanguo Zhu: Methodology, Supervision, Project administration, Funding acquisition. Linxue Sheng: Supervision.
Acknowledgement
This work is supported by the National Natural Science Foundation of China (No.61673011), China Scholarship Council (No. 202006840145), and the Natural Science Research of Jiangsu Higher Education Institutions of China (No. 19KJB110002).
References (42)
- Dynamic programming and stochastic control processes, Inf. Control (1958).
- Adams method for solving uncertain differential equations, Appl. Math. Comput. (2015).
- Numerical method for solving uncertain spring vibration equation, Appl. Math. Comput. (2018).
- Optimal control of uncertain systems with jump under optimistic value criterion, Eur. J. Control (2017).
- Bang-bang control model for uncertain switched systems, Appl. Math. Model. (2015).
- Uncertain bang-bang control problem for multi-stage switched systems, Physica A (2020).
- Uncertainty theory as a basis for belief reliability, Inf. Sci. (2018).
- Existence and uniqueness of optimal dynamic pricing and advertising controls without concavity, Oper. Res. Lett. (2018).
- Optimal feedback control of a class of stochastic systems permitting jumps in the diffusion processes, Int. J. Syst. Sci. (1977).
- Stochastic optimal control theory and its computational methods, Int. J. Syst. Sci. (1980).
- Deterministic and stochastic optimal control.
- Continuous-time stochastic control and optimization with financial applications.
- An introduction to optimal control of FBSDE with incomplete information.
- On stochastic differential equations, Mem. Am. Math. Soc.
- Theory of rational option pricing, Bell J. Econ. Manage. Sci.
- Continuous-time mean-variance portfolio selection: a stochastic LQ framework, Appl. Math. Optim.
- Uncertainty theory, 2nd ed.
- Uncertainty theory: a branch of mathematics for modeling human uncertainty.
- Uncertainty theory, 4th ed.
- Uncertain optimal control with application to a portfolio selection model, Cybern. Syst.
- Fuzzy process, hybrid process and uncertain process, J. Uncertain Syst.