The maximum entropy method applied to stationary density computation
Introduction
The concept of entropy was first introduced by Clausius into thermodynamics in the middle of the nineteenth century, and later used in a different form by L. Boltzmann in his pioneering work on the kinetic theory of gases in 1866 [9]. It is a measure of the amount of information required to specify the state of a thermodynamic system. The famous Second Law of Thermodynamics says that in an isolated system, the (thermodynamic) entropy never decreases.
A modern concept of entropy was established in information theory by C.E. Shannon in 1948. The Shannon entropy, defined for a finite sample space with events w1, w2, …, wn occurring with probabilities p1, p2, …, pn (a discrete information source), is H(p1, …, pn) = −∑i pi log pi, which is a measure of the uncertainty of the information source.
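As a concrete illustration, the Shannon entropy of a finite distribution can be computed directly from its definition (a minimal Python sketch; the function name is ours):

```python
import math

def shannon_entropy(probs):
    """Shannon entropy H = -sum p_i log p_i of a finite distribution.

    Events with p_i = 0 contribute nothing (0 log 0 := 0 by convention).
    """
    if abs(sum(probs) - 1.0) > 1e-12:
        raise ValueError("probabilities must sum to 1")
    return -sum(p * math.log(p) for p in probs if p > 0)

# A fair coin carries log 2 nats of uncertainty; a certain event carries none.
print(shannon_entropy([0.5, 0.5]))   # log 2 ≈ 0.6931
print(shannon_entropy([1.0, 0.0]))   # 0.0
```

The uniform distribution maximizes H over all distributions on n events, which is the sense in which it is the "most uncertain" source.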
Based on the Shannon entropy, the Kolmogorov–Sinai–Ornstein entropy, defined for measurable transformations of probability spaces, was successfully used in solving the isomorphism problem for dynamical systems. In 1965 R. Adler, A. Konheim, and M. McAndrew introduced the topological entropy for continuous transformations of compact Hausdorff spaces (see e.g., [5]).
The principle of maximum entropy was introduced in the context of information theory in 1957 by Jaynes. Since his seminal paper [8], the maximum entropy method has been widely used in many areas of science and technology. Basically, this numerical scheme recovers the required density function with the least bias among all candidates that satisfy the given constraints. The maximum entropy method has diverse applications in physics and engineering (see [12], [13] and references therein).
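To illustrate the principle, consider Jaynes' own dice example: among all distributions on {1, …, 6} with a prescribed mean, the maximum entropy distribution has the exponential form pi ∝ exp(λ·i). The following sketch (our illustration, not the paper's algorithm) recovers it by solving for λ with bisection:

```python
import math

def maxent_dice(target_mean, n=6, tol=1e-12):
    """Jaynes' dice problem: among all distributions on {1,...,n} with the
    given mean, find the one of maximum entropy.  The maximizer has the
    exponential form p_i ∝ exp(lam * i); solve for lam by bisection."""
    def mean(lam):
        w = [math.exp(lam * i) for i in range(1, n + 1)]
        z = sum(w)
        return sum(i * wi for i, wi in zip(range(1, n + 1), w)) / z

    lo, hi = -50.0, 50.0          # mean(lam) is increasing in lam
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(lam * i) for i in range(1, n + 1)]
    z = sum(w)
    return [wi / z for wi in w]

p = maxent_dice(4.5)
print(p)  # increasing weights whose mean is 4.5
```

With the unconstrained mean 3.5 the solver returns the uniform distribution, the least biased choice when nothing beyond normalization is known.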
In this paper, we will introduce the basic idea of the maximum entropy method and present some recent progress of the method in solving certain types of operator equations. In Section 2 we give the formulation of the method and study its properties. Another formulation, based on the Galerkin projection, will also be presented. In Section 3 we apply the method to the fixed-density problem for Markov operators, which is important in the stochastic analysis of deterministic dynamical systems. Some error analysis and numerical experiments are contained in Section 4. We conclude in Section 5.
Section snippets
The Boltzmann entropy and the maximum entropy method
Let (X, Σ, μ) be a probability measure space. A nonnegative function f ∈ L1 ≡ L1(X) such that ∫X f dμ = 1 is called a density. The set of all densities is denoted by D.

Definition 2.1
If f ⩾ 0, then the (Boltzmann) entropy of f is defined as H(f) = −∫X f log f dμ.
Some basic properties of the entropy are [2], [9]:
- (i) H(f) is either finite or −∞.
- (ii) H : {f ⩾ 0 : f ∈ L1} → [−∞, ∞) is a proper, upper semicontinuous concave function, strictly concave on its domain that consists of all
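For an explicit density on [0, 1], H(f) = −∫ f log f dμ can be approximated by simple numerical integration (a minimal sketch using the midpoint rule; the paper's own computations use high-precision Gaussian quadrature):

```python
import math

def boltzmann_entropy(f, a=0.0, b=1.0, n=4000):
    """H(f) = -∫ f log f dx approximated by the midpoint rule on [a, b],
    with the convention 0 log 0 = 0."""
    h = (b - a) / n
    total = 0.0
    for k in range(n):
        x = a + (k + 0.5) * h
        fx = f(x)
        if fx > 0:
            total -= fx * math.log(fx) * h
    return total

# The uniform density on [0, 1] has entropy 0; any other density on [0, 1]
# has strictly smaller entropy, illustrating the "least biased" maximizer.
print(boltzmann_entropy(lambda x: 1.0))      # ≈ 0.0
print(boltzmann_entropy(lambda x: 2.0 * x))  # ≈ 1/2 - log 2 < 0
```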
The maximum entropy method for Markov operator equations
The maximum entropy idea has been successfully applied to solving Fredholm integral equations [11], computing absolutely continuous invariant measures [4], and estimating Lyapunov exponents [6]. In this section we use the same approach to solve Markov operator equations. A linear operator P : L1 → L1 is called a Markov operator if PD ⊂ D. Markov operators describe the evolution of probability densities of dynamical systems. A special subclass of Markov operators is that of Frobenius–Perron operators.
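For a concrete Frobenius–Perron operator, take the logistic map S(y) = 4y(1 − y) on [0, 1] (a standard textbook example, used here as our illustration rather than one of the paper's test cases): (Pf)(x) is the sum of f(y)/|S′(y)| over the two preimages y of x, and the known invariant density 1/(π√(x(1 − x))) is a fixed point of P:

```python
import math

def fp_logistic(f, x):
    """Frobenius–Perron operator of the logistic map S(y) = 4y(1-y):
    (Pf)(x) = sum over preimages y of x of f(y)/|S'(y)|.
    Both preimages have |S'(y)| = 4*sqrt(1-x)."""
    r = math.sqrt(1.0 - x)
    y1, y2 = (1.0 - r) / 2.0, (1.0 + r) / 2.0
    return (f(y1) + f(y2)) / (4.0 * r)

# The stationary density of the logistic map satisfies Pf* = f*.
fstar = lambda x: 1.0 / (math.pi * math.sqrt(x * (1.0 - x)))
for x in (0.1, 0.3, 0.7):
    print(fp_logistic(fstar, x), fstar(x))  # the two columns agree
```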
Numerical results
We use the maximum entropy method with high-precision Gaussian quadrature to calculate the stationary density f∗ of a Markov operator. First we apply our method to a Markov operator P : L1(0, 1) → L1(0, 1) with a given stochastic kernel. The unique stationary density f∗ of P is known in closed form up to a normalizing constant. In implementing our algorithm we needed to find P∗xn explicitly. Since P is an integral operator, its dual operator P∗ is given by
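Since the excerpt omits the kernel itself, the following sketch uses a hypothetical stochastic kernel k(x, y) = 2x merely to show the mechanics: for (Pf)(x) = ∫ k(x, y) f(y) dy, the dual acts as (P∗g)(y) = ∫ k(x, y) g(x) dx, and the moments P∗xn can be evaluated with Gauss–Legendre quadrature shifted to (0, 1):

```python
import numpy as np

# Hypothetical stochastic kernel: k(., y) = 2x is a density on (0, 1) for
# every y.  This stands in for the paper's (omitted) kernel.
def k(x, y):
    return 2.0 * x

nodes, weights = np.polynomial.legendre.leggauss(32)
x = 0.5 * (nodes + 1.0)   # shift Gauss–Legendre nodes from (-1, 1) to (0, 1)
w = 0.5 * weights

def dual_moment(n, y):
    """(P* x^n)(y) = ∫_0^1 k(x, y) x^n dx for (Pf)(x) = ∫_0^1 k(x, y) f(y) dy."""
    return np.sum(w * k(x, y) * x**n)

print(dual_moment(1, 0.3))  # ∫_0^1 2x · x dx = 2/3
```

A 32-point rule is exact for polynomial integrands of degree up to 63, which is why Gaussian quadrature pairs naturally with monomial moment constraints.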
Conclusions
The maximum entropy method gives an alternative means of numerically determining, with good accuracy, a desired density of a physical process even when only a small number of moments is known. In most examples it is comparable to, or even better than, the well-known Ulam method for computing invariant measures [10].
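For comparison, Ulam's method approximates the stationary density by the piecewise-constant fixed vector of a discretized transition matrix. A minimal Monte Carlo sketch for the logistic map (our illustration, not one of the paper's experiments):

```python
import math
import random

def ulam_logistic(n_bins=50, n_samples=200_000, seed=0):
    """Ulam's method for the logistic map S(x) = 4x(1-x): estimate the
    transition matrix between n_bins equal subintervals by Monte Carlo,
    then power-iterate to its stationary distribution."""
    rng = random.Random(seed)
    S = lambda x: 4.0 * x * (1.0 - x)
    # Row-stochastic matrix P[i][j] ≈ m(I_i ∩ S^{-1} I_j) / m(I_i).
    P = [[0.0] * n_bins for _ in range(n_bins)]
    per_bin = n_samples // n_bins
    for i in range(n_bins):
        for _ in range(per_bin):
            xx = (i + rng.random()) / n_bins
            j = min(int(S(xx) * n_bins), n_bins - 1)
            P[i][j] += 1.0 / per_bin
    # Power iteration for the left fixed vector pi = pi P.
    pi = [1.0 / n_bins] * n_bins
    for _ in range(200):
        pi = [sum(pi[i] * P[i][j] for i in range(n_bins)) for j in range(n_bins)]
    # Convert bin probabilities to a piecewise-constant density.
    return [p * n_bins for p in pi]

d = ulam_logistic()
# Interior bins track the exact invariant density 1/(pi*sqrt(x(1-x))).
print(d[25], 1.0 / (math.pi * math.sqrt(0.51 * 0.49)))
```

The maximum entropy method, by contrast, produces a smooth density from a handful of moments, which is where its accuracy advantage over such histogram approximations typically comes from.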
References (13)
- A maximum entropy method for solving Frobenius–Perron operator equations, Appl. Math. Comput. (1998)
- Finite approximation for the Frobenius–Perron operator, a solution to Ulam's conjecture, J. Approx. Theory (1976)
- et al., Thermodynamics of Chaotic Systems (1993)
- et al., Convergence of the best entropy estimates, SIAM J. Optim. (1991)
- et al., On the convergence of moment problems, Trans. Am. Math. Soc. (1991)
- et al., Entropy – an introduction (1993)