Definition of the Subject
Observables in complex systems usually take a broad range of values. They are random variables of a stochastic process, and the system explores a wide phase space. The observables are therefore characterized by a probability density function (PDF), representing the probability (density) to find the complex system in a state with a particular value of the observable. As in other areas of statistical mechanics, complex systems are often studied in computer simulations, most prominently Monte Carlo [1,2,3], and the probability density function is recorded in the form of a histogram, which frequently has power-law asymptotes. Historically, the probability density function itself plays a dominant rôle in the characterization of complex systems, while more recently derived quantities, in particular moments, have been used more frequently.
Introduction
There are three main stages in the analysis of the PDF of a complex system: The first step is to generate in a computer...
Abbreviations
- Apparent exponent:
The apparent exponent is the power-law exponent of the probability density function \( { P(s;s_{\mathrm{c}}) } \) in the scaling region, that is, the intermediate range \( { s_{\mathrm{c}}\gg s\gg s_0 } \) of event sizes s over which a double-logarithmic plot of \( { P(s;s_{\mathrm{c}}) } \) is closest to a straight line.
- Consistent estimator:
An estimator is consistent if it converges to the quantity estimated (the expectation value of the population) as the sample size is increased. For example, the lack of independence of consecutive measurements generated in a numerical simulation can render an estimator inconsistent.
- Corrections to scaling:
In general, pure power-law behavior is found in an observable only to leading order, for example \( \langle{s^2}\rangle = aL^\alpha + b L^\beta + \dots \) with \( { \alpha > \beta } \). While the sub-leading terms can have great physical relevance, more emphasis is normally given to the leading order. The quality of the data analysis when determining the leading order can be improved significantly by allowing for correction terms. The exponents found in these correction terms are usually expected to be universal as well.
- Correlation time:
Fitting the autocorrelation function of an observable to an exponential \( { \exp(-t/\tau) } \) produces the correlation time \( { \tau } \). Although correlations are in general more complicated than the single exponential suggests, the standard deviation of the estimator of the nth moment from N measurements is often estimated to be \( { \sqrt{(2\tau+1)/N} } \) times the estimated standard deviation of the nth moment,
$$ \overline{\sigma}^2\left(\overline{s^n}\right) = \frac{2\tau+1}{N} \overline{\sigma}^2\left({s^n}\right) \: , $$
(1)
as if the number of independent measurements were only \( { N/(2\tau+1) } \).
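This error inflation is straightforward to sketch numerically. The following Python snippet (all names are illustrative, not from the text) generates a correlated AR(1) surrogate series, estimates \( { \tau } \) by summing the normalized autocorrelation function, and inflates the naive standard error of the mean by \( { \sqrt{2\tau+1} } \) as in Eq. (1):

```python
import math
import random

def ar1_series(n, phi=0.8, seed=1):
    """Generate an AR(1) series whose correlations decay as phi**t."""
    rng = random.Random(seed)
    series, x = [], 0.0
    for _ in range(n):
        x = phi * x + rng.gauss(0.0, 1.0)
        series.append(x)
    return series

def autocorrelation_time(s, window=100):
    """Estimate tau by summing the normalised autocorrelation function
    up to the first negative (noise-dominated) lag."""
    n = len(s)
    mean = sum(s) / n
    var = sum((x - mean) ** 2 for x in s) / n
    tau = 0.0
    for t in range(1, window):
        c = sum((s[i] - mean) * (s[i + t] - mean) for i in range(n - t)) / (n - t)
        rho = c / var
        if rho < 0:
            break
        tau += rho
    return tau

def corrected_error(s, tau):
    """Standard error of the mean, inflated by (2*tau + 1) as in Eq. (1)."""
    n = len(s)
    mean = sum(s) / n
    var = sum((x - mean) ** 2 for x in s) / (n - 1)
    return math.sqrt((2 * tau + 1) * var / n)

series = ar1_series(20000)
tau = autocorrelation_time(series)   # analytically phi/(1-phi) = 4 for phi = 0.8
err = corrected_error(series, tau)
```

For phi = 0.8 the exact integrated correlation time is 4, so the corrected error is roughly three times the naive one, as if only \( { N/(2\tau+1) } \) measurements were independent.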
- Estimator:
A numerical estimator is any function that provides an estimate from the sample, that is, the set of all measurements taken. A good estimator is unbiased, consistent and efficient. Very often, such an estimator coincides with the definition of the observable as taken from the exact distribution, for example \( { \overline{s^2}=\frac{1}{N}\sum_i^N s_i^2 } \) for estimating the second moment from an uncorrelated sample \( { s_1, s_2, \dots, s_N } \), with exact value \( { \langle{s^2}\rangle=\int \mskip2mu\mathrm{d}{s} s^2 P(s;s_{\mathrm{c}}) } \). In general, however, a function of observables is not well estimated by applying the same function to the estimates. For example, the square of the first moment \( { \langle{s}\rangle^2 } \) is not well estimated by the numerical estimate \( { \overline{s}^2=(\sum_i s_i/N)^2 } \), as this estimator is biased.
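The bias of the naive estimator \( { \overline{s}^2 } \) can be made visible in a few lines. In this hypothetical sketch (function names illustrative), samples of size N are drawn from a Gaussian with mean 1 and variance 4, so that the naive estimator averages to \( { \langle{s}\rangle^2 + \sigma^2(s)/N } \) rather than \( { \langle{s}\rangle^2 } \):

```python
import random

def mean_square_of_mean(n_samples, n_repeats, mu=1.0, sigma=2.0, seed=0):
    """Average the naive estimator (sample mean)**2 over many samples."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_repeats):
        sample = [rng.gauss(mu, sigma) for _ in range(n_samples)]
        sbar = sum(sample) / n_samples
        total += sbar ** 2
    return total / n_repeats

# The population value of <s>^2 is mu^2 = 1, but the naive estimator
# averages to mu^2 + sigma^2/N = 1 + 4/10 = 1.4 for N = 10.
biased = mean_square_of_mean(10, 20000)
```

The discrepancy 0.4 is exactly the variance of the sample mean, which vanishes only in the limit of infinite sample size.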
- Finite size scaling:
Observables in complex systems that display a power-law dependence on a parameter often diverge in the thermodynamic limit. In finite systems they remain finite, and their value is expected to diverge as a power law of the system size. The relation between the value of the observable and the system size is known as “finite size scaling” (abbreviated FSS).
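A common way to extract an FSS exponent is a least-squares fit in a double-logarithmic plot. The sketch below uses synthetic data with an illustrative exponent (all numbers are assumptions for the example, not from the text):

```python
import math

def fss_exponent(sizes, values):
    """Least-squares slope of log(value) against log(L): the FSS exponent."""
    xs = [math.log(L) for L in sizes]
    ys = [math.log(v) for v in values]
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    num = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    den = sum((x - xbar) ** 2 for x in xs)
    return num / den

# Synthetic moments following <s^2> = a * L**2.25 (illustrative only).
sizes = [64, 128, 256, 512, 1024]
values = [0.5 * L ** 2.25 for L in sizes]
alpha = fss_exponent(sizes, values)
```

With real data, corrections to scaling (see above) make the slope drift with L, so the fit is usually restricted to the largest sizes or extended by correction terms.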
- Gap exponent:
Given that a system displays scaling, the exponents \( { \gamma_n^{\prime} } \) characterizing the dependence of the nth moment on a parameter, such as the system size in the case of finite size scaling, are often linear in n. The gap exponent is the gap between consecutive moments, \( { \gamma^{\prime}_{n+1}-\gamma^{\prime}_n } \).
- Importance sampling:
Importance sampling is a numerical technique to bias the frequency with which configurations are generated, so that states of greater importance, e.g. those with large observables, are generated more often than less important ones. Using a Markov chain to generate the states of an Ising model, as opposed to generating them at random and applying a Boltzmann–Gibbs weight, can be regarded as a form of importance sampling.
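As a minimal illustration of the idea (an assumed example, not from the text): to estimate a far-tail probability of a standard Gaussian, one can sample from a Gaussian shifted into the tail and reweight each draw by the ratio of target to sampling density:

```python
import math
import random

def tail_probability_importance(threshold, n, seed=5):
    """Estimate P(X > threshold) for a standard Gaussian X by sampling
    from a Gaussian centred at the threshold and reweighting each draw
    by the density ratio p(x)/q(x) = exp(-threshold*x + threshold**2/2)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(threshold, 1.0)  # q: importance density, centred in the tail
        if x > threshold:
            total += math.exp(-threshold * x + threshold ** 2 / 2.0)
    return total / n

# Direct sampling would hit the region x > 4 only a few times in 10^5
# draws; the reweighted estimate converges quickly to P(X > 4) ~ 3.2e-5.
p_is = tail_probability_importance(4.0, 100000)
```

The importance-sampled states (large x) dominate the estimate, while the weights restore the correct, unbiased expectation value.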
- Lower cutoff:
The lower cutoff in a probability density function of event sizes in complex systems is a value of the event size above which the distribution displays universal behavior. Below this value the system is governed by microscopic details. For many systems, the probability density functions for different system sizes coincide for event sizes below the lower cutoff.
- Markov chain Monte Carlo:
A Monte Carlo technique whereby configurations are generated by transforming the state of the system according to a transition probability. The stationary probability distribution of the different states corresponds to the target probability distribution, i.e. the distribution to be modeled. In complex systems, Markov chain Monte Carlo (abbreviated MCMC) is the natural method to study a model: configurations of the system are generated with the frequency with which they occur in the exact distribution.
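A minimal MCMC sketch, assuming a two-level system with Boltzmann–Gibbs target weights (the model and names are illustrative only): the Metropolis rule accepts a proposed move with probability \( { \min(1, \exp(-\beta\Delta E)) } \), and the chain then visits the states with the target frequencies:

```python
import math
import random

def metropolis_two_level(beta, delta_e, n_steps, seed=2):
    """Markov chain over two states with energies 0 and delta_e.
    Each step proposes the other state and accepts it with the
    Metropolis probability min(1, exp(-beta * dE))."""
    rng = random.Random(seed)
    state = 0
    counts = [0, 0]
    for _ in range(n_steps):
        proposed = 1 - state
        d_e = delta_e if proposed == 1 else -delta_e
        if d_e <= 0 or rng.random() < math.exp(-beta * d_e):
            state = proposed
        counts[state] += 1
    return counts

counts = metropolis_two_level(beta=1.0, delta_e=1.0, n_steps=100000)
# In the stationary distribution the occupation ratio approaches the
# Boltzmann factor exp(-beta * delta_e) = exp(-1).
ratio = counts[1] / counts[0]
```

Consecutive states of the chain are correlated, which is exactly why the correlation time of the previous entries enters the error estimate of any observable measured along the chain.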
- Moment analysis:
In general it is difficult to identify and quantify scaling behavior in probability density functions. The most general analysis is a data collapse, the quality of which is not easily determined. The most widely used method to determine scaling exponents and moment ratios of the universal scaling function is therefore a moment analysis. Based on the scaling assumption for the probability density function, moments scale as a power of the upper cutoff, with amplitudes certain ratios of which are universal.
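The procedure can be illustrated with a toy distribution \( { P(s)\propto s^{-\tau}\exp(-s/s_{\mathrm{c}}) } \) (an assumed scaling form for this sketch only): estimating the exponent of each moment from two cutoffs reproduces \( { n+1-\tau } \) up to corrections, and the difference of consecutive exponents gives the gap exponent:

```python
import math

def moment_with_cutoff(n, tau, s_c):
    """n-th moment of P(s) ~ s**(-tau) * exp(-s/s_c), summed over s = 1, 2, ..."""
    s_max = int(30 * s_c)  # the exponential makes larger s negligible
    z = m = 0.0
    for s in range(1, s_max + 1):
        w = s ** (-tau) * math.exp(-s / s_c)
        z += w
        m += w * s ** n
    return m / z

def scaling_exponent(n, tau, s_c1, s_c2):
    """Estimate the exponent of <s^n> against s_c from two cutoffs."""
    m1 = moment_with_cutoff(n, tau, s_c1)
    m2 = moment_with_cutoff(n, tau, s_c2)
    return math.log(m2 / m1) / math.log(s_c2 / s_c1)

# For tau = 1.5 the n-th moment scales like s_c**(n + 1 - tau), so
# consecutive exponents differ by the gap exponent, here 1.
g2 = scaling_exponent(2, 1.5, 500, 1000)
g3 = scaling_exponent(3, 1.5, 500, 1000)
gap = g3 - g2
```

The individual exponents carry corrections to scaling from the lower cutoff, but these largely cancel in the difference, which is why the gap exponent is estimated so robustly.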
- Monte Carlo:
Technique to calculate numerical estimates for expectation values in stochastic models by generating configurations at random. More generally, Monte Carlo (abbreviated MC) is a stochastic technique to numerically integrate a high-dimensional integral, here corresponding to the calculation of an expectation value by integrating over all degrees of freedom, constituting the phase space of the system.
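A minimal sketch of direct-sampling Monte Carlo, here estimating the expectation \( { \langle x^2\rangle } \) for x uniform on [0, 1], whose exact value is 1/3 (the example is illustrative only):

```python
import random

def mc_expectation(f, n_samples, seed=3):
    """Direct-sampling Monte Carlo estimate of <f(x)> for x uniform on [0, 1]."""
    rng = random.Random(seed)
    return sum(f(rng.random()) for _ in range(n_samples)) / n_samples

# Estimate the integral of x^2 over [0, 1]; the statistical error
# decreases as 1/sqrt(n_samples), independent of dimension.
estimate = mc_expectation(lambda x: x * x, 100000)
```

The same 1/sqrt(N) convergence holds in high dimension, which is what makes Monte Carlo the method of choice for the phase-space integrals described above.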
- Parameter space and phase space:
The parameter space of a complex system is the space spanned by all parameters of the model, such as system size and couplings of the interacting agents. A numerical study usually aims to probe the model throughout a large part of the parameter space. The number of parameters therefore needs to be as small as possible. Often only a single parameter exists. Leaving the parameters fixed, an individual numerical simulation samples the phase space available to the system. The phase space is the set of all possible configurations or states of the model. This space is very high dimensional and it is virtually impossible to sample this space homogeneously. A numerical simulation relies on the assumption that the sample taken nevertheless is sufficiently representative to allow for reliable estimates.
- Stationary distribution and transient:
Most complex systems studied possess a limiting distribution, i.e. the probability distribution in phase space converges. This is the stationary distribution. A free random walker, for example, does not possess a stationary distribution, while a random walker in a harmonic potential does. Due to correlations, the probability distribution of states after any finite time generally depends on the initial condition, which is therefore often chosen to be random. The measurements discarded due to these correlations are called the transient.
- Unbiased estimator:
An estimator is unbiased if its population average equals the quantity estimated and is thus independent of the sample size. For example, \( { \overline{s}=\sum_i^N s_i/N } \) from an uncorrelated sample \( { s_1,\dots,s_N } \) is an unbiased estimator of the first moment. Estimating the variance of s as \( { \overline{\sigma}^2(s)=\overline{s^2}-\overline{s}^2 } \), however, is biased, because the population mean of \( { \overline{s^2}-\overline{s}^2 } \) is \( { ((N-1)/N) (\langle{s^2}\rangle-\langle{s}\rangle^2) } \), which depends on the sample size N.
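The (N-1)/N factor can be checked numerically. In this illustrative sketch, the naive variance estimator is averaged over many independent Gaussian samples of size N = 5 and comes out near \( { ((N-1)/N)\sigma^2 = 0.8 } \) for unit variance, rather than 1:

```python
import random

def naive_variance(sample):
    """Biased estimator: mean of squares minus square of the mean."""
    n = len(sample)
    mean = sum(sample) / n
    return sum(x * x for x in sample) / n - mean ** 2

def average_naive_variance(n_samples, n_repeats, sigma=1.0, seed=4):
    """Population average of the naive estimator over many samples."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_repeats):
        sample = [rng.gauss(0.0, sigma) for _ in range(n_samples)]
        total += naive_variance(sample)
    return total / n_repeats

# For N = 5 the naive estimator averages to (N-1)/N * sigma^2 = 0.8;
# multiplying by N/(N-1) removes the bias.
avg = average_naive_variance(5, 50000)
```

This is the usual Bessel correction: rescaling the naive estimate by N/(N-1) yields an unbiased variance estimator for uncorrelated samples.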
- Upper cutoff:
The upper cutoff is the characteristic scale of the universal part of the event size distribution. It is a measure of the event size at which the scaling function of the event size distribution drops off. Moments \( { \langle{s^n}\rangle } \) with sufficiently large n are to leading order a power of the upper cutoff. The upper cutoff itself is expected to be a power law of the system parameter, i.e. the system size in the case of finite size scaling. The exponent controlling the relation between upper cutoff and system size is the gap exponent.
Bibliography
Primary Literature
Metropolis N, Ulam S (1949) The Monte Carlo Method. J Am Stat Assoc 44:335–341
Metropolis N, Rosenbluth AW, Rosenbluth MN, Teller AH, Teller E (1953) Equation of State Calculations by Fast Computing Machines. J Chem Phys 21:1087–1092
Landau DP, Binder K (2000) A Guide to Monte Carlo Simulations in Statistical Physics. Cambridge University Press, Cambridge
Bak P, Tang C, Wiesenfeld K (1987) Self-Organized Criticality: An Explanation of 1/f Noise. Phys Rev Lett 59:381–384
Bak P, Tang C, Wiesenfeld K (1988) Self-Organized Criticality. Phys Rev A 38:364–374
van Kampen NG (1992) Stochastic Processes in Physics and Chemistry, 3rd impression 2001, enlarged and revised edn. Elsevier, Amsterdam
Stauffer D, Aharony A (1994) Introduction to Percolation Theory. Taylor, London
Binney JJ, Dowrick NJ, Fisher AJ, Newman MEJ (1998) The Theory of Critical Phenomena. Clarendon Press, Oxford
Binder K, Heermann DW (1997) Monte Carlo Simulation in Statistical Physics. Springer, Berlin
Zia RKP, Schmittmann B (2007) Probability currents as principal characteristics in the statistical mechanics of non-equilibrium steady states. J Stat Mech P07012
de Oliveira MM, Dickman R (2005) How to simulate the quasistationary state. Phys Rev E 71:016129
Spitzer F (1970) Interaction of Markov processes. Adv Math 5:256–290
Bouchaud JP, Comtet A, Georges A, Le Doussal P (1990) Classical Diffusion of a Particle in a One-Dimensional Random Force Field. Ann Phys 201:285–341
Grassberger P (2002) Go with the winners: a general Monte Carlo strategy. Comp Phys Comm 147:64–70
Pradhan P, Dhar D (2006) Sampling rare fluctuations of height in the Oslo ricepile model. J Phys A 40:2639–2650
Bouchaud JP, Potters M (2003) Theory of Financial Risk and Derivative Pricing: From Statistical Physics to Risk Management. Cambridge University Press, Cambridge
De Menech M, Stella AL, Tebaldi C (1998) Rare events and breakdown of simple scaling in the Abelian sandpile model. Phys Rev E 58:R2677–R2680
Press WH, Teukolsky SA, Vetterling WT, Flannery BP (1992) Numerical Recipes in C, 2nd edn. Cambridge University Press, New York
Knuth DE (1997) The Art of Computer Programming, vol 3. Seminumerical Algorithms, 2nd edn. Addison Wesley, Massachusetts
Matsumoto M, Nishimura T (1998) Mersenne Twister: A 623-Dimensionally Equidistributed Uniform Pseudorandom Number Generator. ACM Trans Model Comp Sim 8:3–30
Gentle JE (1998) Random Number Generation and Monte Carlo Methods. Springer, Berlin
Ferrenberg AM, Landau DP (1992) Monte Carlo Simulations: Hidden Errors from “Good” Random Number Generators. Phys Rev Lett 69:3382–3384
Efron B (1982) The Jackknife, the Bootstrap and Other Resampling Plans. SIAM, Philadelphia
Berg BA (1992) Double Jackknife bias-corrected estimators. Comp Phys Comm 69:7–14
Fisher ME (1967) The Theory of equilibrium critical phenomena. Rep Prog Phys 30:615–730
Privman V, Hohenberg PC, Aharony A (1991) Universal Critical-Point Amplitude Relations. In: Domb C, Lebowitz JL (eds) Phase Transitions and Critical Phenomena, vol 14. Academic Press, New York, pp 1–134
Christensen K, Corral A, Frette V, Feder J, Jøssang T (1996) Tracer Dispersion in a Self-Organized Critical System. Phys Rev Lett 77:107–110
Pruessner G, Jensen HJ (2002) Broken scaling in the forest-fire model. Phys Rev E 65:056707–1–8. Preprint cond-mat/0201306
Wegner FJ (1972) Corrections to Scaling Laws. Phys Rev B 5:4529–4536
Dowd K, Severance C (1998) High Performance Computing, 2nd edn. O'Reilly, Sebastopol
Pfeuty P, Toulouse G (1977) Introduction to the Renormalization Group and to Critical Phenomena. Wiley, Chichester
Ferrenberg AM, Landau DP, Binder K (1991) Statistical and Systematic Errors in Monte Carlo Sampling. J Stat Phys 63:867–882
Milchev A, Binder K, Heermann DW (1986) Fluctuations and Lack of Self-Averaging in the Kinetics of Domain Growth. Z Phys B 63:521–535
Christensen K, Moloney NR (2005) Complexity and Criticality. Imperial College Press, London
Pruessner G, Jensen HJ (2004) Efficient algorithm for the forest fire model. Phys Rev E 70:066707–1–25. Preprint cond-mat/0309173
Books and Reviews
Anderson TW (1964) The Statistical Analysis of Time Series. Wiley, London
Barenblatt GI (1996) Scaling, self-similarity, and intermediate asymptotics. Cambridge University Press, Cambridge
Berg BA (2004) Markov Chain Monte Carlo Simulations and Their Statistical Analysis. World Scientific, Singapore
Brandt S (1998) Data Analysis. Springer, Berlin
Jensen HJ (1998) Self-Organized Criticality. Cambridge University Press, New York
Liggett TM (2005) Stochastic Interacting Systems: Contact, Voter and Exclusion Processes. Springer, Berlin
Marro J, Dickman R (1999) Nonequilibrium Phase Transitions in Lattice Models. Cambridge University Press, New York
Newman MEJ, Barkema GT (1999) Monte Carlo Methods in Statistical Physics. Oxford University Press, New York
Stanley HE (1971) Introduction to Phase Transitions and Critical Phenomena. Oxford University Press, New York
© 2009 Springer-Verlag
Cite this entry
Pruessner, G. (2009). Probability Densities in Complex Systems, Measuring. In: Meyers, R. (eds) Encyclopedia of Complexity and Systems Science. Springer, New York, NY. https://doi.org/10.1007/978-0-387-30440-3_417
Print ISBN: 978-0-387-75888-6
Online ISBN: 978-0-387-30440-3