
Donsker's theorem

Figure: Donsker's invariance principle for simple random walk on $\mathbb{Z}$.

In probability theory, Donsker's theorem (also known as Donsker's invariance principle, or the functional central limit theorem), named after Monroe D. Donsker, is a functional extension of the central limit theorem for empirical distribution functions. Specifically, the theorem states that an appropriately centered and scaled version of the empirical distribution function converges to a Gaussian process.

Let $X_1, X_2, X_3, \ldots$ be a sequence of independent and identically distributed (i.i.d.) random variables with mean 0 and variance 1. Let $S_n := \sum_{i=1}^{n} X_i$. The stochastic process $S := (S_n)_{n \in \mathbb{N}}$ is known as a random walk. Define the diffusively rescaled random walk (partial-sum process) by

$$W^{(n)}(t) := \frac{S_{\lfloor nt \rfloor}}{\sqrt{n}}, \qquad t \in [0, 1].$$

The central limit theorem asserts that $W^{(n)}(1)$ converges in distribution to a standard Gaussian random variable $W(1)$ as $n \to \infty$. Donsker's invariance principle[1][2] extends this convergence to the whole function $W^{(n)} := (W^{(n)}(t))_{t \in [0,1]}$. More precisely, in its modern form, Donsker's invariance principle states that, as random variables taking values in the Skorokhod space $\mathcal{D}[0, 1]$, the random function $W^{(n)}$ converges in distribution to a standard Brownian motion $W := (W(t))_{t \in [0, 1]}$ as $n \to \infty$.
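As a quick numerical illustration (not part of the article), the following Python sketch simulates the rescaled partial-sum process; the Rademacher $\pm 1$ steps, grid size, and sample sizes are illustrative choices, and any i.i.d. steps with mean 0 and variance 1 would do.

```python
import numpy as np

rng = np.random.default_rng(0)

def rescaled_walk(n, num_points=1000, rng=rng):
    """Sample the diffusively rescaled partial-sum process
    W^(n)(t) = S_{floor(nt)} / sqrt(n) on a grid of t in [0, 1]."""
    steps = rng.choice([-1.0, 1.0], size=n)                   # i.i.d. steps, mean 0, variance 1
    partial_sums = np.concatenate(([0.0], np.cumsum(steps)))  # S_0, S_1, ..., S_n
    t = np.linspace(0.0, 1.0, num_points)
    return t, partial_sums[np.floor(n * t).astype(int)] / np.sqrt(n)

# W^(n)(1) should be approximately standard normal for large n,
# as asserted by the ordinary central limit theorem.
endpoints = np.array([rescaled_walk(10_000)[1][-1] for _ in range(2_000)])
print(endpoints.mean(), endpoints.var())   # close to 0 and 1
```

For large $n$ the sampled paths resemble Brownian motion, and the printed mean and variance of $W^{(n)}(1)$ come out close to 0 and 1.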

Figure: Donsker–Skorokhod–Kolmogorov theorem for uniform distributions.
Figure: Donsker–Skorokhod–Kolmogorov theorem for normal distributions.

Formal statement


Let $F_n$ be the empirical distribution function of the sequence $X_1, X_2, \ldots$ of i.i.d. random variables with distribution function $F$. Define the centered and scaled version of $F_n$ by

$$G_n(x) = \sqrt{n}\,(F_n(x) - F(x)),$$

indexed by $x \in \mathbb{R}$. By the classical central limit theorem, for fixed $x$, the random variable $G_n(x)$ converges in distribution to a Gaussian (normal) random variable $G(x)$ with zero mean and variance $F(x)(1 - F(x))$ as the sample size $n$ grows.

Theorem (Donsker, Skorokhod, Kolmogorov). The sequence of $G_n(x)$, as random elements of the Skorokhod space $\mathcal{D}(-\infty, \infty)$, converges in distribution to a Gaussian process $G$ with zero mean and covariance given by

$$\operatorname{cov}[G(s), G(t)] = \mathbb{E}[G(s)\,G(t)] = \min\{F(s), F(t)\} - F(s)\,F(t).$$

The process $G(x)$ can be written as $B(F(x))$, where $B$ is a standard Brownian bridge on the unit interval.
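As a hedged illustration of this covariance structure (not from the cited sources), the sketch below picks $F$ to be the Exp(1) distribution, simulates $G_n$ at two fixed points $s$ and $t$, and compares the Monte Carlo covariance with $\min\{F(s), F(t)\} - F(s)F(t)$; the distribution, evaluation points, and sample sizes are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative choice of F: the Exp(1) distribution, F(x) = 1 - exp(-x).
F = lambda x: 1.0 - np.exp(-x)

def empirical_process(n, grid, rng=rng):
    """One realisation of G_n(x) = sqrt(n) (F_n(x) - F(x)) on a grid of points."""
    sample = rng.exponential(scale=1.0, size=n)
    F_n = (sample[None, :] <= grid[:, None]).mean(axis=1)   # empirical CDF at each grid point
    return np.sqrt(n) * (F_n - F(grid))

s, t = 0.5, 2.0                      # two fixed evaluation points
grid = np.array([s, t])
draws = np.array([empirical_process(5_000, grid) for _ in range(4_000)])

# The theorem predicts cov(G(s), G(t)) = min(F(s), F(t)) - F(s) F(t),
# which is the covariance of B(F(x)) for a standard Brownian bridge B.
print(np.cov(draws.T)[0, 1], min(F(s), F(t)) - F(s) * F(t))
```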

Proof sketch


For continuous probability distributions, the general case reduces to the case where the distribution is uniform on $[0, 1]$ by the inverse transform.
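A minimal sketch of this reduction (an illustration only, with Exp(1) as an arbitrary continuous distribution): applying $F$ to the data produces Uniform$[0, 1]$ variables, so the empirical process of the original sample agrees with that of uniform samples evaluated at $t = F(x)$.

```python
import numpy as np

rng = np.random.default_rng(4)

# Probability integral transform: if X has continuous CDF F, then F(X) is Uniform[0,1].
# Illustrated with an arbitrary continuous choice of F, here Exp(1).
F = lambda x: 1.0 - np.exp(-x)

x = rng.exponential(scale=1.0, size=100_000)
u = F(x)

# The empirical CDF of F(X) is close to the identity on [0, 1], so the
# empirical process of X under F matches that of uniform samples under t -> t.
grid = np.linspace(0.01, 0.99, 9)
print(np.round((u[:, None] <= grid).mean(axis=0), 3))   # approximately equal to grid
```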

Given any finite sequence of times $0 < t_1 < t_2 < \cdots < t_n < 1$, write $F_N$ for the empirical distribution function of $N$ i.i.d. Uniform$[0, 1]$ samples. Then $N F_N(t_1)$ is distributed as a binomial distribution with mean $N t_1$ and variance $N t_1 (1 - t_1)$.

Similarly, the joint distribution of $(F_N(t_1), F_N(t_2), \ldots, F_N(t_n))$ is a multinomial distribution. Now, the central limit approximation for multinomial distributions shows that $\sqrt{N}\,(F_N(t_1) - t_1, \ldots, F_N(t_n) - t_n)$ converges in distribution to a Gaussian random vector with covariance matrix with entries $\min(t_i, t_j) - t_i t_j$, which is precisely the covariance matrix for the Brownian bridge.
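The following Monte Carlo sketch (illustrative times and sample sizes, not part of the proof) checks this finite-dimensional claim for Uniform$[0, 1]$ data: the sample covariance of $\sqrt{N}(F_N(t_i) - t_i)$ should be close to $\min(t_i, t_j) - t_i t_j$.

```python
import numpy as np

rng = np.random.default_rng(2)

N, reps = 2_000, 5_000
times = np.array([0.2, 0.5, 0.8])      # illustrative 0 < t_1 < t_2 < t_3 < 1

# For Uniform[0,1] samples, N * F_N(t) counts how many of the N points fall below t,
# so (N F_N(t_1), ...) has a multinomial structure over the cells cut out by the times.
U = rng.random((reps, N))
F_N = (U[:, :, None] <= times).mean(axis=1)          # empirical CDF at each time, per replication
scaled = np.sqrt(N) * (F_N - times)                  # sqrt(N) (F_N(t_i) - t_i)

empirical_cov = np.cov(scaled.T)
bridge_cov = np.minimum.outer(times, times) - np.outer(times, times)
print(np.round(empirical_cov, 3))
print(np.round(bridge_cov, 3))                       # entries min(t_i, t_j) - t_i t_j
```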

History

Kolmogorov (1933) showed that when $F$ is continuous, the supremum $\sup_t G_n(t)$ and the supremum of absolute value $\sup_t |G_n(t)|$ converge in distribution to the laws of the same functionals of the Brownian bridge $B(t)$; see the Kolmogorov–Smirnov test. In 1949 Doob asked whether the convergence in distribution held for more general functionals, thus formulating a problem of weak convergence of random functions in a suitable function space.[3]
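As a numerical aside (not taken from the cited works), the sketch below computes the functional $\sup_t |G_n(t)|$ for Uniform$[0, 1]$ samples and compares its empirical distribution with the classical series for the law of $\sup_t |B(t)|$; the sample sizes and evaluation point are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)

def ks_statistic(sample):
    """sup_x |G_n(x)| = sqrt(n) * sup_x |F_n(x) - F(x)| for Uniform[0,1] data (F(x) = x)."""
    u = np.sort(sample)
    n = len(u)
    i = np.arange(1, n + 1)
    d_plus = np.max(i / n - u)          # largest excess of F_n over F, attained at an order statistic
    d_minus = np.max(u - (i - 1) / n)   # largest excess of F over F_n, just below an order statistic
    return np.sqrt(n) * max(d_plus, d_minus)

def kolmogorov_cdf(x, terms=100):
    """Limiting law of sup_t |B(t)|: K(x) = 1 - 2 * sum_{k>=1} (-1)^(k-1) exp(-2 k^2 x^2)."""
    k = np.arange(1, terms + 1)
    return 1.0 - 2.0 * np.sum((-1.0) ** (k - 1) * np.exp(-2.0 * k**2 * x**2))

stats = np.array([ks_statistic(rng.random(1_000)) for _ in range(5_000)])
x = 1.0
print((stats <= x).mean(), kolmogorov_cdf(x))   # empirical vs. limiting probability, close for large n
```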

In 1952 Donsker stated and proved (not quite correctly)[4] a general extension for the Doob–Kolmogorov heuristic approach. In the original paper, Donsker proved that the convergence in law of Gn to the Brownian bridge holds for Uniform[0,1] distributions with respect to uniform convergence in t over the interval [0,1].[2]

However, Donsker's formulation was not quite correct because of the problem of measurability of the functionals of discontinuous processes. In 1956 Skorokhod and Kolmogorov defined a separable metric $d$, called the Skorokhod metric, on the space of càdlàg functions on $[0, 1]$, such that convergence in $d$ to a continuous function is equivalent to convergence in the sup norm, and showed that $G_n$ converges in law in $\mathcal{D}[0, 1]$ to the Brownian bridge.

Later Dudley reformulated Donsker's result to avoid the problem of measurability and the need for the Skorokhod metric. One can prove[4] that there exist $X_i$, i.i.d. uniform in $[0, 1]$, and a sequence of sample-continuous Brownian bridges $B_n$, such that

$$\|G_n - B_n\|_\infty = \sup_t |G_n(t) - B_n(t)|$$

is measurable and converges in probability to 0. An improved version of this result, providing more detail on the rate of convergence, is the Komlós–Major–Tusnády approximation.


References

  1. Donsker, M. D. (1951). "An invariance principle for certain probability limit theorems". Memoirs of the American Mathematical Society (6). MR 0040613.
  2. Donsker, M. D. (1952). "Justification and extension of Doob's heuristic approach to the Kolmogorov–Smirnov theorems". Annals of Mathematical Statistics. 23 (2): 277–281. doi:10.1214/aoms/1177729445. MR 0047288. Zbl 0046.35103.
  3. Doob, Joseph L. (1949). "Heuristic approach to the Kolmogorov–Smirnov theorems". Annals of Mathematical Statistics. 20 (3): 393–403. doi:10.1214/aoms/1177729991. MR 0030732. Zbl 0035.08901.
  4. Dudley, R. M. (1999). Uniform Central Limit Theorems. Cambridge University Press. ISBN 978-0-521-46102-3.