Uniform Integrability
Uniform integrability is a foundational concept in probability theory and analysis that provides a way to control the tails of a family of integrable random variables. It plays a central role in justifying the interchange of limit operations and expectations, and it helps explain when convergence in probability or almost sure convergence implies convergence in the stronger L1 sense. In practical terms, uniform integrability helps prevent “mass” from escaping to infinity as a sequence of random variables evolves, which is crucial for stable statistical inference and rigorous probabilistic reasoning. See also Lebesgue integration and L^1 space for the broader analytic setting in which these ideas live.
Definition
A family of integrable random variables {X_i} is said to be uniformly integrable if the contribution of the tails to the expectations can be made uniformly small, no matter which index i is chosen. A standard formulation is:
- sup_i E[|X_i|; |X_i| > K] → 0 as K → ∞.
Here E[|X_i|; |X_i| > K] denotes the expectation of |X_i| restricted to the event { |X_i| > K }. This is equivalent to saying that for every ε > 0 there exists a K such that sup_i ∫_{|X_i| > K} |X_i| dP < ε.
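The defining condition can be checked numerically on concrete families. The following minimal Monte Carlo sketch (the toy family X_i = Z/i with Z standard normal is an assumption made for illustration; it is dominated by |Z| and hence uniformly integrable) estimates sup_i E[|X_i|; |X_i| > K] for several thresholds K and shows it shrinking toward zero.
```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.standard_normal(1_000_000)  # one base sample of Z, reused for each member

def tail_expectation(x, K):
    """Monte Carlo estimate of E[|X|; |X| > K] from a sample x of X."""
    a = np.abs(x)
    return np.mean(np.where(a > K, a, 0.0))

# Toy family X_i = Z / i, i = 1..50 (an assumption for this sketch); it is
# dominated by |Z|, so the tail supremum should vanish as K grows.
family = [z / i for i in range(1, 51)]

for K in [0.5, 1.0, 2.0, 4.0]:
    sup_tail = max(tail_expectation(x, K) for x in family)
    print(f"K = {K:>3}: sup_i E[|X_i|; |X_i| > K] ≈ {sup_tail:.4f}")
```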
There are alternative but equivalent characterizations, including:
- Uniform absolute continuity plus L1-boundedness: sup_i E|X_i| < ∞, and for every ε > 0 there exists δ > 0 such that for all events A with P(A) < δ, we have sup_i ∫_A |X_i| dP < ε.
- The de la Vallée Poussin criterion: there exists a convex, increasing function φ with φ(0) = 0 and φ(u)/u → ∞ as u → ∞, such that sup_i E[φ(|X_i|)] < ∞.
These viewpoints connect uniform integrability to the broader landscape of measure theory and functional analysis, including the framework of L^1 space and the growth conditions that guarantee controlled tails.
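As a hedged illustration of the de la Vallée Poussin criterion, the sketch below takes φ(u) = u², which is convex with φ(0) = 0 and φ(u)/u → ∞, applied to an assumed toy family of exponential variables. A uniform bound on E[φ(|X_i|)] both certifies uniform integrability and, via a Markov-type inequality, bounds every tail expectation by sup_i E[φ(|X_i|)]/K.
```python
import numpy as np

rng = np.random.default_rng(1)

def phi(u):
    # de la Vallée Poussin test function: u**2 is convex, phi(0) = 0,
    # and phi(u)/u -> infinity, so a uniform bound on E[phi(|X_i|)] implies UI.
    return u ** 2

# Assumed toy family: X_i ~ Exponential with mean 1/i, i = 1..20.
family = [rng.exponential(scale=1.0 / i, size=500_000) for i in range(1, 21)]

sup_phi = max(np.mean(phi(np.abs(x))) for x in family)
print(f"sup_i E[phi(|X_i|)] ≈ {sup_phi:.4f}  (finite, so the family is UI)")

# On {|X| > K} we have |X| <= |X|**2 / K, so E[|X|; |X| > K] <= E[phi(|X|)] / K.
for K in [1.0, 2.0, 4.0]:
    print(f"K = {K}: sup_i E[|X_i|; |X_i| > K] <= {sup_phi / K:.4f}")
```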
Characterizations and relationships
- Relationship to domination: If there exists an integrable function g with |X_i| ≤ g for all i, then the family {X_i} is uniformly integrable, since E[|X_i|; |X_i| > K] ≤ E[g; g > K] → 0 as K → ∞. The converse fails in general, but domination is a convenient sufficient condition that often appears in proofs and applications; a numerical sketch of this bound follows this list.
- de la Vallée Poussin criterion: This provides a practical check for UI by exhibiting a single function φ that controls all tails. If sup_i E[φ(|X_i|)] is finite for such a φ, then {X_i} is uniformly integrable.
- Connection to convergence theorems: Uniform integrability is a key hypothesis in results that allow interchanging limits and expectations. In particular, if X_n → X in probability and {X_n} is uniformly integrable, then E[X_n] → E[X]. More generally, Vitali's convergence theorem gives convergence in L1 under UI plus convergence in probability. For a domination-based route, the Dominated Convergence Theorem provides L1 convergence when the X_n are dominated by an integrable function and converge almost surely.
- Related spaces and concepts: Uniform integrability interacts with Convergence in probability and with convergence in the scale of L^p spaces. It is also closely tied to tightness of families of probability measures, which underlies results like Prokhorov's theorem on weak convergence.
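The domination bound from the first item above can be verified numerically. This is a minimal sketch under the assumption of a toy dominated family |X_i| = |sin(i)·Z| ≤ |Z|, comparing each member's tail expectation against the single tail expectation of the dominating variable g = |Z|.
```python
import numpy as np

rng = np.random.default_rng(2)
z = rng.standard_normal(1_000_000)
g = np.abs(z)  # dominating integrable variable (an assumption for this sketch)

# Toy dominated family: |X_i| = |sin(i) * Z| <= |Z| = g for every i.
family = [np.abs(np.sin(i) * z) for i in range(1, 31)]

def tail(x, K):
    """Monte Carlo estimate of E[X; X > K] for a nonnegative sample x."""
    return np.mean(np.where(x > K, x, 0.0))

for K in [1.0, 2.0, 3.0]:
    sup_member = max(tail(x, K) for x in family)
    print(f"K = {K}: sup_i E[|X_i|; |X_i| > K] ≈ {sup_member:.4f} "
          f"<= E[g; g > K] ≈ {tail(g, K):.4f}")
```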
Examples and non-examples
- Non-UI example: Consider X_n = n on a set A_n with probability P(A_n) = 1/n and X_n = 0 elsewhere. Then E[|X_n|] = 1 for all n, but for any fixed K and any index n > K, E[|X_n|; |X_n| > K] = n · P(A_n) = 1, so the supremum over n does not tend to zero as K → ∞. Thus {X_n} is not uniformly integrable; the sketch after this list computes these tail expectations explicitly.
- UI example: If there exists a dominating integrable function g with |X_i| ≤ g almost surely for all i, then {X_i} is uniformly integrable (and in particular bounded in L1). More generally, if sup_i E|X_i| < ∞ and the tail behavior is controlled uniformly, for instance via the de la Vallée Poussin criterion, the family is UI.
- Tail-control examples: Uniformly integrable families often arise in econometrics and statistics when researchers impose moment bounds and tail restrictions to ensure stable limit behavior of estimators or test statistics.
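The non-UI example above can be worked out exactly rather than by simulation: the only nonzero value of |X_n| is n, attained with probability 1/n, so the tail expectation equals 1 whenever n exceeds the threshold K. The short sketch below is a direct transcription of that arithmetic and confirms that the supremum never shrinks.
```python
from fractions import Fraction

def tail_expectation(n, K):
    """E[|X_n|; |X_n| > K] for X_n = n on A_n with P(A_n) = 1/n, and 0 elsewhere.

    |X_n| takes the value n with probability 1/n and 0 otherwise, so the tail
    expectation is n * (1/n) = 1 when n > K and 0 when n <= K.
    """
    return n * Fraction(1, n) if n > K else Fraction(0)

for K in [10, 100, 1000]:
    # Any member with index n > K already contributes tail mass exactly 1.
    sup_tail = max(tail_expectation(n, K) for n in range(1, 2 * K))
    print(f"K = {K:>5}: sup_n E[|X_n|; |X_n| > K] = {sup_tail}")
```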
Theorems and consequences
- Interchange of limit and expectation: If X_n → X almost surely (or in probability) and {X_n} is uniformly integrable, then X is integrable and E[X_n] → E[X]. Without uniform integrability, mass escaping into the tails can prevent the expectations from converging, as in the non-UI example above.
- Vitali convergence and L1 convergence: Uniform integrability together with convergence in probability yields convergence in L1, i.e., E|X_n − X| → 0. This strengthens convergence results beyond mere almost sure or probabilistic convergence; a numerical illustration follows this list.
- Dominated vs. uniform integrability: The Dominated Convergence Theorem gives L1 convergence when there is a single integrable dominating function. Uniform integrability generalizes the domination idea: it replaces a single global bound with a tail-control condition that can hold for families without a common dominating bound.
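To contrast these statements numerically, the hedged sketch below compares two assumed toy sequences: X_n = n·1{U < 1/n}, which converges to 0 in probability but is not uniformly integrable and keeps E|X_n| near 1, and Y_n = Z + W/n, which is dominated by |Z| + |W| (hence UI) and converges to Z in L1, as Vitali's theorem predicts.
```python
import numpy as np

rng = np.random.default_rng(3)
u = rng.random(2_000_000)            # U ~ Uniform(0, 1)
z = rng.standard_normal(2_000_000)   # limit variable Z
w = rng.standard_normal(2_000_000)   # perturbation W

for n in [10, 100, 1000, 10_000]:
    # Not UI: X_n = n * 1{U < 1/n} -> 0 in probability, yet E|X_n - 0| stays near 1.
    x_n = np.where(u < 1.0 / n, float(n), 0.0)
    # UI (dominated by |Z| + |W|): Y_n = Z + W/n -> Z, and E|Y_n - Z| -> 0.
    y_n = z + w / n
    print(f"n = {n:>6}: E|X_n| ≈ {np.abs(x_n).mean():.3f}   "
          f"E|Y_n - Z| ≈ {np.abs(y_n - z).mean():.5f}")
```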
Applications
- Probability theory and statistics: UI is a standard tool in proving limit theorems for sequences of random variables, including the convergence of expected values in the presence of varying distributions. See Convergence in probability and Vitali convergence theorem for standard results that rely on UI.
- Stochastic processes: In the study of process convergence and sample-path properties, UI helps justify exchanging limits with expectations along stopping times or in the evaluation of long-run averages.
- Econometrics and risk assessment: When modeling uncertain quantities with heavy tails or heterogeneous data sources, uniform integrability provides a framework to guarantee meaningful limiting behavior of risk measures and estimators.
- Functional analysis and optimization: In the context of integrals over function spaces, UI interacts with compactness arguments and with criteria like the de la Vallée Poussin criterion to control tails of families of functions.