## Ergodic Theory

### Summary

| | |
| --- | --- |
| EC | 8 |
| Location | Vrije Universiteit |
| Weeks | 37-51 |
| Lecture | Tuesday, 14:00-16:45 |
| Provider | Discrete Mathematics, Algebra and Number Theory (Diamant), Stochastics (Star) |
| Links | Course page (requires login) |

**[FALL 2020]**

**Prerequisites**

A background in measure theory, and some elementary knowledge of undergraduate functional analysis and point-set topology. For example, the contents of the first four chapters of Walter Rudin, *Real and Complex Analysis*, third edition, McGraw-Hill Book Co., New York, 1987 (xiv+416 pp., ISBN 0-07-054234-1) would provide sufficient background for this course.

**General Description**

The roots of ergodic theory go back to Boltzmann's ergodic hypothesis concerning the equality of the time mean and the space mean of molecules in a gas, i.e., the long-term time average along a single trajectory should equal the average over all trajectories. The hypothesis was quickly shown to be incorrect, and the concept of ergodicity ('weak average independence') was introduced to give necessary and sufficient conditions for the equality of these averages.

Nowadays, ergodic theory is known as the probabilistic (or measurable) study of the average behavior of ergodic systems, i.e., systems evolving in time that are in equilibrium and ergodic. The evolution is represented by the repeated application of a single map (in the case of discrete time), and by repeated applications of two or more commuting maps in the case of 'higher-dimensional discrete time'.

The first major contribution of ergodic theory is the generalization of the strong law of large numbers to stationary and ergodic processes (seen as sequences of measurements on the system); this is known as the Birkhoff Ergodic Theorem. The second contribution is the introduction of entropy into ergodic theory by Kolmogorov, a notion borrowed from Shannon's entropy in information theory. Roughly speaking, entropy is a measure of the randomness of the system, or of the average information acquired under a single application of the underlying map. Entropy can be used to decide whether two ergodic systems are not 'the same' (not isomorphic).

**Content**

In this course the following concepts will be presented:

(1) The notion of a measure-preserving transformation (stationarity), several interpretations and examples, and the Poincaré Recurrence Theorem; the notion of ergodicity (a weak notion of independence) and its characterization; the notion of conservativity (for infinite measure-preserving systems).

(2) Ergodic theorems (generalizations of the Strong Law of Large Numbers), such as Birkhoff's and von Neumann's Ergodic Theorems; some consequences of the ergodic theorems, and the notions of weak and strong mixing.
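As a quick numerical illustration of the Birkhoff Ergodic Theorem, consider the irrational rotation of the circle, which is uniquely ergodic with respect to Lebesgue measure, so time averages converge to the space average for every starting point. In this sketch the interval [0, 0.3), the starting point, and the orbit length are arbitrary choices:

```python
import math

# Irrational rotation T(x) = x + alpha (mod 1) on [0, 1).
# Birkhoff: the time average of an observable along one orbit
# converges to its integral against Lebesgue measure.
alpha = (math.sqrt(5) - 1) / 2   # golden-ratio rotation number


def indicator(x):
    """Indicator function of the interval [0, 0.3)."""
    return 1.0 if x < 0.3 else 0.0


x, total, n = 0.1, 0.0, 100_000
for _ in range(n):
    total += indicator(x)
    x = (x + alpha) % 1.0

time_average = total / n   # close to the space average 0.3
```

The observable here is an indicator, so the time average is just the fraction of the orbit that visits the interval; for this rotation the discrepancy decays like (log n)/n, which is why the agreement is already good at modest orbit lengths.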

(3) Isomorphism, factor maps and natural extensions.

(4) Some examples: continued fractions, normal numbers.
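The continued-fraction example can be computed exactly: the digits of a rational p/q are produced by the Euclidean algorithm, which is an exact iteration of the Gauss map x ↦ 1/x mod 1, and for typical numbers the digit frequencies follow the Gauss-Kuzmin distribution (e.g., the digit 1 appears with frequency log₂(4/3) ≈ 0.415). A sketch, with arbitrary sample sizes and random seed:

```python
import random


def cf_digits(p, q):
    """Continued-fraction digits of p/q (0 < p < q), computed by the
    Euclidean algorithm, i.e. exact iteration of the Gauss map."""
    digits = []
    while p:
        digits.append(q // p)
        p, q = q % p, p
    return digits


random.seed(1)
digits = []
for _ in range(2000):
    q = random.randrange(10**8, 10**9)
    p = random.randrange(1, q)
    digits.extend(cf_digits(p, q))

freq_1 = digits.count(1) / len(digits)   # near log2(4/3), about 0.415
```

Using integer arithmetic avoids the floating-point breakdown one would hit after a few dozen iterations of the Gauss map on a float.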

(5) The notion of entropy, the Shannon-McMillan-Breiman Theorem, and Lochs' Theorem.
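To make the entropy notion concrete: for a Bernoulli shift, the measure-theoretic (Kolmogorov-Sinai) entropy coincides with the Shannon entropy of the symbol distribution, a one-line computation. A minimal sketch (the probability vectors are arbitrary examples):

```python
import math


def bernoulli_shift_entropy(probs):
    """Entropy of the Bernoulli shift with the given symbol
    probabilities: h = -sum p_i * log(p_i), in natural log units."""
    return -sum(p * math.log(p) for p in probs if p > 0)


# The fair coin-toss shift has entropy log 2.
h_fair = bernoulli_shift_entropy([0.5, 0.5])
# A biased coin gives strictly smaller entropy.
h_biased = bernoulli_shift_entropy([0.9, 0.1])
```

Since entropy is an isomorphism invariant, h_fair ≠ h_biased already shows these two shifts are not isomorphic.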

(6) Construction of invariant and ergodic measures for continuous transformations; unique ergodicity, uniform distribution and Benford's Law.
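Benford's Law can be checked directly for the powers of 2: since the rotation by log₁₀(2) is uniquely ergodic, the sequence n·log₁₀(2) mod 1 is uniformly distributed, and the leading digit d of 2ⁿ occurs with frequency log₁₀(1 + 1/d). A sketch (the cutoff N is an arbitrary choice):

```python
# Leading digits of 2^n follow Benford's Law: digit 1 occurs with
# frequency log10(2), about 0.30103, by unique ergodicity of the
# rotation x -> x + log10(2) (mod 1).
N = 10_000
count_1 = sum(1 for n in range(1, N + 1) if str(2**n)[0] == '1')
freq_1 = count_1 / N
```

Python's arbitrary-precision integers make the leading digit of 2ⁿ exact, so no floating-point care is needed here.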

(7) The Perron-Frobenius operator and the existence of absolutely continuous invariant measures.

(8) Topological entropy and measures of maximal entropy.

**Lecturers**

Karma Dajani and Charlene Kalle