Tuesday, November 8, 2011

Lecture 10


We only proved the first part of Theorem 10.2 (Ergodic theorem). The second part is a simple corollary of the first. For details you can look at page 3 in Section 1.10 of James Norris's notes.

The material on the random target lemma and Kemeny's constant is non-examinable, but I have presented it because I think it is fun. It is surprising (don't you think?) that the expected time to reach equilibrium (in the sense of this lemma) is independent of the starting state. (As well as being a co-author with Laurie Snell of the book Finite Markov Chains, John Kemeny was President of Dartmouth College, and one of the inventors of the BASIC programming language.)
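In fact the lemma is easy to check numerically. Here is a small sketch in Python (the 3-state transition matrix is just an arbitrary example of my own, not one from the lecture): it computes the expected hitting times E_i(T_j) by solving the usual linear equations, and then checks that ∑_j π_j E_i(T_j) comes out the same for every starting state i.

import numpy as np

# An arbitrary irreducible 3-state chain (rows sum to 1).
P = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4],
              [0.5, 0.3, 0.2]])
n = P.shape[0]

# Invariant distribution: solve pi P = pi together with sum(pi) = 1.
A = np.vstack([P.T - np.eye(n), np.ones(n)])
b = np.concatenate([np.zeros(n), [1.0]])
pi = np.linalg.lstsq(A, b, rcond=None)[0]

# k[i, j] = E_i(T_j), where T_j = min{n >= 0 : X_n = j}, so k[j, j] = 0.
# For i != j the hitting times satisfy k_i = 1 + sum_{l != j} p_il k_l.
k = np.zeros((n, n))
for j in range(n):
    idx = [i for i in range(n) if i != j]
    M = np.eye(n - 1) - P[np.ix_(idx, idx)]
    k[idx, j] = np.linalg.solve(M, np.ones(n - 1))

# Random target lemma: the components of this vector are all equal,
# and their common value is Kemeny's constant.
print(k @ pi)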

Of course there are other ways to think about the time required for a Markov chain to reach equilibrium. One is the mixing time, τ(ε), which is defined for ε > 0 as the least time such that
max_i ∑_j | p_{ij}(n) − π_j | < ε for all n ≥ τ(ε).
This is closely related to the magnitude of the second-largest eigenvalue of P, say λ2: the smaller |λ2| is, the smaller the mixing time. In fact, one can prove bounds such as
|λ2| log(1/(2ε)) / (2(1−|λ2|)) ≤ τ(ε) ≤ log(1/(π* ε)) / (1−|λ2|)
where π* is the smallest component of π (the invariant distribution).
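For a small chain one can compute τ(ε) directly and compare it with these bounds. Here is a sketch in Python (the symmetric 3-state matrix and ε = 0.01 are illustrative choices of mine; I take a reversible example because that is the setting in which bounds of this type are usually proved):

import numpy as np

# A symmetric, doubly stochastic chain: pi is uniform and the chain is reversible.
P = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])
n = P.shape[0]
pi = np.full(n, 1.0 / n)
eps = 0.01

# tau(eps): least t with max_i sum_j |p_ij(n) - pi_j| < eps for all n >= t.
# (This distance is non-increasing in n, so the first such t suffices.)
Pn, t = np.eye(n), 0
while np.abs(Pn - pi).sum(axis=1).max() >= eps:
    Pn, t = Pn @ P, t + 1
print("tau(eps) =", t)

# |lambda_2| and the two bounds quoted above.
lam2 = np.sort(np.abs(np.linalg.eigvals(P)))[-2]
lower = lam2 * np.log(1 / (2 * eps)) / (2 * (1 - lam2))
upper = np.log(1 / (pi.min() * eps)) / (1 - lam2)
print(lower, "<= tau(eps) <=", upper)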
Other interesting quantities are the coupling time (the time for two independent copies of the Markov chain to couple, i.e. to meet) and the cover time (the time for the Markov chain to visit every state at least once).
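Both are easy to estimate by simulation. Here is a quick Monte Carlo sketch in Python (a lazy random walk on a cycle of 10 states, my own toy example; the laziness avoids the parity problem that would prevent two independent copies from ever meeting on an even cycle):

import random

N = 10  # lazy random walk on a cycle of N states

def step(x):
    r = random.random()
    if r < 0.25:
        return (x + 1) % N
    elif r < 0.5:
        return (x - 1) % N
    return x  # hold with probability 1/2

def coupling_time():
    # Two independent copies started on opposite sides of the cycle.
    x, y, t = 0, N // 2, 0
    while x != y:
        x, y, t = step(x), step(y), t + 1
    return t

def cover_time():
    x, seen, t = 0, {0}, 0
    while len(seen) < N:
        x, t = step(x), t + 1
        seen.add(x)
    return t

trials = 10000
print("mean coupling time:", sum(coupling_time() for _ in range(trials)) / trials)
print("mean cover time:", sum(cover_time() for _ in range(trials)) / trials)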

I concluded the lecture with a demonstration of how Engel's probabilistic abacus can be used to calculate the invariant distribution of a finite-state Markov chain in which all p_{ij} are rational numbers. The puzzle is "why does this algorithm always terminate in a finite number of steps?" There is more about this algorithm in Example Sheet 2, #15 and Section 12.5 of the notes. I first heard about this problem from Laurie Snell circa 1975. At that time he said that the question of why it works was still open.
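The abacus delivers π as a vector of integer chip counts. If you want to check its answer, the invariant distribution of a chain with rational p_{ij} can be found in exact rational arithmetic; here is a sketch in Python using the fractions module (the 3-state chain is just an example of mine, and this is ordinary Gaussian elimination, not the abacus itself):

from fractions import Fraction as F

# An example irreducible chain with rational transition probabilities.
P = [[F(0), F(1, 2), F(1, 2)],
     [F(1, 3), F(1, 3), F(1, 3)],
     [F(3, 4), F(1, 4), F(0)]]
n = len(P)

# pi P = pi gives the equations sum_i pi_i (p_ij - delta_ij) = 0; these are
# linearly dependent, so drop one and add the normalisation sum_i pi_i = 1.
M = [[P[i][j] - (F(1) if i == j else F(0)) for i in range(n)]
     for j in range(n - 1)]
M.append([F(1)] * n)
b = [F(0)] * (n - 1) + [F(1)]

# Gauss-Jordan elimination over the rationals (exact, no rounding).
for c in range(n):
    r = next(k for k in range(c, n) if M[k][c] != 0)
    M[c], M[r] = M[r], M[c]
    b[c], b[r] = b[r], b[c]
    for k in range(n):
        if k != c and M[k][c] != 0:
            f = M[k][c] / M[c][c]
            M[k] = [a - f * v for a, v in zip(M[k], M[c])]
            b[k] -= f * b[c]

pi = [b[c] / M[c][c] for c in range(n)]
print(pi)  # here: 14/41, 15/41, 12/41 -- clearing denominators gives chip counts 14, 15, 12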