Tuesday, November 1, 2011

Lecture 8


Following the lecture I have made a correction to Theorem 8.4 (iii), which should be "0 < γ_i^k < ∞ for all i". Earlier, I made a correction to Page 31, line 4. I have also added the words "under P_k" to the fourth line of the proof of Theorem 8.4. As I commented in lectures, the hypothesis that P is recurrent is used in this proof, since the proof relies on the fact that T_k < ∞. This comment is written at 2pm Tuesday - so re-download the notes if your copy is prior to then.

Notice that the final parts of the proofs of both Theorems 8.4 and 8.5 follow from Theorem 8.3. In Theorem 8.4, part (iii) follows from Theorem 8.3 once we have proved (i) and (ii). And in Theorem 8.5, the fact that μ = 0 follows from Theorem 8.3 because μ_k = 0. However, in the notes the proofs of Theorems 8.4 and 8.5 are self-contained and repeat the arguments used in Theorem 8.3. (These proofs are taken directly from James Norris's book.)

In today's lecture we proved that if the Markov chain is irreducible and recurrent then an invariant measure exists and is unique (up to a constant multiple), and is essentially γ^k, where γ_i^k is the expected number of hits on i between successive hits on k. The existence of a unique positive left-hand (row) eigenvector is also guaranteed by Perron-Frobenius theory when P is irreducible (see the blog on Lecture 2). This again points up the fact that many results in the theory of Markov chains can be proved either by a probabilistic method or by a matrix-algebraic method.
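
To illustrate the two routes on a finite chain, here is a small numerical sketch (not from the lecture; the 3-state matrix P, the reference state k and the routine gamma_by_simulation are all invented for illustration). It computes γ^k first as the left eigenvector of P with eigenvalue 1, scaled so that its k-th entry is 1, and then as a Monte Carlo estimate of the expected number of visits to each state between successive visits to k; the two answers should agree up to simulation error.

    import numpy as np

    rng = np.random.default_rng(0)

    # A small irreducible (hence recurrent) transition matrix, invented for illustration.
    P = np.array([[0.5, 0.3, 0.2],
                  [0.2, 0.6, 0.2],
                  [0.3, 0.3, 0.4]])
    k = 0  # reference state

    # Matrix-algebraic route: a left eigenvector of P with eigenvalue 1,
    # scaled so that its k-th entry equals 1 (by uniqueness this is γ^k).
    eigvals, eigvecs = np.linalg.eig(P.T)
    v = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1))])
    gamma_eig = v / v[k]

    # Probabilistic route: γ_i^k = expected number of visits to i before the
    # first return to k, starting from k (the visit at time 0 counts, so γ_k^k = 1).
    def gamma_by_simulation(P, k, n_excursions=20000):
        n = P.shape[0]
        counts = np.zeros(n)
        for _ in range(n_excursions):
            counts[k] += 1              # time 0: the chain starts at k
            x = k
            while True:
                x = rng.choice(n, p=P[x])
                if x == k:
                    break               # excursion ends at the first return to k
                counts[x] += 1
        return counts / n_excursions

    print("γ^k from the eigenvector:", np.round(gamma_eig, 3))
    print("γ^k from simulation:     ", np.round(gamma_by_simulation(P, k), 3))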

If the state space is finite, then clearly sum_i γ_i^k < ∞, so there exists an invariant distribution. If the state space is infinite, then sum_i γ_i^k may be < ∞ or = ∞, and these two cases correspond to positive and null recurrence respectively, which we will discuss further in Lecture 9.
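
A standard example of the second case (not one we discussed today) is the simple symmetric random walk on Z, which is irreducible and recurrent. Any invariant measure λ satisfies λ_j = (λ_{j-1} + λ_{j+1})/2, so its successive differences are constant and λ_j = a + bj; nonnegativity on all of Z forces b = 0, so the invariant measure is constant. Since γ_k^k = 1 this gives γ_i^k = 1 for all i, hence sum_i γ_i^k = ∞: the walk is null recurrent and has no invariant distribution.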

We looked at a very simple model of Google PageRank. This is just one example of a type of algorithm that has become very important in today's web-based multi-agent systems. Microsoft, Yahoo, and others have proprietary algorithms for their search engines. Similarly, Amazon and eBay run so-called recommender and reputation systems. Ranking, reputation, recommender, and trust systems are all concerned with aggregating agents' reviews of one another, and of external events, into valuable information. Markov chain models can help in designing these systems and in forecasting how well they should work.
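
For anyone who wants to experiment, here is a minimal sketch of the usual PageRank construction (this is not the exact model from the lecture; the four-page link graph and the damping factor alpha = 0.85 are invented for illustration). The "random surfer" follows a uniformly chosen outgoing link with probability alpha and otherwise jumps to a page chosen uniformly at random; the resulting chain is irreducible, so it has a unique invariant distribution, whose entries are the PageRank scores and which can be found by power iteration.

    import numpy as np

    # links[i] lists the pages that page i links to (an invented 4-page web).
    links = {0: [1, 2], 1: [2], 2: [0], 3: [0, 2]}
    n = len(links)
    alpha = 0.85   # with probability alpha follow a link, else jump to a uniform page

    # Random-surfer transition matrix before damping (every page here has an outgoing link).
    P = np.zeros((n, n))
    for i, outs in links.items():
        P[i, outs] = 1.0 / len(outs)

    # The damped matrix is irreducible, so it has a unique invariant distribution.
    G = alpha * P + (1 - alpha) / n

    # Power iteration: pi <- pi G converges to the invariant distribution.
    pi = np.full(n, 1.0 / n)
    for _ in range(100):
        pi = pi @ G
    print("PageRank scores:", np.round(pi, 4))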