## Tuesday, October 30, 2012

### Lecture 8

In today's lecture we proved that if the Markov chain is irreducible and recurrent then an invariant measure exists, is unique (up to a constant multiple), and is essentially $\gamma^k$, where $\gamma_i^k$ is the expected number of hits on $i$ between successive hits on $k$. The existence of a unique positive left-hand (row) eigenvector is also guaranteed by Perron-Frobenius theory when $P$ is irreducible (see the blog entry on Lecture 2). This again points up the fact that many results in the theory of Markov chains can be proved either by a probabilistic method or by a matrix-algebraic method. Essentially, an invariant measure is a left-hand (row) eigenvector of $P$ with eigenvalue 1.
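As a quick numerical illustration of the eigenvector characterisation (not part of the lecture; the matrix `P` below is a made-up toy example), one can extract the invariant measure of a small irreducible chain as the left eigenvector of $P$ with eigenvalue 1:

```python
import numpy as np

# An assumed toy irreducible transition matrix (rows sum to 1).
P = np.array([[0.0, 0.5, 0.5],
              [0.3, 0.0, 0.7],
              [0.4, 0.6, 0.0]])

# A left (row) eigenvector of P with eigenvalue 1 is a right
# eigenvector of P transposed, so eigen-decompose P.T.
vals, vecs = np.linalg.eig(P.T)
idx = np.argmin(np.abs(vals - 1.0))   # locate the eigenvalue 1
lam = np.real(vecs[:, idx])
lam = lam / lam.sum()                 # normalise to a distribution

print(lam)        # the invariant distribution
print(lam @ P)    # equals lam, i.e. lam P = lam
```

By Perron-Frobenius, for an irreducible stochastic matrix this eigenvector can be taken strictly positive and is unique up to scaling, which matches what was proved probabilistically.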

In proving Theorem 8.5 today I raised a query as to whether (top of page 31) we should be saying

$p_{kj}+\sum_{i_0\neq k}p_{ki_0}p_{i_0j}+\cdots = P_k(X_1=j\text{ and }T_k\geq 1)+P_k(X_2=j\text{ and }T_k\geq 2)+\cdots$

or

$p_{kj}+\sum_{i_0\neq k}p_{ki_0}p_{i_0j}+\cdots = P_k(X_1=j\text{ and }T_k>1)+P_k(X_2=j\text{ and }T_k>2)+\cdots$

In fact, the first is correct if we allow $j=k$, and the second is true if we restrict to $j\neq k$ (which is what I was saying in the lecture). Either way of doing it is fine. We are really only trying to see that $\lambda_j\geq \gamma_j^k$ for $j\neq k$, since by assumption $\lambda_k=1=\gamma_k^k$.
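To see the identity concretely, here is a small numerical sketch (my own toy example, not from the notes). The $n$-th term of the path sum counts paths $k\to i_1\to\cdots\to i_{n-1}\to j$ with all intermediate states different from $k$, which is exactly $P_k(X_n=j\text{ and }T_k\geq n)$; we can compute the term exactly by zeroing column $k$ of $P$, and check it against a Monte Carlo estimate of the probability.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy irreducible chain; take k = 0, j = 1, n = 2.
P = np.array([[0.0, 0.5, 0.5],
              [0.3, 0.0, 0.7],
              [0.4, 0.6, 0.0]])
k, j, n = 0, 1, 2

# Exact n-th term of the path sum: zeroing column k of P forbids
# visits to k at steps 1, ..., n-1.
C = P.copy()
C[:, k] = 0.0
term = (np.linalg.matrix_power(C, n - 1) @ P)[k, j]

# Monte Carlo estimate of P_k(X_n = j and T_k >= n).
hits, trials = 0, 100_000
for _ in range(trials):
    x, returned = k, False
    for t in range(1, n + 1):
        x = rng.choice(3, p=P[x])
        if x == k and t < n:
            returned = True   # chain returned to k before step n
            break
    if not returned and x == j:
        hits += 1

print(term, hits / trials)    # the two should agree closely
```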

I have been thinking a bit more about how better to explain why the assumption of recurrence is used in proving Theorem 8.4. This is rather subtle. Recurrence means that $P_k(T_k<\infty)=1$, which is equivalent to saying $P_k\left(\bigcup_{t=1}^\infty \{T_k=t\}\right)=1$, i.e. that we can cover the whole sample space by disjoint events of the form $\{T_k=t\}$ with $t$ finite. This justifies us writing
$\gamma_j^k = E_k \sum_{t=1}^\infty\sum_{n=1}^{t}1_{\{X_n=j\text{ and } T_k=t\}}= E_k \sum_{n=1}^{\infty}\sum_{t=n}^\infty1_{\{X_n=j\text{ and } T_k=t\}}= E_k\sum_{n=1}^{\infty}1_{\{X_n=j\text{ and } n\leq T_k\}},$

which is the first line of the multi-line algebra on page 30. I have now rewritten this part of the notes so as to spell out the details of this. Notice that in the case $j=k$ this formula is correct and gives $\gamma_k^k=1$, because $X_{T_k}=k$.
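The definition of $\gamma^k$ also lends itself to direct simulation. Here is a sketch (again on an assumed toy chain, not from the notes): estimate $\gamma_j^k$ by averaging, over many excursions from $k$, the number of visits to $j$ at times $n=1,\dots,T_k$, and check that the resulting measure is invariant.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed toy irreducible (hence recurrent) chain, with k = 0.
P = np.array([[0.0, 0.5, 0.5],
              [0.3, 0.0, 0.7],
              [0.4, 0.6, 0.0]])
k, trials = 0, 100_000

# gamma[j] estimates the expected number of visits to j at times
# n = 1, ..., T_k, i.e. between successive hits on k.
gamma = np.zeros(3)
for _ in range(trials):
    x = k
    while True:
        x = rng.choice(3, p=P[x])
        gamma[x] += 1
        if x == k:          # excursion ends at the return time T_k
            break
gamma /= trials

print(gamma)       # gamma[k] is exactly 1, since X_{T_k} = k
print(gamma @ P)   # approximately gamma: the measure is invariant
```

Note that `gamma[k]` comes out as exactly 1 on every run, since each excursion visits $k$ exactly once, at time $T_k$.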

The final bits of the proofs of both Theorems 8.4 and 8.5 follow from Theorem 8.3. In Theorem 8.4, part (iii) follows from Theorem 8.3 once we have proved (i) and (ii). And in Theorem 8.5 the fact that $\mu=0$ follows from Theorem 8.3 because $\mu_k=0$. However, in the notes the proofs of Theorems 8.4 and 8.5 are self-contained and repeat the arguments used in Theorem 8.3. (These proofs are taken directly from James Norris's book.)

I described a very simple model of Google PageRank (about which you can read a bit more at this Wikipedia link). This is just one example of a type of algorithm that has become very important in today's web-based multi-agent systems. Microsoft, Yahoo, and others have proprietary algorithms for their search engines. Similarly, Amazon and eBay run so-called recommender and reputation systems. Ranking, reputation, recommender, and trust systems are all concerned with aggregating agents' reviews of one another, and of external events, into valuable information. Markov chain models can help in designing these systems and in forecasting how well they should work. Last year I set a Part III essay topic on Ranking, Reputation and Recommender Systems. Look at this link if you would like to read more about the topic, or get an idea of the sort of thing that students write about in Part III.
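In the same spirit as the simple model from the lecture, here is a minimal sketch of PageRank-style power iteration on a made-up four-page web. The link structure and the damping factor 0.85 are assumptions for illustration, not Google's actual data or algorithm details; the point is just that the ranking is the invariant distribution of a "random surfer" Markov chain.

```python
import numpy as np

# A tiny web of four pages; links[i] lists the pages that page i
# links to (an assumed toy example).
links = {0: [1, 2], 1: [2], 2: [0], 3: [0, 2]}
n, d = 4, 0.85   # d is the usual damping factor

# Transition matrix of the random surfer: with probability d follow
# a uniformly chosen outgoing link, with probability 1 - d jump to
# a uniformly random page.
P = np.full((n, n), (1 - d) / n)
for i, out in links.items():
    for j in out:
        P[i, j] += d / len(out)

# The PageRank vector is the invariant distribution, found here by
# power iteration from the uniform distribution.
r = np.full(n, 1 / n)
for _ in range(100):
    r = r @ P

print(r, r.sum())   # ranks are positive and sum to 1
```

Because the damping makes the chain irreducible (every page is reachable in one step), the invariant distribution exists, is unique, and the power iteration converges geometrically.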