Tuesday, October 16, 2012

Lecture 4

Section 4.1 in today's lecture presents a general version of Example Sheet 1 #12. When doing #12, make sure you understand how the condition of "minimal non-negative solution" is being used.

You might be amused to know that Question #10 on Example Sheet 1 is actually a tripos question from 1972 (Paper IV, 10C). I took IB in 1973, so I also saw this question as a student.

I made the comment that the proof of Theorem 4.2 needs Fubini's theorem. This theorem says that we can reverse the order of sums (or integrals), i.e.
\[
\sum_i \sum_j a_{ij} = \sum_j \sum_i a_{ij}
\]when these are sums over countable sets and the sum is absolutely convergent. (In our application all the terms are non-negative, in which case the interchange is always valid, even when the sums are infinite; this case is sometimes called Tonelli's theorem.) Fubini's theorem is presented in more generality in the Part II course Probability and Measure. Here we are using it to interchange the order of sums in the second line below. For $i\not\in A$,
\begin{align*}
k_i^A&=\sum_{t=1}^\infty P_i(H^A\geq t)=\sum_{t=1}^\infty\sum_{j\in I}P_i(H^A\geq t\mid X_1=j)P_i(X_1=j)\\[6pt]
&=\sum_{j\in I}\sum_{t=1}^\infty P_i(H^A\geq t\mid X_1=j)P_i(X_1=j)\\[6pt]
&=\sum_{j\in I} E_i(H^A \mid X_1=j)P_i(X_1=j)\\[6pt]
&=\sum_{j\in I}p_{ij}(1+k_j^A)\\[6pt]
&=1 + \sum_{j\in I} p_{ij}k_j^A
\end{align*}Notice that we did not need this extra subtlety when proving Theorem 3.4 in Section 3.3.
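As an aside (not part of the lecture): since these equations are linear, for a finite chain you can simply solve them. Here is a minimal Python sketch, of my own devising, for the fair gambler's ruin chain on $\{0,1,\dotsc,10\}$ with $A=\{0,10\}$; it recovers the well-known answer $k_i^A=i(10-i)$.

```python
import numpy as np

# Fair gambler's ruin on {0,...,10} with unit bets; absorbing set A = {0, 10}.
# For i not in A:  k_i = 1 + sum_j p_ij k_j, with k_j = 0 for j in A,
# so (I - Q) k = 1, where Q is P restricted to the non-absorbing states 1..9.
n = 9                                    # number of non-absorbing states
Q = np.zeros((n, n))
for idx in range(n):                     # idx represents state i = idx + 1
    if idx - 1 >= 0:
        Q[idx, idx - 1] = 0.5            # step down to i - 1
    if idx + 1 <= n - 1:
        Q[idx, idx + 1] = 0.5            # step up to i + 1

k = np.linalg.solve(np.eye(n) - Q, np.ones(n))
print(k)                                 # k_i = i * (10 - i), e.g. k_2 = 16
```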

In today's lecture we had the definition of a stopping time. This brings to my mind a small riddle (which I think I heard from David Kendall). "How do you make perfect toast? Answer: Wait until it smokes – then 10 seconds less."

Stopping times play a large role in probability theory. One very important idea is the following. Consider Example Sheet 1 #10, the gambler's ruin problem played on $\{0,1,\dotsc,10\}$. In the fair game case of $p=q=1/2$, if the gambler has $i$ and bets $j$ ($1\leq j\leq i$), then she is equally likely to next have $i-j$ or $i+j$. So $E(X_{n+1} \mid X_n)=X_n$, no matter how much she bets. This implies that $E(X_{n+1} \mid X_0)=X_0$ no matter how she bets. It is a theorem (Doob's optional sampling theorem) that $E(X_T \mid X_0)=X_0$ for any stopping time $T$ (such that $ET< \infty$, as indeed must be the case for any stopping time in this problem). Thus we see that there is no way the gambler can make any expected profit (or loss), no matter how clever a strategy she uses in choosing the sizes of her bets and when to stop gambling.
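If you would like some numerical evidence for this, here is a small simulation sketch (my own illustration; the strategy functions are just hypothetical examples). Whatever betting strategy is used, the empirical mean of $X_T$ stays close to $X_0=2$.

```python
import random

def play(x0, bet):
    """One play of the fair game on {0,...,10}: bet b, win or lose it with
    probability 1/2 each; stop on hitting 0 or 10. Returns X_T."""
    x = x0
    while 0 < x < 10:
        b = bet(x)                                 # bet must keep x in [0, 10]
        x += b if random.random() < 0.5 else -b
    return x

random.seed(1)
strategies = {
    "unit bets":   lambda x: 1,
    "bold play":   lambda x: min(x, 10 - x),       # bet as much as allowed
    "random bets": lambda x: random.randint(1, min(x, 10 - x)),
}
for name, bet in strategies.items():
    avg = sum(play(2, bet) for _ in range(100_000)) / 100_000
    print(f"{name:12s} E(X_T) ≈ {avg:.3f}")        # all close to X_0 = 2
```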

The optional sampling theorem also gives us a quick way to answer the first part of Example Sheet 1 #10. If $T$ is the first time that $X_n$ hits either $0$ or $10$, then $E(X_T \mid X_0=2)=2$ implies $0\cdot P_2(\text{hit }0)+10\cdot P_2(\text{hit }10) = 2$. Hence $P_2(\text{hit } 10)=1-P_2(\text{hit }0)=1/5$. That's even easier than solving the RHEs!
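For comparison, here is a quick sketch (again my own illustration) that solves those equations numerically and confirms $P_2(\text{hit }10)=1/5$:

```python
import numpy as np

# Hitting probabilities h_i = P_i(hit 10) satisfy h_0 = 0, h_10 = 1 and
# h_i = (h_{i-1} + h_{i+1}) / 2 for 1 <= i <= 9.
n = 9                                    # unknowns h_1, ..., h_9
A = np.eye(n)
b = np.zeros(n)
for idx in range(n):                     # idx represents state i = idx + 1
    if idx - 1 >= 0:
        A[idx, idx - 1] = -0.5
    if idx + 1 <= n - 1:
        A[idx, idx + 1] = -0.5
b[n - 1] = 0.5                           # boundary term from h_10 = 1

h = np.linalg.solve(A, b)
print(h[1])                              # h_2 = 0.2, agreeing with the OST shortcut
```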

The probabilistic abacus

I spent the final few minutes of this lecture describing Arthur Engel's probabilistic abacus (a chip-firing game) for calculating absorption probabilities in a finite-state Markov chain in which all entries of $P$ are rational. My slides and commentary are in Appendix C, along with an exposition of Peter Doyle's proof that the algorithm really works.

I first heard about this abacus in 1976 when Laurie Snell was visiting the Statistical Laboratory. Snell (1925-2011) was a student of Joe Doob, one of the 'greats' of probability theory, whose optional sampling theorem I have mentioned above. Snell is the author (with John Kemeny) of several classic textbooks (including one called "Finite Markov Chains"). He was the founder of Chance News (which can be fun to browse). A particular memory that I have of Professor Snell is that he liked to go to London to play roulette at the casinos there. This struck me as a very peculiar recreation for an expert in probability. But I think it was for fun - he never claimed to make money this way.
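For anyone who wants to experiment before reading Appendix C, here is a sketch of how the chip-firing computation might be coded. It follows my understanding of the algorithm as Doyle presents it: load each non-absorbing state with one chip fewer than it needs to fire, repeatedly feed a chip into the start state and let states fire until stable, and stop when the interior configuration first returns to the initial 'critical loading'; the chips accumulated at the absorbing states are then in proportion to the absorption probabilities. The function name and data layout are my own.

```python
from fractions import Fraction
from math import lcm

def engel_abacus(P, start, absorbing):
    """A sketch of Engel's probabilistic abacus (chip-firing).

    P: dict mapping each non-absorbing state i to {j: Fraction p_ij}.
    Returns absorption probabilities {s: Fraction} starting from `start`.
    """
    # Write p_ij = a_ij / d_i over a common denominator d_i for each state i;
    # state i fires by removing d_i chips and sending a_ij chips to each j.
    d = {i: lcm(*(p.denominator for p in row.values())) for i, row in P.items()}
    a = {i: {j: int(p * d[i]) for j, p in row.items()} for i, row in P.items()}

    chips = {i: d[i] - 1 for i in P}          # critical loading: one chip short
    chips.update({s: 0 for s in absorbing})
    critical = {i: chips[i] for i in P}

    while True:
        chips[start] += 1                     # feed one chip to the start state
        while True:                           # stabilize by firing
            unstable = [i for i in P if chips[i] >= d[i]]
            if not unstable:
                break
            for i in unstable:
                chips[i] -= d[i]              # state i fires:
                for j, cnt in a[i].items():   # a_ij chips go to each state j
                    chips[j] += cnt
        if all(chips[i] == critical[i] for i in P):
            break                             # interior returned to critical loading

    total = sum(chips[s] for s in absorbing)
    return {s: Fraction(chips[s], total) for s in absorbing}

# Gambler's ruin on {0,...,10} with p = q = 1/2, started at 2:
half = Fraction(1, 2)
P = {i: {i - 1: half, i + 1: half} for i in range(1, 10)}
print(engel_abacus(P, start=2, absorbing={0, 10}))  # hits 0 w.p. 4/5, 10 w.p. 1/5
```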