Tuesday, October 18, 2011

Lecture 4


In today's lecture we had the definition of a stopping time. This brings to my mind a small riddle (which I think I heard from David Kendall). "How do you make perfect toast? Answer: Wait until it smokes – then 10 seconds less."

Stopping times play a large role in probability theory. One very important idea is the following. Consider Example Sheet 1 #10, the gambler's ruin problem played on {0,1,...,10} in the fair-game case (p=q=1/2). If the gambler has i and bets j (1≤j≤i), she is equally likely to next have i−j or i+j. So E(X_{n+1} | X_n)=X_n, no matter how much she bets. This implies that E(X_{n+1} | X_0)=X_0 no matter how she bets. It is a theorem (Doob's optional sampling theorem) that E(X_T | X_0)=X_0 for any stopping time T (such that E(T) < ∞, as indeed must be the case for any stopping time in this problem). Thus we see that there is no way for the gambler to make any expected profit (or loss), no matter how clever a strategy she uses in choosing the sizes of her bets and when to stop gambling.
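To see this numerically, here is a small Python simulation of my own (not part of the notes): the gambler starts with 2 and, for illustration, plays the "bold" strategy of betting min(i, 10−i) at each step until absorbed at 0 or 10. You can plug in any betting strategy you like; the sample mean of X_T still comes out close to the starting fortune.

```python
import random

def play(x0=2, strategy=lambda i: min(i, 10 - i)):
    """One game of fair gambler's ruin on {0,...,10}; returns X_T."""
    i = x0
    while 0 < i < 10:
        bet = strategy(i)                 # any bet with 1 <= bet <= i
        i += bet if random.random() < 0.5 else -bet
    return i

random.seed(1)
results = [play() for _ in range(20000)]
mean = sum(results) / len(results)        # close to X_0 = 2, whatever the strategy
```

For this bold strategy one can also check by hand that P_2(hit 10) = 1/5, the same as with unit bets, so E(X_T) = (1/5)·10 = 2, in agreement with the theorem.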

The optional sampling theorem also gives us a quick way to answer the first part of Example Sheet 1 #10. If T is the first time that X_n hits either 0 or 10, then E(X_T | X_0=2)=2 implies P_2(hit 0)·0 + P_2(hit 10)·10 = 2. Hence P_2(hit 10) = 1/5 and P_2(hit 0) = 1 − 1/5 = 4/5. That's even easier than solving the RHEs!
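For comparison, here is a quick check (my own sketch) that solving the usual equations h_i = (h_{i−1} + h_{i+1})/2 with boundary conditions h_0 = 0 and h_10 = 1 gives the same answer. Writing h_1 = t, the recurrence h_{i+1} = 2h_i − h_{i−1} makes each h_i linear in t, and the boundary condition at 10 pins t down:

```python
from fractions import Fraction

# Shooting method: h_0 = 0, h_1 = t (unknown), h_{i+1} = 2 h_i - h_{i-1}.
# Track each h_i as a_i + b_i * t with exact rational arithmetic.
a = [Fraction(0), Fraction(0)]   # constant parts of h_0, h_1
b = [Fraction(0), Fraction(1)]   # coefficients of t
for i in range(1, 10):
    a.append(2 * a[i] - a[i - 1])
    b.append(2 * b[i] - b[i - 1])

t = (1 - a[10]) / b[10]          # enforce h_10 = 1
h = [a[i] + b[i] * t for i in range(11)]   # h_i = i/10; in particular h_2 = 1/5
```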

In lectures today I forgot to make the comment that the interchange of E_i and sum_{j in I} that occurs halfway down page 14 is an application of Fubini's theorem. This theorem says that we can reverse the order of sums, i.e.

sum_i sum_j a_{ij} = sum_j sum_i a_{ij}

when these are sums over countable sets and the sum is absolutely convergent. Fubini's theorem is presented in more generality in Part II Probability and Measure.
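The absolute convergence condition really matters. A standard counterexample (my illustration, not from the notes) takes a_{ij} = 1 if j = i, −1 if j = i+1, and 0 otherwise, for i, j ≥ 0: every row sums to 0, but column 0 sums to 1 and every other column sums to 0, so the two iterated sums disagree. Since each row and column has only finitely many nonzero entries, a short script can compute both iterated sums exactly:

```python
def a(i, j):
    # a_{ij} = 1 if j == i, -1 if j == i + 1, 0 otherwise (i, j >= 0)
    return 1 if j == i else (-1 if j == i + 1 else 0)

N = 200  # rows/columns beyond N contribute 0, so both outer sums are exact
# sum_i sum_j: inner sum over j is exact once j ranges past i + 1
row_first = sum(sum(a(i, j) for j in range(i + 2)) for i in range(N))
# sum_j sum_i: inner sum over i is exact once i ranges up to j
col_first = sum(sum(a(i, j) for i in range(j + 1)) for j in range(N))
# row_first and col_first differ: the double sum is not absolutely convergent
```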

Section 4.1 is of course a general version of Example Sheet 1 #12. When doing #12, make sure you understand how the condition of "minimal non-negative solution" is being used.
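To illustrate what minimality buys you (my own sketch, not part of #12): for the walk on {0,1,2,...} with up-probability p = 2/3, the hitting probabilities h_i = P_i(hit 0) satisfy h_i = p h_{i+1} + (1−p) h_{i−1} with h_0 = 1. The constant solution h ≡ 1 also satisfies these equations, but the true answer is the minimal non-negative solution h_i = (1/2)^i. One way to see this numerically is to solve truncated systems with the extra boundary condition h_N = 0; as N grows, the truncated answers increase to exactly the minimal solution:

```python
def hit_zero(p, i, N):
    """Solve h_0 = 1, h_N = 0, h_j = p h_{j+1} + (1-p) h_{j-1} by shooting."""
    q = 1 - p
    a = [1.0, 0.0]   # constant part of h_j when we write h_j = a_j + b_j * t
    b = [0.0, 1.0]   # coefficient of t = h_1 (unknown)
    for j in range(1, N):
        a.append((a[j] - q * a[j - 1]) / p)   # h_{j+1} = (h_j - q h_{j-1}) / p
        b.append((b[j] - q * b[j - 1]) / p)
    t = -a[N] / b[N]                          # enforce h_N = 0
    return a[i] + b[i] * t

# truncated answers increase towards the minimal solution (1/2)^3 = 0.125
approx = [hit_zero(2/3, 3, N) for N in (5, 10, 40)]
```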

I spent the final 10 minutes of this lecture describing Arthur Engel's probabilistic abacus (a chip-firing game) for calculating absorption probabilities in a finite-state Markov chain in which all entries of P are rational. My slides and commentary are in Appendix C, along with an exposition of Peter Doyle's proof that the algorithm really works. I first heard about this abacus in 1976 when Laurie Snell was visiting the Statistical Laboratory. Snell (1925-2011) was a student of Joe Doob, one of the 'greats' of probability theory, whose optional sampling theorem I have mentioned above. Snell is the author (with John Kemeny) of several classic textbooks (including one called "Finite Markov Chains"). He was the founder of Chance News (which can be fun to browse). A particular memory that I have of Professor Snell is that he liked to go to London to play roulette at the casinos there. This struck me as a very peculiar recreation for an expert in probability. But I think it was for fun - he never claimed to make money this way.
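For the curious, here is a minimal Python sketch of the chip-firing idea (my own reconstruction, not the version in Appendix C, and it assumes every interior state can reach an absorbing one). Each interior state i fires when it holds at least c_i = sum_j a_ij chips, sending a_ij chips to each state j, where p_ij = a_ij/c_i. Dropping chips one at a time on the start state, the interior configuration is eventually periodic, and the chips absorbed over one period are in exact proportion to the absorption probabilities:

```python
from fractions import Fraction

def engel_abacus(trans, start, absorbing, max_chips=10**5):
    """Chip-firing abacus for absorption probabilities (a sketch).

    trans[i] = {j: a_ij}: interior state i fires when it holds
    c_i = sum_j a_ij chips, sending a_ij chips to each j.
    """
    interior = list(trans)
    thresh = {i: sum(trans[i].values()) for i in interior}
    chips = {i: 0 for i in interior}
    absorbed = {k: 0 for k in absorbing}

    def stabilise():
        active = True
        while active:
            active = False
            for i in interior:
                while chips[i] >= thresh[i]:
                    chips[i] -= thresh[i]
                    active = True
                    for j, a in trans[i].items():
                        if j in absorbed:
                            absorbed[j] += a
                        else:
                            chips[j] += a

    seen = {}   # interior configuration -> (chips dropped so far, absorbed counts)
    for dropped in range(max_chips):
        config = tuple(chips[i] for i in interior)
        if config in seen:                # one full period completed
            d0, abs0 = seen[config]
            period = dropped - d0
            return {k: Fraction(absorbed[k] - abs0[k], period) for k in absorbing}
        seen[config] = (dropped, dict(absorbed))
        chips[start] += 1                 # drop one chip on the start state
        stabilise()
    raise RuntimeError("no period found")
```

On the fair gambler's ruin over {0,...,4} started at 1 (so trans = {1:{0:1,2:1}, 2:{1:1,3:1}, 3:{2:1,4:1}}) this returns absorption probabilities 3/4 at 0 and 1/4 at 4, as it should.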

You might be amused to know that Question #10 on Example Sheet 1 is actually an old Tripos question from 1972 (Paper IV, 10C). I took IB in 1973, so you and I share at least one question on which we have practised as students.

Until now there were several typos in the published notes for Lecture 4. Please make sure you have a current copy, which I think is now fully accurate.