## Thursday, October 27, 2011

### Lecture 7

I played you some Markov music, taken from the web site Mathematics, Poetry, and Music by Carlos Pasquali. The tune sounds monotonous and simple because the distribution of the nth note depends only on the (n−1)th note. More interesting music could be obtained by letting the state be the last k notes, for some k>1. You might enjoy reading the Wikipedia article on Markov chains. It has a nice section about various applications of Markov chains to: physics, chemistry, testing, information sciences, queueing theory, the Internet, statistics, economics and finance, social sciences, mathematical biology, games, music, baseball and text generation.
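To make this concrete, here is a minimal sketch of how such music is generated. The three notes and all the transition probabilities are toy values of my own choosing, not taken from the web site: each new note is drawn from a distribution that depends only on the previous note.

```python
import random

# Toy first-order Markov chain on three notes.
# All probabilities here are made up for illustration.
transitions = {
    "C": [("C", 0.1), ("E", 0.6), ("G", 0.3)],
    "E": [("C", 0.4), ("E", 0.1), ("G", 0.5)],
    "G": [("C", 0.7), ("E", 0.2), ("G", 0.1)],
}

def sample_tune(start, length):
    """Sample a melody in which each note depends only on the previous one."""
    tune = [start]
    for _ in range(length - 1):
        notes, weights = zip(*transitions[tune[-1]])
        tune.append(random.choices(notes, weights=weights)[0])
    return tune
```

To get the richer, order-k music mentioned above, one would key the `transitions` dictionary on tuples of the last k notes rather than on a single note.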

There are lots of other crazy things that people have done with Markov chains. For example, at the Garkov web site the author attempts to use a Markov chain to re-create Garfield comic strips. A much more practical use of Markov chains is in the important area of speech recognition. So-called hidden Markov models are used to help accurately turn speech into text, essentially by comparing the sound that is heard at time t with the possibilities that are most likely, given the sound heard at time t−1 and the modelling assumption that some underlying Markov chain is creating the sounds.
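The "comparing with the most likely possibilities" step in a hidden Markov model is the so-called forward (filtering) recursion. Here is a minimal sketch of one step of it, with a two-state chain and emission likelihoods that I have invented purely for illustration: the belief about the hidden state at time t−1 is pushed through the transition matrix, then reweighted by the likelihood of the sound observed at time t.

```python
def forward_step(prior, transition, emission_prob):
    """One step of the HMM forward recursion.

    prior:          belief about the hidden state at time t-1
    transition:     transition matrix of the underlying Markov chain
    emission_prob:  likelihood of the observed sound under each hidden state
    """
    n = len(prior)
    # Predict the hidden state at time t from the Markov chain.
    predicted = [sum(prior[i] * transition[i][j] for i in range(n))
                 for j in range(n)]
    # Reweight by how well each hidden state explains the observed sound.
    unnorm = [predicted[j] * emission_prob[j] for j in range(n)]
    z = sum(unnorm)
    return [u / z for u in unnorm]
```

Iterating this step over a whole utterance gives, at each time, a probability distribution over hidden states (e.g. phonemes), from which the most likely text can be read off.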

Today's lecture was an introduction to the ideas of an invariant measure and invariant distribution. I mentioned that the invariant distribution is also called a stationary distribution, equilibrium distribution, or steady-state distribution.
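For a small chain one can find the invariant distribution numerically. The sketch below uses power iteration on a two-state example of my own choosing (it assumes the chain is irreducible and aperiodic, so that repeated application of P converges): we apply P until the distribution stops changing, at which point pi P = pi.

```python
def invariant_distribution(P, tol=1e-12):
    """Power iteration: apply P repeatedly until the distribution
    stops changing, giving pi with pi P = pi."""
    n = len(P)
    pi = [1.0 / n] * n
    while True:
        new = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
        if max(abs(a - b) for a, b in zip(new, pi)) < tol:
            return new
        pi = new

# Two-state example; solving pi P = pi by hand gives pi = (2/7, 5/7).
P = [[0.5, 0.5], [0.2, 0.8]]
pi = invariant_distribution(P)
```

Equivalently, one could solve the linear system pi P = pi, sum(pi) = 1 directly; power iteration is just the quickest thing to write down.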

Several times today I referred to cards and card-shuffling (which have fascinated me since childhood, and in my undergraduate days when I was secretary of the Pentacle Club). It is always good to have a few standard models in your mathematical locker (such as card-shuffling, birth-death chains, random walk on Z and Z^2, etc.) with which you can test your intuition and make good guesses about what might or might not be true. The state space of a deck of cards is of size 52! (=80,658,175,170,943,878,571,660,636,856,403,766,975,289,505,440,883,277,824,000,000,000,000) (all the possible orders). Clearly, the equilibrium distribution is one in which each state has equal probability pi_i = 1/52!. I mentioned today that the mean return time satisfies m_i = 1/pi_i (which is intuitively obvious, and which we prove rigorously in Lecture 9). This means that if you start with a new deck of cards, freshly unwrapped, and then shuffle once every 5 seconds (night and day), it will take you about 1.28 x 10^{61} years (on average) before the deck returns to its starting condition. Notice that this result holds for any sensible meaning of a shuffle, provided that it has the effect of turning the state space into one closed class (an irreducible chain).
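The arithmetic is worth checking for yourself: by m_i = 1/pi_i the mean return time is 52! shuffles, each taking 5 seconds, which we then convert to years.

```python
import math

deck_orderings = math.factorial(52)            # size of the state space, 52!
seconds = 5 * deck_orderings                   # one shuffle every 5 seconds
years = seconds / (365.25 * 24 * 60 * 60)      # mean return time in years
# years is approximately 1.28e61
```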

A very important question in many applications of Markov chains is "how long does it take to reach equilibrium?" (i.e. how large must n be so that the distribution of X_n is almost totally independent of X_0?) You might enjoy reading about the work of the mathematician/magician Persi Diaconis in answering the question "how many shuffles does it take to randomize a deck of cards?".
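The standard way to make "almost totally independent of X_0" precise is the total variation distance between the distribution of X_n and the equilibrium distribution. Here is a minimal sketch on a toy two-state chain of my own choosing (whose invariant distribution is (2/7, 5/7)), showing the distance shrinking as n grows:

```python
def tv_distance(p, q):
    """Total variation distance between two distributions on the same set."""
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

def distribution_at(n, start, P):
    """Distribution of X_n for the chain with transition matrix P,
    started deterministically in state `start`."""
    dist = [0.0] * len(P)
    dist[start] = 1.0
    for _ in range(n):
        dist = [sum(dist[i] * P[i][j] for i in range(len(P)))
                for j in range(len(P))]
    return dist

# Distance to equilibrium after n steps, for increasing n.
P = [[0.5, 0.5], [0.2, 0.8]]
pi = [2 / 7, 5 / 7]
distances = [tv_distance(distribution_at(n, 0, P), pi) for n in (1, 5, 20)]
```

For this small chain the distance decays geometrically; Diaconis's famous result is that for riffle-shuffling a 52-card deck the analogous distance stays large for the first few shuffles and then drops sharply, with about seven shuffles sufficing.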