Lecture 1 was on Thursday October 6 at 10am. It was attended by almost all IB students. Some notes are already online. Something fun is Appendix C.

The introduction of a jumping frog (who I called Fred) into Example 1.1 is window-dressing. However, the frog and lily pad metaphor was used by Ronald A. Howard in his classic book Dynamic Programming and Markov Processes (1960). I read this book long ago and the image has stuck and been helpful. It gets you thinking in pictures, which is good.

Markov chains are a part of mathematics that I find highly visual, by which I mean that the problems, and even the proofs (I'll give examples later), can be run through my mind (and yours) in a very graphic way - almost like watching a movie play out.

Our course only deals with Markov chains in which the state space is countable. Also, we focus on a discrete-time process, X_0, X_1, X_2, .... It is really not fundamentally different to deal with Markov processes that have an uncountable state space (like the non-negative real numbers) and which move in continuous time. The mathematics becomes only mildly more tricky. An important and very interesting continuous-time Markov process is Brownian motion. This is a process (X_t)_{t ≥ 0} which is continuous and generalizes the idea of random walk. It is very useful in financial mathematics.

Someone asked me afterwards to say more about simulation of a Markov chain. I am talking about

**simulation of a Markov chain using a computer**. Think about how we might simulate the outcome of tossing a fair coin on a computer. What we do is ask the computer to provide a sample of a uniform random variable on [0,1]. In

*Mathematica*, for example, this could be done with the function call U=Random[ ]. If U<1/2 we say Tails, and if U≥1/2 we say Heads.
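In Python rather than *Mathematica*, the same coin toss might be sketched like this (the function name `toss` is my own choice):

```python
import random

def toss():
    # Sample U uniformly on [0,1); U < 1/2 means Tails, otherwise Heads.
    u = random.random()
    return "Tails" if u < 0.5 else "Heads"

tosses = [toss() for _ in range(2000)]
```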

Similarly, if we wished to simulate the two-state Markov chain in Example 1.4 we would, if X_n=1, set X_{n+1}=1 if U lies in the interval [0,1−alpha) and set X_{n+1}=2 if U lies in [1−alpha,1]. If X_n=2, we would set X_{n+1}=1 if U lies in [0,beta) and set X_{n+1}=2 if U lies in [beta,1]. The content of Section 1.4 generalizes this idea in an obvious way. Simulation of random processes via a computer is an important tool. For example, we might like to run a computer simulation of road traffic network to see what delays occur when we try different strategies for sequencing the traffic lights at intersections.
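The update rule just described can be sketched in Python (a sketch, not a definitive implementation; the function names are my own):

```python
import random

def step(x, alpha, beta):
    # One transition of the two-state chain of Example 1.4:
    # from state 1, stay at 1 if U lies in [0, 1-alpha), else move to 2;
    # from state 2, move to 1 if U lies in [0, beta), else stay at 2.
    u = random.random()
    if x == 1:
        return 1 if u < 1 - alpha else 2
    else:
        return 1 if u < beta else 2

def simulate(x0, alpha, beta, n):
    # Run the chain for n steps from initial state x0.
    path = [x0]
    for _ in range(n):
        path.append(step(path[-1], alpha, beta))
    return path
```

Simulating a chain on a general countable state space works the same way: partition [0,1] into intervals whose lengths are the entries of the row of P for the current state, and move to whichever state's interval contains U.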

Section 1.6 is about the calculation of P^n when P is a 2x2 matrix. We found P^n by writing P=UDU^{−1}, where D=DiagonalMatrix[{1,1−alpha−beta}]. We did not need to know U explicitly. But, as an exercise, we can easily find it. Notice that for any stochastic matrix P the column vector (1,1,...,1) is always a right-hand eigenvector, since the row sums of P are 1. So 1 is always an eigenvalue. For our P the other right-hand eigenvector is (alpha,−beta), with eigenvalue 1−alpha−beta. So we could take U={{1,alpha},{1,−beta}}, whose columns are these two right-hand eigenvectors. Can you also find the left-hand eigenvectors? They are of course the rows of U^{−1}.

In Section 1.6 I gave two methods of finding p_{11}^{(n)}. The first method obviously generalizes to larger matrices. The second method is specific to this 2x2 example, but it is attractive because it gets to the answer so quickly.
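As a numerical sanity check, here is a Python sketch (illustrative values of alpha and beta are my own) that compares P^n, computed by repeated matrix multiplication, with the closed-form expression for p_{11}^{(n)} obtained in Section 1.6, namely p_{11}^{(n)} = beta/(alpha+beta) + alpha/(alpha+beta) (1−alpha−beta)^n:

```python
def matmul(A, B):
    # Multiply two 2x2 matrices.
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matpow(A, n):
    # Compute A^n by repeated multiplication, starting from the identity.
    R = [[1.0, 0.0], [0.0, 1.0]]
    for _ in range(n):
        R = matmul(R, A)
    return R

alpha, beta = 0.3, 0.2   # illustrative values (assumed)
P = [[1 - alpha, alpha], [beta, 1 - beta]]

n = 10
p11_n = matpow(P, n)[0][0]

# Closed form from Section 1.6:
# p_11^(n) = beta/(alpha+beta) + alpha/(alpha+beta) * (1-alpha-beta)^n
closed = beta / (alpha + beta) + alpha / (alpha + beta) * (1 - alpha - beta) ** n
assert abs(p11_n - closed) < 1e-10
```

Note also that as n grows, (1−alpha−beta)^n tends to 0, so p_{11}^{(n)} tends to beta/(alpha+beta).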

Notice that throughout my blog I use pseudo-LaTeX notation for mathematics. So x_i is "x sub i", alpha is the Greek character alpha, and so forth. (I say pseudo-LaTeX, because in actual-LaTeX one has to put $ signs around things, e.g. $x_i$, and put a \ in front of a Greek character name, e.g. \alpha). LaTeX is the language in which my lecture notes (and almost all modern mathematical papers) are written. When I want to do some algebra or work with mathematical objects I sometimes use

*Mathematica*'s notation. So DiagonalMatrix[{1,1−alpha−beta}] is the 2x2 diagonal matrix which has 1 and 1−alpha−beta on the diagonal. Also, {{1,alpha},{1,−beta}} is the 2x2 matrix whose rows are {1,alpha} and {1,−beta}.

There was one typo in the notes that were on this site until today. You should note this if you downloaded a copy of the notes before today. In Theorem 1.3 the statement should be "if and only if" not just "if". As I said, this theorem is not particularly deep, but it gives us practice in using the notation and understanding how Markov chains work.

You will need the ideas in Section 1.2 to do Example Sheet 1 #2, #3 and #4.