Today's lecture was on

**recurrence** and **transience**. First let me clear up two points that arise from some students' questions.
1. We make the definition that

**state i is recurrent if P_i(V_i = ∞) = 1**. **It is defined to be transient otherwise, i.e. if P_i(V_i = ∞) < 1.** Later (in Theorem 5.3) we show that if i is transient then in fact P_i(V_i = ∞) = 0 (but this is a consequence, not part of our starting definition of transience).
2. In the proof of Theorem 5.4 we use the fact that p_{ii}^(n+m+r) ≥ p_{ij}^(n) p_{jj}^(r) p_{ji}^(m). Please don't think that we are using any summation notation! (We never use the summation convention in this course.) The right-hand side is simply a product of three terms, and the inequality is a simple consequence of the fact that one way to go i→i in n+m+r steps is to first take n steps to go i→j, then r steps to go j→j, and finally m steps to go j→i. There is a ≥ because there are other ways to go i→i in n+m+r steps.
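As a quick sanity check, the inequality can be verified numerically: p^(n) is just the n-th matrix power of the transition matrix. The 3-state matrix P below is an arbitrary illustration of mine, not an example from the course.

```python
import numpy as np

# An arbitrary 3-state transition matrix (rows sum to 1); purely illustrative.
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.4, 0.4, 0.2]])

def p_n(n):
    """n-step transition probabilities, i.e. the matrix power P^n."""
    return np.linalg.matrix_power(P, n)

i, j = 0, 1
n, r, m = 2, 3, 2

lhs = p_n(n + m + r)[i, i]                        # p_ii^(n+m+r)
rhs = p_n(n)[i, j] * p_n(r)[j, j] * p_n(m)[j, i]  # p_ij^(n) p_jj^(r) p_ji^(m)

print(lhs >= rhs)  # True: one particular route i→j→j→i cannot exceed the total
```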

In Theorem 5.5 we gave an important way to check whether a state is recurrent or transient, in terms of the summability of the p_{ii}^(n). This criterion will be used in Lecture 6. There are other ways to check for transience. One other way is to solve the RHE for the minimal solution to

y_j = Σ_k p_{jk} y_k,  j ≠ i, and

y_i = 1.

So y_j = P_j(return to i). Now check the value of Σ_k p_{ik} y_k. If it is < 1 then i is transient. This is essentially the content of Theorem 5.9, which I have put in my published notes but am not going to discuss in lectures. However, you may find it helpful to read the theorem. Its proof is simple.
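The minimal solution can be found by fixed-point iteration starting from y = 0 (except y_i = 1). The chain below is my own illustration, not one from the lecture: a walk on {0, 1, 2, ...} that moves from 0 to 1 with probability 1 and from j ≥ 1 goes up w.p. 0.7 and down w.p. 0.3; the known answer is y_j = (0.3/0.7)^j for j ≥ 1.

```python
# Minimal solution of y_j = Σ_k p_{jk} y_k (j ≠ i), y_i = 1, by iteration.
# Chain (my illustration): from 0 go to 1 with probability 1; from j ≥ 1
# go up w.p. 0.7, down w.p. 0.3.  States are truncated at N, with y = 0
# beyond the truncation; this under-approximates and converges to the
# minimal solution as N grows.

N = 200
p_up, p_down = 0.7, 0.3
i = 0

y = [0.0] * (N + 1)
y[i] = 1.0
for _ in range(10000):          # fixed-point iteration from below
    new = y[:]
    for j in range(1, N):       # j ≠ i; y[N] stays 0 (truncation)
        new[j] = p_up * y[j + 1] + p_down * y[j - 1]
    y = new

check = 1.0 * y[1]              # Σ_k p_{0k} y_k: from 0 we go to 1 w.p. 1
print(check)                    # ≈ 0.3/0.7 ≈ 0.429 < 1, so state 0 is transient
```

Since the check value is about 3/7 < 1, the criterion of Theorem 5.9 declares state 0 transient, which matches the upward drift of the walk.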

I talked for a few minutes about my research on

**on-line bin packing**, in the paper Markov chains, computer proofs, and average-case analysis of best fit bin packing. In this research we consider items, whose sizes are chosen uniformly from the integers 1, 2, ..., 8, say, that arrive in a stream, and as each item arrives it must be packed in a bin. Initially there is an infinite supply of empty bins, of some size, say 11. In the Markov chain that models the process of on-line best-fit bin packing the state can be represented as (x_1, x_2, ..., x_10), where x_i is the number of bins that we have started, but which are not yet full, and which have a gap of i. It is interesting to ask whether the chain returns infinitely often to the state (0, 0, ..., 0) in which there are no partially-full bins present (i.e. whether the Markov chain is recurrent). You might like to view these seminar slides for more details. (These were for a faculty colloquium and aimed at a general audience of mathematicians, so they should be well within your knowledge of mathematics.)
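The chain is easy to simulate. The following is my own sketch (not code from the paper) of best-fit packing with item sizes uniform on {1, ..., 8} and bins of size 11, counting visits to the empty state along one sample path:

```python
import random

# Sketch (not from the paper): on-line best-fit bin packing, item sizes
# uniform on {1,...,8}, bins of size 11.  The Markov chain state is
# (x_1,...,x_10); here `gaps` maps gap g -> count of open bins with gap g.

def step(gaps, rng):
    """Pack one random item by best fit, updating `gaps` in place."""
    item = rng.randint(1, 8)
    # Best fit: among open bins that can take the item, pick the smallest gap.
    fits = [g for g, c in gaps.items() if c > 0 and g >= item]
    if fits:
        g = min(fits)
        gaps[g] -= 1
        if gaps[g] == 0:
            del gaps[g]
        new_gap = g - item
    else:
        new_gap = 11 - item          # open a fresh bin of size 11
    if new_gap > 0:                  # gap 0 means the bin is full and closed
        gaps[new_gap] = gaps.get(new_gap, 0) + 1

rng = random.Random(0)
gaps = {}                            # start in the empty state (0,...,0)
returns = 0
for t in range(100000):
    step(gaps, rng)
    if not gaps:
        returns += 1
print(returns)                       # visits to the empty state on this path
```

A simulation like this can suggest whether returns to (0, ..., 0) keep occurring, but of course it proves nothing about recurrence; that is what the computer-assisted proofs in the paper are for.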
In the on-line bin packing research (and many other problems in queueing theory) researchers often prove results about recurrence and transience using ideas more sophisticated than those in Theorems 5.4 and 5.9 of today's lecture. One of these ideas is

**Foster's criterion**. This says that an irreducible Markov chain is recurrent if we can find a function f : I → R (called a **Lyapounov function**) and a finite subset of the state space, say J, such that (a) E[ f(X_1) | X_0 = i ] ≤ f(i) for all i not in J, and (b) for each M > 0 the set of states for which f(i) ≤ M is finite. Part (a) is essentially saying that outside J there is always drift back to states where f is smaller.
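As a small illustration (my own example, not one from the lecture), consider the reflected random walk on {0, 1, 2, ...} that moves up w.p. 0.4 and down w.p. 0.6, with f(i) = i and J = {0}. Condition (b) is clear, since {i : f(i) ≤ M} is finite for every M, and the drift in condition (a) can be checked directly:

```python
# Checking Foster's drift condition (a) for a reflected random walk on
# {0, 1, 2, ...} moving up w.p. 0.4, down w.p. 0.6 (my example, not the
# lecture's), with Lyapunov function f(i) = i and J = {0}.

p_up, p_down = 0.4, 0.6

def drift(i):
    """E[f(X_1) | X_0 = i] - f(i) for f(i) = i, at a state i >= 1."""
    return (p_up * (i + 1) + p_down * (i - 1)) - i

for i in range(1, 6):
    print(i, drift(i))   # constant drift p_up - p_down = -0.2 outside J

# drift(i) <= 0 for every i not in J, so (a) holds and Foster's criterion
# gives recurrence of this chain.
```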