What is a Markov chain?

A Markov chain is a model describing a sequence of possible events in which the probability of each event depends only on the state attained in the previous event. In other words, if we can make predictions about a process's future based only on its present state, just as well as if we knew the process's complete history, then the process is known as a "Markov process". Let's jump right into it with a problem.

Problem:

Let’s model a mouse moving around a maze. The maze is a closed space containing nine rooms — and there are doorways connecting the rooms.

[Figure: the mouse and the maze of nine rooms]

There are doors leading to adjacent rooms, i.e. there are doors:

· from 1 to 2, 4

· from 2 to 1, 3, 5

· from 3 to 2, 6

· from 4 to 1, 5, 7

· from 5 to 2, 4, 6, 8

· from 6 to 3, 5, 9

· from 7 to 4, 8

· from 8 to 5, 7, 9

· from 9 to 6, 8

We assume that the mouse is a "Markov mouse", i.e. the mouse moves randomly from one room to another, where the probability of reaching the next room depends only on the room it is currently in, and not on how it got to the current room.

Below is the transition probability data we can create from the information provided, known as the transition matrix:

Transition matrix P (rows are source rooms, columns are destination rooms):

      1    2    3    4    5    6    7    8    9
 1    0   1/2   0   1/2   0    0    0    0    0
 2   1/3   0   1/3   0   1/3   0    0    0    0
 3    0   1/2   0    0    0   1/2   0    0    0
 4   1/3   0    0    0   1/3   0   1/3   0    0
 5    0   1/4   0   1/4   0   1/4   0   1/4   0
 6    0    0   1/3   0   1/3   0    0    0   1/3
 7    0    0    0   1/2   0    0    0   1/2   0
 8    0    0    0    0   1/3   0   1/3   0   1/3
 9    0    0    0    0    0   1/2   0   1/2   0

It provides us with the probability of the mouse going to a destination room from a source room. For example, if the mouse is present in room 1, it can go to room 2 with a probability of 1/2 or it can go to room 4 with a probability of 1/2. Similarly, if the mouse is in room 2, it can go to rooms 1, 3 or 5 — each with a probability of 1/3.

Notation:

The transition matrix is denoted by P. The matrix element in the upper left corner is denoted by P(1,1), while the matrix element in the lower right corner is P(9,9). Examples:

P(1,1) = Probability of mouse moving from room 1 to room 1 = 0

P(1,2) = Probability of mouse moving from room 1 to room 2 = 1/2

P(3,6) = Probability of mouse moving from room 3 to room 6 = 1/2

Reiterating the Markov property, P(2,3) is the probability of the mouse going next to state 3 given that the mouse is starting in state 2. The Markov property implies that the probability does not depend on the earlier history.
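The transition matrix is easy to build programmatically. Below is a minimal sketch in Python using NumPy; the `doors` dictionary simply re-encodes the door list above:

```python
import numpy as np

# Doors between rooms, taken from the door list (rooms numbered 1-9).
doors = {
    1: [2, 4],
    2: [1, 3, 5],
    3: [2, 6],
    4: [1, 5, 7],
    5: [2, 4, 6, 8],
    6: [3, 5, 9],
    7: [4, 8],
    8: [5, 7, 9],
    9: [6, 8],
}

# The mouse picks one of the adjacent rooms uniformly at random,
# so each row of P spreads probability 1 evenly over the neighbours.
P = np.zeros((9, 9))
for room, neighbours in doors.items():
    for nxt in neighbours:
        P[room - 1, nxt - 1] = 1 / len(neighbours)

print(P[0, 1])  # P(1,2) -> 0.5
print(P[1, 0])  # P(2,1) -> 0.333...
```

Note that every row of P sums to 1: from any room, the mouse must go somewhere.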

Now that we have modeled the process, let's look at two interesting problems:

Question 1

What is the probability of the mouse starting from room 1 and reaching room 6 in two transitions?

Solution 1

Let's start this problem with intuition. In the first transition, the mouse can go from room 1 to either room 2 or room 4.

· If the mouse goes to room 2, then in the second transition it can only go to rooms 1, 3 or 5.

· If the mouse goes to room 4, then in the second transition it can only go to rooms 1, 5 or 7.

Hence, there is no way for the mouse to reach room 6 from room 1 in two transitions. The probability of the mouse starting from room 1 and reaching room 6 in two transitions is 0.

Question 2

What is the probability of the mouse starting from room 2 and reaching room 2 again in two transitions?

Solution 2

Starting from room 2, the mouse can reach room 2 again in the following ways:

i. 2 → 1 → 2

→ Probability = P(2,1)*P(1,2) = 1/3 * 1/2 = 1/6

ii. 2 → 3 → 2

→ Probability = P(2,3)*P(3,2) = 1/3 * 1/2 = 1/6

iii. 2 → 5 → 2

→ Probability = P(2,5)*P(5,2) = 1/3 * 1/4 = 1/12

Summing the individual probabilities, we get 1/6 + 1/6 + 1/12 = 5/12. Hence, if the mouse starts in room 2, it can reach room 2 again in two transitions with a probability of 5/12 ≈ 0.4167.
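The three-path sum above is a special case of a general rule: the probability of going from room i to room j in two transitions is the sum, over every intermediate room k, of P(i,k)·P(k,j). A quick numerical check of the 5/12 result (a Python/NumPy sketch; `doors` re-encodes the door list from earlier):

```python
import numpy as np

# Rebuild the transition matrix from the door list.
doors = {1: [2, 4], 2: [1, 3, 5], 3: [2, 6], 4: [1, 5, 7], 5: [2, 4, 6, 8],
         6: [3, 5, 9], 7: [4, 8], 8: [5, 7, 9], 9: [6, 8]}
P = np.zeros((9, 9))
for room, neighbours in doors.items():
    for nxt in neighbours:
        P[room - 1, nxt - 1] = 1 / len(neighbours)

# Two-step return probability for room 2 (index 1):
# sum over every possible intermediate room k.
total = sum(P[1, k] * P[k, 1] for k in range(9))
print(total)  # 5/12 ≈ 0.4167
```

Only k = 1, 3, 5 (rooms 1, 3 and 5) contribute non-zero terms, matching the three paths enumerated above.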

Another way to arrive at the solution for both of the above questions is matrix multiplication. If we raise the transition matrix to the power 2 (P²), we get the two-step transition matrix, whose entries give the probability of the mouse reaching any room from any other room in exactly two transitions. Below is the P² matrix.

[Image: the P² matrix]

From the P² matrix we can see that in two transitions P²(2,2) = 0.4167 and P²(1,6) = 0, the same as what we calculated above.
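In code, squaring the matrix reproduces both answers at once (again a Python/NumPy sketch; `doors` re-encodes the door list from earlier):

```python
import numpy as np

# Rebuild the transition matrix from the door list.
doors = {1: [2, 4], 2: [1, 3, 5], 3: [2, 6], 4: [1, 5, 7], 5: [2, 4, 6, 8],
         6: [3, 5, 9], 7: [4, 8], 8: [5, 7, 9], 9: [6, 8]}
P = np.zeros((9, 9))
for room, neighbours in doors.items():
    for nxt in neighbours:
        P[room - 1, nxt - 1] = 1 / len(neighbours)

# P squared: entry (i, j) is the probability of reaching room j+1
# from room i+1 in exactly two transitions.
P2 = P @ P  # equivalently np.linalg.matrix_power(P, 2)

print(round(P2[1, 1], 4))  # P²(2,2) -> 0.4167 (Question 2)
print(P2[0, 5])            # P²(1,6) -> 0.0    (Question 1)
```

The same trick generalizes: Pⁿ gives the n-step transition probabilities.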

This is a good starting point for understanding Markov chains. Going further, we can answer more interesting questions such as:

· What fraction of its time will the mouse spend in each room if it starts in room 1?

· If there are infinitely many transitions, do we reach a steady state, and what would it look like?

· Starting in a particular room, where would the mouse most likely be after 100 transitions?
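For the steady-state question, one caveat and one sketch. This maze is bipartite: every move takes the mouse between the odd rooms {1, 3, 5, 7, 9} and the even rooms {2, 4, 6, 8}, so the chain is periodic and Pⁿ oscillates rather than converging. The long-run fraction of time spent in each room is still well defined, though, as the distribution π satisfying πP = π. One way to compute it numerically (a Python/NumPy sketch, not the only method):

```python
import numpy as np

# Rebuild the transition matrix from the door list.
doors = {1: [2, 4], 2: [1, 3, 5], 3: [2, 6], 4: [1, 5, 7], 5: [2, 4, 6, 8],
         6: [3, 5, 9], 7: [4, 8], 8: [5, 7, 9], 9: [6, 8]}
P = np.zeros((9, 9))
for room, neighbours in doors.items():
    for nxt in neighbours:
        P[room - 1, nxt - 1] = 1 / len(neighbours)

# pi P = pi means pi is a left eigenvector of P with eigenvalue 1,
# i.e. an ordinary eigenvector of P transposed. Pick the eigenvalue
# closest to 1 and normalize so the entries sum to 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1))])
pi = pi / pi.sum()

print(pi)  # long-run fraction of time in rooms 1..9
```

For a random walk on a graph like this one, π turns out to be proportional to the number of doors in each room, (2, 3, 2, 3, 4, 3, 2, 3, 2)/24, so the mouse spends the most time in room 5, the only room with four doors.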

— — — — — — — — — — — — — — — — — — — — — — — — — — — — — — —

Source: http://www.columbia.edu/~ww2040/4701Sum07/MarkovMouse.pdf