11.2.7 Solved Problems

Problem

Consider the Markov chain with three states, $S=\{1, 2, 3 \}$, that has the following transition matrix \begin{equation} \nonumber P = \begin{bmatrix} \frac{1}{2} & \frac{1}{4} & \frac{1}{4} \\[5pt] \frac{1}{3} & 0 & \frac{2}{3} \\[5pt] \frac{1}{2} & \frac{1}{2} & 0 \end{bmatrix}. \end{equation}

  1. Draw the state transition diagram for this chain.
  2. If we know $P(X_1=1)=P(X_1=2)=\frac{1}{4}$, find $P(X_1=3,X_2=2,X_3=1)$.

  • Solution
      1. The state transition diagram is shown in Figure 11.6.
        Figure 11.6 - A state transition diagram.
      2. First, we obtain \begin{align*} P(X_1=3)&=1-P(X_1=1)-P(X_1=2) \\ &=1-\frac{1}{4}-\frac{1}{4}\\ &=\frac{1}{2}. \end{align*} We can now write \begin{align*} P(X_1=3,X_2=2,X_3=1)&=P(X_1=3) \cdot p_{32} \cdot p_{21} \\ &=\frac{1}{2} \cdot \frac{1}{2} \cdot \frac{1}{3}\\ &=\frac{1}{12}. \end{align*}
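As a numerical sanity check, the same probability can be computed directly from the transition matrix. The following Python sketch (variable names are our own) reproduces the calculation:

```python
import numpy as np

# Transition matrix P from the problem statement.
P = np.array([[1/2, 1/4, 1/4],
              [1/3, 0.0, 2/3],
              [1/2, 1/2, 0.0]])

# Distribution of X1: P(X1=1) = P(X1=2) = 1/4, so P(X1=3) = 1/2.
pi1 = np.array([1/4, 1/4, 1/2])

# P(X1=3, X2=2, X3=1) = P(X1=3) * p_32 * p_21
# (0-indexed: P[2, 1] = p_32 and P[1, 0] = p_21).
prob = pi1[2] * P[2, 1] * P[1, 0]
print(prob)  # 1/12 ≈ 0.08333
```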


Problem

Consider the Markov chain in Figure 11.17. There are two recurrent classes, $R_1=\{1,2\}$ and $R_2=\{5,6,7\}$. Assuming $X_0=3$, find the probability that the chain gets absorbed in $R_1$.

Figure 11.17 - A state transition diagram.
  • Solution
    • Here, we can replace each recurrent class with one absorbing state. The resulting state diagram is shown in Figure 11.18.
      Figure 11.18 - The state transition diagram in which we have replaced each recurrent class with one absorbing state.
      Now we can apply our standard methodology to find the probability of absorption in state $R_1$. In particular, define \begin{align*} a_i&=P(\textrm{absorption in }R_1 | X_0=i), \quad \textrm{ for all }i \in S. \end{align*} By the above definition, we have $a_{R_1}=1$ and $a_{R_2}=0$. To find the unknown values of the $a_i$'s, we can use the following equations \begin{align*} a_i &=\sum_{k} a_k p_{ik}, \quad \textrm{ for }i\in S. \end{align*} We obtain \begin{align*} a_3 &=\frac{1}{2} a_{R_1}+ \frac{1}{2} a_4\\ &=\frac{1}{2}+ \frac{1}{2} a_4, \end{align*} \begin{align*} a_4 &=\frac{1}{4} a_{R_1}+ \frac{1}{4} a_3+\frac{1}{2} a_{R_2}\\ &=\frac{1}{4}+ \frac{1}{4} a_3. \end{align*} Solving the above equations, we obtain \begin{align*} a_3 =\frac{5}{7}, \quad a_4=\frac{3}{7}. \end{align*} Therefore, if $X_0=3$, the chain will end up in class $R_1$ with probability $a_3 =\frac{5}{7}$.
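The two absorption equations can also be solved mechanically as a small linear system; here is a minimal Python sketch (our own setup, using the equations derived above):

```python
import numpy as np

# Rewrite a3 = 1/2 + (1/2) a4 and a4 = 1/4 + (1/4) a3 as A x = b
# with x = (a3, a4).
A = np.array([[1.0, -1/2],
              [-1/4, 1.0]])
b = np.array([1/2, 1/4])
a3, a4 = np.linalg.solve(A, b)
print(a3, a4)  # 5/7 ≈ 0.7143 and 3/7 ≈ 0.4286
```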


Problem

Consider the Markov chain of the previous problem (Figure 11.17). Again assume $X_0=3$. We would like to find the expected time (number of steps) until the chain gets absorbed in $R_1$ or $R_2$. More specifically, let $T$ be the absorption time, i.e., the first time the chain visits a state in $R_1$ or $R_2$. We would like to find $E[T|X_0=3]$.

  • Solution
    • Here we follow our standard procedure for finding mean hitting times. Consider Figure 11.18. Let $T$ be the first time the chain visits $R_1$ or $R_2$. For all $i \in S$, define \begin{align*} t_i&=E[T |X_0=i]. \end{align*} By the above definition, we have $t_{R_1}=t_{R_2}=0$. To find $t_3$ and $t_4$, we can use the following equations \begin{align*} t_i &=1+\sum_{k} t_k p_{ik}, \quad \textrm{ for }i=3, 4. \end{align*} Specifically, we obtain \begin{align*} t_3 &=1+\frac{1}{2} t_{R_1}+ \frac{1}{2} t_4\\ &=1+ \frac{1}{2} t_4, \end{align*} \begin{align*} t_4 &=1+\frac{1}{4} t_{R_1}+ \frac{1}{4} t_3+\frac{1}{2} t_{R_2}\\ &=1+ \frac{1}{4} t_3. \end{align*} Solving the above equations, we obtain \begin{align*} t_3 =\frac{12}{7}, \quad t_4=\frac{10}{7}. \end{align*} Therefore, if $X_0=3$, it will take on average $\frac{12}{7}$ steps until the chain gets absorbed in $R_1$ or $R_2$.
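As with the absorption probabilities, the two hitting-time equations form a small linear system; a short Python sketch (our own setup) confirms the answer:

```python
import numpy as np

# Rewrite t3 = 1 + (1/2) t4 and t4 = 1 + (1/4) t3 as A x = b
# with x = (t3, t4).
A = np.array([[1.0, -1/2],
              [-1/4, 1.0]])
b = np.array([1.0, 1.0])
t3, t4 = np.linalg.solve(A, b)
print(t3, t4)  # 12/7 ≈ 1.7143 and 10/7 ≈ 1.4286
```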


Problem

Consider the Markov chain shown in Figure 11.19. Assume $X_0=1$, and let $R$ be the first time that the chain returns to state $1$, i.e., \begin{align*} R = \min \{n \geq 1: X_n=1 \}. \end{align*} Find $E[R|X_0=1]$.

Figure 11.19 - A state transition diagram.
  • Solution
    • In this question, we are asked to find the mean return time to state $1$. Let $r_1$ be the mean return time to state $1$, i.e., $r_1=E[R|X_0=1]$. Then \begin{align*} r_1 &=1+\sum_{k} t_k p_{1k}, \end{align*} where $t_k$ is the expected time until the chain hits state $1$ given $X_0=k$. Specifically, \begin{align*} t_1&=0,\\ t_k&=1+\sum_{j} t_j p_{kj}, \quad \textrm{ for } k\neq 1. \end{align*} So, let's first find $t_k$'s. We obtain \begin{align*} t_2 &=1+\frac{1}{3} t_1+ \frac{2}{3} t_3\\ &=1+ \frac{2}{3} t_3, \end{align*} \begin{align*} t_3 &=1+\frac{1}{2} t_{3}+ \frac{1}{2} t_1\\ &=1+\frac{1}{2} t_{3}. \end{align*} Solving the above equations, we obtain \begin{align*} t_3 =2, \quad t_2=\frac{7}{3}. \end{align*} Now, we can write \begin{align*} r_1 &=1+\frac{1}{4} t_{1}+ \frac{1}{2} t_2+\frac{1}{4} t_{3}\\ &=1+\frac{1}{4} \cdot 0+ \frac{1}{2} \cdot \frac{7}{3}+\frac{1}{4} \cdot 2\\ &=\frac{8}{3}. \end{align*}
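The computation above is simple enough to replay in a few lines of Python; this sketch just repeats the arithmetic from the solution:

```python
# Mean hitting times to state 1, with t1 = 0 by definition.
t1 = 0.0
t3 = 1.0 / (1.0 - 1/2)        # from t3 = 1 + (1/2) t3, so t3 = 2
t2 = 1.0 + (2/3) * t3         # t2 = 7/3

# Mean return time: r1 = 1 + sum_k p_{1k} t_k, with p_11 = 1/4,
# p_12 = 1/2, p_13 = 1/4 read off Figure 11.19.
r1 = 1.0 + (1/4) * t1 + (1/2) * t2 + (1/4) * t3
print(r1)  # 8/3 ≈ 2.6667
```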


Problem

Consider the Markov chain shown in Figure 11.20.

Figure 11.20 - A state transition diagram.
  1. Is this chain irreducible?
  2. Is this chain aperiodic?
  3. Find the stationary distribution for this chain.
  4. Is the stationary distribution a limiting distribution for the chain?
  • Solution
      1. The chain is irreducible since we can go from any state to any other state in a finite number of steps.
      2. The chain is aperiodic since there is a self-transition, i.e., $p_{11}>0$.
      3. To find the stationary distribution, we need to solve \begin{align*} & \pi_1 =\frac{1}{2} \pi_1+\frac{1}{3} \pi_2+\frac{1}{2} \pi_3, \\ & \pi_2 =\frac{1}{4} \pi_1+\frac{1}{2} \pi_3,\\ & \pi_3 =\frac{1}{4} \pi_1+\frac{2}{3} \pi_2, \\ & \pi_1+\pi_2+\pi_3=1. \end{align*} We find \begin{align*} \pi_1 =\frac{16}{35} \approx 0.457, \quad \pi_2 =\frac{9}{35} \approx 0.257, \quad \pi_3 =\frac{2}{7} \approx 0.286. \end{align*}
      4. The above stationary distribution is a limiting distribution for the chain because the chain is irreducible, aperiodic, and has a finite state space.
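The stationary equations can be solved mechanically by replacing one balance equation with the normalization constraint. A short Python sketch (the matrix below is read off the balance equations above):

```python
import numpy as np

# Transition matrix consistent with the balance equations above.
P = np.array([[1/2, 1/4, 1/4],
              [1/3, 0.0, 2/3],
              [1/2, 1/2, 0.0]])

# Solve pi P = pi with sum(pi) = 1: take two rows of (P^T - I) x = 0
# and replace the third by the normalization constraint.
A = np.vstack([(P.T - np.eye(3))[:2], np.ones(3)])
b = np.array([0.0, 0.0, 1.0])
pi = np.linalg.solve(A, b)
print(pi)  # ≈ [0.457, 0.257, 0.286], i.e., (16/35, 9/35, 2/7)
```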


Problem

Consider the Markov chain shown in Figure 11.21. Assume that $\frac{1}{2} \lt p \lt 1$. Does this chain have a limiting distribution? For all $i,j \in \{0,1,2, \cdots \}$, find \begin{align*} \lim_{n \rightarrow \infty} P(X_n=j |X_0=i). \end{align*}

Figure 11.21 - A state transition diagram.
  • Solution
    • This chain is irreducible since all states communicate with each other. It is also aperiodic since it includes a self-transition, $p_{00}>0$. Let's write the equations for a stationary distribution. For state $0$, we can write \begin{align*} \pi_0 &=(1-p)\pi_0+(1-p) \pi_1, \end{align*} which results in \begin{align*} \pi_1 &=\frac{p}{1-p}\pi_0. \end{align*} For state $1$, we can write \begin{align*} \pi_1 &= p \pi_0+(1-p) \pi_2\\ &=(1-p) \pi_1+(1-p) \pi_2, \end{align*} which results in \begin{align*} \pi_2 &=\frac{p}{1-p}\pi_1. \end{align*} Similarly, for any $j \in \{1,2,\cdots \}$, we obtain \begin{align*} \pi_{j} &=\alpha \pi_{j-1}, \end{align*} where $\alpha=\frac{p}{1-p}$. Note that since $\frac{1}{2} \lt p \lt 1$, we conclude that $\alpha>1$. We obtain \begin{align*} \pi_{j} &=\alpha^j \pi_{0}, \quad \textrm{ for $j= 1,2,\cdots $ }. \end{align*} Finally, we must have \begin{align*} 1 &=\sum_{j=0}^{\infty} \pi_j\\ &= \pi_0 \sum_{j=0}^{\infty} \alpha^j. \end{align*} Since $\alpha>1$, the geometric series $\sum_{j=0}^{\infty} \alpha^j$ diverges, so this equation cannot be satisfied if $\pi_0>0$. If $\pi_0=0$, then all $\pi_j$'s must be zero, so they cannot sum to $1$. We conclude that there is no stationary distribution. This means that either all states are transient, or all states are null recurrent. In either case, we have \begin{align*} \lim_{n \rightarrow \infty} P(X_n=j |X_0=i)=0, \quad \textrm{ for all }i,j. \end{align*} We will see how to determine whether the states are transient or null recurrent in the End of Chapter Problems (see Problem 15 in Section 11.5).
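Although no stationary distribution exists, the escaping behavior is easy to observe in simulation. The sketch below (our own, with the illustrative choice $p = 0.7$) runs the chain from $X_0 = 0$ and shows the upward drift of roughly $2p-1$ per step, so the chain spends vanishing time near state $0$:

```python
import random

# Birth-death chain on {0, 1, 2, ...}: move up with probability p;
# from j >= 1 move down with probability 1 - p, and from 0 stay put.
# The choice p = 0.7 > 1/2 is an assumption for illustration.
random.seed(0)
p, n_steps = 0.7, 2000
x = 0
for _ in range(n_steps):
    if random.random() < p:
        x += 1
    elif x > 0:
        x -= 1
print(x)  # typically near n_steps * (2p - 1) = 800
```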



