11.2.3 Probability Distributions

State Probability Distributions:

Consider a Markov chain $\{X_n, n=0,1,2,...\}$, where $X_n \in S=\{1,2,\cdots, r\}$. Suppose that we know the probability distribution of $X_0$. More specifically, define the row vector $\pi^{(0)}$ as \begin{equation} \pi^{(0)} = \begin{bmatrix} P(X_0=1) & P(X_0=2) & \cdots & P(X_0=r) \end{bmatrix}. \end{equation} How can we obtain the probability distribution of $X_1$, $X_2$, $\cdots$? We can use the law of total probability. More specifically, for any $j \in S$, we can write \begin{align*} P(X_1=j)&=\sum_{k=1}^{r} P(X_1=j | X_0=k) P(X_0=k)\\ &=\sum_{k=1}^{r} p_{kj} P(X_0=k). \end{align*} If we generally define \begin{equation} \pi^{(n)} = \begin{bmatrix} P(X_n=1) & P(X_n=2) & \cdots & P(X_n=r) \end{bmatrix}, \end{equation} we can rewrite the above result in the form of matrix multiplication \begin{align*} \pi^{(1)}= \pi^{(0)} P, \end{align*} where $P$ is the state transition matrix. Similarly, we can write \begin{align*} \pi^{(2)}= \pi^{(1)} P=\pi^{(0)} P^2. \end{align*} More generally, we can write
\begin{align*} &\pi^{(n+1)}= \pi^{(n)} P, \; \textrm{ for }n=0,1,2, \cdots;\\ &\pi^{(n)}= \pi^{(0)} P^n, \; \textrm{ for }n=0,1,2, \cdots. \end{align*}
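The recursion $\pi^{(n+1)}=\pi^{(n)}P$ and the closed form $\pi^{(n)}=\pi^{(0)}P^n$ are easy to check numerically. The sketch below uses NumPy; the particular transition matrix `P` and starting distribution are arbitrary illustrative choices, not taken from the text.

```python
import numpy as np

# An illustrative 2-state transition matrix (each row sums to 1).
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# Start in state 1 with certainty: pi^(0) = [1, 0].
pi0 = np.array([1.0, 0.0])

# Apply the recursion pi^(n+1) = pi^(n) P three times.
pi = pi0
for _ in range(3):
    pi = pi @ P

# Equivalently, use the closed form pi^(n) = pi^(0) P^n.
pi_direct = pi0 @ np.linalg.matrix_power(P, 3)

assert np.allclose(pi, pi_direct)
print(pi)  # → [0.844 0.156]
```

Note that $\pi^{(n)}$ stays a valid probability distribution at every step, since each row of $P$ sums to $1$.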


Example
Consider a system that can be in one of two possible states, $S=\{0,1\}$. In particular, suppose that the transition matrix is given by \begin{equation} P = \begin{bmatrix} \frac{1}{2} & \frac{1}{2} \\[5pt] \frac{1}{3} & \frac{2}{3} \end{bmatrix}. \end{equation} Suppose that the system is in state $0$ at time $n=0$, i.e., $X_0=0$.
  1. Draw the state transition diagram.
  2. Find the probability that the system is in state $1$ at time $n=3$.
  • Solution
      1. The state transition diagram is shown in Figure 11.8.
        Figure 11.8 - A state transition diagram.
      2. Here, we know \begin{align*} \pi^{(0)} &= \begin{bmatrix} P(X_0=0) & P(X_0=1) \end{bmatrix}\\ &=\begin{bmatrix} 1 & 0 \end{bmatrix}. \end{align*} Thus, \begin{align*} \pi^{(3)}&= \pi^{(0)} P^3\\ &= \begin{bmatrix} 1 & 0 \end{bmatrix} \begin{bmatrix} \frac{1}{2} & \frac{1}{2} \\[5pt] \frac{1}{3} & \frac{2}{3} \end{bmatrix}^3\\ &= \begin{bmatrix} \frac{29}{72} & \frac{43}{72} \end{bmatrix}. \end{align*} Thus, the probability that the system is in state $1$ at time $n=3$ is $\frac{43}{72}$.
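The hand computation in the example can be verified in a few lines of NumPy; this is just a numerical check of $\pi^{(3)}=\pi^{(0)}P^3$ for the transition matrix given above.

```python
import numpy as np

# Transition matrix from the example.
P = np.array([[1/2, 1/2],
              [1/3, 2/3]])

# X_0 = 0 with certainty, so pi^(0) = [1, 0].
pi0 = np.array([1.0, 0.0])

# pi^(3) = pi^(0) P^3
pi3 = pi0 @ np.linalg.matrix_power(P, 3)

# Agrees with the exact answer [29/72, 43/72].
assert np.allclose(pi3, [29/72, 43/72])
```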


$n$-Step Transition Probabilities:

Consider a Markov chain $\{X_n, n=0,1,2,...\}$, where $X_n \in S$. If $X_0=i$, then $X_1=j$ with probability $p_{ij}$. That is, $p_{ij}$ gives us the probability of going from state $i$ to state $j$ in one step. Now suppose that we are interested in finding the probability of going from state $i$ to state $j$ in two steps, i.e., \begin{align*} p^{(2)}_{ij}=P(X_2=j |X_0=i). \end{align*} We can find this probability by applying the law of total probability. In particular, we argue that $X_1$ can take one of the possible values in $S$. Thus, we can write \begin{align*} p^{(2)}_{ij}=P(X_2=j |X_0=i)&=\sum_{k \in S} P(X_2=j |X_1=k, X_0=i) P(X_1=k|X_0=i)\\ &=\sum_{k \in S} P(X_2=j |X_1=k) P(X_1=k|X_0=i) \quad (\textrm{by the Markov property}) \\ &=\sum_{k \in S} p_{kj} p_{ik}. \end{align*} We conclude \begin{align} p^{(2)}_{ij}=P(X_2=j |X_0=i)=\sum_{k \in S} p_{ik}p_{kj}. \hspace{30pt} (11.4) \end{align} We can explain the above formula as follows. In order to get to state $j$ in two steps, the chain must pass through some intermediate state $k$ after the first step. The probability of this event is $p_{ik}p_{kj}$. To obtain $p^{(2)}_{ij}$, we sum over all possible intermediate states. Accordingly, we can define the two-step transition matrix as follows: \begin{equation} P^{(2)} = \begin{bmatrix} p^{(2)}_{11} & p^{(2)}_{12} & \cdots & p^{(2)}_{1r} \\ p^{(2)}_{21} & p^{(2)}_{22} & \cdots & p^{(2)}_{2r} \\ \vdots & \vdots & \ddots & \vdots \\ p^{(2)}_{r1} & p^{(2)}_{r2} & \cdots & p^{(2)}_{rr} \end{bmatrix}. \end{equation} Looking at Equation 11.4, we notice that $p^{(2)}_{ij}$ is in fact the element in the $i$th row and $j$th column of the matrix \begin{equation} P^2 = \begin{bmatrix} p_{11} & p_{12} & \cdots & p_{1r} \\ p_{21} & p_{22} & \cdots & p_{2r} \\ \vdots & \vdots & \ddots & \vdots \\ p_{r1} & p_{r2} & \cdots & p_{rr} \end{bmatrix} \begin{bmatrix} p_{11} & p_{12} & \cdots & p_{1r} \\ p_{21} & p_{22} & \cdots & p_{2r} \\ \vdots & \vdots & \ddots & \vdots \\ p_{r1} & p_{r2} & \cdots & p_{rr} \end{bmatrix}. \end{equation} Thus, we conclude that the two-step transition matrix can be obtained by squaring the state transition matrix, i.e., \begin{align*} P^{(2)}=P^2. \end{align*} More generally, we can define the $n$-step transition probabilities $p^{(n)}_{ij}$ as \begin{align} p^{(n)}_{ij}=P(X_n=j |X_0=i), \hspace{20pt} \textrm{for } n=0,1,2,\cdots, \hspace{40pt} (11.5) \end{align} and the $n$-step transition matrix, $P^{(n)}$, as \begin{equation} P^{(n)} = \begin{bmatrix} p^{(n)}_{11} & p^{(n)}_{12} & \cdots & p^{(n)}_{1r} \\ p^{(n)}_{21} & p^{(n)}_{22} & \cdots & p^{(n)}_{2r} \\ \vdots & \vdots & \ddots & \vdots \\ p^{(n)}_{r1} & p^{(n)}_{r2} & \cdots & p^{(n)}_{rr} \end{bmatrix}. \end{equation} We can now generalize Equation 11.5. Let $m$ and $n$ be two positive integers and assume $X_0=i$. In order to get to state $j$ in $m+n$ steps, the chain will be at some intermediate state $k$ after $m$ steps. To obtain $p^{(m+n)}_{ij}$, we sum over all possible intermediate states: \begin{align*} p^{(m+n)}_{ij}&=P(X_{m+n}=j |X_0=i)\\ &=\sum_{k \in S} p^{(m)}_{ik}p^{(n)}_{kj}. \end{align*} The above equation is called the Chapman-Kolmogorov equation. Similar to the case of two-step transition probabilities, we can show that $P^{(n)}=P^n$, for $n=1,2,3,\cdots$.
The Chapman-Kolmogorov equation can be written as \begin{align*} p^{(m+n)}_{ij}&=P(X_{m+n}=j |X_0=i)\\ &=\sum_{k \in S} p^{(m)}_{ik}p^{(n)}_{kj}. \end{align*} The $n$-step transition matrix is given by \begin{align*} P^{(n)}=P^n, \; \textrm{ for }n=1,2,3,\cdots. \end{align*}
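The Chapman-Kolmogorov equation is simply the statement that $P^{(m+n)}=P^m P^n$, which we can confirm numerically. The sketch below builds a random $4$-state transition matrix (an arbitrary choice for illustration) and checks both the matrix identity and the element-wise sum over intermediate states.

```python
import numpy as np

rng = np.random.default_rng(0)

# A random 4-state transition matrix: nonnegative entries,
# with each row normalized to sum to 1.
A = rng.random((4, 4))
P = A / A.sum(axis=1, keepdims=True)

m, n = 2, 3
Pm = np.linalg.matrix_power(P, m)
Pn = np.linalg.matrix_power(P, n)

# Matrix form: P^(m+n) = P^m P^n.
lhs = np.linalg.matrix_power(P, m + n)
rhs = Pm @ Pn
assert np.allclose(lhs, rhs)

# Element-wise form: p^(m+n)_{ij} = sum_k p^(m)_{ik} p^(n)_{kj}.
i, j = 0, 2
s = sum(Pm[i, k] * Pn[k, j] for k in range(4))
assert np.isclose(lhs[i, j], s)
```

Each term $p^{(m)}_{ik}p^{(n)}_{kj}$ is the probability of reaching $j$ via the particular intermediate state $k$, so the sum over $k$ is exactly the matrix product entry.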

