11.3.2 Stationary and Limiting Distributions

Here we introduce stationary distributions for continuous-time Markov chains. As in the case of discrete-time Markov chains, for "nice" chains, a unique stationary distribution exists and it is equal to the limiting distribution. Recall that for discrete-time Markov chains, stationary distributions are obtained by solving $\pi=\pi P$. We have a similar definition for continuous-time Markov chains.
Let $X(t)$ be a continuous-time Markov chain with transition matrix $P(t)$ and state space $S=\{0, 1, 2, \cdots \}$. A probability distribution $\pi$ on $S$, i.e., a vector $\pi=[\pi_0, \pi_1, \pi_2, \cdots ]$, where $\pi_i \in [0,1]$ and \begin{align*} \sum_{i \in S} \pi_i=1, \end{align*} is said to be a stationary distribution for $X(t)$ if \begin{align*} \pi=\pi P(t), \quad \textrm{ for all }t\geq 0. \end{align*}
The intuition here is exactly the same as in the case of discrete-time chains. If the probability distribution of $X(0)$ is $\pi$, then the distribution of $X(t)$ is also given by $\pi$, for any $t \geq 0$.

Example
Consider the continuous-time Markov chain of Example 11.17: a chain with two states $S=\{0, 1\}$ and $\lambda_0=\lambda_1=\lambda>0$. In that example, we found that the transition matrix for any $t \geq 0$ is given by \begin{equation} \nonumber P(t) = \begin{bmatrix} \frac{1}{2}+\frac{1}{2}e^{-2\lambda t} & \frac{1}{2}-\frac{1}{2}e^{-2\lambda t} \\[5pt] \frac{1}{2}-\frac{1}{2}e^{-2\lambda t} & \frac{1}{2}+\frac{1}{2}e^{-2\lambda t} \\[5pt] \end{bmatrix}. \end{equation} Find the stationary distribution $\pi$ for this chain.
  • Solution
    • For $\pi=[\pi_0, \pi_1]$, we require \begin{equation} \nonumber \pi P(t) = [\pi_0, \pi_1] \begin{bmatrix} \frac{1}{2}+\frac{1}{2}e^{-2\lambda t} & \frac{1}{2}-\frac{1}{2}e^{-2\lambda t} \\[5pt] \frac{1}{2}-\frac{1}{2}e^{-2\lambda t} & \frac{1}{2}+\frac{1}{2}e^{-2\lambda t} \\[5pt] \end{bmatrix} = [\pi_0, \pi_1]. \end{equation} Each component of this equation reduces to $\pi_0=\pi_1$. We also need \begin{align*} \pi_0+\pi_1=1. \end{align*} Solving these two equations, we obtain \begin{align*} \pi_0=\pi_1=\frac{1}{2}. \end{align*}
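The stationarity condition above is easy to check numerically. The sketch below (not from the text) verifies that $\pi=[\frac{1}{2}, \frac{1}{2}]$ satisfies $\pi P(t)=\pi$ for several values of $t$; the rate `lam` is an arbitrary illustrative choice.

```python
import numpy as np

lam = 1.7  # an arbitrary rate lambda > 0, chosen only for illustration

def P(t):
    """Transition matrix of the two-state chain of Example 11.17."""
    e = np.exp(-2 * lam * t)
    return np.array([[0.5 + 0.5 * e, 0.5 - 0.5 * e],
                     [0.5 - 0.5 * e, 0.5 + 0.5 * e]])

pi = np.array([0.5, 0.5])
# pi P(t) = pi must hold for every t >= 0, not just one value.
for t in [0.0, 0.3, 1.0, 10.0]:
    assert np.allclose(pi @ P(t), pi)
```

Note that the check at $t=0$ is trivial, since $P(0)$ is the identity matrix; the content of the definition is that the same $\pi$ works for all $t$ simultaneously.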


Similar to the case of discrete-time Markov chains, we are interested in limiting distributions for continuous-time Markov chains.
Limiting Distributions

The probability distribution $\pi=[\pi_0, \pi_1, \pi_2, \cdots ]$ is called the limiting distribution of the continuous-time Markov chain $X(t)$ if \begin{align*} \pi_j=\lim_{t \rightarrow \infty} P(X(t)=j |X(0)=i) \end{align*} for all $i, j \in S$, and we have \begin{align*} \sum_{j \in S} \pi_j=1. \end{align*}
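For the two-state chain of Example 11.17 we can see this definition in action: as $t \rightarrow \infty$, every row of $P(t)$ converges to the same distribution $[\frac{1}{2}, \frac{1}{2}]$, regardless of the starting state. A quick numeric illustration (the rate `lam` is an arbitrary assumption):

```python
import numpy as np

lam = 1.0  # arbitrary rate for illustration

def P(t):
    """Transition matrix of the two-state chain of Example 11.17."""
    e = np.exp(-2 * lam * t)
    return np.array([[0.5 + 0.5 * e, 0.5 - 0.5 * e],
                     [0.5 - 0.5 * e, 0.5 + 0.5 * e]])

# For large t, both rows are (numerically) the limiting distribution,
# so the limit does not depend on the initial state i.
assert np.allclose(P(50.0), np.full((2, 2), 0.5))
```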

As we will see shortly, for "nice" chains, there exists a unique stationary distribution which will be equal to the limiting distribution. In theory, we can find the stationary (and limiting) distribution by solving $\pi P(t)=\pi$, or by finding $\lim_{t\rightarrow \infty} P(t)$. However, in practice, finding $P(t)$ itself is usually very difficult. It is easier if we think in terms of the jump (embedded) chain. The following intuitive argument shows how to obtain the limiting distribution of a continuous-time Markov chain from the limiting distribution of the corresponding jump chain.


Suppose that $\tilde{\pi}=\big[ \tilde{\pi}_0, \tilde{\pi}_1, \tilde{\pi}_2, \cdots \big]$ is the limiting distribution of the jump chain. That is, in the long run, the discrete-time jump chain spends a fraction $\tilde{\pi}_j$ of its transitions in state $j$. Note that, for the corresponding continuous-time Markov chain, each time the chain visits state $j$, it spends on average $\frac{1}{\lambda_j}$ time units in that state. Thus, we can obtain the limiting distribution of the continuous-time Markov chain by multiplying each $\tilde{\pi}_j$ by $\frac{1}{\lambda_j}$. We also need to normalize (divide by $\sum_{k} \frac{\tilde{\pi}_k}{\lambda_k}$) to obtain a valid probability distribution. The following theorem states this result more precisely. (It is worth noting that in the discrete-time case, we worried about periodicity. For continuous-time Markov chains, this is not an issue, because the holding times are continuous random variables that can take any positive real value, so they are not confined to multiples of a fixed period.)

Theorem
Let $\{X(t), t \geq 0 \}$ be a continuous-time Markov chain with an irreducible positive recurrent jump chain. Suppose that the unique stationary distribution of the jump chain is given by $$\tilde{\pi}=\big[ \tilde{\pi}_0, \tilde{\pi}_1, \tilde{\pi}_2, \cdots \big].$$ Further assume that $$0 \lt \sum_{k \in S} \frac{\tilde{\pi}_k}{\lambda_k} \lt \infty.$$ Then, \begin{align*} \pi_j=\lim_{t \rightarrow \infty} P(X(t)=j |X(0)=i)=\frac{\frac{\tilde{\pi}_j}{\lambda_j}}{\sum_{k \in S} \frac{\tilde{\pi}_k}{\lambda_k}} \end{align*} for all $i, j \in S$. That is, $\pi=[\pi_0, \pi_1, \pi_2, \cdots ]$ is the limiting distribution of $X(t)$.
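For a finite state space, the theorem's formula is a one-line computation: weight each $\tilde{\pi}_j$ by the mean holding time $\frac{1}{\lambda_j}$ and renormalize. A minimal sketch (the function name `ctmc_limiting` is our own):

```python
import numpy as np

def ctmc_limiting(pi_tilde, rates):
    """Limiting distribution of a CTMC from its jump chain.

    pi_tilde : stationary distribution of the jump chain (tilde-pi_j)
    rates    : holding-time parameters (lambda_j)
    """
    # Weight each state by its mean holding time 1/lambda_j ...
    w = np.asarray(pi_tilde, dtype=float) / np.asarray(rates, dtype=float)
    # ... then normalize so the entries sum to 1.
    return w / w.sum()
```

For instance, with equal rates the weighting changes nothing, so `ctmc_limiting([0.5, 0.5], [3.0, 3.0])` returns `[0.5, 0.5]`, consistent with the two-state example above.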


Example
Consider a continuous-time Markov chain $X(t)$ that has the jump chain shown in Figure 11.23. Assume the holding time parameters are given by $\lambda_1=2$, $\lambda_2=1$, and $\lambda_3=3$. Find the limiting distribution for $X(t)$.
Figure 11.23 - The jump chain for the Markov chain of Example 11.19.
  • Solution
    • We first note that the jump chain is irreducible; since its state space is finite, it is also positive recurrent, so the above theorem applies. In particular, the transition matrix of the jump chain is given by \begin{equation} \nonumber P = \begin{bmatrix} 0 & 1 & 0 \\[5pt] 0 & 0 & 1\\[5pt] \frac{1}{2} & \frac{1}{2} & 0 \\[5pt] \end{bmatrix}. \end{equation} The next step is to find the stationary distribution for the jump chain by solving $\tilde{\pi} P=\tilde{\pi}$. We obtain $$\tilde{\pi}=\frac{1}{5} [1, \; 2, \; 2].$$ Finally, we can obtain the limiting distribution of $X(t)$ using \begin{align*} \pi_j=\frac{\frac{\tilde{\pi}_j}{\lambda_j}}{\sum_{k \in S} \frac{\tilde{\pi}_k}{\lambda_k}}. \end{align*} We obtain \begin{align*} \pi_1&=\frac{\frac{\tilde{\pi}_1}{\lambda_1}}{\frac{\tilde{\pi}_1}{\lambda_1}+\frac{\tilde{\pi}_2}{\lambda_2}+\frac{\tilde{\pi}_3}{\lambda_3}}\\ &=\frac{\frac{1}{2}}{\frac{1}{2}+\frac{2}{1}+\frac{2}{3}}\\ &=\frac{3}{19}. \end{align*} \begin{align*} \pi_2&=\frac{\frac{\tilde{\pi}_2}{\lambda_2}}{\frac{\tilde{\pi}_1}{\lambda_1}+\frac{\tilde{\pi}_2}{\lambda_2}+\frac{\tilde{\pi}_3}{\lambda_3}}\\ &=\frac{\frac{2}{1}}{\frac{1}{2}+\frac{2}{1}+\frac{2}{3}}\\ &=\frac{12}{19}. \end{align*} \begin{align*} \pi_3&=\frac{\frac{\tilde{\pi}_3}{\lambda_3}}{\frac{\tilde{\pi}_1}{\lambda_1}+\frac{\tilde{\pi}_2}{\lambda_2}+\frac{\tilde{\pi}_3}{\lambda_3}}\\ &=\frac{\frac{2}{3}}{\frac{1}{2}+\frac{2}{1}+\frac{2}{3}}\\ &=\frac{4}{19}. \end{align*} Thus, we conclude that $\pi=\frac{1}{19}[3, 12, 4]$ is the limiting distribution of $X(t)$.
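The whole computation in this example can be checked numerically: find the stationary distribution of the jump chain as the left eigenvector of $P$ for eigenvalue $1$, then apply the theorem's formula. A sketch under the example's data:

```python
import numpy as np

# Transition matrix of the jump chain and holding-time rates
# from Example 11.19.
P = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.5, 0.5, 0.0]])
rates = np.array([2.0, 1.0, 3.0])  # lambda_1, lambda_2, lambda_3

# Solve pi_tilde P = pi_tilde with sum(pi_tilde) = 1: take the left
# eigenvector of P for eigenvalue 1 (the eigenvalue of largest real part
# for a stochastic matrix) and normalize it.
vals, vecs = np.linalg.eig(P.T)
v = np.real(vecs[:, np.argmax(np.real(vals))])
pi_tilde = v / v.sum()          # [1/5, 2/5, 2/5]

# Weight by mean holding times 1/lambda_j and renormalize.
w = pi_tilde / rates
pi = w / w.sum()                # [3/19, 12/19, 4/19]
assert np.allclose(pi, np.array([3.0, 12.0, 4.0]) / 19)
```

The final assertion confirms the hand computation $\pi=\frac{1}{19}[3, 12, 4]$ above.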




