## 8.4.5 Likelihood Ratio Tests

So far we have focused on specific examples of hypothesis testing problems. Here, we would like to introduce a relatively general hypothesis testing procedure called the likelihood ratio test. Before doing so, let us quickly review the definition of the likelihood function, which was previously discussed in Section 8.2.3.

### Review of the Likelihood Function:

Let $X_1$, $X_2$, $X_3$, $...$, $X_n$ be a random sample from a distribution with a parameter $\theta$. Suppose that we have observed $X_1=x_1$, $X_2=x_2$, $\cdots$, $X_n=x_n$.
- If the $X_i$'s are discrete, then the likelihood function is defined as \begin{align} \nonumber L(x_1, x_2, \cdots, x_n; \theta)=P_{X_1 X_2 \cdots X_n}(x_1, x_2, \cdots, x_n; \theta). \end{align}
- If the $X_i$'s are jointly continuous, then the likelihood function is defined as \begin{align} \nonumber L(x_1, x_2, \cdots, x_n; \theta)=f_{X_1 X_2 \cdots X_n}(x_1, x_2, \cdots, x_n; \theta). \end{align}
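As a concrete illustration of the discrete case, here is a minimal sketch of the likelihood function for an i.i.d. Bernoulli($\theta$) sample (the function name and sample values are our own choices for illustration):

```python
def bernoulli_likelihood(xs, theta):
    # Discrete case: L(x1, ..., xn; theta) = P(X1=x1, ..., Xn=xn; theta).
    # For an i.i.d. Bernoulli(theta) sample this is theta^k * (1-theta)^(n-k),
    # where k is the number of ones observed.
    k = sum(xs)
    n = len(xs)
    return theta**k * (1 - theta)**(n - k)

# Observed sample x1, ..., x5
xs = [1, 0, 1, 1, 0]
print(bernoulli_likelihood(xs, 0.5))  # = 0.5**5 = 0.03125
```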

### Likelihood Ratio Tests:

Consider a hypothesis testing problem in which both the null and the alternative hypotheses are simple. That is

$\quad$ $H_0$: $\theta = \theta_0$,

$\quad$ $H_1$: $\theta = \theta_1$.

Now, let $X_1$, $X_2$, $X_3$, $...$, $X_n$ be a random sample from a distribution with a parameter $\theta$. Suppose that we have observed $X_1=x_1$, $X_2=x_2$, $\cdots$, $X_n=x_n$. One way to decide between $H_0$ and $H_1$ is to compare the corresponding likelihood functions: \begin{align} \nonumber l_0=L(x_1, x_2, \cdots, x_n; \theta_0), \quad \quad l_1=L(x_1, x_2, \cdots, x_n; \theta_1). \end{align} More specifically, if $l_0$ is much larger than $l_1$, we should accept $H_0$. On the other hand, if $l_1$ is much larger, we tend to reject $H_0$. Therefore, we can look at the ratio $\frac{l_0}{l_1}$ to decide between $H_0$ and $H_1$. This is the idea behind likelihood ratio tests.
**Likelihood Ratio Test for Simple Hypotheses**

Let $X_1$, $X_2$, $X_3$, $...$, $X_n$ be a random sample from a distribution with a parameter $\theta$. Suppose that we have observed $X_1=x_1$, $X_2=x_2$, $\cdots$, $X_n=x_n$. To decide between two simple hypotheses

$\quad$ $H_0$: $\theta = \theta_0$,

$\quad$ $H_1$: $\theta = \theta_1$,

we define \begin{align}%\label{} \lambda(x_1,x_2,\cdots, x_n)=\frac {L(x_1, x_2, \cdots, x_n; \theta_0)}{L(x_1, x_2, \cdots, x_n; \theta_1)}. \end{align} To perform a likelihood ratio test (LRT), we choose a constant $c$. We reject $H_0$ if $\lambda \lt c$ and accept it if $\lambda \geq c$. The value of $c$ can be chosen based on the desired $\alpha$.
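The decision rule above can be sketched in a few lines of code. This is an illustrative sketch, not a method prescribed in the text: `likelihood` stands in for whatever likelihood function matches the model, and we use a Bernoulli model only for concreteness.

```python
def likelihood(xs, theta):
    # i.i.d. Bernoulli(theta) likelihood, used here only for concreteness
    k, n = sum(xs), len(xs)
    return theta**k * (1 - theta)**(n - k)

def lrt_simple(xs, theta0, theta1, c):
    # lambda = L(data; theta0) / L(data; theta1); reject H0 when lambda < c
    lam = likelihood(xs, theta0) / likelihood(xs, theta1)
    return "accept H0" if lam >= c else "reject H0"

# Mostly ones: the data favor theta1 = 0.8 over theta0 = 0.2
print(lrt_simple([1, 1, 1, 0, 1], 0.2, 0.8, 1.0))  # reject H0
```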
Let's look at an example to see how we can perform a likelihood ratio test.

**Example**
Here, we look again at the radar problem (Example 8.23). More specifically, we observe the random variable $X$: \begin{align}%\label{} X&=\theta+W, \end{align} where $W \sim N(0, \sigma^2=\frac{1}{9})$. We need to decide between

$\quad$ $H_0$: $\theta = \theta_0=0$,

$\quad$ $H_1$: $\theta = \theta_1=1$.

Let $X=x$. Design a level $0.05$ test ($\alpha=0.05$) to decide between $H_0$ and $H_1$.
**Solution**
If $\theta = \theta_0=0$, then $X \sim N(0, \sigma^2=\frac{1}{9})$. Therefore, \begin{align} \nonumber L(x; \theta_0)=f_{X}(x; \theta_0)=\frac{3}{\sqrt{2 \pi}} e^{-\frac{9x^2}{2}}. \end{align} On the other hand, if $\theta = \theta_1=1$, then $X \sim N(1, \sigma^2=\frac{1}{9})$. Therefore, \begin{align} \nonumber L(x; \theta_1)=f_{X}(x; \theta_1)=\frac{3}{\sqrt{2 \pi}} e^{-\frac{9(x-1)^2}{2}}. \end{align} It follows that \begin{align}%\label{} \lambda(x)=\frac {L(x; \theta_0)}{L(x; \theta_1)}&=\exp \left\{-\frac{9x^2}{2}+ \frac{9(x-1)^2}{2} \right\}\\ &=\exp \left\{ \frac{9(1-2x)}{2} \right\}. \end{align} Thus, we accept $H_0$ if \begin{align}%\label{} \exp \left\{ \frac{9(1-2x)}{2} \right\} \geq c, \end{align} where $c$ is the threshold. Equivalently, we accept $H_0$ if \begin{align}%\label{} x \leq \frac{1}{2} \left(1-\frac{2}{9} \ln c\right). \end{align} Let us define $c'=\frac{1}{2} \left(1-\frac{2}{9} \ln c\right)$, where $c'$ is a new threshold. Remember that $x$ is the observed value of the random variable $X$. Thus, we can summarize the decision rule as follows. We accept $H_0$ if \begin{align}%\label{} X \leq c'. \end{align} How do we choose $c'$? We use the required $\alpha$. \begin{align} P(\textrm{type I error}) &= P(\textrm{Reject }H_0 \; | \; H_0) \\ &= P(X > c' \; | \; H_0)\\ &= P(X>c') \quad \big( \textrm{where }X \sim N\left(0, \frac{1}{9}\right) \big) \\ &=1-\Phi(3c'). \end{align} Letting $P(\textrm{type I error})=\alpha$, we obtain \begin{align} c' = \frac{1}{3} \Phi^{-1}(1-\alpha). \end{align} Letting $\alpha=0.05$, we obtain \begin{align} c' = \frac{1}{3} \Phi^{-1}(0.95) =0.548. \end{align} As we see, in this case, the likelihood ratio test is exactly the same test that we obtained in Example 8.23.
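The threshold $c' = \frac{1}{3} \Phi^{-1}(1-\alpha)$ can be checked numerically. The sketch below uses only the standard library, computing $\Phi$ from the error function and inverting it by bisection; the helper names are our own.

```python
import math

def phi(z):
    # Standard normal CDF, written in terms of the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def phi_inv(p):
    # Invert phi by bisection; phi is strictly increasing, so this converges
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

alpha = 0.05
c_prime = phi_inv(1 - alpha) / 3.0
print(round(c_prime, 3))  # 0.548, matching the value in the example
```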

How do we perform the likelihood ratio test if the hypotheses are not simple? Suppose that $\theta$ is an unknown parameter. Let $S$ be the set of possible values for $\theta$ and suppose that we can partition $S$ into two disjoint sets $S_0$ and $S_1$. Consider the following hypotheses:

$\quad$ $H_0$: $\theta \in S_0$,

$\quad$ $H_1$: $\theta \in S_1$.

The idea behind the general likelihood ratio test can be explained as follows: We first find the likelihoods corresponding to the most likely values of $\theta$ in $S_0$ and $S$ respectively. That is, we find \begin{align}%\label{} l_0&=\max \{L(x_1, x_2, \cdots, x_n; \theta) : \theta \in S_0 \},\\ l &=\max \{L(x_1, x_2, \cdots, x_n; \theta) : \theta \in S \}. \end{align} (To be more accurate, we need to replace $\max$ by $\sup$.) Let us consider two extreme cases. First, if $l_0=l$, then we can say that the most likely value of $\theta$ belongs to $S_0$. This indicates that we should not reject $H_0$. On the other hand, if $\frac{l_0}{l}$ is much smaller than $1$, we should probably reject $H_0$ in favor of $H_1$. To conduct a likelihood ratio test, we choose a threshold $0 \leq c \leq 1$ and compare $\frac{l_0}{l}$ to $c$. If $\frac{l_0}{l} \geq c$, we accept $H_0$. If $\frac{l_0}{l} \lt c$, we reject $H_0$. The value of $c$ can be chosen based on the desired $\alpha$.
**Likelihood Ratio Tests**

Let $X_1$, $X_2$, $X_3$, $...$, $X_n$ be a random sample from a distribution with a parameter $\theta$. Suppose that we have observed $X_1=x_1$, $X_2=x_2$, $\cdots$, $X_n=x_n$. Define \begin{align}%\label{} \lambda(x_1,x_2,\cdots, x_n)=\frac{\sup \{L(x_1, x_2, \cdots, x_n; \theta) : \theta \in S_0 \}}{\sup \{L(x_1, x_2, \cdots, x_n; \theta) : \theta \in S \}}. \end{align} To perform a likelihood ratio test (LRT), we choose a constant $c$ in $[0,1]$. We reject $H_0$ if $\lambda \lt c$ and accept it if $\lambda \geq c$. The value of $c$ can be chosen based on the desired $\alpha$.
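For a simple numerical illustration, the two suprema can be approximated by maximizing over a fine grid of parameter values. The sketch below uses a Bernoulli model with the one-sided null set $S_0 = \{\theta \le 0.5\}$; both the model and the sets are our own choices for illustration.

```python
def likelihood(xs, theta):
    # i.i.d. Bernoulli(theta) likelihood (illustrative model choice)
    k, n = sum(xs), len(xs)
    return theta**k * (1 - theta)**(n - k)

def general_lrt_statistic(xs, S0, S):
    # Approximate sup over S0 and sup over the full set S by a max over grids
    l0 = max(likelihood(xs, t) for t in S0)
    l = max(likelihood(xs, t) for t in S)
    return l0 / l

# Grid over (0, 1); null set S0 = {theta <= 0.5}
S = [i / 1000 for i in range(1, 1000)]
S0 = [t for t in S if t <= 0.5]

xs = [1, 1, 1, 0, 1, 1]   # 5 ones out of 6: the MLE is near 5/6, outside S0
lam = general_lrt_statistic(xs, S0, S)
print(0 < lam < 1)  # True: the unrestricted maximum exceeds the restricted one
```

When the maximizing $\theta$ already lies in $S_0$ (e.g. a sample that is mostly zeros), the two maxima coincide and $\lambda = 1$, so $H_0$ is accepted for any threshold $c \le 1$.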
