9.1.8 Bayesian Hypothesis Testing
To be more specific, according to the MAP test, we choose $H_0$ if and only if
\begin{align} P(H_0|Y=y) \geq P(H_1|Y=y). \end{align} In other words, we choose $H_0$ if and only if \begin{align} f_{Y}(y|H_0)P(H_0) \geq f_{Y}(y|H_1)P(H_1). \end{align} Note that, as always, we use the PMF instead of the PDF if $Y$ is a discrete random variable. We can generalize the MAP test to the case where there are more than two hypotheses. In that case, we again choose the hypothesis with the highest posterior probability:

Choose the hypothesis with the highest posterior probability, $P(H_i|Y=y)$. Equivalently, choose the hypothesis $H_i$ with the highest $f_{Y}(y|H_i)P(H_i)$.
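The MAP rule above is easy to state in code. The following is a minimal sketch (not from the text): given prior probabilities and likelihood functions for each hypothesis, it returns the index of the hypothesis maximizing $f_{Y}(y|H_i)P(H_i)$. The two Gaussian likelihoods used for illustration are hypothetical.

```python
import math

def map_decision(priors, likelihoods, y):
    """Return the index i of the hypothesis maximizing f(y|H_i) * P(H_i)."""
    scores = [p * lik(y) for p, lik in zip(priors, likelihoods)]
    return max(range(len(scores)), key=scores.__getitem__)

def normal_pdf(y, mu, sigma=1.0):
    """Density of N(mu, sigma^2) evaluated at y."""
    return math.exp(-(y - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# Hypothetical setup: Y|H0 ~ N(0,1), Y|H1 ~ N(2,1), equal priors.
priors = [0.5, 0.5]
likelihoods = [lambda y: normal_pdf(y, 0.0), lambda y: normal_pdf(y, 2.0)]
print(map_decision(priors, likelihoods, 0.3))  # 0 (y is closer to the H0 mean)
print(map_decision(priors, likelihoods, 1.8))  # 1 (y is closer to the H1 mean)
```

With equal priors, the rule reduces to maximum likelihood: pick the hypothesis whose mean is closest to the observation.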
Example 9.10
Suppose that the random variable $X$ is transmitted over a communication channel. Assume that the received signal is given by \begin{align} Y=X+W, \end{align} where $W \sim N(0,\sigma^2)$ is independent of $X$. Suppose that $X=1$ with probability $p$, and $X=-1$ with probability $1-p$. The goal is to decide between $X=1$ and $X=-1$ by observing the random variable $Y$. Find the MAP test for this problem.
 Solution

Here, we have two hypotheses:
$\quad$ $H_0$: $X=1$,
$\quad$ $H_1$: $X=-1$.

Under $H_0$, $Y=1+W$, so $Y|H_0 \; \sim \; N(1, \sigma^2)$. Therefore, \begin{align} f_{Y}(y|H_0)=\frac{1}{ \sigma\sqrt{2 \pi}} e^{-\frac{(y-1)^2}{2\sigma^2}}. \end{align} Under $H_1$, $Y=-1+W$, so $Y|H_1 \; \sim \; N(-1, \sigma^2)$. Therefore, \begin{align} f_{Y}(y|H_1)=\frac{1}{ \sigma\sqrt{2 \pi}} e^{-\frac{(y+1)^2}{2\sigma^2}}. \end{align} Thus, we choose $H_0$ if and only if \begin{align} \frac{1}{ \sigma\sqrt{2 \pi}} e^{-\frac{(y-1)^2}{2\sigma^2}} P(H_0) \geq \frac{1}{ \sigma\sqrt{2 \pi}} e^{-\frac{(y+1)^2}{2\sigma^2}} P(H_1). \end{align} We have $P(H_0)=p$ and $P(H_1)=1-p$. Therefore, we choose $H_0$ if and only if \begin{align} \exp \left (\frac{2y}{\sigma^2} \right) \geq \frac{1-p}{p}. \end{align} Equivalently, we choose $H_0$ if and only if \begin{align} y \geq \frac{\sigma^2}{2} \ln \left(\frac{1-p}{p}\right). \end{align}
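The threshold rule derived in this example can be sketched in a few lines of Python (the function names are mine, not from the text): compute $c=\frac{\sigma^2}{2}\ln\left(\frac{1-p}{p}\right)$ and choose $H_0$ whenever $y \geq c$.

```python
import math

def map_threshold(p, sigma):
    """MAP threshold c for Example 9.10: choose H0 (X = 1) iff y >= c."""
    return (sigma ** 2 / 2) * math.log((1 - p) / p)

def decide(y, p, sigma):
    """Apply the MAP test to a single observation y."""
    return "H0" if y >= map_threshold(p, sigma) else "H1"

# With equal priors (p = 0.5), ln(1) = 0, so the threshold is 0 and we
# simply decide by the sign of y.
print(map_threshold(0.5, 1.0))   # 0.0
print(decide(0.7, 0.5, 1.0))     # H0
print(decide(-0.7, 0.5, 1.0))    # H1
```

Note how a prior favoring $H_0$ ($p > \tfrac12$) makes the threshold negative, enlarging the region where we decide $H_0$.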
Note that the average error probability for a hypothesis test can be written as \begin{align} P_e =P( \textrm{choose }H_1 | H_0) P(H_0)+ P( \textrm{choose }H_0 | H_1) P(H_1). \hspace{30pt} (9.6) \end{align} As we mentioned earlier, the MAP test achieves the minimum possible average error probability.
Example
Find the average error probability in Example 9.10.
 Solution
In Example 9.10, we arrived at the following decision rule: we choose $H_0$ if and only if \begin{align} y \geq c, \end{align} where \begin{align} c=\frac{\sigma^2}{2} \ln \left(\frac{1-p}{p}\right). \end{align} Since $Y|H_0 \; \sim \; N(1, \sigma^2)$, \begin{align} P( \textrm{choose }H_1 | H_0)&=P(Y \lt c|H_0)\\ &=\Phi\left(\frac{c-1}{\sigma} \right)\\ &=\Phi\left(\frac{\sigma}{2} \ln \left(\frac{1-p}{p}\right)-\frac{1}{\sigma}\right). \end{align} Since $Y|H_1 \; \sim \; N(-1, \sigma^2)$, \begin{align} P( \textrm{choose }H_0 | H_1)&=P(Y \geq c|H_1)\\ &=1-\Phi\left(\frac{c+1}{\sigma} \right)\\ &=1-\Phi\left(\frac{\sigma}{2} \ln \left(\frac{1-p}{p}\right)+\frac{1}{\sigma}\right). \end{align} Figure 9.4 shows the two error probabilities for this example. Therefore, the average error probability is given by \begin{align} P_e &=P( \textrm{choose }H_1 | H_0) P(H_0)+ P( \textrm{choose }H_0 | H_1) P(H_1)\\ &=p \cdot \Phi\left(\frac{\sigma}{2} \ln \left(\frac{1-p}{p}\right)-\frac{1}{\sigma}\right)+(1-p) \cdot \left[ 1-\Phi\left(\frac{\sigma}{2} \ln \left(\frac{1-p}{p}\right)+\frac{1}{\sigma}\right)\right]. \end{align}
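The error-probability formula above is easy to evaluate numerically. The sketch below (function names are mine) uses `math.erf` to compute the standard normal CDF $\Phi$; as a sanity check, with equal priors the threshold is $c=0$ and the expression collapses to $P_e = \Phi(-1/\sigma)$.

```python
import math

def Phi(x):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def average_error_probability(p, sigma):
    """Average error probability of the MAP test in Example 9.10."""
    c = (sigma ** 2 / 2) * math.log((1 - p) / p)   # MAP threshold
    miss_h0 = Phi((c - 1) / sigma)                 # P(choose H1 | H0)
    miss_h1 = 1 - Phi((c + 1) / sigma)             # P(choose H0 | H1)
    return p * miss_h0 + (1 - p) * miss_h1

# Equal priors: c = 0, so P_e = Phi(-1/sigma).
print(average_error_probability(0.5, 1.0))  # ~0.1587
```

Increasing the noise level $\sigma$ pushes $P_e$ toward its worst-case value, while $\sigma \to 0$ drives it to zero, as one would expect.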
Minimum Cost Hypothesis Test:
Suppose that you are building a sensor network to detect fires in a forest. Based on the information collected by the sensors, the system needs to decide between two opposing hypotheses:
$\quad$ $H_0$: There is no fire,
$\quad$ $H_1$: There is a fire.
Assume the following costs:
$\quad$ $C_{10}$: The cost of choosing $H_1$, given that $H_0$ is true.
$\quad$ $C_{01}$: The cost of choosing $H_0$, given that $H_1$ is true.
The minimum cost test compares the posterior risks of the two decisions: we choose $H_0$ if and only if \begin{align} P(H_0|y) C_{10} \geq P(H_1|y) C_{01}. \end{align}
Example
A surveillance system is in charge of detecting intruders to a facility. There are two hypotheses to choose from:
$\quad$ $H_0$: No intruder is present.
$\quad$ $H_1$: There is an intruder.
Suppose that, based on the observed data $y$, we have $P(H_1|y)=0.05$. A missed intruder is much more costly than a false alarm; specifically, $C_{01}=10 C_{10}$. Should the system send an alarm message (accept $H_1$)?
 Solution
First note that \begin{align} P(H_0|y)=1-P(H_1|y)=0.95. \end{align} The posterior risk of accepting $H_1$ is \begin{align} P(H_0|y) C_{10} =0.95 C_{10}. \end{align} We have $C_{01}=10 C_{10}$, so the posterior risk of accepting $H_0$ is \begin{align} P(H_1|y) C_{01} &=(0.05) (10 C_{10})\\ &=0.5 C_{10}. \end{align} Since $P(H_0|y) C_{10} \geq P(H_1|y) C_{01}$, we accept $H_0$, so no alarm message needs to be sent.
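The posterior-risk comparison in this solution can be sketched as a small helper (the function name is mine, not from the text); costs can be expressed in units of $C_{10}$, since only their ratio matters.

```python
def min_cost_decision(p_h1_given_y, c10, c01):
    """Minimum cost test: choose H0 iff the posterior risk of accepting H1,
    P(H0|y)*C10, is at least the posterior risk of accepting H0, P(H1|y)*C01."""
    p_h0_given_y = 1 - p_h1_given_y
    return "H0" if p_h0_given_y * c10 >= p_h1_given_y * c01 else "H1"

# Values from the example: P(H1|y) = 0.05, C01 = 10*C10 (take C10 = 1).
print(min_cost_decision(0.05, 1.0, 10.0))  # H0 -> no alarm is sent
```

Note that with a larger posterior, say $P(H_1|y)=0.2$, the same cost ratio would flip the decision to $H_1$, since $0.8\,C_{10} < 0.2 \cdot 10\,C_{10}$.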