1.3.3 Finding Probabilities

Suppose that we are given a random experiment with a sample space $S$. To find the probability of an event, there are usually two steps: first, we use the specific information that we have about the random experiment; second, we apply the probability axioms. Let's look at an example. Although this example is simple enough that you might be tempted to write down the answer directly, we encourage you to follow the steps.



Example

You roll a fair die. What is the probability of $E=\{1,5\}$?

  • Solution
    • Let's first use the specific information that we have about the random experiment. The problem states that the die is fair, which means that all six possible outcomes are equally likely, i.e., $$P(\{1\})=P(\{2\})=\cdots=P(\{6\}).$$ Now we can use the axioms of probability. In particular, since the events $\{1\}, \{2\}, \cdots, \{6\}$ are disjoint we can write

      \begin{align}
      1 &= P(S)\\
      &= P\big(\{1\} \cup \{2\} \cup \cdots \cup \{6\}\big)\\
      &= P(\{1\})+P(\{2\})+\cdots+P(\{6\})\\
      &= 6P(\{1\}).
      \end{align}

      Thus, $$P(\{1\})=P(\{2\})=\cdots=P(\{6\})=\frac{1}{6}.$$ Again since $\{1\}$ and $\{5\}$ are disjoint, we have $$P(E)=P(\{1,5\})=P(\{1\})+P(\{5\})=\frac{2}{6}=\frac{1}{3}.$$



It is worth noting that we often write $P(1)$ instead of $P(\{1\})$ to simplify the notation, but we should emphasize that probability is defined for sets (events) not for individual outcomes. Thus, when we write $P(2)=\frac{1}{6}$, what we really mean is that $P(\{2\})=\frac{1}{6}$.
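The two-step reasoning above can also be checked numerically. The following sketch (plain Python; the only assumption beyond the example is the choice of $100{,}000$ trials) estimates $P(\{1,5\})$ by simulating rolls of a fair die and compares the estimate with the exact value $\frac{1}{3}$ obtained from the axioms.

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

# Estimate P(E) for E = {1, 5} by simulating many rolls of a fair die.
# The axioms give the exact value 2/6 = 1/3.
trials = 100_000
hits = sum(1 for _ in range(trials) if random.randint(1, 6) in {1, 5})
estimate = hits / trials
print(estimate)  # should be close to 1/3
```

By the law of large numbers (discussed later in the book), the estimate approaches $\frac{1}{3}$ as the number of trials grows.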

We will see that the two steps explained above can be used to find probabilities for much more complicated events and random experiments. Let us now practice using the axioms by proving some useful facts.



Example

Using the axioms of probability, prove the following:

  1. For any event $A$, $P(A^c)=1-P(A)$.
  2. The probability of the empty set is zero, i.e., $P(\emptyset)=0$.
  3. For any event $A$, $P(A) \leq 1$.
  4. $P(A-B)=P(A)-P(A \cap B)$.
  5. $P(A \cup B)=P(A)+P(B)-P(A \cap B)$, (inclusion-exclusion principle for $n=2$).
  6. If $A \subset B$ then $P(A) \leq P(B)$.

  • Solution
      1. This states that the probability that $A$ does not occur is $1-P(A)$. To prove it using the axioms, we can write
        \begin{align}
        1 &= P(S) &\textrm{(axiom 2)}\\
        &= P(A \cup A^c) &\textrm{(definition of complement)}\\
        &= P(A)+P(A^c) &\textrm{(since $A$ and $A^c$ are disjoint)}.
        \end{align}
        Rearranging gives $P(A^c)=1-P(A)$.

      2. Since $\emptyset=S^c$, we can use part (a) to see that $P(\emptyset)=1-P(S)=0$. Note that this makes sense: by definition, an event happens if the outcome of the random experiment belongs to that event. Since the empty set has no elements, the outcome of the experiment never belongs to the empty set.

      3. From part (a), $P(A)=1-P(A^c)$ and since $P(A^c) \geq 0$ (the first axiom), we have $P(A) \leq 1$.

      4. We show that $P(A)=P(A \cap B)+P(A-B)$. Note that the two sets $A \cap B$ and $A-B$ are disjoint and their union is $A$ (Figure 1.17). Thus, by the third axiom of probability

        \begin{align} P(A)&=P\big((A \cap B) \cup (A-B)\big) &(\textrm{ since }A=(A \cap B) \cup (A-B))\\ &=P(A \cap B)+P(A-B) &\textrm{ (since $A \cap B$ and $A-B$ are disjoint)}. \end{align}
        Fig.1.17 - $P(A)=P(A \cap B)+P(A-B)$.
        Note that since $A-B=A \cap B^c$, we have shown $$P(A)=P(A \cap B)+P(A \cap B^c).$$ Note also that the two sets $B$ and $B^c$ form a partition of the sample space (since they are disjoint and their union is the whole sample space). This is a simple form of the law of total probability, which we will discuss shortly; it is a very useful rule for finding probabilities of events.

      5. Note that $A$ and $B-A$ are disjoint sets and their union is $A \cup B$. Thus,

        \begin{align}
        P(A \cup B) &= P(A \cup (B-A)) &\textrm{(since $A \cup B=A \cup (B-A)$)}\\
        &= P(A)+P(B-A) &\textrm{(since $A$ and $B-A$ are disjoint)}\\
        &= P(A)+P(B)-P(A \cap B) &\textrm{(by part (d))}.
        \end{align}

      6. Note that $A \subset B$ means that whenever $A$ occurs, $B$ occurs too. Thus, intuitively we expect that $P(A) \leq P(B)$. The proof is similar to before. If $A \subset B$, then $A \cap B=A$. Thus,

        \begin{align}
        P(B) &= P(A \cap B)+P(B-A) &\textrm{(by part (d))}\\
        &= P(A)+P(B-A) &\textrm{(since $A=A \cap B$)}\\
        &\geq P(A) &\textrm{(by axiom 1)}.
        \end{align}
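All six identities can be checked mechanically on a finite sample space with equally likely outcomes, where $P(E)=\frac{|E|}{|S|}$ satisfies the three axioms. In the sketch below, the sample space is that of a fair die, and the events $A$ and $B$ are arbitrary choices made only for illustration.

```python
from fractions import Fraction

# Sample space of a fair die; with equally likely outcomes, P(E) = |E|/|S|.
# A and B are arbitrary illustrative events.
S = set(range(1, 7))
A = {1, 2, 3}
B = {2, 3, 4, 5}

def P(E):
    return Fraction(len(E), len(S))

assert P(S - A) == 1 - P(A)                 # (1) complement rule
assert P(set()) == 0                        # (2) probability of the empty set
assert P(A) <= 1                            # (3) P(A) <= 1
assert P(A - B) == P(A) - P(A & B)          # (4) difference rule
assert P(A | B) == P(A) + P(B) - P(A & B)   # (5) inclusion-exclusion, n = 2
assert P({2, 3}) <= P(B)                    # (6) monotonicity, since {2,3} is a subset of B
print("all six identities hold")
```

Using `Fraction` keeps the arithmetic exact, so the equalities can be tested directly rather than up to floating-point error.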



Example

Suppose we have the following information:

  1. There is a $60$ percent chance that it will rain today.
  2. There is a $50$ percent chance that it will rain tomorrow.
  3. There is a $30$ percent chance that it does not rain either day.
Find the following probabilities:
  1. The probability that it will rain today or tomorrow.
  2. The probability that it will rain today and tomorrow.
  3. The probability that it will rain today but not tomorrow.
  4. The probability that it either will rain today or tomorrow, but not both.

  • Solution
    • An important step in solving problems like this is to correctly convert them to probability language. This is especially useful when the problems become complex. For this problem, let's define $A$ as the event that it will rain today, and $B$ as the event that it will rain tomorrow. Then, let's summarize the available information:

      1. $P(A)=0.6$,
      2. $P(B)=0.5$,
      3. $P(A^c \cap B^c)=0.3$.

      Now that we have summarized the information, we can combine it with the probability rules to find the requested probabilities:

      1. The probability that it will rain today or tomorrow: this is $P(A \cup B)$. To find this we notice that

        \begin{align}
        P(A \cup B) &= 1-P\big((A \cup B)^c\big) &\textrm{(by Example 1.10)}\\
        &= 1-P(A^c \cap B^c) &\textrm{(by De Morgan's law)}\\
        &= 1-0.3\\
        &= 0.7.
        \end{align}

      2. The probability that it will rain today and tomorrow: this is $P(A \cap B)$. To find this we note that

        \begin{align}
        P(A \cap B) &= P(A)+P(B)-P(A \cup B) &\textrm{(by Example 1.10)}\\
        &= 0.6+0.5-0.7\\
        &= 0.4.
        \end{align}

      3. The probability that it will rain today but not tomorrow: this is $P(A \cap B^c)$.

        \begin{align}
        P(A \cap B^c) &= P(A-B)\\
        &= P(A)-P(A \cap B) &\textrm{(by Example 1.10)}\\
        &= 0.6-0.4\\
        &= 0.2.
        \end{align}

      4. The probability that it either will rain today or tomorrow, but not both: this is $P(A-B)+P(B-A)$. We have already found $P(A-B)=0.2$. Similarly, we can find $P(B-A)$:

        \begin{align}
        P(B-A) &= P(B)-P(B \cap A) &\textrm{(by Example 1.10)}\\
        &= 0.5-0.4\\
        &= 0.1.
        \end{align}

        Thus, $$P(A-B)+P(B-A)=0.2+0.1=0.3.$$
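The chain of deductions in this solution can be written as a short calculation. The sketch below starts from the three given facts and derives the four answers in the same order as the solution; the variable names are our own labels for the quantities involved.

```python
# Given facts, in probability language:
# P(A) = 0.6 (rain today), P(B) = 0.5 (rain tomorrow),
# P(A^c and B^c) = 0.3 (no rain on either day).
P_A, P_B, P_AcBc = 0.6, 0.5, 0.3

P_union = 1 - P_AcBc                           # De Morgan's law + complement rule
P_both = P_A + P_B - P_union                   # inclusion-exclusion for n = 2
P_today_only = P_A - P_both                    # difference rule: P(A - B)
P_exactly_one = P_today_only + (P_B - P_both)  # P(A - B) + P(B - A)

print(round(P_union, 2), round(P_both, 2),
      round(P_today_only, 2), round(P_exactly_one, 2))
```

Each line applies exactly one of the rules proved in the previous example, which mirrors how the hand derivation proceeds.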


In this problem, it is stated that there is a $50$ percent chance that it will rain tomorrow. You might have heard such a statement on the news. A more interesting question is how the number $50$ is obtained. This is an example of a real-life problem in which tools from probability and statistics are used. As you read more chapters of the book, you will learn many of these tools, which are frequently used in practice.

Inclusion-Exclusion Principle:

The formula $P(A \cup B)=P(A)+P(B)-P(A \cap B)$ that we proved in Example 1.10 is a simple form of the inclusion-exclusion principle. We can extend it to the union of three or more sets.

Inclusion-exclusion principle:

  • $P(A \cup B )= P(A)+P(B)-P(A \cap B)$,

  • $P(A \cup B \cup C) = P(A)+P(B)+P(C)-P(A \cap B)-P(A \cap C)-P(B \cap C)+P(A \cap B \cap C)$.


Generally for $n$ events $A_1, A_2,\cdots,A_n$, we have


$$P\biggl(\bigcup_{i=1}^n A_i\biggr) = \sum_{i=1}^n P(A_i) - \sum_{i < j} P(A_i \cap A_j) + \sum_{i < j < k} P(A_i \cap A_j \cap A_k) - \cdots + (-1)^{n-1}\, P\biggl(\bigcap_{i=1}^n A_i\biggr).$$
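The general formula can be verified on a finite sample space with equally likely outcomes: the right-hand side is an alternating sum over all non-empty subcollections of the events. The sketch below checks it for three arbitrary illustrative events on a $12$-outcome sample space.

```python
from fractions import Fraction
from functools import reduce
from itertools import combinations

# Equally likely outcomes, so P(E) = |E|/|S|.
# The three events are arbitrary illustrative sets.
S = set(range(12))
events = [{0, 1, 2, 3, 4}, {3, 4, 5, 6}, {0, 4, 6, 7, 8}]

def P(E):
    return Fraction(len(E), len(S))

# Right-hand side: alternating sum over all non-empty subcollections,
# with sign (-1)^(k-1) for intersections of k events.
n = len(events)
rhs = Fraction(0)
for k in range(1, n + 1):
    for combo in combinations(events, k):
        rhs += (-1) ** (k - 1) * P(reduce(set.intersection, combo))

lhs = P(set.union(*events))
assert lhs == rhs
print(lhs)  # P of the union, computed directly
```

The same loop works unchanged for any number of events, since it enumerates all $2^n-1$ non-empty subcollections.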

