## 3.5 Independence

Recall that events \(A\) and \(B\) are **independent** if knowing whether or not one occurs does not change the probability of the other.
For events \(A\) and \(B\) (with \(0<\textrm{P}(A)<1\) and \(0<\textrm{P}(B)<1\)) the following are equivalent.
That is, if one is true then they all are true; if one is false, then they all are false.

\[\begin{align*} \text{$A$ and $B$} & \text{ are independent}\\ \textrm{P}(A \cap B) & = \textrm{P}(A)\textrm{P}(B)\\ \textrm{P}(A^c \cap B) & = \textrm{P}(A^c)\textrm{P}(B)\\ \textrm{P}(A \cap B^c) & = \textrm{P}(A)\textrm{P}(B^c)\\ \textrm{P}(A^c \cap B^c) & = \textrm{P}(A^c)\textrm{P}(B^c)\\ \textrm{P}(A|B) & = \textrm{P}(A)\\ \textrm{P}(A|B) & = \textrm{P}(A|B^c)\\ \textrm{P}(B|A) & = \textrm{P}(B)\\ \textrm{P}(B|A) & = \textrm{P}(B|A^c) \end{align*}\]

The presence of independence can greatly simplify computations of probabilities. But be careful to properly identify when events are independent, and when they’re not.
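As a numerical sanity check of the equivalences above, here is a short Python sketch with arbitrarily chosen illustrative values \(\textrm{P}(A)=0.5\) and \(\textrm{P}(B)=0.3\): once \(\textrm{P}(A\cap B)=\textrm{P}(A)\textrm{P}(B)\) holds, all of the other conditions hold too.

```python
# Arbitrary illustrative numbers: P(A) = 0.5, P(B) = 0.3, chosen so that
# P(A ∩ B) = P(A)P(B); the other conditions then hold automatically.
pA, pB = 0.5, 0.3
pAB = pA * pB                      # P(A ∩ B) = 0.15

pAc, pBc = 1 - pA, 1 - pB
pAcB = pB - pAB                    # P(A^c ∩ B)
pABc = pA - pAB                    # P(A ∩ B^c)
pAcBc = 1 - pA - pB + pAB          # P(A^c ∩ B^c)

checks = [
    abs(pAcB - pAc * pB) < 1e-12,          # P(A^c ∩ B) = P(A^c)P(B)
    abs(pABc - pA * pBc) < 1e-12,          # P(A ∩ B^c) = P(A)P(B^c)
    abs(pAcBc - pAc * pBc) < 1e-12,        # P(A^c ∩ B^c) = P(A^c)P(B^c)
    abs(pAB / pB - pA) < 1e-12,            # P(A|B) = P(A)
    abs(pAB / pB - pABc / pBc) < 1e-12,    # P(A|B) = P(A|B^c)
    abs(pAB / pA - pB) < 1e-12,            # P(B|A) = P(B)
    abs(pAB / pA - pAcB / pAc) < 1e-12,    # P(B|A) = P(B|A^c)
]
print(all(checks))  # True
```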

### 3.5.1 Interpreting independence

**Example 3.11 **Each of the three Venn diagrams below represents a sample space with 16 equally likely outcomes. Let \(A\) be the yellow `/` event, \(B\) the blue `\` event, and their intersection \(A\cap B\) the green \(\times\) event. Suppose that areas represent probabilities, so that for example \(\textrm{P}(A) = 4/16\).

In which of the scenarios are events \(A\) and \(B\) independent?

*Solution* to Example 3.11.

In each case, \(\textrm{P}(A)=4/16\). Condition on event \(B\), by zooming in on the blue slice, and see if \(\textrm{P}(A|B)\) is the same as \(\textrm{P}(A)\).

- Left: \(\textrm{P}(A|B)=0\neq 4/16 = \textrm{P}(A)\). Therefore, events \(A\) and \(B\) are not independent.
- Middle: \(\textrm{P}(A|B) = 2/4\neq 4/16 = \textrm{P}(A)\). Therefore, events \(A\) and \(B\) are not independent.
- Right: \(\textrm{P}(A|B) = 1/4= 4/16 = \textrm{P}(A)\). Therefore, events \(A\) and \(B\) are independent. The *ratio of yellow to total* is the same as the *ratio of the green part of blue to blue*. If we zoom into the blue part of the picture (slice) and then resize it to the size of the original picture (renormalize), then the green part takes up 1/4 of the area, just as the yellow part did in the original picture.

Do not confuse “disjoint” with “independent”. Disjoint means two events do not “overlap”. Independence means two events *“overlap in just the right way”*. You can pretty much forget “disjoint” exists; you will naturally apply the addition rule for disjoint events correctly without even thinking about it. Independence is much more important and useful, but also requires more care.

**Example 3.12 **Roll two fair six-sided dice, one green and one gold. There are 36 total possible outcomes (roll on green, roll on gold), all equally likely. Consider the event \(E=\{\text{the green die lands on 1}\}\).
Answer the following questions by computing and comparing appropriate probabilities.

- Consider \(A=\{\text{the gold die lands on 6}\}\). Are \(A\) and \(E\) independent?
- Consider \(B=\{\text{the sum of the dice is 2}\}\). Are \(B\) and \(E\) independent?
- Consider \(C=\{\text{the sum of the dice is 7}\}\). Are \(C\) and \(E\) independent?

*Solution* to Example 3.12.

\(\textrm{P}(E)=6/36=1/6\) since there are six pairs of rolls which satisfy event \(E\): \(E=\{(1, 1), (1, 2), (1, 3), (1, 4), (1, 5), (1, 6)\}\).

- There are 6 outcomes which satisfy event \(A=\{(1, 6), (2, 6), (3, 6), (4, 6), (5, 6), (6, 6)\}\), all equally likely, only one, (1, 6), of which also satisfies event \(E\). \(\textrm{P}(E|A) = 1/6 = 6/36 = \textrm{P}(E)\), so events \(A\) and \(E\) are independent. The ratio of the \(E\) part of \(A\) to \(A\) is equal to the ratio of \(E\) to the sample space.
- There is only 1 outcome which satisfies event \(B=\{(1, 1)\}\), and it also satisfies event \(E\). \(\textrm{P}(E|B) = 1 \neq 6/36 = \textrm{P}(E)\), so events \(B\) and \(E\) are not independent. If you know the sum of the dice is 2, then the green die must have landed on 1.
- There are 6 outcomes which satisfy event \(C=\{(1, 6), (2, 5), (3, 4), (4, 3), (5, 2), (6, 1)\}\), all equally likely, only one, (1, 6), of which also satisfies event \(E\). \(\textrm{P}(E|C) = 1/6 = 6/36 = \textrm{P}(E)\), so events \(C\) and \(E\) are independent. The ratio of the \(E\) part of \(C\) to \(C\) is the ratio of \(E\) to the sample space.
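The dice calculations above can be verified by enumerating all 36 equally likely outcomes. A minimal Python sketch using exact fractions (the event definitions simply mirror the solution):

```python
from fractions import Fraction

# All 36 equally likely (green, gold) outcomes
outcomes = [(g, d) for g in range(1, 7) for d in range(1, 7)]

def prob(event):
    """Probability of an event (a predicate on outcomes) under equal likelihood."""
    return Fraction(sum(event(o) for o in outcomes), len(outcomes))

E = lambda o: o[0] == 1            # green die lands on 1
A = lambda o: o[1] == 6            # gold die lands on 6
B = lambda o: o[0] + o[1] == 2     # sum of the dice is 2
C = lambda o: o[0] + o[1] == 7     # sum of the dice is 7

def independent(X, Y):
    """Check the multiplication condition P(X ∩ Y) = P(X)P(Y) exactly."""
    return prob(lambda o: X(o) and Y(o)) == prob(X) * prob(Y)

print(independent(A, E))  # True
print(independent(B, E))  # False
print(independent(C, E))  # True
```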

Independence concerns whether or not the occurrence of one event affects the *probability* of the other. Conditioning involves slicing and renormalizing; independence concerns whether the renormalized slice matches the original picture. Given two events it is not always obvious whether or not they are independent. When there is any doubt, be sure to check directly if one of the equivalent conditions for independence is true (that is, directly compute the left side and the right side and see if they’re equal).

Independence is often a reasonable assumption based on the physical properties of the random phenomenon. But remember that it is an *assumption*, which might or might not match reality. Be sure to make a distinction between *assumption* and *observation*.

For example, flip a coin some number of times. It might be reasonable to assume the coin is fair and flips are independent. In this case, the probability that the next flip lands on heads is 1/2 regardless of what you observed on the previous flips. However, if you flip a coin twenty times and it lands on heads each time, this might cast doubt on your assumption that the coin is fair.

**Example 3.13 **You have just been elected president (congratulations!) and you need to choose one of four people to sing the national anthem at your inauguration: Alicia, Ariana, Beyonce, or Billie.
You write their names on some cards — *each name on possibly a different number of cards* — shuffle the cards, and draw one.
Let \(A\) be the event that either Alicia or Ariana is selected, and \(B\) be the event that either Alicia or Beyonce is selected.

The following questions ask you to specify probability models satisfying different conditions. You can specify the model by identifying how many cards each person’s name is written on. For each model, find the probabilities of \(A\), \(B\), and \(A\cap B\), and verify whether or not events \(A\) and \(B\) are independent according to the model.

- Specify a probability model according to which the events \(A\) and \(B\) are independent.
- Specify a different probability model according to which the events \(A\) and \(B\) are independent.
- Specify a probability model according to which the events \(A\) and \(B\) are not independent.

*Solution* to Example 3.13.

Note that \(A \cap B\) is the event that Alicia is selected.

Write each person’s name on exactly one card, so the 4 outcomes are equally likely. Let \(\textrm{P}\) represent this probability measure. Then \(\textrm{P}(A \cap B) = 1/4 = (2/4)(2/4)=\textrm{P}(A)\textrm{P}(B)\), so \(A\) and \(B\) are independent.

The previous part involves a situation where \((1/2)(1/2)=1/4=(2/4)(2/4)\). We try to construct a situation where \((1/3)(1/3)=1/9=(3/9)(3/9)\). Suppose there are 9 cards, with Alicia on 1, Ariana and Beyonce on 2 each, and Billie on 4.

| Outcome | Alicia | Ariana | Beyonce | Billie |
|---|---|---|---|---|
| Number of cards | 1 | 2 | 2 | 4 |
| Probability | 1/9 | 2/9 | 2/9 | 4/9 |

Let \(\textrm{Q}\) represent this probability measure. Then \(\textrm{Q}(A \cap B) = 1/9 = (3/9)(3/9)=\textrm{Q}(A)\textrm{Q}(B)\), so events \(A\) and \(B\) are independent. Elaborating,

- There are 3 cards that satisfy \(A\) and 6 that don’t, so \(A\) is 3/6 = 1/2 times as likely to occur as not.
- If \(B\) occurs, then it’s either Alicia (satisfies \(A\), 1 card) or Beyonce (does not satisfy \(A\), 2 cards), so given that \(B\) occurs, \(A\) is 1/2 times as likely to occur as not.
- If \(B\) does not occur, then it’s either Ariana (satisfies \(A\), 2 cards) or Billie (does not satisfy \(A\), 4 cards), so given that \(B\) does not occur, \(A\) is 2/4 = 1/2 times as likely to occur as not.

Knowing whether or not \(B\) occurs doesn’t change the chance of \(A\) occurring, so \(A\) and \(B\) are independent according to this probability model.

Independence requires probabilities to overlap in just the right way. Aside from equally likely situations, if we blindly write down four numbers that sum to 1 we will probably not luck into a probability measure where the events are independent. For example,

| Outcome | Alicia | Ariana | Beyonce | Billie |
|---|---|---|---|---|
| Number of cards | 1 | 2 | 3 | 4 |
| Probability | 0.1 | 0.2 | 0.3 | 0.4 |

Let \(\tilde{\textrm{Q}}\) represent this probability measure. Then \(\tilde{\textrm{Q}}(A \cap B) = 0.1 \neq (0.3)(0.4)=\tilde{\textrm{Q}}(A)\tilde{\textrm{Q}}(B)\), so events \(A\) and \(B\) are not independent. Elaborating,

- There are 3 cards that satisfy \(A\) and 7 that don’t, so \(A\) is 3/7 times as likely to occur as not.
- If \(B\) occurs, then it’s either Alicia (satisfies \(A\), 1 card) or Beyonce (does not satisfy \(A\), 3 cards), so given that \(B\) occurs, \(A\) is 1/3 times as likely to occur as not.
- If \(B\) does not occur, then it’s either Ariana (satisfies \(A\), 2 cards) or Billie (does not satisfy \(A\), 4 cards), so given that \(B\) does not occur, \(A\) is 2/4 = 1/2 times as likely to occur as not.

Knowing whether or not \(B\) occurs changes the chance of \(A\) occurring, so \(A\) and \(B\) are not independent according to this probability model.
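The three card models can be checked with a small Python sketch; `check` (a name of our own choosing) computes \(\textrm{P}(A)\), \(\textrm{P}(B)\), \(\textrm{P}(A\cap B)\) from the card counts and tests the multiplication condition exactly:

```python
from fractions import Fraction

def check(cards):
    """cards: dict of name -> number of cards.
    Returns (P(A), P(B), P(A ∩ B), independent?)."""
    total = sum(cards.values())
    P = {name: Fraction(n, total) for name, n in cards.items()}
    pA = P["Alicia"] + P["Ariana"]    # A = Alicia or Ariana selected
    pB = P["Alicia"] + P["Beyonce"]   # B = Alicia or Beyonce selected
    pAB = P["Alicia"]                 # A ∩ B = Alicia selected
    return pA, pB, pAB, pAB == pA * pB

# Equally likely cards: independent (last entry True)
print(check({"Alicia": 1, "Ariana": 1, "Beyonce": 1, "Billie": 1}))
# The 1/2/2/4 model: also independent
print(check({"Alicia": 1, "Ariana": 2, "Beyonce": 2, "Billie": 4}))
# The 1/2/3/4 model: not independent (last entry False)
print(check({"Alicia": 1, "Ariana": 2, "Beyonce": 3, "Billie": 4}))
```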

Remember, independence is a statement about probabilities, not outcomes themselves. Given two events it is not always obvious whether or not they are independent.

Independence depends on the underlying probability measure. Events that are independent under one probability measure might not be independent under another.

The probability measure represents all the underlying assumptions about the random phenomenon. Independence is often assumed. Whether or not independence is a valid assumption depends on the underlying random phenomenon.

**Example 3.14 **Flip a fair coin twice. Let

- \(A\) be the event that the first flip lands on heads
- \(B\) be the event that the second flip lands on heads,
- \(C\) be the event that both flips land on the same side.

- Are the two events \(A\) and \(B\) independent?
- Are the two events \(A\) and \(C\) independent?
- Are the two events \(B\) and \(C\) independent?
- Are the three events \(A\), \(B\), and \(C\) independent?

*Solution* to Example 3.14.

There are four equally likely outcomes \(\{HH, HT, TH, TT\}\).

- \(A = \{HH, HT\}\), so \(\textrm{P}(A) = 2/4\)
- \(B = \{HH, TH\}\), so \(\textrm{P}(B) = 2/4\)
- \(C = \{HH, TT\}\), so \(\textrm{P}(C) = 2/4\)

- Yes, events \(A\) and \(B\) are independent. \(A\cap B=\{HH\}\), \(\textrm{P}(A\cap B)=1/4\), and \(\textrm{P}(A\cap B)=\textrm{P}(A)\textrm{P}(B)\).
- Yes, events \(A\) and \(C\) are independent. \(A\cap C=\{HH\}\), \(\textrm{P}(A\cap C)=1/4\), and \(\textrm{P}(A\cap C)=\textrm{P}(A)\textrm{P}(C)\).
- Yes, events \(B\) and \(C\) are independent. \(B\cap C=\{HH\}\), \(\textrm{P}(B\cap C)=1/4\), and \(\textrm{P}(B\cap C)=\textrm{P}(B)\textrm{P}(C)\).
- No, even though each pair of events is independent, the collection of the three events is not. If \(A\) and \(B\) occur then we know event \(C\) occurs. That is, \(\textrm{P}(C|A \cap B)=1\) but \(\textrm{P}(C) = 1/2\).
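This pairwise-but-not-mutual independence can be confirmed by enumerating the four outcomes; a minimal sketch:

```python
from fractions import Fraction
from itertools import product

outcomes = ["".join(p) for p in product("HT", repeat=2)]  # HH, HT, TH, TT

def prob(event):
    """Probability of an event (a set of outcomes) under equal likelihood."""
    return Fraction(sum(o in event for o in outcomes), len(outcomes))

A = {"HH", "HT"}   # first flip lands on heads
B = {"HH", "TH"}   # second flip lands on heads
C = {"HH", "TT"}   # both flips land on the same side

# Each pair multiplies correctly...
assert prob(A & B) == prob(A) * prob(B)
assert prob(A & C) == prob(A) * prob(C)
assert prob(B & C) == prob(B) * prob(C)

# ...but the triple intersection does not, so the three are not independent.
print(prob(A & B & C), prob(A) * prob(B) * prob(C))  # 1/4 1/8
```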

Events \(A_1, A_2, A_3, \ldots\) are **independent** if:

- any pair of events \(A_i, A_j, (i \neq j)\) satisfies \(\textrm{P}(A_i\cap A_j)=\textrm{P}(A_i)\textrm{P}(A_j)\),
- and any triple of events \(A_i, A_j, A_k\) (distinct \(i,j,k\)) satisfies \(\textrm{P}(A_i\cap A_j\cap A_k)=\textrm{P}(A_i)\textrm{P}(A_j)\textrm{P}(A_k)\),
- and any quadruple of events satisfies \(\textrm{P}(A_i\cap A_j\cap A_k \cap A_m)=\textrm{P}(A_i)\textrm{P}(A_j)\textrm{P}(A_k)\textrm{P}(A_m)\),
- and so on.

Intuitively, a collection of events is independent if knowing whether or not any combination of the events in the collection occur does not change the probability of any other event in the collection.

In particular, three events \(A\), \(B\), \(C\) are independent if and only if *all* of the following are true
\[
{\scriptsize
\textrm{P}(A\cap B) = \textrm{P}(A)\textrm{P}(B), \quad \textrm{P}(A\cap C) = \textrm{P}(A)\textrm{P}(C),\quad \textrm{P}(B\cap C) = \textrm{P}(B)\textrm{P}(C),\quad \textrm{P}(A\cap B\cap C) = \textrm{P}(A)\textrm{P}(B)\textrm{P}(C)
}
\]
Equivalently, it can be shown that three events \(A\), \(B\), \(C\) are independent if and only if *all* of the following^{95} are true.

\[\begin{align*} & \textrm{P}(A| B) = \textrm{P}(A), \quad \textrm{P}(A| C) = \textrm{P}(A), \quad \textrm{P}(B|A) = \textrm{P}(B), \quad \textrm{P}(B| C) = \textrm{P}(B), \quad \textrm{P}(C|A) = \textrm{P}(C),\\ & \textrm{P}(C|B) = \textrm{P}(C), \quad \textrm{P}(A| B\cap C) = \textrm{P}(A), \quad \textrm{P}(B|A\cap C) = \textrm{P}(B), \quad \textrm{P}(C|A\cap B) = \textrm{P}(C) \end{align*}\]

### 3.5.2 Using independence

Remember the general multiplication rule involves successive conditional probabilities \[ \textrm{P}(A_1\cap A_2 \cap A_3 \cap \cdots \cap A_{n}) = \textrm{P}(A_1)\textrm{P}(A_2|A_1)\textrm{P}(A_3|A_1\cap A_2) \times \cdots \times \textrm{P}(A_n|A_1 \cap A_2 \cap \cdots \cap A_{n-1}) \] In problems with complicated relationships, determining joint and conditional probabilities can be difficult.

But when events are independent, the multiplication rule simplifies greatly. \[ \textrm{P}(A_1 \cap A_2 \cap A_3 \cap \cdots \cap A_n) = \textrm{P}(A_1)\textrm{P}(A_2)\textrm{P}(A_3)\cdots\textrm{P}(A_n) \quad \text{if $A_1, A_2, A_3, \ldots, A_n$ are independent} \]

When a problem involves independence, you will want to take advantage of it. Work with “and” events whenever possible in order to use the multiplication rule. For example, for problems involving “at least one” (an “or” event) take the complement to obtain “none” (an “and” event).

**Example 3.15 **A certain system consists of four identical components. Suppose that the probability that any particular component fails is 0.1, and failures of the components occur independently of each other. Find the probability that the system fails if:

- The components are connected in *parallel*: the system fails only if *all* of the components fail.
- The components are connected in *series*: the system fails whenever *at least one* of the components fails.
- Donny Don’t says the answer to the previous part is \(0.1 + 0.1 + 0.1 + 0.1 = 0.4\). Explain the error in Donny’s reasoning.

*Solution* to Example 3.15.

Let \(F\) be the event the system fails, and \(F_i\) the event that component \(i\) fails.

- If the components are connected in parallel, \(F=F_1 \cap F_2 \cap F_3 \cap F_4\). \[\begin{align*} \textrm{P}(F) & = \textrm{P}(F_1\cap F_2\cap F_3 \cap F_4) & & \\ & = \textrm{P}(F_1)\textrm{P}(F_2)\textrm{P}( F_3)\textrm{P}(F_4) & & \text{independence}\\ & = (0.1)(0.1)(0.1)(0.1) = 0.0001 \end{align*}\]
- “At least one fails” is an “or” event: \(F= F_1 \cup F_2 \cup F_3 \cup F_4\). With independence you want “and” events. Use the complement rule \[\begin{align*} \textrm{P}(F) & = \textrm{P}(\text{at least one fails}) & & \\ & = 1 - \textrm{P}(\text{none fails})\ & & \\ & = 1 - \textrm{P}(F_1^c\cap F_2^c \cap F_3^c\cap F_4^c) & & \\ & = 1 - \textrm{P}(F_1^c)\textrm{P}(F_2^c)\textrm{P}( F_3^c)\textrm{P}(F_4^c) & & \text{independence}\\ & = 1-(0.9)(0.9)(0.9)(0.9) = 0.3439 \end{align*}\]
- Donny is assuming that the component failures are *disjoint*, but that’s not true since multiple components could fail. Simply adding the probabilities double counts outcomes where multiple components fail. Don’t confuse “disjoint” and “independent”. It is almost always better to work with “and” events and multiplication rather than “or” events and addition.

The complement rule is often useful in probability problems that involve finding “the probability of at least one…,” which on the surface involves unions (OR). It is usually more convenient to use the complement rule and compute “the probability of at least one…” as one minus “the probability of none…”; the latter probability involves intersections (AND). Don’t forget to actually use the complement rule to get back to the original probability of interest! Subtracting a computed probability from 1 seems like a small computational step, but it’s an important one. A basketball player who has a 90% chance of successfully making a free throw is much different from a player who only has a 10% chance. Unfortunately, the complement rule step is often overlooked when doing probability calculations. It’s a good idea to ask yourself if the probability you are computing should be greater than or less than 50%. If your computed value seems to be on the wrong side of 50%, check your calculations to see if you have forgotten (or misapplied) the complement rule.

**Example 3.16 **In the Powerball lottery, a player picks five different whole numbers between 1 and 69, and another whole number between 1 and 26 that is called the Powerball. In the drawing, the 5 numbers are drawn without replacement from a “hopper” with balls labeled 1 through 69, but the Powerball is drawn from a separate hopper with balls labeled 1 through 26. The player wins the jackpot if both the first 5 numbers match those drawn, in any order, and the Powerball is a match.
Under this set up, there are 292,201,338 possible winning numbers.

- What is the probability the next winning number is 6-7-16-23-26, plus the Powerball number, 4?
- What is the probability the next winning number is 1-2-3-4-5, plus the Powerball number, 6?
- The Powerball drawing happens twice a week. Suppose you play the same Powerball number, twice a week, every week for over 50 years. Let’s say you purchase a ticket for 6000 drawings in total. What is the probability that you win at least once?
- Instead of playing for 50 years, you decide only to play one lottery, but you buy 6000 tickets, each with a different Powerball number. What is the probability that at least one of your tickets wins? How does this compare to the previous part? Why?
- Each ticket costs 2 dollars, but the jackpot changes from drawing to drawing. Suppose you buy 6000 tickets for a single drawing. How large does the jackpot need to be for your “expected” profit to be positive? To be $100,000? (We’re ignoring inflation, taxes, transaction costs, and any changes in the rules.)

*Solution* to Example 3.16.

- Each of the possible winning numbers is equally likely, so the probability is \(1/292,201,338\approx 3\times 10^{-9}\). See Example 1.15 and the discussion following it.
- Each of the possible winning numbers is equally likely, so this probability is also \(1/292,201,338\), the same as in the previous part. Remember, don’t confuse a general event with a specific outcome; see Example 1.14.
- The drawings are independent. The probability that you win at least once is \(1 - (1-1/292201338)^{6000}\approx 0.00002\). If many people each play 6000 drawings, about 2 in every 100,000 people will win at least once.
- If you play 6000 different numbers, the events that each different number wins are disjoint. So the probability you win at least once is \(6000/292201338\approx 0.00002\). This is about the same as the probability in the previous part. When you play 6000 different independent drawings, there is a possibility that you win multiple times, so the events of winning in each different drawing are not disjoint. But the probability of winning *multiple* lotteries is so small that it’s negligible. The probability of winning any single drawing is about 1 in 300 million. The probability of winning any two drawings is about 1 in 85 quadrillion.
- You pay $12,000 in total. Let \(w\) be the value of the jackpot. You win either 0 or \(w\), so your “expected” profit is \(w(6000/292201338)-12000\). But this is not what you expect in a single repetition. Rather, it is the profit you would expect to see on average in the long run. You probably won’t be buying 6000 tickets for a large number of drawings, so your long run average isn’t really relevant. But in any case, we must have \(w>584,402,676\) for the expected profit to be positive. Sometimes, but not often, the jackpot does get this high; even so, this just guarantees that your expected profit is positive. In order for your expected long run average profit to be greater than just $100,000, the jackpot must be over 5 billion dollars, and the largest jackpot ever was 1.6 billion. The moral: there are better things to do with $12,000.
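The Powerball arithmetic above fits in a few lines of Python (variable names are our own):

```python
p = 1 / 292_201_338   # probability any one ticket wins the jackpot
n = 6000

# Same number in 6000 independent drawings: at least one win
p_repeat = 1 - (1 - p) ** n
# 6000 different numbers in one drawing: disjoint events, just add
p_distinct = n * p
print(p_repeat, p_distinct)  # both approximately 2.05e-05

# Jackpot w needed for the expected profit w * p_distinct - cost to be positive
cost = 2 * n                          # $2 per ticket
breakeven = cost / p_distinct         # ≈ 584,402,676
target = (cost + 100_000) / p_distinct  # ≈ 5.45 billion for $100,000 profit
print(breakeven, target)
```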

**Example 3.17 **In the meeting problem, assume Regina’s arrival time \(R\) follows a Uniform(0, 60) distribution and Cady’s arrival time \(Y\) follows a Normal(30, 10) distribution, independently of each other.
(Remember, arrival times are measured in minutes after 12:00.)
Let \(T=\min(R, Y)\).
Compute and interpret \(\textrm{P}(T < 10)\).

*Solution* to Example 3.17.

\(\textrm{P}(T < 10)\) is the probability that the first person to arrive arrives before 12:10. The key is to notice that \(T<10\) whenever either \(R<10\) or \(Y<10\); that is, \(T<10\) if at least one person arrives before 12:10. To change this “or” event into an “and” event, consider the complement. \(T>10\) whenever they both arrive after 12:10. That is, \(\{T > 10\} = \{\min(R, Y)>10\} = \{R> 10, Y> 10\}\).

Since \(R\) follows a Uniform(0, 60) distribution, \(\textrm{P}(R > 10) = 50/60 = 0.833\).

Since \(Y\) follows a Normal(30, 10) distribution, \(\textrm{P}(Y > 10) \approx 0.975\). This follows from the empirical rule (see Section 2.10.2), since 10 is 2 standard deviations below the mean (\((10 - 30)/10 = -2\)).

Since Regina and Cady arrive independently of each other, the events \(\{R > 10\}\) and \(\{Y > 10 \}\) are independent, so \[ \textrm{P}(T > 10) = \textrm{P}(R>10, Y>10) \stackrel{\text{(indep.)}}{=} \textrm{P}(R > 10)\textrm{P}(Y > 10) = (0.833)(0.975) = 0.8125. \]

Finally, use the complement rule: \(\textrm{P}(T<10) = 1-\textrm{P}(T > 10) = 1 - 0.8125=0.1875\).

Under these assumptions the first person arrives before 12:10 on 18.75% of days in the long run. It is 4.33 times more likely that the first person arrives after 12:10 than before 12:10.
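The calculation can be sketched in Python using `math.erf` for the normal CDF. Note that the empirical rule’s 0.975 is a rounding; the exact normal tail is about 0.9772, which gives \(\textrm{P}(T<10)\approx 0.1856\), consistent with the 0.1875 above.

```python
from math import erf, sqrt

def norm_cdf(x, mu, sigma):
    """Normal CDF expressed via the error function."""
    return 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))

p_R = 50 / 60                    # P(R > 10) for R ~ Uniform(0, 60)
p_Y = 1 - norm_cdf(10, 30, 10)   # exact P(Y > 10) for Y ~ Normal(30, 10)

p_T = 1 - p_R * p_Y              # complement rule: P(T < 10)
print(round(p_Y, 4))  # 0.9772
print(round(p_T, 4))  # 0.1856
```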

**Example 3.18 **A very large petri dish starts with a single microorganism.
After one minute, the microorganism either splits into two with probability \(s\), or dies.
All subsequent microorganisms behave in the same way — splitting into two or dying after each minute — independently of each other.

- If \(s=3/4\), what is the probability that the population eventually goes extinct? (Hint: condition on the first step.)
- Find the probability that the population eventually goes extinct as a function of \(s\). For what values of \(s\) is the extinction probability 1?

*Solution* to Example 3.18.

Let \(E\) be eventual extinction. We want to find \(p=\textrm{P}(E)\).

Let \(D\) be the event that the original microorganism dies after the first minute; \(\textrm{P}(D) = 1/4\). Condition on the first “step” and use the law of total probability: \[ p = \textrm{P}(E) = \textrm{P}(E|D)\textrm{P}(D) + \textrm{P}(E|D^c)\textrm{P}(D^c) = (1)(1/4) + \textrm{P}(E|D^c)(3/4) \] \(\textrm{P}(E|D) = 1\) since if the first microorganism dies the population goes extinct immediately.

The key is to find an expression for \(\textrm{P}(E|D^c)\) in terms of \(p\). If the first microorganism does not die (\(D^c\)) there are 2 microorganisms at the start of the second minute; let’s call them Marge and Homer. In order for the population to go extinct, we need Marge and all her descendants to go extinct, and the same for Homer. But Marge is just a single microorganism, so the probability that her line eventually goes extinct is \(p\); similarly the probability that Homer’s line goes extinct is \(p\). Since all microorganisms behave independently, the probability that both Marge and Homer’s lines eventually go extinct is \((p)(p)=p^2\). That is, \(\textrm{P}(E | D^c) = p^2\).

Plugging into the equation above yields \[ p = (1)(1/4) + p^2(3/4) \]

Solve this equation (quadratic formula) to get \(p= 1/3\)^{96}. The probability that the population eventually goes extinct is 1/3. This microorganism population is 2 times more likely to survive forever than to go extinct!

For general \(s\), the process is the same as the above, with 3/4 replaced by \(s\): \[ p = (1)(1-s) + p^2s \] Solving gives two solutions, 1 and \(1/s - 1\). However, if \(s<1/2\) then \(1/s - 1 > 1\), which is not a valid probability. Therefore the probability of eventual extinction is 1 if \(s \le 1/2\), and \(1/s - 1<1\) if \(s > 1/2\).
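The extinction probability can be checked by simulation. In this minimal sketch, the population cap used to stop a run is our own practical cutoff (a population that large almost surely survives), not part of the model:

```python
import random

def extinction_prob(s):
    """Analytic answer: smaller root of p = (1 - s) + s * p**2."""
    return 1.0 if s <= 0.5 else 1.0 / s - 1.0

print(round(extinction_prob(0.75), 4))  # 0.3333
print(extinction_prob(0.4))             # 1.0

def simulate(s, reps=2_000, cap=1_000, seed=1):
    """Monte Carlo estimate; populations above `cap` are counted as surviving."""
    rng = random.Random(seed)
    extinct = 0
    for _ in range(reps):
        n = 1
        while 0 < n <= cap:
            # each organism independently splits into 2 (prob s) or dies
            n = sum(2 for _ in range(n) if rng.random() < s)
        extinct += (n == 0)
    return extinct / reps

print(simulate(0.75))  # close to 1/3
```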

Some of these conditions are redundant. For example, \(\textrm{P}(A|B)=\textrm{P}(A)\) if and only if \(\textrm{P}(B|A)=\textrm{P}(B)\) so technically only one of those conditions needs to be verified.↩︎

Technically, there are two solutions, 1 and \(1/3\). There are some technical justifications that can be made to show that the extinction probability is the smaller of the two solutions, but this is beyond our scope.↩︎