20  Joint Distributions

20.1 Joint probability mass functions

Example 20.1 Flip a fair coin four times and record the results in order, e.g. HHTT means two heads followed by two tails. We’re interested in the proportion of the flips immediately following a H that result in H. In particular, what is the expected value of this proportion? (Make a guess before proceeding.)

For example, if the sequence is HHTT then there are two flips which follow H (the second and third flips) of which 1 results in H (the second flip), so the proportion of interest is 1/2. We cannot measure this proportion if no flips follow a H, i.e. the outcome is either TTTT or TTTH; in these cases, we would discard the outcome and try again.

Let:

  • \(Z\) be the number of flips immediately following H (e.g., \(Z = 2\) for HHTT)
  • \(Y\) be the number of flips immediately following H that result in H (e.g., \(Y = 1\) for HHTT)
  • \(X = Y/Z\) be the proportion of flips immediately following H that result in H (e.g. \(X = 1/2\) for HHTT)
  1. Make a table of all possible outcomes and the corresponding values of \(Z, Y, X\).




  2. Make a two-way table representing the joint probability mass function of \(Y\) and \(Z\).




  3. Make a table specifying the pmf of \(X\).




  4. Compute \(\text{E}(X)\). Surprised? Is \(\text{E}(Y / Z)\) equal to \(\text{E}(Y) / \text{E}(Z)\)? (See the enumeration sketch after this list.)




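Since all 16 outcomes are equally likely, parts 1–4 can be checked by brute-force enumeration. Below is a minimal Python sketch (the variable names are illustrative); it conditions on \(Z > 0\) by discarding TTTT and TTTH, which leaves 14 equally likely outcomes.

```python
from fractions import Fraction
from itertools import product

# Enumerate all 16 equally likely sequences of four fair-coin flips.
outcomes = []
for seq in product("HT", repeat=4):
    # Z: flips immediately following an H, i.e. an H among flips 1-3
    z = sum(seq[i] == "H" for i in range(3))
    # Y: how many of those following flips are themselves H
    y = sum(seq[i] == "H" and seq[i + 1] == "H" for i in range(3))
    if z > 0:  # discard TTTT and TTTH, where X = Y/Z is undefined
        outcomes.append((z, y, Fraction(y, z)))

n = len(outcomes)  # 14 equally likely remaining outcomes
print("E(X)      =", sum(x for _, _, x in outcomes) / n)        # 17/42, not 1/2
print("E(Y)/E(Z) =", Fraction(sum(y for _, y, _ in outcomes),
                              sum(z for z, _, _ in outcomes)))  # 1/2
```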
  • The joint probability mass function (pmf) of two discrete random variables \((X,Y)\) defined on a probability space with probability measure \(\text{P}\) is the function \(p_{X,Y}:\mathbb{R}^2\to[0,1]\) defined by \[ p_{X,Y}(x,y) = \text{P}(X = x, Y = y) \qquad \text{ for all } x,y \]
  • Remember to specify the possible \((x, y)\) pairs when defining a joint pmf.

Example 20.2 Let \(X\) be the number of home runs hit by the home team, and \(Y\) the number of home runs hit by the away team in a randomly selected Major League Baseball game. Suppose that \(X\) and \(Y\) have joint pmf

\[ p_{X, Y}(x, y) = \begin{cases} e^{-2.3}\frac{1.2^{x}1.1^{y}}{x!y!}, & x = 0, 1, 2, \ldots; y = 0, 1, 2, \ldots,\\ 0, & \text{otherwise.} \end{cases} \]

  1. Compute and interpret the probability that the home team hits 2 home runs and the away team hits 1 home run.




  2. Construct a two-way table representation of the joint pmf (you can use software or a spreadsheet).




  3. Compute and interpret the probability that each team hits at most 3 home runs.




  4. Compute and interpret the probability that both teams combine to hit a total of 3 home runs.




  5. Compute and interpret the probability that the home team and the away team hit the same number of home runs.




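To check these answers numerically, here is a sketch that evaluates the joint pmf on a truncated grid; the grid size of 21 is an arbitrary cutoff, justified by the fact that the pmf puts negligible mass beyond 20 home runs. The array `p` itself serves as the two-way table in part 2.

```python
import numpy as np
from math import exp, factorial

# Evaluate the joint pmf on a truncated grid; rows = x, columns = y.
n = 21  # mass beyond 20 home runs is negligible
p = np.array([[exp(-2.3) * 1.2**x * 1.1**y / (factorial(x) * factorial(y))
               for y in range(n)] for x in range(n)])
print(p.sum())              # ~1: sanity check on the truncation

print(p[2, 1])              # 1. P(X = 2, Y = 1)
# 2. the array p itself is the two-way table
print(p[:4, :4].sum())      # 3. P(X <= 3, Y <= 3)
x, y = np.indices(p.shape)
print(p[x + y == 3].sum())  # 4. P(X + Y = 3)
print(p[x == y].sum())      # 5. P(X = Y)
```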
  • Recall that we can obtain marginal distributions from a joint distribution.
  • Marginal pmfs are determined by the joint pmf via the law of total probability.
  • If we imagine a plot with blocks whose heights represent the joint probabilities, the marginal probability of a particular value of one variable can be obtained by “stacking” all the blocks corresponding to that value.

\[\begin{align*} p_X(x) & = \sum_y p_{X,Y}(x,y) & & \text{a function of $x$ only} \\ p_Y(y) & = \sum_x p_{X,Y}(x,y) & & \text{a function of $y$ only} \\ \end{align*}\]
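In code, these sums are simply the row and column sums of a two-way pmf table. A minimal sketch, assuming the joint pmf is stored as a NumPy array with rows indexed by \(x\) and columns by \(y\) (the table below is a made-up toy example):

```python
import numpy as np

# Toy joint pmf table: rows indexed by x, columns indexed by y
p = np.array([[0.10, 0.20],
              [0.30, 0.40]])

p_X = p.sum(axis=1)  # sum over y for each x: marginal pmf of X
p_Y = p.sum(axis=0)  # sum over x for each y: marginal pmf of Y
print(p_X)  # [0.3 0.7]
print(p_Y)  # [0.4 0.6]
```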

Example 20.3 Continuing Example 20.2. Let \(X\) be the number of home runs hit by the home team, and \(Y\) the number of home runs hit by the away team in a randomly selected Major League Baseball game. Suppose that \(X\) and \(Y\) have joint pmf

\[ p_{X, Y}(x, y) = \begin{cases} e^{-2.3}\frac{1.2^{x}1.1^{y}}{x!y!}, & x = 0, 1, 2, \ldots; y = 0, 1, 2, \ldots,\\ 0, & \text{otherwise.} \end{cases} \]

  1. Compute and interpret the probability that the home team hits 2 home runs.




  2. Find the marginal pmf of \(X\), and identify the marginal distribution by name.




  3. Compute and interpret the probability that the away team hits 1 home run.




  4. Find the marginal pmf of \(Y\), and identify the marginal distribution by name.




  5. Use the joint pmf to compute the probability that the home team hits 2 home runs and the away team hits 1 home run. How does it relate to the marginal probabilities from the previous parts? What does this imply about the events \(\{X = 2\}\) and \(\{Y = 1\}\)?




  6. How does the joint pmf relate to the marginal pmfs from the previous parts? What do you think this implies about \(X\) and \(Y\)?




  7. In light of the previous part, how could you use spinners to simulate an \((X, Y)\) pair? (See the sketch below.)




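The following sketch checks these answers numerically: it compares the row and column sums of the truncated grid from the Example 20.2 sketch against the pmf of a conjectured named distribution, checks whether the joint pmf factors, and finishes with the “spinner” simulation of part 7, one independent draw per coordinate. The grid cutoff and the random seed are arbitrary choices.

```python
import numpy as np
from math import exp, factorial
from scipy import stats

# Rebuild the truncated joint pmf grid from the Example 20.2 sketch
n = 21
p = np.array([[exp(-2.3) * 1.2**x * 1.1**y / (factorial(x) * factorial(y))
               for y in range(n)] for x in range(n)])

# 2., 4. marginal pmfs vs a conjectured named distribution
p_X, p_Y = p.sum(axis=1), p.sum(axis=0)
k = np.arange(n)
print(np.allclose(p_X, stats.poisson.pmf(k, 1.2)))  # True
print(np.allclose(p_Y, stats.poisson.pmf(k, 1.1)))  # True

# 5./6. the joint pmf factors into the product of the marginals
print(np.allclose(p, np.outer(p_X, p_Y)))           # True
print(p[2, 1], p_X[2] * p_Y[1])                     # equal

# 7. two independent "spinners", one per coordinate
rng = np.random.default_rng(0)
xs, ys = rng.poisson(1.2, 10_000), rng.poisson(1.1, 10_000)
print(np.mean((xs == 2) & (ys == 1)))               # ~ p[2, 1]
```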
20.2 Joint probability density functions

  • The joint distribution of two continuous random variables can be specified by a joint pdf, a surface specifying the density of \((x, y)\) pairs.
  • The probability that the \((X,Y)\) pair of random variables lies in some region is the volume under the joint pdf surface over the region.

Example 20.4 Suppose that

  • \(X\) has a Normal(0, 1) distribution
  • \(U\) has a Uniform(-2, 2) distribution
  • \(X\) and \(U\) are generated independently
  • \(Y = UX\).

Sketch a plot representing the joint pdf of \(X\) and \(Y\). Be sure to label axes with appropriate values.






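If the shape of this joint pdf is hard to picture, simulation can help: generate many \((X, Y)\) pairs and view a two-dimensional histogram, whose bin heights approximate the joint density surface. A sketch (sample size, bins, and plotting ranges are arbitrary choices):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
n = 100_000
x = rng.normal(0, 1, n)    # X ~ Normal(0, 1)
u = rng.uniform(-2, 2, n)  # U ~ Uniform(-2, 2), independent of X
y = u * x                  # Y = UX

# Bin heights of a 2D histogram approximate the joint pdf surface
plt.hist2d(x, y, bins=100, range=[[-4, 4], [-6, 6]], density=True)
plt.xlabel("x")
plt.ylabel("y")
plt.colorbar(label="approximate joint density")
plt.show()
```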
  • The joint probability density function (pdf) of two continuous random variables \((X,Y)\) defined on a probability space with probability measure \(\text{P}\) is the function \(f_{X,Y}\) which satisfies, for any region \(S\), \[ \text{P}[(X,Y)\in S] = \iint\limits_{S} f_{X,Y}(x,y)\, dx\, dy \]
  • A joint pdf is a surface with height \(f_{X,Y}(x,y)\) at \((x, y)\).
  • The probability that the \((X,Y)\) pair of random variables lies in the region \(A\) is the volume under the pdf surface over the region \(A\).
  • The height of the density surface at a particular \((x,y)\) pair is related to the probability that \((X, Y)\) takes a value “close to” \((x, y)\): \[ \text{P}(x-\epsilon/2<X < x+\epsilon/2,\; y-\epsilon/2<Y < y+\epsilon/2) \approx \epsilon^2 f_{X, Y}(x, y) \qquad \text{for small $\epsilon$} \]
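As a quick numerical illustration of this approximation, take the coordinates to be two independent standard Normal random variables (an assumption made only for this check, because the exact rectangle probability is then a product of Normal cdf differences):

```python
from scipy.stats import norm

# Check: P(rectangle) vs eps^2 * f(x, y), for independent standard Normals
x, y, eps = 1.0, 0.5, 0.01

# Exact rectangle probability: product of cdf differences (by independence)
rect = ((norm.cdf(x + eps / 2) - norm.cdf(x - eps / 2))
        * (norm.cdf(y + eps / 2) - norm.cdf(y - eps / 2)))

approx = eps**2 * norm.pdf(x) * norm.pdf(y)
print(rect, approx)  # nearly equal for small eps
```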

Example 20.5 Let \(X\) be the time (hours), starting now, until the next earthquake (of any magnitude) occurs in SoCal, and let \(Y\) be the time (hours), starting now, until the second earthquake from now occurs (so that \(Y-X\) is the time between the first and second earthquake). Suppose that \(X\) and \(Y\) are continuous RVs with joint pdf

\[ f_{X, Y}(x, y) = \begin{cases} 4e^{-2y}, & 0 < x< y < \infty,\\ 0, & \text{otherwise} \end{cases} \]

  1. Is the joint pdf a function of both \(x\) and \(y\)? How?




  2. Why is \(f_{X, Y}(x, y)\) equal to 0 if \(y < x\)?




  3. Sketch a plot of the joint pdf. What does its shape say about the distribution of \(X\) and \(Y\) in this context?




  4. Set up the integral to find \(\text{P}(X > 0.5, Y < 1)\).




  5. Sketch a plot of the marginal pdf of \(X\). Be sure to specify possible values.




  6. Find the marginal pdf of \(X\) at \(x=0.5\).




  7. Find the marginal pdf of \(X\). Be sure to specify possible values. (Can you identify the marginal distribution of \(X\) by name?)




  8. Compute and interpret \(\text{P}(X > 0.5)\).




  9. Sketch the marginal pdf of \(Y\). Be sure to specify possible values.




  10. Find the marginal pdf of \(Y\) at \(y=1.5\).




  11. Find the marginal pdf of \(Y\). Be sure to specify possible values of \(Y\).




  12. Compute and interpret \(\text{P}(Y < 1)\).




  13. Is \(\text{P}(X > 0.5, Y < 1)\) equal to the product of \(\text{P}(X > 0.5)\) and \(\text{P}(Y < 1)\)? Why?




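All of the integrals above can be checked numerically. A sketch using `scipy.integrate`; note how the support constraint \(0 < x < y\) enters only through the limits of integration:

```python
import numpy as np
from scipy import integrate

# Joint pdf: 4e^{-2y} on 0 < x < y (zero otherwise)
f = lambda x, y: 4 * np.exp(-2 * y)

# dblquad's integrand takes (y, x) with y the inner variable;
# the support constraint x < y becomes the inner (y) limits.
total, _ = integrate.dblquad(lambda y, x: f(x, y),
                             0, np.inf,                      # x limits
                             lambda x: x, lambda x: np.inf)  # y limits
print(total)  # ~1: f is a valid joint pdf

# 4. P(X > 0.5, Y < 1): 0.5 < x < 1 and x < y < 1
p_joint, _ = integrate.dblquad(lambda y, x: f(x, y),
                               0.5, 1, lambda x: x, lambda x: 1)

# 6., 10. marginal pdfs by integrating out the other variable
f_X = lambda x: integrate.quad(lambda y: f(x, y), x, np.inf)[0]
f_Y = lambda y: integrate.quad(lambda x: f(x, y), 0, y)[0]
print(f_X(0.5), f_Y(1.5))  # compare with your closed-form answers

# 8., 12., 13. marginal probabilities vs the joint probability
p_x, _ = integrate.quad(f_X, 0.5, np.inf)
p_y, _ = integrate.quad(f_Y, 0, 1)
print(p_joint, p_x * p_y)  # not equal: X and Y are not independent
```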
  • The joint distribution is a distribution on \((X, Y)\) pairs. A mathematical expression of a joint distribution is a function of both values of \(X\) and values of \(Y\). Pay special attention to the possible values; the possible values of one variable might be restricted by the value of the other.
  • The marginal distribution of \(Y\) is a distribution on \(Y\) values only, regardless of the value of \(X\). A mathematical expression of a marginal distribution will have only values of the single variable in it; for example, an expression for the marginal distribution of \(Y\) will only have \(y\) in it (no \(x\), not even in the possible values).

20.3 Independence of random variables

  • Two random variables \(X\) and \(Y\) defined on a probability space with probability measure \(\text{P}\) are independent if \(\text{P}(X\le x, Y\le y) = \text{P}(X\le x)\text{P}(Y\le y)\) for all \(x, y\). That is, two random variables are independent if their joint cdf is the product of their marginal cdfs.
  • Random variables \(X\) and \(Y\) are independent if and only if the joint distribution factors into the product of the marginal distributions. The definition is in terms of cdfs, but analogous statements are true for pmfs and pdfs. \[\begin{align*} \text{Discrete RVs $X$ and $Y$} & \text{ are independent}\\ \Longleftrightarrow p_{X,Y}(x,y) & = p_X(x)p_Y(y) & & \text{for all $x, y$} \end{align*}\] \[\begin{align*} \text{Continuous RVs $X$ and $Y$} & \text{ are independent}\\ \Longleftrightarrow f_{X,Y}(x,y) & = f_X(x)f_Y(y) & & \text{for all $x,y$} \end{align*}\]
  • Random variables \(X\) and \(Y\) are independent if and only if their joint distribution can be factored into the product of a function of values of \(X\) alone and a function of values of \(Y\) alone. That is, \(X\) and \(Y\) are independent if and only if there exist functions \(g\) and \(h\) for which \[ f_{X,Y}(x,y) \propto g(x)h(y) \qquad \text{ for all $x$, $y$} \]
  • \(X\) and \(Y\) are independent if and only if the joint distribution factors into a product of the marginal distributions. The above result says that you can determine if that’s true without first finding the marginal distributions.
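For a discrete joint distribution stored as a two-way table, this criterion is easy to check numerically: independence holds exactly when the table equals the outer product of its row and column sums. A hypothetical helper (the function name and the toy tables below are illustrative):

```python
import numpy as np

def is_independent(p, tol=1e-12):
    """Check whether a joint pmf table equals the outer product of its marginals."""
    p_X, p_Y = p.sum(axis=1), p.sum(axis=0)
    return np.allclose(p, np.outer(p_X, p_Y), atol=tol)

print(is_independent(np.array([[0.1, 0.2], [0.2, 0.5]])))  # False
print(is_independent(np.outer([0.3, 0.7], [0.4, 0.6])))    # True
```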

Example 20.6 Let \(X\) and \(Y\) be continuous random variables with joint pdf

\[ f_{X, Y}(x, y) = e^{-x}, \qquad x>0,\; 0<y<1. \]

  1. Without doing any calculations, determine if \(X\) and \(Y\) are independent and find the marginal distributions.




  2. Sketch a plot of the joint pdf of \(X\) and \(Y\).




  3. Without integrating, find \(\text{P}(X<0.2, Y<0.4)\).
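As a numerical check for part 3, the following sketch compares direct double integration of the joint pdf over the rectangle with the product suggested by the factorization in part 1 (assuming `scipy` is available):

```python
import numpy as np
from scipy import integrate

# Joint pdf e^{-x} on x > 0, 0 < y < 1 factors as g(x)h(y)
f = lambda x, y: np.exp(-x)

# Direct double integration over the rectangle (y inner, x outer)
direct, _ = integrate.dblquad(lambda y, x: f(x, y),
                              0, 0.2, lambda x: 0, lambda x: 0.4)

# Product of marginal probabilities, valid because of independence:
# P(X < 0.2) for an Exponential(1) times P(Y < 0.4) for a Uniform(0, 1)
product = (1 - np.exp(-0.2)) * 0.4
print(direct, product)  # equal
```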