5.6 Linear combinations of random variables
Remember that in general
E(g(X)) ≠ g(E(X))
E(g(X, Y)) ≠ g(E(X), E(Y))
In this section we will introduce certain transformations of random variables for which the expected value of the transformation is the transformation of the expected value. We will also study variance of certain transformations of random variables.
5.6.1 Linear rescaling
A linear rescaling is a transformation of the form g(u)=a+bu. Recall that in Section 3.8.1 we observed, via simulation, that
- A linear rescaling of a random variable does not change the basic shape of its distribution, just the range of possible values.
- A linear rescaling transforms the mean in the same way the individual values are transformed.
- Adding a constant to a random variable does not affect its standard deviation.
- Multiplying a random variable by a constant multiplies its standard deviation by the absolute value of that constant.
Formally, if X is a random variable and a,b are non-random constants then
E(aX + b) = a E(X) + b
SD(aX + b) = |a| SD(X)
Var(aX + b) = a² Var(X)
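These properties are easy to check with a quick simulation; below is a minimal NumPy sketch (the Uniform(1, 4) distribution and the constants a and b are chosen just for illustration).

```python
import numpy as np

rng = np.random.default_rng(12345)

# Simulate many values of X; any distribution works, here Uniform(1, 4)
x = rng.uniform(1, 4, size=1_000_000)

a, b = 2, 5           # non-random constants
y = a * x + b         # the linear rescaling Y = aX + b

print(a * x.mean() + b, y.mean())   # E(aX + b) = a E(X) + b
print(abs(a) * x.std(), y.std())    # SD(aX + b) = |a| SD(X)
print(a**2 * x.var(), y.var())      # Var(aX + b) = a^2 Var(X)
```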
5.6.2 Linearity of expected value
Recall Example 4.15, in which U1 and U2 represented the results of two spins of the Uniform(1, 4) spinner and X = U1 + U2 was their sum.
- Find E(U1) and E(U2).
- Find E(X).
- How does E(X) relate to E(U1) and E(U2)? Suggest a simpler way of finding E(U1+U2).
Show/hide solution
- E(U1) = (1 + 4)/2 = 2.5 = E(U2).
- We found the pdf of X in Example 4.15. Since the pdf of X is symmetric about 5 we should have E(X) = 5, which integrating confirms: E(X) = ∫_2^5 x (x − 2)/9 dx + ∫_5^8 x (8 − x)/9 dx = 5.
- We see that E(U1+U2)=5=2.5+2.5=E(U1)+E(U2). Finding the expected value of each of U1 and U2 and adding these two numbers is much easier than finding the pdf of U1+U2 and then using the definition of expected value.
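If you want to check this example by simulation, here is a minimal NumPy sketch (assuming, as above, that U1 and U2 are independent Uniform(1, 4) spins).

```python
import numpy as np

rng = np.random.default_rng(12345)

# Two separate spins of the Uniform(1, 4) spinner
u1 = rng.uniform(1, 4, size=1_000_000)
u2 = rng.uniform(1, 4, size=1_000_000)

# Each average should be close to 2.5, and the average of the sums close to 5
print(u1.mean(), u2.mean(), (u1 + u2).mean())
```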
In the previous example, the values U1 and U2 came from separate spins so they were unrelated. What about the expected value of X+Y when X and Y are correlated? Recall the Colab activity (see also Section 3.9) in which you simulated pairs of SAT Math (X) and Reading (Y) scores from Bivariate Normal distributions with different correlations, and considered the sum T=X+Y and the difference D=X−Y. Did changing the correlation affect the expected value of T? Of D?
Show/hide solution
You should have observed that, yes, changing the correlation affected the distribution of T and D mainly by changing the degree of variability. However, you should have also observed that the expected value of T did not change as the correlation changed (after accounting for simulation margin of error). Similarly, the expected value of D did not change as the correlation changed.
Linearity of expected value. For any two random variables X and Y,
E(X + Y) = E(X) + E(Y)
That is, the expected value of the sum is the sum of expected values, regardless of how the random variables are related. Therefore, you only need to know the marginal distributions of X and Y to find the expected value of their sum. (But keep in mind that the distribution of X+Y will depend on the joint distribution of X and Y.)
Linearity of expected value follows from simple arithmetic properties of numbers. Whether in the short run or the long run,
Average of X + Y = Average of X + Average of Y
regardless of the joint distribution of X and Y. For example, for the two (X, Y) pairs (4, 3) and (2, 1),
Average of X + Y = ((4 + 3) + (2 + 1))/2 = (4 + 2)/2 + (3 + 1)/2 = Average of X + Average of Y.
A linear combination of two random variables X and Y is of the form aX + bY, where a and b are non-random constants. Combining properties of linear rescaling with linearity of expected value yields the expected value of a linear combination:
E(aX + bY) = a E(X) + b E(Y)
For example, E(X − Y) = E(X) − E(Y). The left side above represents the “long way”: find the distribution of aX + bY, which will depend on the joint distribution of X and Y, and then use the definition of expected value. The right side is the “short way”: find the expected values of X and Y, which only requires their marginal distributions, and plug those numbers into the transformation formula. Similar to LOTUS, linearity of expected value provides a way to find the expected value of certain random variables without first finding the distribution of the random variables.
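The following simulation sketch illustrates this (the Bivariate Normal setup uses the SAT score parameters from the Colab activity; the specific correlations are chosen just for illustration): whatever the correlation, the simulated averages of X + Y and X − Y stay the same.

```python
import numpy as np

rng = np.random.default_rng(12345)
mean = [527, 533]          # SAT Math and Reading means
sd_x, sd_y = 107, 100      # SAT Math and Reading SDs

for corr in [0.77, 0.0, -0.77]:
    cov = [[sd_x**2, corr * sd_x * sd_y],
           [corr * sd_x * sd_y, sd_y**2]]
    x, y = rng.multivariate_normal(mean, cov, size=1_000_000).T
    # E(X + Y) and E(X - Y) do not depend on the correlation
    print(corr, (x + y).mean(), (x - y).mean())
```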
Linearity of expected value extends naturally to more than two random variables.
Example 5.30 Recall the matching problem in Example 5.1. We showed that the expected value of the number of matches Y is E(Y)=1 when n=4. Now consider a general n: there are n rocks that are shuffled and placed uniformly at random in n spots with one rock per spot. Let Y be the number of matches. Can you find a general formula for E(Y)?
- How do you think E(Y) depends on n?
- Recall the indicator random variables from Example 2.29. Let I1 be the indicator that rock 1 is placed correctly in spot 1. Find E(I1).
- Let Ii be the indicator that rock i is placed correctly in spot i, i=1,…,n. Find E(Ii).
- What is the relationship between Y and I1,…,In?
- Find E(Y). Be amazed.
Show/hide solution
- There are two common guesses. (1) As n increases, there are more chances for a match, so maybe E(Y) increases with n. (2) But as n increases the chance that any particular rock goes in the correct spot decreases, so maybe E(Y) decreases with n. These considerations move E(Y) in opposite directions; how do they balance?
- Recall that the expected value of an indicator random variable is just the probability of the corresponding event. There are n rocks which are equally likely to be placed in spot 1, only 1 of which is correct. The probability that rock 1 is correctly placed in spot 1 is 1/n. That is, E(I1)=1/n.
- If the rocks are placed uniformly at random then no rock is more or less likely than any other to be placed in its correct spot, so the probability and expected value should be the same for all i. Given any spot i, any of the n rocks is equally likely to be placed in spot i, and only one of those is the correct rock, so P(Ii=1)=1/n, and E(Ii)=1/n.
- Recall Section 2.3.4. We can count the total number of matches by incrementally adding 1 to our counter each time rock i matches spot i for i=1,…,n. That is, the total number of matches is the sum of the indicator random variables: Y=I1+⋯+In.
- Use linearity of expected value: E(Y) = E(I1 + I2 + ⋯ + In) = E(I1) + E(I2) + ⋯ + E(In) = 1/n + 1/n + ⋯ + 1/n = n(1/n) = 1.
The answer to the previous problem is not an approximation: the expected value of the number of matches is equal to 1 for any n. We think that’s pretty amazing. (We’ll see some even more amazing results for this problem LATER.) Notice that we computed the expected value without first finding the distribution of Y.
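If you're skeptical, the simulation sketch below (which just shuffles n labeled rocks with a NumPy permutation and counts matches) confirms that the average number of matches is about 1 for any n.

```python
import numpy as np

rng = np.random.default_rng(12345)

def average_matches(n, reps=100_000):
    """Average number of rocks placed in their own spot over many random shuffles."""
    counts = [(rng.permutation(n) == np.arange(n)).sum() for _ in range(reps)]
    return np.mean(counts)

for n in [4, 10, 100]:
    print(n, average_matches(n))   # each close to 1
```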
Intuitively, if the rocks are placed in the spots uniformly at random, then the probability that rock i is placed in the correct spot should be the same for all the rocks, 1/n. But you might have said: “if rock 1 goes in spot 1, there are only n−1 rocks that can go in spot 2, so the probability that rock 2 goes in spot 2 is 1/(n−1).” That is true if rock 1 goes in spot 1. However, when computing the marginal probability that rock 2 goes in spot 2, we don’t know whether rock 1 went in spot 1 or not, so the probability needs to account for both cases. There is a difference between marginal/unconditional probability and conditional probability, which we will discuss in more detail LATER.
When a problem asks “find the expected number of…” it’s a good idea to try using indicator random variables and linearity of expected value.
Let A1, A2, …, An be a collection of n events. Suppose event i occurs with marginal probability pi. Let N = IA1 + IA2 + ⋯ + IAn be the random variable which counts the number of events in the collection that occur. Then the expected number of events that occur is the sum of the event probabilities:
E(N) = p1 + p2 + ⋯ + pn.
If each event has the same probability, pi ≡ p, then E(N) is equal to np. These formulas for the expected number of events are true regardless of whether there is any association between the events (that is, regardless of whether the events are independent).
Example 5.31 Kids wake up during the night. On any given night,
- the probability that Paul wakes up is 1/14
- the probability that Bob wakes up is 2/7
- the probability that Tommy wakes up is 1/30
- the probability that Chris wakes up is 1/2
- the probability that Slim wakes up is 6/7.
If any kid wakes up, they’re likely to wake other kids up too. Find the expected number of kids that wake up on any given night.
Show/hide solution
Simply add the probabilities: 1/14 + 2/7 + 1/30 + 1/2 + 6/7 ≈ 1.75. The expected number of kids who wake up in a night is about 1.75. Over many nights, on average about 1.75 kids wake up per night.
The fact that kids wake each other up implies that the events are not independent, but this is irrelevant here. Because of linearity of expected value, we only need to know the marginal probability⁸¹ of each event (which is provided) in order to determine the expected number of events that occur. (The distribution of the number of kids that wake up would depend on the relationships between the events, but the long run average value does not.)
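To see concretely that the dependence doesn't change the expected count, the sketch below imposes one extreme (and purely hypothetical) dependence structure: a single shared draw each night determines which kids wake up, so the events are strongly positively dependent, yet each kid keeps the stated marginal probability and the average count is still about 1.75.

```python
import numpy as np

rng = np.random.default_rng(12345)
p = np.array([1/14, 2/7, 1/30, 1/2, 6/7])   # marginal probabilities

# Hypothetical strong dependence: one shared Uniform(0, 1) draw per night;
# kid i wakes up if the draw is below p[i].  Each kid's marginal probability
# is still p[i], but the wake-up events are highly positively dependent.
u = rng.uniform(size=(1_000_000, 1))
wakes = u < p                                # shape (nights, kids)

print(wakes.mean(axis=0))                    # close to the marginal probabilities
print(wakes.sum(axis=1).mean())              # close to sum(p), about 1.75
```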
5.6.3 Variance of linear combinations of random variables
Example 5.32 Consider a random variable X with Var(X)=1. What is Var(2X)?
- Walt says: Var(2X)=22Var(X)=4(1)=4.
- Jesse says: Var(2X)=Var(X+X)=Var(X)+Var(X)=1+1=2.
Show/hide solution
Walt is correctly using properties of linear rescaling. Jesse is assuming that the variance of a sum is the sum of the variances, which is not true in general. We’ll see why below.
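A short simulation sketch makes the distinction concrete (taking X to be Normal(0, 1), just for illustration, so that Var(X) = 1): doubling a single draw is not the same as adding two independent draws.

```python
import numpy as np

rng = np.random.default_rng(12345)

x = rng.normal(size=1_000_000)    # X with Var(X) = 1
x1 = rng.normal(size=1_000_000)   # an independent copy of X
x2 = rng.normal(size=1_000_000)   # another independent copy

print((2 * x).var())      # about 4: Var(2X) = 4 Var(X)  (Walt)
print((x1 + x2).var())    # about 2: but X1 + X2 is not the same random variable as 2X;
                          # Jesse's formula only applies to uncorrelated variables
print((x + x).var())      # about 4: X + X really is 2X, and Cov(X, X) = Var(X) is not 0
```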
When two random variables are correlated, the degree of association will affect the variability of linear combinations of the two variables.
Example 5.33 Recall the Colab activity where you simulated pairs of SAT Math (X) and Reading (Y) scores from Bivariate Normal distributions with different correlations. (See also Section 3.9.) You considered the distribution of the sum T=X+Y and difference D=X−Y. Did changing the correlation affect the variance of T? Of D?
Variance of sums and differences of random variables.
Var(X + Y) = Var(X) + Var(Y) + 2Cov(X, Y)
Var(X − Y) = Var(X) + Var(Y) − 2Cov(X, Y)
Example 5.34 Assume that SAT Math (X) and Reading (Y) follow a Bivariate Normal distribution, Math scores have mean 527 and standard deviation 107, and Reading scores have mean 533 and standard deviation 100. Compute Var(X+Y) and SD(X+Y) for each of the following correlations.
- Corr(X,Y)=0.77
- Corr(X,Y)=0.40
- Corr(X,Y)=0
- Corr(X,Y)=−0.77
Show/hide solution
- Corr(X,Y)=0.77: Cov(X,Y) = Corr(X,Y)SD(X)SD(Y) = 0.77(107)(100) = 8239, so Var(X+Y) = 107² + 100² + 2(8239) = 37927 and SD(X+Y) ≈ 194.7.
- Corr(X,Y)=0.40: Cov(X,Y) = 0.40(107)(100) = 4280, so Var(X+Y) = 107² + 100² + 2(4280) = 30009 and SD(X+Y) ≈ 173.2.
- Corr(X,Y)=0: Cov(X,Y) = 0, so Var(X+Y) = 107² + 100² = 21449 and SD(X+Y) ≈ 146.5.
- Corr(X,Y)=−0.77: Cov(X,Y) = −0.77(107)(100) = −8239, so Var(X+Y) = 107² + 100² − 2(8239) = 4971 and SD(X+Y) ≈ 70.5.
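These calculations are easy to organize in a short script (just plugging into the formula above; no simulation needed).

```python
import numpy as np

sd_x, sd_y = 107, 100
for corr in [0.77, 0.40, 0, -0.77]:
    cov = corr * sd_x * sd_y
    var_sum = sd_x**2 + sd_y**2 + 2 * cov
    print(corr, var_sum, np.sqrt(var_sum))   # Var(X + Y) and SD(X + Y)
```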
If X and Y have a positive correlation:
- Large values of X are associated with large values of Y (so the sum is really large), and small values of X with small values of Y (so the sum is really small), so the sum exhibits more variability than it would if the values of X and Y were uncorrelated.
- Large values of X are associated with large values of Y (so the difference is small), and small values of X with small values of Y (so the difference is small), so the difference exhibits less variability than it would if the values of X and Y were uncorrelated.
If X and Y have a negative correlation:
- Large values of X are associated with small values of Y (so the sum is moderate), and small values of X with large values of Y (so the sum is moderate), so the sum exhibits less variability than it would if the values of X and Y were uncorrelated.
- Large values of X are associated with small values of Y (so the difference is large and positive), and small values of X with large values of Y (so the difference is large and negative), so the difference exhibits more variability than it would if the values of X and Y were uncorrelated.
The variance of the sum is the sum of the variances if and only if X and Y are uncorrelated.
Var(X + Y) = Var(X) + Var(Y) if X, Y are uncorrelated
Var(X − Y) = Var(X) + Var(Y) if X, Y are uncorrelated
5.6.4 Bilinearity of covariance
The formulas for variance of sums and differences are applications of several more general properties of covariance. Let X, Y, U, V be random variables and a, b, c, d be non-random constants.
Properties of covariance.
Cov(X, X) = Var(X)
Cov(X, Y) = Cov(Y, X)
Cov(X, c) = 0
Cov(aX + b, cY + d) = ac Cov(X, Y)
Cov(X + Y, U + V) = Cov(X, U) + Cov(X, V) + Cov(Y, U) + Cov(Y, V)
- The variance of a random variable is the covariance of the random variable with itself.
- Non-random constants don’t vary, so they can’t co-vary.
- Adding non-random constants shifts the center of the joint distribution but does not affect variability.
- Multiplying by non-random constants changes the scale and hence changes the degree of variability.
- The last property is like a “FOIL” (first, outer, inner, last) property.
The last two properties together are called bilinearity of covariance. These properties extend naturally to sums involving more than two random variables. To compute the covariance between two sums of random variables, compute the covariance between each component random variable in the first sum and each component random variable in the second sum, and sum these covariances.
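The sketch below checks two of these properties numerically on simulated data (the particular construction of correlated X, Y, and U is made up purely for illustration).

```python
import numpy as np

rng = np.random.default_rng(12345)

# Correlated variables built from shared randomness (construction is illustrative only)
z = rng.normal(size=1_000_000)
x = z + rng.normal(size=1_000_000)
y = 2 * z + rng.normal(size=1_000_000)
u = rng.normal(size=1_000_000)

def cov(a, b):
    return np.cov(a, b)[0, 1]

a, b, c, d = 2, 3, -4, 5
print(cov(a * x + b, c * y + d), a * c * cov(x, y))    # Cov(aX+b, cY+d) = ac Cov(X, Y)
print(cov(x + y, x + u),
      cov(x, x) + cov(x, u) + cov(y, x) + cov(y, u))   # the "FOIL" expansion
```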
Example 5.35 Let X be the number of two-point field goals a basketball player makes in a game, Y the number of three point field goals made, and Z the number of free throws made (worth one point each). Assume X, Y, Z have standard deviations of 2.5, 3.7, 1.8, respectively, and Corr(X,Y)=0.1, Corr(X,Z)=0.3, Corr(Y,Z)=−0.5.
- Find the standard deviation of the number of field goals made in a game (not including free throws).
- Find the standard deviation of the total points scored on field goals in a game (not including free throws).
- Find the standard deviation of total points scored in a game.
Show/hide solution
- The number of field goals is X + Y. Cov(X, Y) = Corr(X, Y)SD(X)SD(Y) = 0.1(2.5)(3.7) = 0.925, so Var(X + Y) = 2.5² + 3.7² + 2(0.925) = 21.79 and SD(X + Y) ≈ 4.67.
- The total points scored on field goals is 2X + 3Y. Var(2X + 3Y) = 4Var(X) + 9Var(Y) + 2(2)(3)Cov(X, Y) = 4(6.25) + 9(13.69) + 12(0.925) = 159.31, so SD(2X + 3Y) ≈ 12.62.
- The total points scored is 2X + 3Y + Z. Cov(X, Z) = 0.3(2.5)(1.8) = 1.35 and Cov(Y, Z) = −0.5(3.7)(1.8) = −3.33, so Var(2X + 3Y + Z) = Var(2X + 3Y) + Var(Z) + 2(2Cov(X, Z) + 3Cov(Y, Z)) = 159.31 + 3.24 − 14.58 = 147.97, and SD(2X + 3Y + Z) ≈ 12.16.
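One convenient way to organize computations like these is with a covariance matrix built from the standard deviations and correlations. The sketch below just re-does the three calculations above.

```python
import numpy as np

sd = np.array([2.5, 3.7, 1.8])               # SDs of X, Y, Z
corr = np.array([[1.0, 0.1, 0.3],
                 [0.1, 1.0, -0.5],
                 [0.3, -0.5, 1.0]])
Sigma = np.outer(sd, sd) * corr              # covariance matrix of (X, Y, Z)

# Coefficients for: number of field goals, points on field goals, total points
for w in ([1, 1, 0], [2, 3, 0], [2, 3, 1]):
    w = np.array(w)
    print(w, np.sqrt(w @ Sigma @ w))         # SD of the linear combination
```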
⁸¹ If there were too much dependence, then the provided marginal probabilities might not be possible. For example, if Slim always wakes up all the other kids, then the other marginal probabilities would have to be at least 6/7. So a specified set of marginal probabilities puts some limits on how much dependence there can be. This idea is similar to Example 1.10.