Chapter 14 Transformations of random variables
14.1 Introduction
In this chapter we consider transformations of random variables. Transformations are useful for:
- Simulating random variables.
For example, computers can generate pseudo-random numbers representing draws from the $U(0,1)$ distribution, and transformations enable us to generate random samples from a wide range of more general (and exciting) probability distributions.
- Understanding functions of random variables.
Suppose that £$P$ is invested in an account with continuously compounding interest rate $r$. Then the amount £$A$ in the account after $t$ years is
$$A = Pe^{rt}.$$
Suppose that $P = 1{,}000$ and $r$ is a realisation of a continuous random variable $R$ with p.d.f. $f(r)$. What is the p.d.f. of the amount $A$ after one year? That is, what is the p.d.f. of
$$A = 1000e^{R}?$$
Both points are illustrated in the simulation sketch after this list.
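Here is a minimal simulation sketch in Python, assuming numpy is available. It uses the inverse transform method, a standard transformation technique: if $U \sim U(0,1)$ and $F$ is a continuous c.d.f., then $F^{-1}(U)$ has c.d.f. $F$. The $N(0.05, 0.01^2)$ law chosen for $R$ is purely illustrative and not part of the notes.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Inverse transform method: if U ~ U(0,1), then F^{-1}(U) has c.d.f. F.
# For Exp(lam), F(x) = 1 - exp(-lam*x), so F^{-1}(u) = -log(1 - u)/lam.
lam = 2.0
u = rng.uniform(size=100_000)
x = -np.log(1 - u) / lam         # draws from Exp(lam), built from U(0,1) draws
print(x.mean())                  # should be close to 1/lam = 0.5

# The interest example: draws of A = 1000*exp(R) are just transformed
# draws of R. (The N(0.05, 0.01^2) law for R is a hypothetical choice.)
r = rng.normal(loc=0.05, scale=0.01, size=100_000)
a = 1000 * np.exp(r)
print(a.mean())                  # empirical mean of the amount after one year
```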
We will consider both univariate and bivariate transformations; the methodology for bivariate transformations extends to more general multivariate transformations.
14.2 Univariate case
Suppose that $X$ is a continuous random variable with p.d.f. $f(x)$. Let $g$ be a continuous function; then $Y = g(X)$ is a continuous random variable. Our aim is to find the p.d.f. of $Y$.
We present the distribution function method, which has two steps:
- Compute the c.d.f. of $Y$, that is,
$$F_Y(y) = P(Y \leq y).$$
- Derive the p.d.f. of $Y$, $f_Y(y)$, using the fact that
$$f_Y(y) = \frac{d}{dy} F_Y(y).$$
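As a quick illustration of the two steps, here is a minimal symbolic sketch, assuming sympy is available. The transformation $Y = X^2$ with $X \sim U(0,1)$ is not taken from the notes; it is chosen because both steps are easy to verify by hand.

```python
import sympy as sp

y = sp.symbols('y', positive=True)

# Step 1: the c.d.f. of Y. Take X ~ U(0,1) and Y = X^2; then for
# 0 < y < 1, F_Y(y) = P(X^2 <= y) = P(X <= sqrt(y)) = sqrt(y),
# since F_X(x) = x on (0, 1).
F_Y = sp.sqrt(y)

# Step 2: differentiate the c.d.f. to obtain the p.d.f.
f_Y = sp.diff(F_Y, y)
print(f_Y)  # 1/(2*sqrt(y)), valid for 0 < y < 1
```

The same pattern (compute $F_Y$ by hand, then differentiate) is used in the example below.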
Square of a Standard Normal
Let $Z \sim N(0,1)$. Find the p.d.f. of $Y = Z^2$.
For $y > 0$,
$$F_Y(y) = P(Z^2 \leq y) = P(-\sqrt{y} \leq Z \leq \sqrt{y}) = F_Z(\sqrt{y}) - F_Z(-\sqrt{y}) = 2F_Z(\sqrt{y}) - 1,$$
using the symmetry of the standard normal distribution. Now $\frac{d}{dz} F_Z(z) = \frac{1}{\sqrt{2\pi}} e^{-z^2/2}$, so
$$f_Y(y) = \frac{d}{dy} F_Y(y) = 2 f_Z(\sqrt{y}) \cdot \frac{1}{2\sqrt{y}} = \frac{1}{\sqrt{2\pi}} y^{-1/2} e^{-y/2}, \qquad y > 0.$$
Thus $Y \sim {\rm Gamma}\left(\frac{1}{2}, \frac{1}{2}\right)$, otherwise known as a Chi-squared distribution with $1$ degree of freedom.
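The result is easy to check by simulation; a minimal sketch, assuming numpy is available:

```python
import numpy as np

rng = np.random.default_rng(seed=2)
z = rng.standard_normal(1_000_000)
y = z**2

# Compare an empirical histogram of Y = Z^2 with the Gamma(1/2, 1/2)
# density f(y) = y^(-1/2) * exp(-y/2) / sqrt(2*pi) derived above.
bins = np.linspace(0.1, 4.0, 9)
counts, edges = np.histogram(y, bins=bins)
dens = counts / (y.size * np.diff(edges))   # empirical density per bin
mids = 0.5 * (edges[:-1] + edges[1:])
pdf = np.exp(-mids / 2) / np.sqrt(2 * np.pi * mids)

for m, d, p in zip(mids, dens, pdf):
    print(f"y = {m:4.2f}: empirical {d:.4f}, theoretical {p:.4f}")
```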
14.3 Bivariate case
Suppose that $X_1$ and $X_2$ are continuous random variables with joint p.d.f. $f_{X_1,X_2}(x_1,x_2)$. Let $(Y_1, Y_2) = T(X_1, X_2)$. We want to find the joint p.d.f. of $Y_1$ and $Y_2$.
Jacobian
Suppose $T : (x_1, x_2) \to (y_1, y_2)$ is a one-to-one transformation on some region of $\mathbb{R}^2$, such that $x_1 = H_1(y_1, y_2)$ and $x_2 = H_2(y_1, y_2)$. The Jacobian of $T^{-1} = (H_1, H_2)$ is defined by
$$J(y_1, y_2) = \begin{vmatrix} \frac{\partial H_1}{\partial y_1} & \frac{\partial H_1}{\partial y_2} \\[4pt] \frac{\partial H_2}{\partial y_1} & \frac{\partial H_2}{\partial y_2} \end{vmatrix} = \frac{\partial H_1}{\partial y_1}\frac{\partial H_2}{\partial y_2} - \frac{\partial H_1}{\partial y_2}\frac{\partial H_2}{\partial y_1}.$$
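Jacobians can also be computed symbolically; a minimal sketch with sympy (the polar-coordinate map used here is a standard illustration, not an example from these notes):

```python
import sympy as sp

r, theta = sp.symbols('r theta', real=True)

# Inverse map T^{-1}: (x1, x2) = (H1, H2) expressed in the new variables.
H1 = r * sp.cos(theta)   # x1 = H1(r, theta)
H2 = r * sp.sin(theta)   # x2 = H2(r, theta)

# Jacobian determinant of T^{-1}, as in the definition above.
J = sp.Matrix([H1, H2]).jacobian([r, theta]).det()
print(sp.simplify(J))    # r
```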
Transformation of random variables.
Let $(Y_1, Y_2) = T(X_1, X_2)$ be some transformation of random variables. If $T$ is a one-to-one function and the Jacobian of $T^{-1}$ is non-zero in $T(A)$, where $A = \{(x_1, x_2) : f_{X_1,X_2}(x_1, x_2) > 0\}$, then the joint p.d.f. of $Y_1$ and $Y_2$ is given by
$$f_{Y_1,Y_2}(y_1, y_2) = f_{X_1,X_2}\left(H_1(y_1, y_2), H_2(y_1, y_2)\right) \left| J(y_1, y_2) \right|$$
if $(y_1, y_2) \in T(A)$, and $0$ otherwise.
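The result above translates directly into a small generic helper; a minimal sketch in which the function and argument names are mine, purely for illustration:

```python
def transformed_pdf(f_x, H1, H2, jacobian):
    """Joint p.d.f. of (Y1, Y2) = T(X1, X2), assembled from the joint
    p.d.f. f_x of (X1, X2), the inverse map (H1, H2), and the Jacobian
    of T^{-1}, exactly as in the result above. Points outside T(A) are
    handled by f_x returning 0 there."""
    def f_y(y1, y2):
        return f_x(H1(y1, y2), H2(y1, y2)) * abs(jacobian(y1, y2))
    return f_y

# Quick check with a simple linear scaling (not from the notes):
# (Y1, Y2) = (2*X1, 2*X2) with X1, X2 independent U(0,1), so
# f_{Y1,Y2} should equal 1/4 on the square (0,2) x (0,2).
f_x = lambda x1, x2: 1.0 if (0 < x1 < 1 and 0 < x2 < 1) else 0.0
f_y = transformed_pdf(f_x,
                      H1=lambda y1, y2: y1 / 2,    # x1 = y1/2
                      H2=lambda y1, y2: y2 / 2,    # x2 = y2/2
                      jacobian=lambda y1, y2: 1 / 4)
print(f_y(1.0, 1.5))  # 0.25
```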
Transformation of uniforms.
Let $X_1 \sim U(0,1)$, $X_2 \sim U(0,1)$ and suppose that $X_1$ and $X_2$ are independent. Let $(Y_1, Y_2) = T(X_1, X_2)$. Find the joint p.d.f. of $Y_1$ and $Y_2$.

Figure 14.1: Transformation
Transformation of Exponentials.
Suppose that $X_1$ and $X_2$ are i.i.d. exponential random variables with parameter $\lambda$. Let $Y_1 = X_1/X_2$ and $Y_2 = X_1 + X_2$.
- Find the joint p.d.f. of $Y_1$ and $Y_2$.
- Find the p.d.f. of $Y_1$.
Attempt Example 14.3.4: Transformation of Exponentials and then watch Video 22 for the solutions.
Video 22: Transformation of Exponentials
Solution to Example 14.3.4
Remember from previous results that $Y_2 = X_1 + X_2 \sim {\rm Gamma}(2, \lambda)$.
- Since $X_1$ and $X_2$ are i.i.d. exponential random variables with parameter $\lambda$, the joint p.d.f. of $X_1$ and $X_2$ is given by
$$f_{X_1,X_2}(x_1, x_2) = f_{X_1}(x_1) f_{X_2}(x_2) = \begin{cases} \lambda e^{-\lambda x_1} \cdot \lambda e^{-\lambda x_2} = \lambda^2 e^{-\lambda(x_1 + x_2)} & \text{if } x_1, x_2 > 0, \\ 0 & \text{otherwise.} \end{cases}$$
Solving simultaneously for $X_1$ and $X_2$ in terms of $Y_1$ and $Y_2$ gives $X_1 = Y_1 X_2$ and
$$Y_2 = X_1 + X_2 = Y_1 X_2 + X_2 = X_2 (Y_1 + 1).$$
Rearranging gives $X_2 = \frac{Y_2}{Y_1 + 1} \ (= H_2(Y_1, Y_2))$, and then $X_1 = Y_1 X_2 = \frac{Y_1 Y_2}{Y_1 + 1} \ (= H_1(Y_1, Y_2))$.
Computing the Jacobian of $T^{-1}$, we get
$$J(y_1, y_2) = \begin{vmatrix} \frac{\partial H_1}{\partial y_1} & \frac{\partial H_1}{\partial y_2} \\[4pt] \frac{\partial H_2}{\partial y_1} & \frac{\partial H_2}{\partial y_2} \end{vmatrix} = \begin{vmatrix} \frac{y_2}{(y_1+1)^2} & \frac{y_1}{y_1+1} \\[4pt] -\frac{y_2}{(y_1+1)^2} & \frac{1}{y_1+1} \end{vmatrix} = \frac{y_2}{(y_1+1)^3} + \frac{y_1 y_2}{(y_1+1)^3} = \frac{y_2}{(y_1+1)^2}.$$
Now, $A = \{(x_1, x_2) : x_1 > 0, x_2 > 0\}$. Since $x_1 > 0$ and $x_2 > 0$, we have $y_1 = x_1/x_2 > 0$ and $y_2 = x_1 + x_2 > 0$, so $T(A) \subseteq \{(y_1, y_2) : y_1 > 0, y_2 > 0\}$. Conversely, if $y_1 > 0$ and $y_2 > 0$, then $x_1 = \frac{y_1 y_2}{y_1 + 1} > 0$ and $x_2 = \frac{y_2}{y_1 + 1} > 0$. Therefore,
$$T(A) = \{(y_1, y_2) : y_1 > 0, y_2 > 0\}.$$
Consequently, the joint p.d.f. of $Y_1$ and $Y_2$, $f_{Y_1,Y_2}(y_1, y_2)$, is given, for $y_1, y_2 > 0$, by
$$f_{Y_1,Y_2}(y_1, y_2) = f_{X_1,X_2}\left(\frac{y_1 y_2}{y_1+1}, \frac{y_2}{y_1+1}\right) |J(y_1, y_2)| = \lambda^2 e^{-\lambda y_2} \frac{y_2}{(1 + y_1)^2},$$
since $x_1 + x_2 = y_2$. If either $y_1 < 0$ or $y_2 < 0$, then $f_{Y_1,Y_2}(y_1, y_2) = 0$.
- The p.d.f. of $Y_1$ is the marginal p.d.f. of $Y_1$ obtained from the joint p.d.f. $f_{Y_1,Y_2}(y_1, y_2)$. Therefore, for $y_1 > 0$,
$$f_{Y_1}(y_1) = \int_0^\infty \lambda^2 e^{-\lambda y_2} \frac{y_2}{(1 + y_1)^2} \, dy_2 = \frac{1}{(1 + y_1)^2} \int_0^\infty \lambda^2 y_2 e^{-\lambda y_2} \, dy_2 = \frac{1}{(1 + y_1)^2}.$$
(In the above integration, remember that $\lambda^2 y_2 e^{-\lambda y_2}$ is the p.d.f. of a ${\rm Gamma}(2, \lambda)$ random variable, so it integrates to $1$.)
So,
$$f_{Y_1}(y_1) = \begin{cases} \frac{1}{(1 + y_1)^2} & \text{if } y_1 > 0, \\ 0 & \text{otherwise.} \end{cases}$$
The distribution of $Y_1$ is an example of a probability distribution for which the expectation is not defined.
Figure 14.2: Plot of the p.d.f. of $Y_1$.
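Both the Jacobian algebra and the final answer can be checked mechanically; a minimal sketch, assuming sympy and numpy are available:

```python
import numpy as np
import sympy as sp

# Symbolic check of the inverse map and Jacobian from the solution.
y1, y2 = sp.symbols('y1 y2', positive=True)
H1 = y1 * y2 / (y1 + 1)   # x1 = H1(y1, y2)
H2 = y2 / (y1 + 1)        # x2 = H2(y1, y2)
J = sp.Matrix([H1, H2]).jacobian([y1, y2]).det()
print(sp.simplify(J))     # y2/(y1 + 1)**2

# Monte Carlo check that Y1 = X1/X2 has p.d.f. 1/(1 + y1)^2;
# note that the answer does not depend on lambda.
rng = np.random.default_rng(seed=3)
lam = 1.5
x1 = rng.exponential(scale=1/lam, size=1_000_000)
x2 = rng.exponential(scale=1/lam, size=1_000_000)
ratio = x1 / x2

bins = np.linspace(0.0, 4.0, 9)
counts, edges = np.histogram(ratio, bins=bins)
dens = counts / (ratio.size * np.diff(edges))   # empirical density per bin
mids = 0.5 * (edges[:-1] + edges[1:])
for m, d in zip(mids, dens):
    print(f"y1 = {m:4.2f}: empirical {d:.4f}, theoretical {1/(1+m)**2:.4f}")
```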
Note that one can extend the method of transformations to the case of $n$ random variables.
Student Exercise
Attempt the exercise below.
Let $X$ and $Y$ be independent random variables, each having probability density function
$$f(x) = \begin{cases} \lambda e^{-\lambda x} & \text{if } x > 0, \\ 0 & \text{otherwise,} \end{cases}$$
and let $U = X + Y$ and $V = X - Y$.
- Find the joint probability density function of $U$ and $V$.
- Hence derive the marginal probability density functions of $U$ and $V$.
- Are $U$ and $V$ independent? Justify your answer.
Solution to Exercise 14.1.
- Let the transformation $T$ be defined by $T(x, y) = (u, v)$, where $u = x + y$ and $v = x - y$. Then $x = \frac{1}{2}(u + v)$ and $y = \frac{1}{2}(u - v)$, so that
$$J(u, v) = \begin{vmatrix} \frac{1}{2} & \frac{1}{2} \\[4pt] \frac{1}{2} & -\frac{1}{2} \end{vmatrix} = -\frac{1}{2}.$$
Since $X$ and $Y$ are independent,
$$f_{X,Y}(x, y) = f_X(x) f_Y(y) = \begin{cases} \lambda^2 e^{-\lambda(x+y)} & \text{if } x, y > 0, \\ 0 & \text{otherwise.} \end{cases}$$
Thus, since $T$ is one-to-one,
$$f_{U,V}(u, v) = f_{X,Y}(x(u,v), y(u,v)) \, |J(u,v)| = \begin{cases} \frac{1}{2}\lambda^2 e^{-\lambda u} & \text{if } u + v > 0 \text{ and } u - v > 0, \\ 0 & \text{otherwise} \end{cases} = \begin{cases} \frac{1}{2}\lambda^2 e^{-\lambda u} & \text{if } u > 0 \text{ and } -u < v < u, \\ 0 & \text{otherwise.} \end{cases}$$
The region over which $f_{U,V}(u, v) > 0$ is the wedge $\{(u, v) : u > 0, \ |v| < u\}$.
- The marginal p.d.f.s of $U$ and $V$ are, respectively,
$$f_U(u) = \int_{-\infty}^{\infty} f_{U,V}(u, v) \, dv = \begin{cases} \int_{-u}^{u} \frac{1}{2}\lambda^2 e^{-\lambda u} \, dv = \lambda^2 u e^{-\lambda u} & \text{if } u > 0, \\ 0 & \text{otherwise;} \end{cases}$$
$$f_V(v) = \int_{-\infty}^{\infty} f_{U,V}(u, v) \, du = \int_{|v|}^{\infty} \frac{1}{2}\lambda^2 e^{-\lambda u} \, du = \frac{1}{2}\lambda e^{-\lambda |v|}, \qquad v \in \mathbb{R}.$$
Note that, as before, $U = X + Y$ is the sum of two independent ${\rm Exp}(\lambda)$ random variables, so $U \sim {\rm Gamma}(2, \lambda)$.
- Clearly, $f_{U,V}(u, v) = f_U(u) f_V(v)$ does not hold for all $u, v \in \mathbb{R}$; for instance, the support $\{(u, v) : u > 0, |v| < u\}$ is not a product set. So $U$ and $V$ are not independent. A quick numerical confirmation is sketched below.
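A minimal numerical confirmation, assuming numpy is available. The key observation is that $|V| = |X - Y| \leq X + Y = U$ always, so conditioning on a small $U$ forces $|V|$ to be small, which independence would forbid.

```python
import numpy as np

rng = np.random.default_rng(seed=4)
lam = 1.0
x = rng.exponential(scale=1/lam, size=1_000_000)
y = rng.exponential(scale=1/lam, size=1_000_000)
u, v = x + y, x - y

# Marginal sanity checks: E[U] = 2/lam for Gamma(2, lam) and
# E[|V|] = 1/lam for the double-exponential density (lam/2)e^{-lam|v|}.
print(u.mean())              # approximately 2.0
print(np.abs(v).mean())      # approximately 1.0

# Dependence: |V| <= U always, so P(|V| > 1 | U < 1) = 0 exactly,
# while P(|V| > 1) = e^{-lam} > 0. Independence would make the two equal.
print(np.mean(np.abs(v) > 1))            # approximately exp(-1) = 0.368
print(np.mean(np.abs(v[u < 1]) > 1))     # 0.0
```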