Chapter 14 Transformations of random variables

14.1 Introduction

In this section we will consider transformations of random variables. Transformations are useful for:

  • Simulating random variables.
    For example, computers can generate pseudo-random numbers which represent draws from the $U(0,1)$ distribution, and transformations enable us to generate random samples from a wide range of more general (and exciting) probability distributions.
  • Understanding functions of random variables.
    Suppose that £PP is invested in an account with continuously compounding interest rate rr. Then the amount £AA in the account after tt years is
    A=Pert.A=Pert.
    Suppose that P=1,000P=1,000 and rr is a realisation of a continuous random variable RR with pdf f(r)f(r). What is the p.d.f. of the amount AA after one year? i.e. What is the p.d.f. of
    A=1000eR?A=1000eR?
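The first application can be illustrated with inverse-transform sampling: if $U \sim U(0,1)$ and $F$ is an invertible c.d.f., then $X = F^{-1}(U)$ has c.d.f. $F$. Below is a minimal Python sketch (illustrative, not part of the notes) generating exponential random variables from uniforms; the rate $\lambda = 2$ is an arbitrary choice.

```python
import math
import random

random.seed(0)
lam = 2.0  # illustrative rate parameter

# Inverse-transform sampling: if U ~ U(0,1) and F(x) = 1 - exp(-lam*x),
# then X = F^{-1}(U) = -log(1 - U)/lam has the Exp(lam) distribution.
samples = [-math.log(1.0 - random.random()) / lam for _ in range(100_000)]

# Sanity check: the sample mean should be close to E[X] = 1/lam = 0.5.
mean = sum(samples) / len(samples)
print(round(mean, 2))
```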

We will consider both univariate and bivariate transformations with the methodology for bivariate transformations extending to more general multivariate transformations.

14.2 Univariate case

Suppose that $X$ is a continuous random variable with p.d.f. $f(x)$. Let $g$ be a continuous function; then $Y = g(X)$ is a continuous random variable. Our aim is to find the p.d.f. of $Y$.

We present the distribution function method, which has two steps:

  1. Compute the c.d.f. of $Y$, that is,
    $$F_Y(y) = P(Y \le y).$$
  2. Derive the p.d.f. of $Y$, $f_Y(y)$, using the fact that
    $$f_Y(y) = \frac{dF_Y(y)}{dy}.$$
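The two steps can be checked numerically. The sketch below (Python, an illustrative example not from the notes) applies the distribution function method to $Y = e^X$ with $X \sim U(0,1)$: step 1 gives $F_Y(y) = \ln y$ for $1 \le y \le e$, step 2 gives $f_Y(y) = 1/y$, and simulation agrees with the c.d.f.

```python
import math
import random

random.seed(5)
n = 200_000
# X ~ U(0,1), Y = exp(X).  Step 1: for 1 <= y <= e,
#   F_Y(y) = P(e^X <= y) = P(X <= ln y) = ln y.
# Step 2: f_Y(y) = d/dy ln y = 1/y on (1, e).
y = [math.exp(random.random()) for _ in range(n)]

# Compare the empirical P(Y <= 2) with F_Y(2) = ln 2.
p = sum(1 for v in y if v <= 2.0) / n
print(round(p, 2), round(math.log(2.0), 2))  # both near 0.69
```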

Square of a Standard Normal

Let $Z \sim N(0,1)$. Find the p.d.f. of $Y = Z^2$.

For $y > 0$, the c.d.f. of $Y = Z^2$ can be expressed in terms of the c.d.f. of $Z$:
$$F_Y(y) = P(Y \le y) = P(Z^2 \le y) = P(-\sqrt{y} \le Z \le \sqrt{y}) = F_Z(\sqrt{y}) - F_Z(-\sqrt{y}).$$
Note that if we want a specific formula for $F_Y$, then we can evaluate the resulting c.d.f.s. In this case:
$$F_Z(\sqrt{y}) = \int_{-\infty}^{\sqrt{y}} \frac{1}{\sqrt{2\pi}} e^{-z^2/2} \, dz.$$
Therefore, using the chain rule for differentiation,
$$f_Y(y) = \frac{dF_Y(y)}{dy} = \frac{d}{dy} F_Z(\sqrt{y}) - \frac{d}{dy} F_Z(-\sqrt{y}) = \frac{d}{dz} F_Z(z) \, \frac{dz}{dy} - \frac{d}{dz} F_Z(-z) \, \frac{dz}{dy},$$
where $z = y^{1/2}$.
Now $\frac{d}{dz} F_Z(z) = \frac{1}{\sqrt{2\pi}} e^{-z^2/2}$, so
$$f_Y(y) = \frac{1}{\sqrt{2\pi}} e^{-(\sqrt{y})^2/2} \cdot \frac{1}{2} y^{-1/2} - \frac{1}{\sqrt{2\pi}} e^{-(-\sqrt{y})^2/2} \cdot \left(-\frac{1}{2} y^{-1/2}\right) = \frac{1}{2\sqrt{2\pi y}} e^{-y/2} + \frac{1}{2\sqrt{2\pi y}} e^{-y/2} = \frac{1}{\sqrt{2\pi y}} e^{-y/2}.$$
Therefore $Y$ has probability density function:
$$f_Y(y) = \begin{cases} \dfrac{y^{-1/2}}{\sqrt{2\pi}} \exp\left(-\dfrac{y}{2}\right) & y > 0, \\ 0 & \text{otherwise.} \end{cases}$$

Thus $Y \sim {\rm Gamma}\left(\frac{1}{2}, \frac{1}{2}\right)$, otherwise known as a chi-squared distribution with 1 degree of freedom.
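A quick Monte Carlo check of this derivation (a Python sketch, not part of the notes): simulate $Y = Z^2$ and compare the empirical value of $F_Y(1)$ with $F_Z(1) - F_Z(-1) = 2\Phi(1) - 1$.

```python
import math
import random

random.seed(1)
n = 200_000
# Simulate Y = Z^2 with Z ~ N(0, 1).
y = [random.gauss(0.0, 1.0) ** 2 for _ in range(n)]

# The derivation gives F_Y(1) = F_Z(1) - F_Z(-1) = 2*Phi(1) - 1,
# where Phi is the standard normal c.d.f. (via math.erf).
phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
exact = 2.0 * phi(1.0) - 1.0  # approximately 0.6827
empirical = sum(1 for v in y if v <= 1.0) / n
print(round(exact, 4), round(empirical, 3))
```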

14.3 Bivariate case

Suppose that $X_1$ and $X_2$ are continuous random variables with joint p.d.f. given by $f_{X_1,X_2}(x_1,x_2)$. Let $(Y_1,Y_2) = T(X_1,X_2)$. We want to find the joint p.d.f. of $Y_1$ and $Y_2$.

Jacobian

Suppose $T : (x_1,x_2) \mapsto (y_1,y_2)$ is a one-to-one transformation in some region of $\mathbb{R}^2$, such that $x_1 = H_1(y_1,y_2)$ and $x_2 = H_2(y_1,y_2)$. The Jacobian of $T^{-1} = (H_1, H_2)$ is defined by
$$J(y_1,y_2) = \begin{vmatrix} \dfrac{\partial H_1}{\partial y_1} & \dfrac{\partial H_1}{\partial y_2} \\[4pt] \dfrac{\partial H_2}{\partial y_1} & \dfrac{\partial H_2}{\partial y_2} \end{vmatrix}.$$

Transformation of random variables.

Let $(Y_1,Y_2) = T(X_1,X_2)$ be some transformation of random variables. If $T$ is a one-to-one function and the Jacobian of $T^{-1}$ is non-zero in $T(A)$, where
$$A = \{(x_1,x_2) : f_{X_1,X_2}(x_1,x_2) > 0\},$$
then the joint p.d.f. of $Y_1$ and $Y_2$, $f_{Y_1,Y_2}(y_1,y_2)$, is given by
$$f_{Y_1,Y_2}(y_1,y_2) = f_{X_1,X_2}(H_1(y_1,y_2), H_2(y_1,y_2)) \, |J(y_1,y_2)|$$
if $(y_1,y_2) \in T(A)$, and $0$ otherwise.

Transformation of uniforms.

Let $X_1 \sim U(0,1)$ and $X_2 \sim U(0,1)$, and suppose that $X_1$ and $X_2$ are independent. Let
$$Y_1 = X_1 + X_2, \qquad Y_2 = X_1 - X_2.$$

Find the joint p.d.f. of $Y_1$ and $Y_2$.

The joint p.d.f. of $X_1$ and $X_2$ is
$$f_{X_1,X_2}(x_1,x_2) = f_{X_1}(x_1) f_{X_2}(x_2) = \begin{cases} 1, & \text{if } 0 \le x_1 \le 1 \text{ and } 0 \le x_2 \le 1, \\ 0, & \text{otherwise.} \end{cases}$$
Now $T : (x_1,x_2) \mapsto (y_1,y_2)$ is defined by
$$y_1 = x_1 + x_2, \qquad y_2 = x_1 - x_2.$$
Hence,
$$x_1 = H_1(y_1,y_2) = \frac{y_1 + y_2}{2}, \qquad x_2 = H_2(y_1,y_2) = \frac{y_1 - y_2}{2}.$$
The Jacobian of $T^{-1}$ is
$$J(y_1,y_2) = \begin{vmatrix} \dfrac{\partial H_1}{\partial y_1} & \dfrac{\partial H_1}{\partial y_2} \\[4pt] \dfrac{\partial H_2}{\partial y_1} & \dfrac{\partial H_2}{\partial y_2} \end{vmatrix} = \begin{vmatrix} \frac{1}{2} & \frac{1}{2} \\[2pt] \frac{1}{2} & -\frac{1}{2} \end{vmatrix} = -\frac{1}{2}.$$
Since $A = \{(x_1,x_2) : 0 \le x_1 \le 1,\ 0 \le x_2 \le 1\}$, and since the lines $x_1 = 0$, $x_1 = 1$, $x_2 = 0$ and $x_2 = 1$ map to the lines $y_1 + y_2 = 0$, $y_1 + y_2 = 2$, $y_1 - y_2 = 0$ and $y_1 - y_2 = 2$ respectively, it can be checked that
$$T(A) = \{(y_1,y_2) : 0 \le y_1 + y_2 \le 2,\ 0 \le y_1 - y_2 \le 2\}.$$

Figure 14.1: Transformation

Thus,
$$f_{Y_1,Y_2}(y_1,y_2) = \begin{cases} \frac{1}{2} f_{X_1,X_2}(H_1(y_1,y_2), H_2(y_1,y_2)), & \text{if } (y_1,y_2) \in T(A), \\ 0, & \text{otherwise} \end{cases} = \begin{cases} \frac{1}{2}, & \text{if } 0 \le y_1 + y_2 \le 2 \text{ and } 0 \le y_1 - y_2 \le 2, \\ 0, & \text{otherwise.} \end{cases}$$
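This result can be sanity-checked by simulation. The sketch below (Python, illustrative and not from the notes) draws independent uniforms, forms $(Y_1, Y_2)$, confirms every point lands in $T(A)$, and checks that the subregion $\{y_1 \le 1,\ y_2 \ge 0\}$, a triangle of area $\frac{1}{2}$, has probability $\frac{1}{2} \times \frac{1}{2} = \frac{1}{4}$ under the constant density.

```python
import random

random.seed(2)
n = 200_000
pairs = [(random.random(), random.random()) for _ in range(n)]
ys = [(x1 + x2, x1 - x2) for x1, x2 in pairs]

# Every point must land in T(A): 0 <= y1 + y2 <= 2 and 0 <= y1 - y2 <= 2.
assert all(0.0 <= y1 + y2 <= 2.0 and 0.0 <= y1 - y2 <= 2.0 for y1, y2 in ys)

# With constant density 1/2 on T(A), the triangle {y1 <= 1, y2 >= 0}
# has area 1/2, so its probability should be 1/2 * 1/2 = 1/4.
p = sum(1 for y1, y2 in ys if y1 <= 1.0 and y2 >= 0.0) / n
print(round(p, 2))
```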


Transformation of Exponentials.

Suppose that $X_1$ and $X_2$ are i.i.d. exponential random variables with parameter $\lambda$. Let $Y_1 = X_1/X_2$ and $Y_2 = X_1 + X_2$.

  1. Find the joint p.d.f. of $Y_1$ and $Y_2$.
  2. Find the p.d.f. of $Y_1$.

Attempt Example 14.3.4: Transformation of Exponentials and then watch Video 22 for the solutions.

Video 22: Transformation of Exponentials

Solution to Example 14.3.4

Remember from previous results that $Y_2 = X_1 + X_2 \sim {\rm Gamma}(2,\lambda)$.

  1. Since $X_1$ and $X_2$ are i.i.d. exponential random variables with parameter $\lambda$, the joint p.d.f. of $X_1$ and $X_2$ is given by
    $$f_{X_1,X_2}(x_1,x_2) = f_{X_1}(x_1) f_{X_2}(x_2) = \begin{cases} \lambda e^{-\lambda x_1} \cdot \lambda e^{-\lambda x_2}, & \text{if } x_1, x_2 > 0, \\ 0, & \text{otherwise} \end{cases} = \begin{cases} \lambda^2 e^{-\lambda(x_1 + x_2)}, & \text{if } x_1, x_2 > 0, \\ 0, & \text{otherwise.} \end{cases}$$
    Solving simultaneously for $X_1$ and $X_2$ in terms of $Y_1$ and $Y_2$ gives $X_1 = Y_1 X_2$ and
    $$Y_2 = X_1 + X_2 = Y_1 X_2 + X_2 = X_2(Y_1 + 1).$$
    Rearranging gives $X_2 = \frac{Y_2}{Y_1 + 1} \ (= H_2(Y_1,Y_2))$, and then $X_1 = Y_1 X_2 = \frac{Y_1 Y_2}{Y_1 + 1} \ (= H_1(Y_1,Y_2))$.
    Computing the Jacobian of $T^{-1}$, we get
    $$J(y_1,y_2) = \begin{vmatrix} \dfrac{\partial H_1}{\partial y_1} & \dfrac{\partial H_1}{\partial y_2} \\[4pt] \dfrac{\partial H_2}{\partial y_1} & \dfrac{\partial H_2}{\partial y_2} \end{vmatrix} = \begin{vmatrix} \dfrac{y_2}{(y_1+1)^2} & \dfrac{y_1}{y_1+1} \\[4pt] -\dfrac{y_2}{(y_1+1)^2} & \dfrac{1}{y_1+1} \end{vmatrix} = \frac{y_2}{(y_1+1)^3} + \frac{y_1 y_2}{(y_1+1)^3} = \frac{y_2}{(y_1+1)^2}.$$

Now,

$$A = \{(x_1,x_2) : f_{X_1,X_2}(x_1,x_2) > 0\} = \{(x_1,x_2) : x_1 > 0,\ x_2 > 0\}.$$

Therefore, $T(A) \subseteq \{(y_1,y_2) : y_1 > 0,\ y_2 > 0\}$. Since $x_1 > 0$ and $x_2 > 0$, $y_1 = x_1/x_2 > 0$. Furthermore, since $x_1 = \frac{y_1 y_2}{y_1 + 1} > 0$, we have $y_1 y_2 > 0$, which implies $y_2 > 0$. Therefore,

$$T(A) = \{(y_1,y_2) : y_1 > 0,\ y_2 > 0\}.$$

Consequently, the joint p.d.f. of $Y_1$ and $Y_2$, $f = f_{Y_1,Y_2}(y_1,y_2)$, is given by
$$f = f_{X_1,X_2}(H_1(y_1,y_2), H_2(y_1,y_2)) \, |J(y_1,y_2)| = f_{X_1,X_2}\!\left(\frac{y_1 y_2}{1+y_1}, \frac{y_2}{1+y_1}\right) \left|\frac{y_2}{(1+y_1)^2}\right| = \lambda^2 e^{-\lambda\left(\frac{y_1 y_2}{1+y_1} + \frac{y_2}{1+y_1}\right)} \frac{y_2}{(1+y_1)^2} = \frac{\lambda^2 e^{-\lambda y_2} y_2}{(1+y_1)^2}, \quad \text{if } y_1, y_2 > 0.$$

If either $y_1 < 0$ or $y_2 < 0$, then $f_{Y_1,Y_2}(y_1,y_2) = 0$.

  2. The p.d.f. of $Y_1$ is the marginal p.d.f. of $Y_1$ coming from the joint p.d.f. $f_{Y_1,Y_2}(y_1,y_2)$. Therefore, for $y_1 > 0$,
    $$f_{Y_1}(y_1) = \int_0^\infty \frac{\lambda^2 e^{-\lambda y_2} y_2}{(1+y_1)^2} \, dy_2 = \frac{1}{(1+y_1)^2} \int_0^\infty \lambda^2 y_2 e^{-\lambda y_2} \, dy_2 = \frac{1}{(1+y_1)^2}.$$
    (In the above integration remember that $\lambda^2 y_2 e^{-\lambda y_2}$ is the p.d.f. of ${\rm Gamma}(2,\lambda)$.)
    So,
    $$f_{Y_1}(y_1) = \begin{cases} \dfrac{1}{(1+y_1)^2} & \text{if } y_1 > 0, \\ 0 & \text{otherwise.} \end{cases}$$
    The distribution of $Y_1$ is an example of a probability distribution for which the expectation is not defined.

    Figure 14.2: Plot of the p.d.f. of $Y_1$.
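The marginal of $Y_1$ can be verified by simulation (a Python sketch, not part of the notes). Since $F_{Y_1}(y) = \int_0^y (1+t)^{-2}\,dt = \frac{y}{1+y}$, we should see $P(Y_1 \le 1) \approx \frac{1}{2}$ and $P(Y_1 \le 3) \approx \frac{3}{4}$, whatever the value of $\lambda$; the rate $\lambda = 1.5$ below is an arbitrary choice.

```python
import random

random.seed(3)
lam = 1.5  # any rate works: the ratio Y1 = X1/X2 has a lam-free distribution
n = 200_000

# Simulate Y1 = X1/X2 with X1, X2 i.i.d. Exp(lam).
y1 = [random.expovariate(lam) / random.expovariate(lam) for _ in range(n)]

# F_{Y1}(y) = y/(1 + y), so P(Y1 <= 1) = 1/2 and P(Y1 <= 3) = 3/4.
p1 = sum(1 for v in y1 if v <= 1.0) / n
p3 = sum(1 for v in y1 if v <= 3.0) / n
print(round(p1, 2), round(p3, 2))
```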


Note that one can extend the method of transformations to the case of $n$ random variables.

Student Exercise

Attempt the exercise below.


Let $X$ and $Y$ be independent random variables, each having probability density function
$$f(x) = \begin{cases} \lambda e^{-\lambda x} & \text{if } x > 0, \\ 0 & \text{otherwise,} \end{cases}$$
and let $U = X + Y$ and $V = X - Y$.

  1. Find the joint probability density function of $U$ and $V$.
  2. Hence derive the marginal probability density functions of $U$ and $V$.
  3. Are $U$ and $V$ independent? Justify your answer.
Solution to Exercise 14.1.
  1. Let the transformation $T$ be defined by $T(x,y) = (u,v)$, where $u = x + y$ and $v = x - y$. Then $x = \frac{1}{2}(u+v)$ and $y = \frac{1}{2}(u-v)$, so that
    $$J(u,v) = \begin{vmatrix} \frac{1}{2} & \frac{1}{2} \\[2pt] \frac{1}{2} & -\frac{1}{2} \end{vmatrix} = -\frac{1}{2}.$$
    Since $X$ and $Y$ are independent,
    $$f_{X,Y}(x,y) = f_X(x) f_Y(y) = \begin{cases} \lambda^2 e^{-\lambda(x+y)} & \text{if } x, y > 0, \\ 0 & \text{otherwise.} \end{cases}$$
    Thus, since $T$ is one-to-one,
    $$f_{U,V}(u,v) = f_{X,Y}(x(u,v), y(u,v)) \, |J(u,v)| = \begin{cases} \frac{1}{2}\lambda^2 e^{-\lambda u} & \text{if } u + v > 0,\ u - v > 0, \\ 0 & \text{otherwise} \end{cases} = \begin{cases} \frac{1}{2}\lambda^2 e^{-\lambda u} & \text{if } u > 0,\ -u < v < u, \\ 0 & \text{otherwise.} \end{cases}$$
    The region over which $f_{U,V}(u,v) > 0$ is shown below.
  2. The marginal p.d.f.s of $U$ and $V$ are respectively
    $$f_U(u) = \int_{-\infty}^{\infty} f_{U,V}(u,v) \, dv = \begin{cases} \int_{-u}^{u} \frac{1}{2}\lambda^2 e^{-\lambda u} \, dv = \lambda^2 u e^{-\lambda u} & \text{if } u > 0, \\ 0 & \text{otherwise;} \end{cases}$$
    $$f_V(v) = \int_{-\infty}^{\infty} f_{U,V}(u,v) \, du = \int_{|v|}^{\infty} \frac{1}{2}\lambda^2 e^{-\lambda u} \, du = \frac{1}{2}\lambda e^{-\lambda |v|}, \quad v \in \mathbb{R}.$$
    Note that again we have $U = X + Y$ is the sum of two independent ${\rm Exp}(\lambda)$ random variables, so $U \sim {\rm Gamma}(2,\lambda)$.
  3. Clearly, $f_{U,V}(u,v) = f_U(u) f_V(v)$ does not hold for all $u, v \in \mathbb{R}$, so $U$ and $V$ are not independent.
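The failure of independence in part 3 can also be seen by simulation (a Python sketch, illustrative only): since $|V| = |X - Y| < X + Y = U$ always, the event $\{U \le 1,\ |V| > 1\}$ has probability $0$, while the product of the corresponding marginal probabilities is strictly positive.

```python
import random

random.seed(4)
lam = 1.0  # illustrative rate
n = 200_000
uv = []
for _ in range(n):
    x = random.expovariate(lam)
    y = random.expovariate(lam)
    uv.append((x + y, x - y))

# |V| = |X - Y| < X + Y = U always, so {U <= 1, |V| > 1} is impossible,
# yet P(U <= 1) > 0 and P(|V| > 1) > 0: the product rule fails,
# so U and V cannot be independent.
p_joint = sum(1 for u, v in uv if u <= 1.0 and abs(v) > 1.0) / n
p_u = sum(1 for u, v in uv if u <= 1.0) / n
p_v = sum(1 for u, v in uv if abs(v) > 1.0) / n
print(p_joint, round(p_u * p_v, 3))
```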