Chapter 14 Transformations of random variables

14.1 Introduction

In this section we will consider transformations of random variables. Transformations are useful for:

  • Simulating random variables.
For example, computers can generate pseudo-random numbers which represent draws from the U(0,1) distribution, and transformations enable us to generate random samples from a wide range of more general (and exciting) probability distributions.
  • Understanding functions of random variables.
    Suppose that £P is invested in an account with continuously compounding interest rate $r$. Then the amount £A in the account after $t$ years is
    $$A = Pe^{rt}.$$
    Suppose that $P = 1000$ and $r$ is a realisation of a continuous random variable $R$ with p.d.f. $f(r)$. What is the p.d.f. of the amount $A$ after one year? That is, what is the p.d.f. of
    $$A = 1000e^R?$$
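Both uses can be illustrated with a short simulation sketch (assuming Python with NumPy; the distribution chosen for $R$ below, $N(0.05, 0.01^2)$, is purely illustrative and not part of the example above):

```python
import numpy as np

rng = np.random.default_rng(1)

# Inverse-transform sampling: if U ~ U(0,1) and F is a continuous, strictly
# increasing c.d.f., then F^{-1}(U) has c.d.f. F.  For Exp(lambda),
# F^{-1}(u) = -log(1 - u) / lambda.
lam = 2.0
u = rng.uniform(size=100_000)
x = -np.log(1 - u) / lam        # draws from Exp(lambda)
print(x.mean())                 # close to 1/lambda = 0.5

# The investment example: A = 1000 * exp(R).  The distribution of R is an
# assumption here, chosen purely for illustration: R ~ N(0.05, 0.01^2).
r = rng.normal(0.05, 0.01, size=100_000)
a = 1000 * np.exp(r)
print(a.mean())                 # close to 1000 * exp(0.05 + 0.01**2 / 2)
```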

We will consider both univariate and bivariate transformations, with the methodology for bivariate transformations extending to more general multivariate transformations.

14.2 Univariate case

Suppose that X is a continuous random variable with p.d.f. f(x). Let g be a continuous function, then Y=g(X) is a continuous random variable. Our aim is to find the p.d.f. of Y.

We present the distribution function method which has two steps:

  1. Compute the c.d.f. of $Y$, that is
    $$F_Y(y) = P(Y \le y).$$
  2. Derive the p.d.f. of $Y$, $f_Y(y)$, using the fact that
    $$f_Y(y) = \frac{dF_Y(y)}{dy}.$$

Square of a Standard Normal

Let $Z \sim N(0,1)$. Find the p.d.f. of $Y = Z^2$.

For $y > 0$, the c.d.f. of $Y = Z^2$ can be expressed in terms of the c.d.f. of $Z$:
$$F_Y(y) = P(Y \le y) = P(Z^2 \le y) = P(-\sqrt{y} \le Z \le \sqrt{y}) = P(Z \le \sqrt{y}) - P(Z \le -\sqrt{y}) = F_Z(\sqrt{y}) - F_Z(-\sqrt{y}).$$
Note that if we want a specific formula for $F_Y$, then we can evaluate the resulting c.d.f.s. In this case:
$$F_Z(\sqrt{y}) = \int_{-\infty}^{\sqrt{y}} \frac{1}{\sqrt{2\pi}} e^{-z^2/2} \, dz.$$
Therefore, using the chain rule for differentiation,
$$f_Y(y) = \frac{dF_Y(y)}{dy} = \frac{d}{dy} F_Z(\sqrt{y}) - \frac{d}{dy} F_Z(-\sqrt{y}) = f_Z(\sqrt{y}) \frac{1}{2\sqrt{y}} + f_Z(-\sqrt{y}) \frac{1}{2\sqrt{y}},$$
since $\frac{d}{dy}\sqrt{y} = \frac{1}{2}y^{-1/2}$ and $\frac{d}{dy}(-\sqrt{y}) = -\frac{1}{2}y^{-1/2}$.
Now $f_Z(z) = \frac{d}{dz}F_Z(z) = \frac{1}{\sqrt{2\pi}} e^{-z^2/2}$, so
$$f_Y(y) = \frac{1}{\sqrt{2\pi}} e^{-y/2} \frac{1}{2\sqrt{y}} + \frac{1}{\sqrt{2\pi}} e^{-y/2} \frac{1}{2\sqrt{y}} = \frac{1}{2\sqrt{2\pi y}} e^{-y/2} + \frac{1}{2\sqrt{2\pi y}} e^{-y/2} = \frac{1}{\sqrt{2\pi y}} e^{-y/2}.$$
Therefore $Y$ has probability density function:
$$f_Y(y) = \begin{cases} \dfrac{y^{-1/2}}{\sqrt{2\pi}} e^{-y/2} & y > 0, \\ 0 & \text{otherwise.} \end{cases}$$

Thus $Y \sim \text{Gamma}\left(\frac{1}{2}, \frac{1}{2}\right)$, otherwise known as a Chi-squared distribution with 1 degree of freedom.
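As a sanity check on this result, a short Monte Carlo sketch (assuming Python with NumPy) can compare the empirical distribution of $Z^2$ with the derived c.d.f.; for instance, $P(Y \le 1) = F_Z(1) - F_Z(-1) = 2\Phi(1) - 1 \approx 0.6827$.

```python
import math
import numpy as np

rng = np.random.default_rng(0)
z = rng.standard_normal(200_000)
y = z ** 2

# Exact value of P(Y <= 1) from the derivation:
# P(-1 <= Z <= 1) = 2 * Phi(1) - 1 = erf(1 / sqrt(2)).
exact = math.erf(1 / math.sqrt(2))
est = (y <= 1).mean()
print(est, exact)    # both close to 0.6827
```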

14.3 Bivariate case

Suppose that $X_1$ and $X_2$ are continuous random variables with joint p.d.f. given by $f_{X_1,X_2}(x_1,x_2)$. Let $(Y_1,Y_2) = T(X_1,X_2)$. We want to find the joint p.d.f. of $Y_1$ and $Y_2$.

Suppose $T : (x_1,x_2) \mapsto (y_1,y_2)$ is a one-to-one transformation in some region of $\mathbb{R}^2$, such that $x_1 = H_1(y_1,y_2)$ and $x_2 = H_2(y_1,y_2)$. The Jacobian of $T^{-1} = (H_1,H_2)$ is defined by
$$J(y_1,y_2) = \begin{vmatrix} \dfrac{\partial H_1}{\partial y_1} & \dfrac{\partial H_1}{\partial y_2} \\ \dfrac{\partial H_2}{\partial y_1} & \dfrac{\partial H_2}{\partial y_2} \end{vmatrix}.$$

Transformation of random variables.

Let $(Y_1,Y_2) = T(X_1,X_2)$ be some transformation of random variables. If $T$ is a one-to-one function and the Jacobian of $T^{-1}$ is non-zero in $T(A)$, where
$$A = \{(x_1,x_2) : f_{X_1,X_2}(x_1,x_2) > 0\},$$
then the joint p.d.f. of $Y_1$ and $Y_2$, $f_{Y_1,Y_2}(y_1,y_2)$, is given by
$$f_{Y_1,Y_2}(y_1,y_2) = f_{X_1,X_2}(H_1(y_1,y_2), H_2(y_1,y_2)) \, |J(y_1,y_2)|$$

if $(y_1,y_2) \in T(A)$, and $0$ otherwise.
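Jacobian computations like this can be checked with a computer algebra system. The sketch below (assuming Python with SymPy is available) computes $J$ for the inverse of the map $(x_1,x_2) \mapsto (x_1+x_2, x_1-x_2)$:

```python
import sympy as sp

y1, y2 = sp.symbols('y1 y2')

# Inverse map (H1, H2) of T(x1, x2) = (x1 + x2, x1 - x2):
# x1 = (y1 + y2)/2, x2 = (y1 - y2)/2.
H = sp.Matrix([(y1 + y2) / 2, (y1 - y2) / 2])

# Jacobian J(y1, y2) = determinant of the matrix of partial derivatives.
J = H.jacobian([y1, y2]).det()
print(J)  # -1/2
```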

Transformation of uniforms.

Let $X_1 \sim U(0,1)$, $X_2 \sim U(0,1)$ and suppose that $X_1$ and $X_2$ are independent. Let
$$Y_1 = X_1 + X_2, \qquad Y_2 = X_1 - X_2.$$

Find the joint p.d.f. of Y1 and Y2.

The joint p.d.f. of $X_1$ and $X_2$ is
$$f_{X_1,X_2}(x_1,x_2) = f_{X_1}(x_1) f_{X_2}(x_2) = \begin{cases} 1, & \text{if } 0 \le x_1 \le 1 \text{ and } 0 \le x_2 \le 1, \\ 0, & \text{otherwise.} \end{cases}$$
Now $T : (x_1,x_2) \mapsto (y_1,y_2)$ is defined by
$$y_1 = x_1 + x_2, \qquad y_2 = x_1 - x_2.$$
Hence,
$$x_1 = H_1(y_1,y_2) = \frac{y_1 + y_2}{2}, \qquad x_2 = H_2(y_1,y_2) = \frac{y_1 - y_2}{2}.$$
The Jacobian of $T^{-1}$ is
$$J(y_1,y_2) = \begin{vmatrix} \frac{\partial H_1}{\partial y_1} & \frac{\partial H_1}{\partial y_2} \\ \frac{\partial H_2}{\partial y_1} & \frac{\partial H_2}{\partial y_2} \end{vmatrix} = \begin{vmatrix} \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} \end{vmatrix} = -\frac{1}{2}.$$
Since $A = \{(x_1,x_2) : 0 \le x_1 \le 1, \, 0 \le x_2 \le 1\}$ and since the lines $x_1 = 0$, $x_1 = 1$, $x_2 = 0$ and $x_2 = 1$ map to the lines $y_1 + y_2 = 0$, $y_1 + y_2 = 2$, $y_1 - y_2 = 0$ and $y_1 - y_2 = 2$ respectively, it can be checked that
$$T(A) = \{(y_1,y_2) : 0 \le y_1 + y_2 \le 2, \, 0 \le y_1 - y_2 \le 2\}.$$

Figure 14.1: Transformation

Thus,
$$f_{Y_1,Y_2}(y_1,y_2) = \begin{cases} \frac{1}{2} f_{X_1,X_2}(H_1(y_1,y_2), H_2(y_1,y_2)), & \text{if } (y_1,y_2) \in T(A), \\ 0, & \text{otherwise} \end{cases} = \begin{cases} \frac{1}{2}, & \text{if } 0 \le y_1 + y_2 \le 2 \text{ and } 0 \le y_1 - y_2 \le 2, \\ 0, & \text{otherwise.} \end{cases}$$
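A quick Monte Carlo check of this density (assuming Python with NumPy): the triangle $\{y_1 \le 1, \, y_2 \ge 0\}$ has area $\frac{1}{2}$ inside $T(A)$, so under the constant density $\frac{1}{2}$ its probability should be $\frac{1}{4}$.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 400_000
x1 = rng.uniform(size=n)
x2 = rng.uniform(size=n)
y1, y2 = x1 + x2, x1 - x2

# T(A) is the square with vertices (0,0), (1,1), (2,0), (1,-1), on which the
# joint density of (Y1, Y2) is the constant 1/2.  The triangle
# {y1 <= 1, y2 >= 0} has area 1/2, hence probability 1/2 * 1/2 = 1/4.
est = ((y1 <= 1) & (y2 >= 0)).mean()
print(est)  # close to 0.25
```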


Transformation of Exponentials.

Suppose that $X_1$ and $X_2$ are i.i.d. exponential random variables with parameter $\lambda$. Let $Y_1 = X_1/X_2$ and $Y_2 = X_1 + X_2$.

  1. Find the joint p.d.f. of Y1 and Y2.
  2. Find the p.d.f. of Y1.

Attempt Exercise 1: Transformation of Exponentials and then watch Video 22 for the solutions.

Video 22: Transformation of Exponentials

Alternatively the solutions are available:

Solution to Exercise 1

Remember from previous results that $Y_2 = X_1 + X_2 \sim \text{Gamma}(2,\lambda)$.

  1. Since $X_1$ and $X_2$ are i.i.d. exponential random variables with parameter $\lambda$, the joint p.d.f. of $X_1$ and $X_2$ is given by
    $$f_{X_1,X_2}(x_1,x_2) = f_{X_1}(x_1) f_{X_2}(x_2) = \begin{cases} \lambda e^{-\lambda x_1} \cdot \lambda e^{-\lambda x_2}, & \text{if } x_1, x_2 > 0, \\ 0, & \text{otherwise} \end{cases} = \begin{cases} \lambda^2 e^{-\lambda(x_1 + x_2)}, & \text{if } x_1, x_2 > 0, \\ 0, & \text{otherwise.} \end{cases}$$
    Solving simultaneously for $X_1$ and $X_2$ in terms of $Y_1$ and $Y_2$ gives $X_1 = Y_1 X_2$ and
    $$Y_2 = X_1 + X_2 = Y_1 X_2 + X_2 = X_2(Y_1 + 1).$$
    Rearranging gives $X_2 = \frac{Y_2}{Y_1 + 1} \, (= H_2(Y_1,Y_2))$, and then $X_1 = Y_1 X_2 = \frac{Y_1 Y_2}{Y_1 + 1} \, (= H_1(Y_1,Y_2))$.
    Computing the Jacobian of $T^{-1}$, we get
    $$J(y_1,y_2) = \begin{vmatrix} \frac{\partial H_1}{\partial y_1} & \frac{\partial H_1}{\partial y_2} \\ \frac{\partial H_2}{\partial y_1} & \frac{\partial H_2}{\partial y_2} \end{vmatrix} = \begin{vmatrix} \frac{y_2}{(y_1+1)^2} & \frac{y_1}{y_1+1} \\ -\frac{y_2}{(y_1+1)^2} & \frac{1}{y_1+1} \end{vmatrix} = \frac{y_2}{(y_1+1)^3} + \frac{y_1 y_2}{(y_1+1)^3} = \frac{y_2}{(y_1+1)^2}.$$

Now,

$$A = \{(x_1,x_2) : f_{X_1,X_2}(x_1,x_2) > 0\} = \{(x_1,x_2) : x_1 > 0, \, x_2 > 0\}.$$

Therefore $T(A) \subseteq \{(y_1,y_2) : y_1 > 0, \, y_2 > 0\}$, since $x_1 > 0$ and $x_2 > 0$ give $y_1 = x_1/x_2 > 0$ and $y_2 = x_1 + x_2 > 0$. Conversely, for any $y_1 > 0$ and $y_2 > 0$ we have $x_1 = \frac{y_1 y_2}{y_1 + 1} > 0$ and $x_2 = \frac{y_2}{y_1 + 1} > 0$. Therefore,

$$T(A) = \{(y_1,y_2) : y_1 > 0, \, y_2 > 0\}.$$

Consequently, the joint p.d.f. of $Y_1$ and $Y_2$ is given, for $y_1, y_2 > 0$, by

$$f_{Y_1,Y_2}(y_1,y_2) = f_{X_1,X_2}(H_1(y_1,y_2), H_2(y_1,y_2)) \, |J(y_1,y_2)| = f_{X_1,X_2}\left(\frac{y_1 y_2}{1+y_1}, \frac{y_2}{1+y_1}\right) \left|\frac{y_2}{(1+y_1)^2}\right| = \lambda^2 e^{-\lambda\left(\frac{y_1 y_2}{1+y_1} + \frac{y_2}{1+y_1}\right)} \frac{y_2}{(1+y_1)^2} = \frac{\lambda^2 e^{-\lambda y_2} y_2}{(1+y_1)^2}.$$

If either y1<0 or y2<0, then fY1,Y2(y1,y2)=0.
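The algebra above can be verified symbolically; a sketch assuming Python with SymPy is available:

```python
import sympy as sp

y1, y2, lam = sp.symbols('y1 y2 lambda', positive=True)

H1 = y1 * y2 / (y1 + 1)    # x1 = H1(y1, y2)
H2 = y2 / (y1 + 1)         # x2 = H2(y1, y2)

# Jacobian of T^{-1}: should simplify to y2 / (y1 + 1)**2.
J = sp.Matrix([H1, H2]).jacobian([y1, y2]).det()
print(sp.simplify(J))

# Joint p.d.f. via the transformation theorem: should simplify to
# lambda**2 * y2 * exp(-lambda * y2) / (1 + y1)**2.
f = lam ** 2 * sp.exp(-lam * (H1 + H2)) * sp.Abs(J)
print(sp.simplify(f))
```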

  2. The p.d.f. of $Y_1$ is the marginal p.d.f. of $Y_1$ coming from the joint p.d.f. $f_{Y_1,Y_2}(y_1,y_2)$. Therefore, for $y_1 > 0$,
    $$f_{Y_1}(y_1) = \int_0^\infty \frac{\lambda^2 e^{-\lambda y_2} y_2}{(1+y_1)^2} \, dy_2 = \frac{1}{(1+y_1)^2} \int_0^\infty \lambda^2 y_2 e^{-\lambda y_2} \, dy_2 = \frac{1}{(1+y_1)^2}.$$
    (In the above integration remember that $\lambda^2 y_2 e^{-\lambda y_2}$ is the p.d.f. of $\text{Gamma}(2,\lambda)$, so it integrates to 1.)
    So,
    $$f_{Y_1}(y_1) = \begin{cases} \dfrac{1}{(1+y_1)^2} & \text{if } y_1 > 0, \\ 0 & \text{otherwise.} \end{cases}$$
    The distribution of $Y_1$ is an example of a probability distribution which does not have a finite expectation.

    Figure 14.2: Plot of the p.d.f. of Y1.
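The marginal density of $Y_1$ integrates to the c.d.f. $F_{Y_1}(y) = y/(1+y)$ for $y > 0$, which is easy to check by simulation (assuming Python with NumPy); note that the answer does not depend on $\lambda$.

```python
import numpy as np

rng = np.random.default_rng(3)
lam = 1.5                      # the marginal of Y1 should not depend on lambda
n = 300_000
x1 = rng.exponential(1 / lam, size=n)
x2 = rng.exponential(1 / lam, size=n)
y1 = x1 / x2

# f_{Y1}(y) = 1/(1+y)^2 for y > 0 integrates to the c.d.f. F(y) = y/(1+y).
for y in (0.5, 1.0, 4.0):
    print(y, (y1 <= y).mean(), y / (1 + y))   # empirical vs theoretical
```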


Note that one can extend the method of transformations to the case of n random variables.

Student Exercise

Attempt the exercise below.

Question.

Let $X$ and $Y$ be independent random variables, each having probability density function
$$f(x) = \begin{cases} \lambda e^{-\lambda x} & \text{if } x > 0, \\ 0 & \text{otherwise,} \end{cases}$$
and let $U = X + Y$ and $V = X - Y$.

  1. Find the joint probability density function of U and V.
  2. Hence derive the marginal probability density functions of U and V.
  3. Are U and V independent? Justify your answer.
Solution to Question.
  1. Let the transformation $T$ be defined by $T(x,y) = (u,v)$, where $u = x + y$ and $v = x - y$. Then $x = \frac{1}{2}(u+v)$ and $y = \frac{1}{2}(u-v)$, so that
    $$J(u,v) = \begin{vmatrix} \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} \end{vmatrix} = -\frac{1}{2}.$$
    Since $X$ and $Y$ are independent,
    $$f_{X,Y}(x,y) = f_X(x) f_Y(y) = \begin{cases} \lambda^2 e^{-\lambda(x+y)} & \text{if } x, y > 0, \\ 0 & \text{otherwise.} \end{cases}$$
    Thus, since $T$ is one-to-one,
    $$f_{U,V}(u,v) = f_{X,Y}(x(u,v), y(u,v)) \, |J(u,v)| = \begin{cases} \frac{1}{2}\lambda^2 e^{-\lambda u} & \text{if } u + v > 0, \, u - v > 0, \\ 0 & \text{otherwise} \end{cases} = \begin{cases} \frac{1}{2}\lambda^2 e^{-\lambda u} & \text{if } u > 0, \, -u < v < u, \\ 0 & \text{otherwise.} \end{cases}$$
    The region over which fU,V(u,v)>0 is shown below.
  2. The marginal p.d.f.s of $U$ and $V$ are respectively
    $$f_U(u) = \int_{-\infty}^{\infty} f_{U,V}(u,v) \, dv = \begin{cases} \int_{-u}^{u} \frac{1}{2}\lambda^2 e^{-\lambda u} \, dv = \lambda^2 u e^{-\lambda u} & \text{if } u > 0, \\ 0 & \text{otherwise;} \end{cases}$$
    $$f_V(v) = \int_{-\infty}^{\infty} f_{U,V}(u,v) \, du = \int_{|v|}^{\infty} \frac{1}{2}\lambda^2 e^{-\lambda u} \, du = \frac{1}{2}\lambda e^{-\lambda |v|}, \quad v \in \mathbb{R}.$$
    Note that again $U = X + Y$ is the sum of two independent $\text{Exp}(\lambda)$ random variables, so $U \sim \text{Gamma}(2,\lambda)$.
  3. Clearly, $f_{U,V}(u,v) = f_U(u) f_V(v)$ does not hold for all $u, v \in \mathbb{R}$, so $U$ and $V$ are not independent.
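The dependence (and the two marginals) can be checked by simulation; a sketch assuming Python with NumPy, taking $\lambda = 1$:

```python
import numpy as np

rng = np.random.default_rng(11)
lam = 1.0
n = 200_000
x = rng.exponential(1 / lam, size=n)
y = rng.exponential(1 / lam, size=n)
u, v = x + y, x - y

# The support constraint -u < v < u alone rules out independence:
print((np.abs(v) <= u).all())        # True: |V| <= U with probability 1

# Marginals match the derivation: U ~ Gamma(2, lambda) has mean 2/lambda,
# and V has the Laplace density (lambda/2) e^{-lambda |v|}, so E|V| = 1/lambda.
print(u.mean(), np.abs(v).mean())    # close to 2.0 and 1.0
```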