3.2 Homework Problems (Winter 2020)

Exercise 3.12 (Homework 1, Problem 1) Let \(X\sim exp(\lambda)\), where \(E(X)=\frac{1}{\lambda}\). What is the p.m.f. of \(Y=\lfloor X\rfloor\) (the floor of \(X\))? Do you recognize it as a distribution that you have studied in the past?

Proof. For nonnegative integer \(a\), and because \(X\) is a continuous random variable, we have \[\begin{equation} \begin{split} Pr(Y=a)&=Pr(X<a+1)-Pr(X<a)\\ &=(1-e^{-\lambda(a+1)})-(1-e^{-\lambda a})\\ &=e^{-\lambda a}(1-e^{-\lambda}) \end{split} \tag{3.43} \end{equation}\] and if \(a\) is a negative integer, then \(Pr(Y=a)=0\). Thus, the pmf of \(Y\) is \[\begin{equation} p_Y(y)=\left\{ \begin{aligned} &e^{-\lambda y}(1-e^{-\lambda}) & y\in\{0,1,2,\cdots\} \\ & 0 & \text{otherwise} \end{aligned} \right. \tag{3.44} \end{equation}\] From (3.44) we recognize that \(Y\) has a geometric distribution on \(\{0,1,2,\cdots\}\) with success probability \(p=1-e^{-\lambda}\), so that \(Pr(Y=a)=(1-p)^ap\).
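As a quick numerical sanity check (a minimal sketch; the rate \(\lambda=0.7\) and the sample size are arbitrary choices), we can compare the empirical pmf of \(\lfloor X\rfloor\) against R's built-in geometric pmf with \(p=1-e^{-\lambda}\):

```r
# Compare the empirical pmf of floor(X), X ~ Exp(lambda), with the
# geometric pmf e^(-lambda * a) * (1 - e^(-lambda)) derived in (3.44).
set.seed(1)
lambda <- 0.7                         # arbitrary rate, for illustration
y <- floor(rexp(1e5, rate = lambda))
a <- 0:5
empirical   <- sapply(a, function(k) mean(y == k))
theoretical <- dgeom(a, prob = 1 - exp(-lambda))  # (1-p)^a * p with p = 1 - e^(-lambda)
round(rbind(empirical, theoretical), 4)
```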

Exercise 3.13 (Homework 1, Problem 2) Let \(X_1\) and \(X_2\) be two independent random variables such that \(X_i\sim Gamma(a_i,b)\) for any \(a_1,a_2,b>0\). Define \(Y=\frac{X_1}{X_1+X_2}\) and \(Z=X_1+X_2\).

  1. Find the joint pdf for \(Y\) and \(Z\) and show that these two random variables are independent.

  2. Find the marginal pdf of \(Z\). Do you recognize this pdf as belonging to some family that you know?

  3. Find the marginal pdf of \(Y\). Do you recognize this pdf as belonging to some family that you know?

  4. Compute \(E(Y^k)\) for any \(k>0\).

  5. What does this result imply if \(a_1=a_2=b=1\)?

Proof. (a) Since \(X_1\) and \(X_2\) are independent \(Ga(a_i,b)\) random variables, the joint pdf is \[\begin{equation} \begin{split} f(x_1,x_2)&=\frac{b^{a_1}}{\Gamma(a_1)}x_1^{a_1-1}exp(-bx_1)\times\frac{b^{a_2}}{\Gamma(a_2)}x_2^{a_2-1}exp(-bx_2)\\ &=\frac{b^{a_1+a_2}}{\Gamma(a_1)\Gamma(a_2)}x_1^{a_1-1}exp(-bx_1)x_2^{a_2-1}exp(-bx_2) \end{split} \tag{3.45} \end{equation}\] for \(x_1>0\) and \(x_2>0\). Define the variable transformation in (3.46): \[\begin{equation} \left\{ \begin{aligned} & y=\frac{x_1}{x_1+x_2} \\ & z=x_1+x_2 \end{aligned} \right. \tag{3.46} \end{equation}\] Then we have \[\begin{equation} \left\{ \begin{aligned} & x_1=yz \\ & x_2=(1-y)z \end{aligned} \right. \tag{3.47} \end{equation}\] The Jacobian corresponding to (3.47) has determinant \[\begin{equation} |J|=\begin{vmatrix} z & y\\ -z & 1-y \end{vmatrix}=z \tag{3.48} \end{equation}\] Thus, the joint pdf of \(Y\) and \(Z\) can be written as \[\begin{equation} \begin{split} f(y,z)&=\frac{b^{a_1+a_2}}{\Gamma(a_1)\Gamma(a_2)}(yz)^{a_1-1}exp(-byz)((1-y)z)^{a_2-1}exp(-b(1-y)z)z\\ &=y^{a_1-1}(1-y)^{a_2-1}\frac{b^{a_1+a_2}}{\Gamma(a_1)\Gamma(a_2)}z^{a_1+a_2-1}exp(-bz)\\ &=f(y)\cdot f(z) \end{split} \tag{3.49} \end{equation}\] for \(0<y<1\) and \(z>0\). Since the joint pdf factorizes into a function of \(y\) alone times a function of \(z\) alone, the random variables \(Y\) and \(Z\) are independent.

(b) To get the marginal pdf of \(Z\), we integrate \(y\) out of the joint pdf: \[\begin{equation} \begin{split} f_Z(z)&=\int_0^1y^{a_1-1}(1-y)^{a_2-1}\frac{b^{a_1+a_2}}{\Gamma(a_1)\Gamma(a_2)}z^{a_1+a_2-1}exp(-bz)dy\\ &=\frac{b^{a_1+a_2}}{\Gamma(a_1)\Gamma(a_2)}z^{a_1+a_2-1}exp(-bz) \frac{\Gamma(a_1)\Gamma(a_2)}{\Gamma(a_1+a_2)}\\ &=\frac{b^{a_1+a_2}}{\Gamma(a_1+a_2)}z^{a_1+a_2-1}exp(-bz) \end{split} \tag{3.50} \end{equation}\] for \(z>0\). The second equality in (3.50) uses the fact that the integral is the kernel of a beta distributed random variable with parameters \(a_1\) and \(a_2\). From (3.50) it is clear that the marginal distribution of \(Z\) is \(Ga(a_1+a_2,b)\).

(c) Similarly, integrating \(z\) out of the joint pdf gives the marginal pdf of \(Y\): \[\begin{equation} \begin{split} f_Y(y)&=\int_0^{\infty}y^{a_1-1}(1-y)^{a_2-1}\frac{b^{a_1+a_2}}{\Gamma(a_1)\Gamma(a_2)}z^{a_1+a_2-1}exp(-bz)dz\\ &=y^{a_1-1}(1-y)^{a_2-1}\frac{b^{a_1+a_2}}{\Gamma(a_1)\Gamma(a_2)} \frac{\Gamma(a_1+a_2)}{b^{a_1+a_2}}\\ &=\frac{y^{a_1-1}(1-y)^{a_2-1}}{B(a_1,a_2)} \end{split} \tag{3.51} \end{equation}\] for \(y\in(0,1)\). The second equality in (3.51) uses the fact that the integral is the kernel of a gamma distributed random variable with parameters \(a_1+a_2\) and \(b\). From (3.51) it is clear that the marginal distribution of \(Y\) is \(Beta(a_1,a_2)\).

(d) By definition, \[\begin{equation} \begin{split} E(Y^k)&=\int_0^1y^k\frac{y^{a_1-1}(1-y)^{a_2-1}}{B(a_1,a_2)}dy\\ &=\frac{1}{B(a_1,a_2)}\int_0^1y^{k+a_1-1}(1-y)^{a_2-1}dy\\ &=\frac{B(a_1+k,a_2)}{B(a_1,a_2)} \end{split} \tag{3.52} \end{equation}\] for \(k>0\). The last equality uses the fact that the integral is the kernel of a beta distributed random variable with parameters \(a_1+k\) and \(a_2\).

(e) If \(a_1=a_2=b=1\), then each \(X_i\) collapses to an exponential distribution with parameter \(\lambda=1\). Thus, the marginal distribution of \(Y\) is \(Beta(1,1)\), which is just the uniform distribution on \((0,1)\), and \(Z\), the sum of two independent \(Exp(1)\) random variables, has an Erlang distribution with parameters \(k=2\) and \(\lambda=1\). The result generalizes to \(n\) independent random variables \(X_i\sim Ga(1,1)=Exp(1)\): the proportion of \(X_i\) in the total \(\sum_{i=1}^nX_i\) follows a \(Beta(1,n-1)\) distribution, and the sum \(\sum_{i=1}^nX_i\) has an Erlang distribution with parameters \(k=n\) and \(\lambda=1\) (the same as \(Gamma(n,1)\)).
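Both marginals and the independence claim are easy to check by simulation; below is a minimal sketch (the parameter values \(a_1=2\), \(a_2=3\), \(b=1.5\) are arbitrary):

```r
# Check: Y = X1/(X1+X2) ~ Beta(a1, a2), Z = X1+X2 ~ Gamma(a1+a2, b),
# with Y and Z (approximately) uncorrelated, as independence implies.
set.seed(2)
a1 <- 2; a2 <- 3; b <- 1.5            # arbitrary parameters
x1 <- rgamma(1e5, shape = a1, rate = b)
x2 <- rgamma(1e5, shape = a2, rate = b)
y <- x1 / (x1 + x2)
z <- x1 + x2
ks.test(y, "pbeta", a1, a2)$statistic        # small => matches Beta(a1, a2)
ks.test(z, "pgamma", a1 + a2, b)$statistic   # small => matches Gamma(a1+a2, b)
cor(y, z)                                    # near 0, consistent with independence
```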

Exercise 3.14 (Homework 1, Problem 3) Consider three independent random variables \(X_1,X_2\), and \(X_3\) such that \(X_i\stackrel{ind}{\sim} Gamma(a_i,b)\), \(i=1,2,3\). Let \[\begin{equation} \mathbf{Y}=(Y_1,Y_2,Y_3)=(\frac{X_1}{X_1+X_2+X_3},\frac{X_2}{X_1+X_2+X_3},\frac{X_3}{X_1+X_2+X_3}) \tag{3.53} \end{equation}\]

  1. Show that \(\mathbf{Y}\sim Dirichlet(a_1,a_2,a_3)\), a Dirichlet distribution.

  2. How can this result be used to generate random variables according to a Dirichlet distribution? Write a simple function in R that takes as inputs \(n\), the number of trivariate vectors to be generated, and \(\mathbf{a}=(a_1,a_2,a_3)\), and generates an \(n\times 3\) matrix whose rows correspond to independent samples from a Dirichlet distribution with parameter \((a_1,a_2,a_3)\).

Use each of \(\mathbf{a}=(0.01,0.01,0.01)\), \((100,100,100)\), and \((3,5,10)\), and comment on how the density of \(\mathbf{Y}\) changes with \(\mathbf{a}\).

Proof. (a) Since \(X_i\stackrel{ind.}{\sim}Ga(a_i,b)\), the joint pdf is \[\begin{equation} f(x_1,x_2,x_3)=\prod_{i=1}^3\frac{b^{a_i}}{\Gamma(a_i)}x_i^{a_i-1}exp(-bx_i) \end{equation}\] Define the variable transformation in (3.54): \[\begin{equation} \left\{ \begin{aligned} & y_1=\frac{x_1}{x_1+x_2+x_3} \\ & y_2=\frac{x_2}{x_1+x_2+x_3} \\ & z=x_1+x_2+x_3 \end{aligned} \right. \tag{3.54} \end{equation}\] Then we have \[\begin{equation} \left\{ \begin{aligned} & x_1=y_1z \\ & x_2=y_2z \\ & x_3=(1-y_1-y_2)z \end{aligned} \right. \tag{3.55} \end{equation}\] The Jacobian corresponding to (3.55) has determinant \[\begin{equation} \begin{split} |J|&=\begin{vmatrix} z & 0 & y_1\\ 0 & z & y_2\\ -z & -z & 1-y_1-y_2 \end{vmatrix}\\ &=z^2(1-y_1-y_2)+z^2y_1+z^2y_2=z^2 \end{split} \tag{3.56} \end{equation}\] Thus, the joint pdf of \(Y_1\), \(Y_2\) and \(Z\) can be written as \[\begin{equation} \begin{split} f(y_1,y_2,z)&=\frac{b^{\sum_{i=1}^3a_i}}{\prod_{i=1}^3\Gamma(a_i)}[(y_1z)^{a_1-1}exp(-by_1z)(y_2z)^{a_2-1}exp(-by_2z)\cdot\\ &((1-y_1-y_2)z)^{a_3-1}exp(-b(1-y_1-y_2)z)]z^2 \end{split} \tag{3.57} \end{equation}\] for \(0<y_1<1,0<y_2<1,0<y_1+y_2<1\) and \(z>0\). Denoting \(y_3=1-y_1-y_2\) and integrating with respect to \(z\), we have \[\begin{equation} \begin{split} f(y_1,y_2,y_3)&=\int_{z=0}^{\infty}\frac{b^{\sum_{i=1}^3a_i}}{\prod_{i=1}^3\Gamma(a_i)}[(y_1)^{a_1-1}(y_2)^{a_2-1}(y_3)^{a_3-1}z^{a_1+a_2+a_3-1}exp(-bz)]dz\\ &=\frac{b^{\sum_{i=1}^3a_i}}{\prod_{i=1}^3\Gamma(a_i)}\frac{\Gamma(\sum_{i=1}^3a_i)}{b^{\sum_{i=1}^3a_i}}(y_1)^{a_1-1}(y_2)^{a_2-1}(y_3)^{a_3-1}\\ &=\frac{\Gamma(\sum_{i=1}^3a_i)}{\prod_{i=1}^3\Gamma(a_i)}(y_1)^{a_1-1}(y_2)^{a_2-1}(y_3)^{a_3-1} \end{split} \tag{3.58} \end{equation}\] with \(0<y_i<1\) and \(\sum_{i=1}^3y_i=1\). Therefore, \[\begin{equation} \mathbf{Y}=(Y_1,Y_2,Y_3)\sim Dirichlet(a_1,a_2,a_3) \tag{3.59} \end{equation}\]

(b) The R function to generate such samples is sketched below; the function name `rdirichlet3` and the fixed rate \(b=1\) are our own choices (any \(b>0\) works, since the rate cancels in the ratio).
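```r
# Generate n samples from a Dirichlet(a1, a2, a3) distribution via the
# construction in part (a): draw independent X_i ~ Gamma(a_i, 1) and
# normalize each triple by its sum.
rdirichlet3 <- function(n, a) {
  stopifnot(length(a) == 3, all(a > 0))
  x <- matrix(rgamma(3 * n, shape = rep(a, each = n), rate = 1), nrow = n)
  x / rowSums(x)                     # n x 3 matrix; each row sums to 1
}

set.seed(3)
samples <- rdirichlet3(500, c(3, 5, 10))
colMeans(samples)                    # close to a / sum(a) = (1/6, 5/18, 5/9)
```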

To compare the density of samples generated by this function for different choices of \(\mathbf{a}\), we plot 500 samples with \(\mathbf{a}=(0.01,0.01,0.01)\), \((100,100,100)\), and \((3,5,10)\) in Figure 3.1, Figure 3.2, and Figure 3.3, respectively. From the plots, we notice that when \(\mathbf{a}=(0.01,0.01,0.01)\), essentially all samples lie on the boundary of the simplex, meaning that in each sample one or two of the \(Y_i\) are very close to 0. When \(\mathbf{a}=(100,100,100)\), the variance of the samples is quite small, so they concentrate around the theoretical mean \((1/3,1/3,1/3)\). For \(\mathbf{a}=(3,5,10)\), samples scatter around their theoretical mean \((1/6,5/18,5/9)\) with larger variance than in the previous case.

FIGURE 3.1: Samples using a=(0.01,0.01,0.01)

FIGURE 3.2: Samples using a=(100,100,100)

FIGURE 3.3: Samples using a=(3,5,10)

Exercise 3.15 (Homework 1, Problem 4) \(Y\) follows an inverse Gamma distribution with shape parameter \(a\) and scale parameter \(b\) (\(Y\sim IG(a,b)\)) if \(Y=1/X\) with \(X\sim Gamma(a,b)\) (assume the Gamma distribution is parameterized so that \(E(X)=ab\)).

  1. Find the density of \(Y\).

  2. Compute \(E(Y^k)\). Do you need to impose any constraint on the problem for this expectation to exist?

  3. Compare \(E(Y^k)\) to \(1/E(X^k)\).

Proof. (a) Since \(X\sim Gamma(a,b)\), the pdf is \[\begin{equation} f(x)=\frac{x^{a-1}e^{-\frac{x}{b}}}{b^a\Gamma(a)} \tag{3.60} \end{equation}\] for \(x>0\). For \(Y=\frac{1}{X}\), define the variable transformation \(y=\frac{1}{x}\); then \(x=\frac{1}{y}\) with Jacobian \(|J|=\frac{1}{y^2}\). Thus, the pdf of \(Y\) is \[\begin{equation} f(y)=\frac{y^{-(a+1)}exp(-\frac{1}{by})}{b^a\Gamma(a)} \tag{3.61} \end{equation}\] for \(y>0\).

(b) By definition, we have \[\begin{equation} \begin{split} E(Y^k)&=\int_0^{\infty}y^k\frac{y^{-(a+1)}exp(-\frac{1}{by})}{b^a\Gamma(a)}dy\\ &=\frac{1}{b^a\Gamma(a)}\int_0^{\infty}y^{-(a+1-k)}exp(-\frac{1}{by})dy\\ &=\frac{1}{b^a\Gamma(a)}\int_0^{\infty}t^{a-k-1}exp(-\frac{t}{b})dt\\ &=\frac{b^{a-k}\Gamma(a-k)}{b^a\Gamma(a)}=\frac{\Gamma(a-k)}{b^k\Gamma(a)} \end{split} \tag{3.62} \end{equation}\] The third equality in (3.62) substitutes \(t=\frac{1}{y}\), and the fourth evaluates the resulting integral as the kernel of a gamma distributed random variable with shape \(a-k\) and scale \(b\). Therefore, the constraint for \(E(Y^k)\) to exist is \(a-k>0\).

(c) For \(X\sim Gamma(a,b)\), by definition we have \[\begin{equation} \begin{split} E(X^k)&=\int_0^{\infty}x^k\frac{x^{a-1}e^{-\frac{x}{b}}}{b^a\Gamma(a)}dx\\ &=\frac{\Gamma(a+k)b^{a+k}}{b^a\Gamma(a)}=\frac{b^k\Gamma(a+k)}{\Gamma(a)} \end{split} \tag{3.63} \end{equation}\] Therefore, consider the ratio \[\begin{equation} \gamma=\frac{1/E(X^k)}{E(Y^k)}=\frac{\Gamma(a)\Gamma(a)}{\Gamma(a+k)\Gamma(a-k)} \tag{3.64} \end{equation}\] Since (3.64) gives \(\gamma=1\) if and only if \(k=0\), we see that \(E(Y^k)\neq 1/E(X^k)\) for every \(0<k<a\): expectation is not invariant under nonlinear transformations such as \(y=\frac{1}{x}\). Indeed, Jensen's inequality gives \(E(1/X^k)>1/E(X^k)\), so \(\gamma<1\).
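A short Monte Carlo check of (3.62), and of the gap between \(E(Y^k)\) and \(1/E(X^k)\), is sketched below (the parameter values are arbitrary, chosen so that \(k<a\)):

```r
# Inverse-gamma moments: E(Y^k) = Gamma(a-k) / (b^k * Gamma(a)) for k < a.
# Under the E(X) = a*b convention, b is a scale parameter for rgamma.
set.seed(4)
a <- 5; b <- 2; k <- 2                # arbitrary, with k < a
x <- rgamma(1e6, shape = a, scale = b)
y <- 1 / x
mean(y^k)                             # Monte Carlo estimate of E(Y^k)
gamma(a - k) / (b^k * gamma(a))       # exact value from (3.62)
1 / mean(x^k)                         # strictly smaller, since gamma < 1
```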

Exercise 3.16 (Homework 1, Problem 5) \(Y\) follows a log-normal distribution with parameters \(\mu\) and \(\sigma^2\) (denoted \(Y\sim Log\text{-}N(\mu,\sigma^2)\)) if \(Y=\exp(X)\) and \(X\sim N(\mu,\sigma^2)\).

  1. Find the density of \(Y\).

  2. Compute the mean and the variance of \(Y\).

Proof. (a) Since \(X\sim N(\mu,\sigma^2)\), the pdf of \(X\) is \[\begin{equation} f(x)=\frac{1}{\sigma\sqrt{2\pi}}exp(-\frac{(x-\mu)^2}{2\sigma^2}) \end{equation}\] for \(x\in(-\infty,+\infty)\). For \(Y=exp(X)\), consider the variable transformation \(y=exp(x)\); then \(x=\log(y)\) with Jacobian \(|J|=\frac{1}{y}\). Therefore, the pdf of \(Y\) is \[\begin{equation} f(y)=\frac{1}{y\sigma\sqrt{2\pi}}exp(-\frac{(\log(y)-\mu)^2}{2\sigma^2}) \tag{3.65} \end{equation}\] for \(y>0\).

(b) We have \[\begin{equation} \begin{split} E(Y^k)&=E(e^{kX})=M_{X}(k)\\ &=exp(\mu k+\frac{\sigma^2k^2}{2}) \end{split} \tag{3.66} \end{equation}\] Therefore, \[\begin{equation} \begin{split} &E(Y)=exp(\mu+\frac{\sigma^2}{2})\\ &Var(Y)=E(Y^2)-(E(Y))^2=e^{\sigma^2+2\mu}(e^{\sigma^2}-1) \end{split} \tag{3.67} \end{equation}\]
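These closed forms are easy to confirm by simulation; a minimal sketch (with arbitrary \(\mu\) and \(\sigma\)) follows.

```r
# Log-normal moments: E(Y) = exp(mu + sigma^2/2) and
# Var(Y) = exp(2*mu + sigma^2) * (exp(sigma^2) - 1).
set.seed(5)
mu <- 0.5; sigma <- 0.8                        # arbitrary parameters
y <- exp(rnorm(1e6, mean = mu, sd = sigma))    # equivalently rlnorm(1e6, mu, sigma)
c(mean(y), exp(mu + sigma^2 / 2))              # simulated vs exact mean
c(var(y),  exp(2 * mu + sigma^2) * (exp(sigma^2) - 1))  # simulated vs exact variance
```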

Exercise 3.17 (Homework 1, Problem 6) Let \(\mathbf{X}=(X_1,\cdots,X_p)\) with \(\mathbf{X}\sim N_p(\boldsymbol{\mu},\Sigma)\) and set \(\mathbf{Z}_1=(X_1,\cdots,X_q)\) and \(\mathbf{Z}_2=(X_{q+1},\cdots,X_p)\) with \(1<q<p\). Show that \[\begin{equation} \mathbf{Z}_1|\mathbf{Z}_2\sim N_q(\boldsymbol{\mu}_1+\Sigma_{12}\Sigma_{22}^{-1}(\mathbf{Z}_2-\boldsymbol{\mu}_2),\Sigma_{11}-\Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21}) \tag{3.68} \end{equation}\] where \(\boldsymbol{\mu}_k\) and \(\Sigma_{k\ell}\) denote the blocks of \(\boldsymbol{\mu}\) and \(\Sigma\) whose rows correspond to the variables in \(\mathbf{Z}_k\) and whose columns correspond to the variables in \(\mathbf{Z}_{\ell}\).

Proof. Using the block matrix inversion formula, we have \[\begin{equation} \begin{split} \Sigma^{-1}&=\begin{pmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{pmatrix}^{-1}\\ &=\begin{pmatrix} \Sigma_1^{-1} & -\Sigma_{11}^{-1}\Sigma_{12}\Sigma_{2}^{-1}\\ -\Sigma_2^{-1}\Sigma_{21}\Sigma_{11}^{-1}& \Sigma_2^{-1} \end{pmatrix} \end{split} \tag{3.69} \end{equation}\] with \(\Sigma_1=\Sigma_{11}-\Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21}\) and \(\Sigma_2=\Sigma_{22}-\Sigma_{21}\Sigma_{11}^{-1}\Sigma_{12}\). We have \[\begin{equation} \begin{split} f(\mathbf{z}_1|\mathbf{z}_2)&\propto f(\mathbf{z}_1,\mathbf{z}_2)\\ &\propto exp\{-\frac{1}{2}\begin{pmatrix} \mathbf{z}_1-\boldsymbol{\mu}_1\\ \mathbf{z}_2-\boldsymbol{\mu}_2 \end{pmatrix}^T\begin{pmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{pmatrix}^{-1}\begin{pmatrix} \mathbf{z}_1-\boldsymbol{\mu}_1\\ \mathbf{z}_2-\boldsymbol{\mu}_2 \end{pmatrix} \}\\ &\propto exp\{-\frac{1}{2}[(\mathbf{z}_1-\boldsymbol{\mu}_1)^T\Sigma_{1}^{-1}(\mathbf{z}_1-\boldsymbol{\mu}_1)-(\mathbf{z}_2-\boldsymbol{\mu}_2)^T\Sigma_2^{-1}\Sigma_{21}\Sigma_{11}^{-1}(\mathbf{z}_1-\boldsymbol{\mu}_1)\\ &-(\mathbf{z}_1-\boldsymbol{\mu}_1)^T\Sigma_{11}^{-1}\Sigma_{12}\Sigma_{2}^{-1}(\mathbf{z}_2-\boldsymbol{\mu}_2)]\}\\ &\propto exp\{-\frac{1}{2}[\mathbf{z}_1^T\Sigma_1^{-1}\mathbf{z}_1-\mathbf{z}_1^T(\Sigma_{1}^{-1}\boldsymbol{\mu}_1+\Sigma_{11}^{-1}\Sigma_{12}\Sigma_{2}^{-1}(\mathbf{z}_2-\boldsymbol{\mu}_2))\\ &-(\boldsymbol{\mu}_1^T\Sigma_{1}^{-1}+(\mathbf{z}_2-\boldsymbol{\mu}_2)^T\Sigma_2^{-1}\Sigma_{21}\Sigma_{11}^{-1})\mathbf{z}_1]\}\\ &\propto exp\{-\frac{1}{2}[(\mathbf{z}_1-(\boldsymbol{\mu}_1+\Sigma_{12}\Sigma_{22}^{-1}(\mathbf{z}_2-\boldsymbol{\mu}_2)))^T\Sigma_1^{-1}\\ &(\mathbf{z}_1-(\boldsymbol{\mu}_1+\Sigma_{12}\Sigma_{22}^{-1}(\mathbf{z}_2-\boldsymbol{\mu}_2)))]\} \end{split} \tag{3.70} \end{equation}\] The last step completes the square in \(\mathbf{z}_1\) and uses the identity \(\Sigma_{11}^{-1}\Sigma_{12}\Sigma_{2}^{-1}=\Sigma_{1}^{-1}\Sigma_{12}\Sigma_{22}^{-1}\), which equates the two equivalent forms of the off-diagonal block of \(\Sigma^{-1}\). Therefore, by recognizing the kernel we have \(\mathbf{Z}_1|\mathbf{Z}_2\sim N_q(\boldsymbol{\mu}_1+\Sigma_{12}\Sigma_{22}^{-1}(\mathbf{Z}_2-\boldsymbol{\mu}_2),\Sigma_{11}-\Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21})\), as desired.
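The conditional moments in (3.68) are easy to compute numerically. The sketch below uses an arbitrary example with \(p=3\) and \(q=2\), so \(\mathbf{Z}_1=(X_1,X_2)\) is conditioned on \(\mathbf{Z}_2=X_3\); all numbers are made up for illustration.

```r
# Conditional mean and covariance of Z1 = (X1, X2) given Z2 = X3, per (3.68).
mu    <- c(1, 2, 3)                            # arbitrary mean vector
Sigma <- matrix(c(2.0, 0.5, 0.3,
                  0.5, 1.0, 0.4,
                  0.3, 0.4, 1.5), nrow = 3)    # arbitrary covariance matrix
q <- 2; idx1 <- 1:q; idx2 <- (q + 1):3
S12 <- Sigma[idx1, idx2, drop = FALSE]
S22 <- Sigma[idx2, idx2, drop = FALSE]
z2  <- 2.5                                     # observed value of X3
cond_mean <- mu[idx1] + S12 %*% solve(S22, z2 - mu[idx2])
cond_cov  <- Sigma[idx1, idx1] - S12 %*% solve(S22, t(S12))
```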

Exercise 3.18 (Homework 1, Problem 7) Show that if \(X\sim exp(\beta)\), then

  1. \(Y=X^{1/\gamma}\) has a Weibull distribution with parameters \(\gamma\) and \(\beta\) with \(\gamma>0\) a constant.

  2. \(Y=(2X/\beta)^{1/2}\) has the Rayleigh distribution.

For both parts, derive the form of the p.d.f., verify that it is a p.d.f., and calculate the mean and the variance.

Proof. (a) Since \(X\sim Exp(\beta)\), the pdf of \(X\) is \[\begin{equation} f(x)=\beta exp(-\beta x) \tag{3.71} \end{equation}\] for \(x>0\). Consider the variable transformation \(y=x^{1/\gamma}\); then \(x=y^{\gamma}\) and hence \(|J|=\gamma y^{\gamma-1}\), so the pdf of \(Y\) is \[\begin{equation} f(y)=\beta\gamma y^{\gamma-1}exp(-\beta y^{\gamma}) \tag{3.72} \end{equation}\] for \(y>0\), which is a Weibull distribution with parameters \(\gamma>0\) and \(\beta>0\).

To verify that this is a pdf, substitute \(u=y^{\gamma}\): \[\begin{equation} \begin{split} \int_0^{\infty}f(y)dy&=\int_0^{\infty}\beta\gamma y^{\gamma-1}exp(-\beta y^{\gamma})dy\\ &=\int_0^{\infty}\beta exp(-\beta u)du=1 \end{split} \tag{3.73} \end{equation}\] Thus, it is a proper pdf.

To compute the mean and variance, we have \[\begin{equation} \begin{split} E(Y^k)&=E(X^{k/\gamma})=\beta\int_0^{\infty}x^{k/\gamma}exp(-\beta x)dx\\ &=\frac{\Gamma(\frac{k}{\gamma}+1)}{\beta^{k/\gamma}} \end{split} \tag{3.74} \end{equation}\] where the second equality uses the fact that the integral is the kernel of a gamma distributed random variable with parameters \(\frac{k}{\gamma}+1\) and \(\beta\). Therefore \[\begin{equation} \begin{split} &E(Y)=\frac{\Gamma(\frac{1}{\gamma}+1)}{\beta^{1/\gamma}}\\ &Var(Y)=E(Y^2)-(E(Y))^2=\frac{\Gamma(\frac{2}{\gamma}+1)-(\Gamma(\frac{1}{\gamma}+1))^2}{\beta^{2/\gamma}} \end{split} \tag{3.75} \end{equation}\]
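A simulation sketch of (3.74)–(3.75) under this rate parameterization follows (\(\beta\) and \(\gamma\) are arbitrary; this \(Y\) corresponds to R's `rweibull` with shape \(\gamma\) and scale \(\beta^{-1/\gamma}\)):

```r
# Weibull via transformation: Y = X^(1/gamma) with X ~ Exp(beta).
set.seed(6)
beta <- 2; gam <- 1.5                 # arbitrary; "gam" avoids masking gamma()
y <- rexp(1e6, rate = beta)^(1 / gam) # same law as rweibull(1e6, gam, beta^(-1/gam))
c(mean(y), gamma(1 / gam + 1) / beta^(1 / gam))                          # E(Y)
c(var(y), (gamma(2 / gam + 1) - gamma(1 / gam + 1)^2) / beta^(2 / gam))  # Var(Y)
```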

(b) Similarly, consider the variable transformation \(y=(2x/\beta)^{1/2}\); then \(x=\frac{\beta y^2}{2}\) with Jacobian \(|J|=\beta y\), and thus \(Y\) has the pdf in (3.76): \[\begin{equation} f(y)=\beta^2y\,exp(-\frac{\beta^2y^2}{2}) \tag{3.76} \end{equation}\] for \(y>0\). This is the Rayleigh distribution with scale parameter \(1/\beta\).

To see that it is a valid pdf, substitute \(u=\frac{\beta^2y^2}{2}\): \[\begin{equation} \begin{split} \int_0^{\infty}f(y)dy&=\int_0^{\infty}\beta^2y\,exp(-\frac{\beta^2y^2}{2})dy\\ &=\int_0^{\infty}exp(-u)du=1 \end{split} \tag{3.77} \end{equation}\]

Finally, for the mean and variance, we have \[\begin{equation} \begin{split} E(Y^k)&=E((\frac{2X}{\beta})^{k/2})=\frac{2^{k/2}}{\beta^{\frac{k-2}{2}}}\int_0^{\infty}x^{k/2}exp(-\beta x)dx\\ &=\frac{2^{k/2}\Gamma(\frac{k}{2}+1)}{\beta^k} \end{split} \tag{3.78} \end{equation}\] Therefore, using \(\Gamma(3/2)=\frac{\sqrt{\pi}}{2}\) and \(\Gamma(2)=1\), \[\begin{equation} \begin{split} &E(Y)=\frac{\sqrt{2}\Gamma(3/2)}{\beta}=\frac{\sqrt{2\pi}}{2\beta}\\ &Var(Y)=E(Y^2)-(E(Y))^2=\frac{2\Gamma(2)}{\beta^2}-(\frac{\sqrt{2\pi}}{2\beta})^2=\frac{4-\pi}{2\beta^2} \end{split} \tag{3.79} \end{equation}\]
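As a final check, the Rayleigh moments in (3.79) can be verified by numerically integrating the pdf (3.76); \(\beta=2\) is an arbitrary choice.

```r
# Numerical check of the Rayleigh mean and variance from pdf (3.76).
beta <- 2                                        # arbitrary parameter
f  <- function(y) beta^2 * y * exp(-beta^2 * y^2 / 2)
m1 <- integrate(function(y) y   * f(y), 0, Inf)$value  # E(Y)
m2 <- integrate(function(y) y^2 * f(y), 0, Inf)$value  # E(Y^2)
c(m1, sqrt(2 * pi) / (2 * beta))                 # matches E(Y) in (3.79)
c(m2 - m1^2, (4 - pi) / (2 * beta^2))            # matches Var(Y) in (3.79)
```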