1.2 Some facts about distributions

We will make use of several parametric distributions. Some notation and facts are introduced as follows:

  • \mathcal{N}(\mu,\sigma^2) stands for the normal distribution with mean \mu and variance \sigma^2. Its pdf is \phi_\sigma(x-\mu):=\frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{(x-\mu)^2}{2\sigma^2}}, x\in\mathbb{R}, and satisfies that \phi_\sigma(x-\mu)=\frac{1}{\sigma}\phi\left(\frac{x-\mu}{\sigma}\right) (if \sigma=1 the dependence is omitted). Its cdf is denoted as \Phi_\sigma(x-\mu). z_\alpha denotes the upper \alpha-quantile of a \mathcal{N}(0,1), i.e., z_\alpha=\Phi^{-1}(1-\alpha). Some uncentered moments of X\sim\mathcal{N}(\mu,\sigma^2) are

    \mathbb{E}[X]=\mu,\quad\mathbb{E}[X^2]=\mu^2+\sigma^2,\quad\mathbb{E}[X^3]=\mu^3+3\mu\sigma^2,\quad\mathbb{E}[X^4]=\mu^4+6\mu^2\sigma^2+3\sigma^4.

    The multivariate normal is represented by \mathcal{N}_p(\boldsymbol{\mu},\boldsymbol{\Sigma}), where \boldsymbol{\mu} is a p-vector and \boldsymbol{\Sigma} is a p\times p symmetric and positive definite matrix. The pdf of a \mathcal{N}_p(\boldsymbol{\mu},\boldsymbol{\Sigma}) is \phi_{\boldsymbol{\Sigma}}(\mathbf{x}-\boldsymbol{\mu}):=\frac{1}{(2\pi)^{p/2}|\boldsymbol{\Sigma}|^{1/2}}e^{-\frac{1}{2}(\mathbf{x}-\boldsymbol{\mu})'\boldsymbol{\Sigma}^{-1}(\mathbf{x}-\boldsymbol{\mu})}, and satisfies that \phi_{\boldsymbol{\Sigma}}(\mathbf{x}-\boldsymbol{\mu})=|\boldsymbol{\Sigma}|^{-1/2}\phi\left(\boldsymbol{\Sigma}^{-1/2}(\mathbf{x}-\boldsymbol{\mu})\right) (if \boldsymbol{\Sigma}=\mathbf{I} the dependence is omitted).
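    The uncentered moments and the pdf scaling identity above can be checked numerically; a minimal sketch using SciPy (assuming its standard `loc`/`scale` parameterization of the normal, with illustrative values for \mu and \sigma) is:

    ```python
    import numpy as np
    from scipy.stats import norm

    mu, sigma = 1.5, 2.0
    X = norm(loc=mu, scale=sigma)

    # Closed-form uncentered moments of N(mu, sigma^2)
    moments = [mu,
               mu**2 + sigma**2,
               mu**3 + 3 * mu * sigma**2,
               mu**4 + 6 * mu**2 * sigma**2 + 3 * sigma**4]

    # scipy's .moment(k) returns the non-central moment E[X^k]
    for k, m in enumerate(moments, start=1):
        assert np.isclose(X.moment(k), m)

    # Scaling identity: phi_sigma(x - mu) = (1 / sigma) * phi((x - mu) / sigma)
    x = 0.7
    assert np.isclose(X.pdf(x), norm.pdf((x - mu) / sigma) / sigma)
    ```
    
    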

  • The lognormal distribution is denoted by \mathcal{LN}(\mu,\sigma^2) and is such that \mathcal{LN}(\mu,\sigma^2)\stackrel{d}{=}\exp(\mathcal{N}(\mu,\sigma^2)). Its pdf is f(x;\mu,\sigma):=\frac{1}{\sqrt{2\pi}\sigma x}\allowbreak e^{-\frac{(\log x-\mu)^2}{2\sigma^2}}, x>0. Note that \mathbb{E}[\mathcal{LN}(\mu,\sigma^2)]=e^{\mu+\frac{\sigma^2}{2}}.
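    Both the mean formula and the relation with the normal can be verified numerically; a minimal sketch, assuming SciPy's parameterization \mathcal{LN}(\mu,\sigma^2)\equiv\texttt{lognorm(s=sigma, scale=exp(mu))} and illustrative parameter values, is:

    ```python
    import numpy as np
    from scipy.stats import lognorm, norm

    mu, sigma = 0.5, 0.8
    # In scipy, LN(mu, sigma^2) corresponds to lognorm(s=sigma, scale=exp(mu))
    LN = lognorm(s=sigma, scale=np.exp(mu))

    # E[LN(mu, sigma^2)] = exp(mu + sigma^2 / 2)
    assert np.isclose(LN.mean(), np.exp(mu + sigma**2 / 2))

    # LN(mu, sigma^2) =d exp(N(mu, sigma^2)): the cdfs agree at log scale
    x = 2.0
    assert np.isclose(LN.cdf(x), norm(loc=mu, scale=sigma).cdf(np.log(x)))
    ```
    
    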

  • The exponential distribution is denoted as \mathrm{Exp}(\lambda) and has pdf f(x;\lambda)=\lambda e^{-\lambda x}, \lambda,x>0.
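    A quick numerical check of this pdf, assuming SciPy's scale parameterization \mathrm{Exp}(\lambda)\equiv\texttt{expon(scale=1/lam)} and an illustrative \lambda:

    ```python
    import numpy as np
    from scipy.stats import expon

    lam = 2.5
    # scipy parameterizes Exp(lambda) through scale = 1 / lambda
    E = expon(scale=1 / lam)

    x = 0.3
    # pdf f(x; lambda) = lambda * exp(-lambda * x)
    assert np.isclose(E.pdf(x), lam * np.exp(-lam * x))
    assert np.isclose(E.mean(), 1 / lam)
    ```
    
    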

  • The gamma distribution is denoted as \Gamma(a,p) and has pdf f(x;a,p)=\frac{a^p}{\Gamma(p)} x^{p-1}e^{-a x}, a,p,x>0, where \Gamma(p)=\int_0^\infty x^{p-1}e^{-x}\,\mathrm{d}x. It is known that \mathbb{E}[\Gamma(a,p)]=\frac{p}{a} and \mathbb{V}\mathrm{ar}[\Gamma(a,p)]=\frac{p}{a^2}.
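    The mean and variance can be checked with SciPy, keeping in mind that SciPy's gamma uses a shape parameter (the p above) and a scale parameter equal to 1/a; a minimal sketch with illustrative values:

    ```python
    import numpy as np
    from scipy.stats import gamma

    rate, shape = 2.0, 3.0  # a = rate, p = shape, in the notation above
    # scipy's gamma takes shape a = p and scale = 1 / rate
    G = gamma(a=shape, scale=1 / rate)

    assert np.isclose(G.mean(), shape / rate)      # E[Gamma(a, p)] = p / a
    assert np.isclose(G.var(), shape / rate**2)    # Var[Gamma(a, p)] = p / a^2
    ```
    
    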

  • The inverse gamma distribution, \mathrm{IG}(a,p)\stackrel{d}{=}\Gamma(a,p)^{-1}, has pdf f(x;a,p)=\frac{a^p}{\Gamma(p)} x^{-p-1}e^{-\frac{a}{x}}, a,p,x>0. It is known that \mathbb{E}[\mathrm{IG}(a,p)]=\frac{a}{p-1} (if p>1) and \mathbb{V}\mathrm{ar}[\mathrm{IG}(a,p)]=\frac{a^2}{(p-1)^2(p-2)} (if p>2).
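    These two moments can be verified numerically; a minimal sketch, assuming SciPy's `invgamma(a=p, scale=a)` matches the pdf above and using illustrative parameter values:

    ```python
    import numpy as np
    from scipy.stats import invgamma

    a_par, p_par = 2.0, 4.0  # a and p in the notation above
    # scipy's invgamma(a=p, scale=a) has pdf a^p / Gamma(p) * x^{-p-1} * exp(-a / x)
    IG = invgamma(a=p_par, scale=a_par)

    assert np.isclose(IG.mean(), a_par / (p_par - 1))                           # needs p > 1
    assert np.isclose(IG.var(), a_par**2 / ((p_par - 1)**2 * (p_par - 2)))      # needs p > 2
    ```
    
    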

  • The binomial distribution is denoted as \mathrm{B}(n,p). Recall that \mathbb{E}[\mathrm{B}(n,p)]=np and \mathbb{V}\mathrm{ar}[\mathrm{B}(n,p)]=np(1-p). A \mathrm{B}(1,p) is a Bernoulli distribution, denoted as \mathrm{Ber}(p).
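    A quick numerical check of these identities, using SciPy's `binom` and `bernoulli` with illustrative values of n and p:

    ```python
    import numpy as np
    from scipy.stats import binom, bernoulli

    n, p = 10, 0.3
    B = binom(n, p)
    assert np.isclose(B.mean(), n * p)              # E[B(n, p)] = np
    assert np.isclose(B.var(), n * p * (1 - p))     # Var[B(n, p)] = np(1 - p)

    # Ber(p) = B(1, p): the pmfs coincide
    assert np.isclose(bernoulli(p).pmf(1), binom(1, p).pmf(1))
    ```
    
    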

  • The beta distribution is denoted as \beta(a,b) and its pdf is f(x;a,b)=\frac{1}{\beta(a,b)}x^{a-1}(1-x)^{b-1}, 0<x<1, where \beta(a,b)=\frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)}. When a=b=1, the uniform distribution \mathcal{U}(0,1) arises.
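    The pdf, the Gamma-function expression of \beta(a,b), and the uniform special case can all be checked numerically; a minimal sketch with SciPy and illustrative values of a and b:

    ```python
    import numpy as np
    from scipy.stats import beta, uniform
    from scipy.special import gamma as gamma_fn

    a, b = 2.5, 1.5
    x = 0.4

    # Beta function through the Gamma function: B(a, b) = Gamma(a) Gamma(b) / Gamma(a + b)
    beta_ab = gamma_fn(a) * gamma_fn(b) / gamma_fn(a + b)
    assert np.isclose(beta.pdf(x, a, b), x**(a - 1) * (1 - x)**(b - 1) / beta_ab)

    # a = b = 1 recovers the uniform distribution on (0, 1)
    assert np.isclose(beta.pdf(x, 1, 1), uniform.pdf(x))
    ```
    
    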

  • The Poisson distribution is denoted as \mathrm{Pois}(\lambda) and has pmf \mathbb{P}[X=x]=\frac{\lambda^x e^{-\lambda}}{x!}, x=0,1,2,\ldots. Recall that \mathbb{E}[\mathrm{Pois}(\lambda)]=\mathbb{V}\mathrm{ar}[\mathrm{Pois}(\lambda)]=\lambda.
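    The pmf and the equality of mean and variance can be checked numerically; a minimal sketch with SciPy and an illustrative \lambda:

    ```python
    import numpy as np
    from math import exp, factorial
    from scipy.stats import poisson

    lam = 3.2
    P = poisson(lam)

    x = 4
    # pmf P[X = x] = lambda^x * exp(-lambda) / x!
    assert np.isclose(P.pmf(x), lam**x * exp(-lam) / factorial(x))

    # Mean and variance coincide for the Poisson
    assert np.isclose(P.mean(), lam) and np.isclose(P.var(), lam)
    ```
    
    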