## 1.2 Some facts about distributions

We will make use of several parametric distributions; the following notation and facts will be employed throughout:

• $$\mathcal{N}(\mu,\sigma^2)$$ stands for the normal distribution with mean $$\mu$$ and variance $$\sigma^2$$. Its pdf is $$\phi_\sigma(x-\mu):=\frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{(x-\mu)^2}{2\sigma^2}}$$, $$x\in\mathbb{R}$$, and satisfies that $$\phi_\sigma(x-\mu)=\frac{1}{\sigma}\phi\left(\frac{x-\mu}{\sigma}\right)$$ (if $$\sigma=1$$ the dependence is omitted). Its cdf is denoted by $$\Phi_\sigma(x-\mu)$$. The upper $$\alpha$$-quantile of a $$\mathcal{N}(0,1)$$ is denoted by $$z_\alpha$$, i.e., $$z_\alpha=\Phi^{-1}(1-\alpha)$$. The first four non-central moments of $$X\sim\mathcal{N}(\mu,\sigma^2)$$ are

\begin{align*} \mathbb{E}[X]&=\mu,\\ \mathbb{E}[X^2]&=\mu^2+\sigma^2,\\ \mathbb{E}[X^3]&=\mu^3+3\mu\sigma^2,\\ \mathbb{E}[X^4]&=\mu^4+6\mu^2\sigma^2+3\sigma^4. \end{align*}

The multivariate normal is represented by $$\mathcal{N}_p(\boldsymbol{\mu},\boldsymbol{\Sigma})$$, where $$\boldsymbol{\mu}$$ is a $$p$$-vector and $$\boldsymbol{\Sigma}$$ is a $$p\times p$$ symmetric and positive definite matrix. The pdf of a $$\mathcal{N}_p(\boldsymbol{\mu},\boldsymbol{\Sigma})$$ is $$\phi_{\boldsymbol{\Sigma}}(\mathbf{x}-\boldsymbol{\mu}):=\frac{1}{(2\pi)^{p/2}|\boldsymbol{\Sigma}|^{1/2}}e^{-\frac{1}{2}(\mathbf{x}-\boldsymbol{\mu})'\boldsymbol{\Sigma}^{-1}(\mathbf{x}-\boldsymbol{\mu})}$$, and satisfies that $$\phi_{\boldsymbol{\Sigma}}(\mathbf{x}-\boldsymbol{\mu})=|\boldsymbol{\Sigma}|^{-1/2}\phi\left(\boldsymbol{\Sigma}^{-1/2}(\mathbf{x}-\boldsymbol{\mu})\right)$$ (if $$\boldsymbol{\Sigma}=\mathbf{I}$$ the dependence is omitted).
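The moment formulas above can be checked numerically. The following is a minimal Monte Carlo sketch (NumPy, with illustrative values of $$\mu$$ and $$\sigma$$) comparing sample non-central moments against the closed forms:

```python
import numpy as np

# Monte Carlo check of the first four non-central moments of N(mu, sigma^2).
# The parameter values are illustrative.
rng = np.random.default_rng(42)
mu, sigma = 1.5, 2.0
x = rng.normal(mu, sigma, size=10**6)

# Closed-form non-central moments E[X^k], k = 1, ..., 4
exact = [mu,
         mu**2 + sigma**2,
         mu**3 + 3 * mu * sigma**2,
         mu**4 + 6 * mu**2 * sigma**2 + 3 * sigma**4]
# Sample counterparts: mean of x^k
sample = [np.mean(x**k) for k in range(1, 5)]
```

With $$10^6$$ draws, the sample moments agree with the exact ones to within Monte Carlo error.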

• The lognormal distribution is denoted by $$\mathcal{LN}(\mu,\sigma^2)$$ and is such that $$\mathcal{LN}(\mu,\sigma^2)\stackrel{d}{=}\exp(\mathcal{N}(\mu,\sigma^2))$$. Its pdf is $$f(x;\mu,\sigma):=\frac{1}{\sqrt{2\pi}\sigma x}e^{-\frac{(\log x-\mu)^2}{2\sigma^2}}$$, $$x>0$$. Note that $$\mathbb{E}[\mathcal{LN}(\mu,\sigma^2)]=e^{\mu+\frac{\sigma^2}{2}}$$.
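The lognormal mean identity follows directly from the representation $$\exp(\mathcal{N}(\mu,\sigma^2))$$, and can be verified by simulation (a sketch with illustrative parameter values):

```python
import numpy as np

# Check E[LN(mu, sigma^2)] = exp(mu + sigma^2 / 2) by simulating
# exp(N(mu, sigma^2)). Parameter values are illustrative.
rng = np.random.default_rng(42)
mu, sigma = 0.5, 0.75
x = np.exp(rng.normal(mu, sigma, size=10**6))
mean_mc = np.mean(x)
mean_exact = np.exp(mu + sigma**2 / 2)
```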

• The exponential distribution is denoted as $$\mathrm{Exp}(\lambda)$$ and has pdf $$f(x;\lambda)=\lambda e^{-\lambda x}$$, $$\lambda,x>0$$.
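Since the $$\mathrm{Exp}(\lambda)$$ cdf is $$1-e^{-\lambda x}$$, the distribution is easily simulated by inverse-transform sampling, $$X=-\log(U)/\lambda$$ with $$U\sim\mathcal{U}(0,1)$$ (using that $$1-U\stackrel{d}{=}U$$). A sketch checking that the mean is $$1/\lambda$$, with an illustrative $$\lambda$$:

```python
import numpy as np

# Inverse-transform sampling of Exp(lambda): X = -log(U) / lambda.
# The mean of Exp(lambda) is 1 / lambda.
rng = np.random.default_rng(42)
lam = 2.0
x = -np.log(rng.uniform(size=10**6)) / lam
mean_mc = np.mean(x)
```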

• The gamma distribution is denoted as $$\Gamma(a,p)$$ and has pdf $$f(x;a,p)=\frac{a^p}{\Gamma(p)} x^{p-1}e^{-a x}$$, $$a,p,x>0$$, where $$\Gamma(p)=\int_0^\infty x^{p-1}e^{-x}\,\mathrm{d}x$$. It is known that $$\mathbb{E}[\Gamma(a,p)]=\frac{p}{a}$$ and $$\mathbb{V}\mathrm{ar}[\Gamma(a,p)]=\frac{p}{a^2}$$.
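A quick simulation check of the gamma moments (note that NumPy parametrizes the gamma by shape and scale, so $$\Gamma(a,p)$$ corresponds to shape $$p$$ and scale $$1/a$$; the parameter values below are illustrative):

```python
import numpy as np

# Monte Carlo check of E[Gamma(a, p)] = p / a and Var[Gamma(a, p)] = p / a^2.
# NumPy's gamma uses (shape, scale) = (p, 1 / a).
rng = np.random.default_rng(42)
a, p = 2.0, 3.0
x = rng.gamma(shape=p, scale=1 / a, size=10**6)
mean_mc, var_mc = np.mean(x), np.var(x)
mean_exact, var_exact = p / a, p / a**2  # 1.5 and 0.75
```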

• The inverse gamma distribution, $$\mathrm{IG}(a,p)\stackrel{d}{=}\Gamma(a,p)^{-1}$$, has pdf $$f(x;a,p)=\frac{a^p}{\Gamma(p)} x^{-p-1}e^{-\frac{a}{x}}$$, $$a,p,x>0$$. It is known that $$\mathbb{E}[\mathrm{IG}(a,p)]=\frac{a}{p-1}$$ (for $$p>1$$) and $$\mathbb{V}\mathrm{ar}[\mathrm{IG}(a,p)]=\frac{a^2}{(p-1)^2(p-2)}$$ (for $$p>2$$).
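The relation $$\mathrm{IG}(a,p)\stackrel{d}{=}\Gamma(a,p)^{-1}$$ gives a direct way of simulating the inverse gamma; a sketch checking its mean and variance with illustrative parameters satisfying $$p>2$$:

```python
import numpy as np

# Simulate IG(a, p) as the reciprocal of Gamma(a, p) draws and check
# mean a / (p - 1) and variance a^2 / ((p - 1)^2 (p - 2)).
rng = np.random.default_rng(42)
a, p = 3.0, 5.0
x = 1 / rng.gamma(shape=p, scale=1 / a, size=10**6)
mean_mc, var_mc = np.mean(x), np.var(x)
mean_exact = a / (p - 1)                   # 0.75
var_exact = a**2 / ((p - 1)**2 * (p - 2))  # 0.1875
```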

• The binomial distribution is denoted as $$\mathrm{B}(n,p)$$. Recall that $$\mathbb{E}[\mathrm{B}(n,p)]=np$$ and $$\mathbb{V}\mathrm{ar}[\mathrm{B}(n,p)]=np(1-p)$$. A $$\mathrm{B}(1,p)$$ is a Bernoulli distribution, denoted as $$\mathrm{Ber}(p)$$.
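The binomial mean and variance can likewise be checked by simulation (illustrative $$n$$ and $$p$$):

```python
import numpy as np

# Monte Carlo check of E[B(n, p)] = n p and Var[B(n, p)] = n p (1 - p).
rng = np.random.default_rng(42)
n, p = 20, 0.3
x = rng.binomial(n, p, size=10**6)
mean_mc, var_mc = np.mean(x), np.var(x)  # exact values: 6 and 4.2
```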

• The beta distribution is denoted as $$\beta(a,b)$$ and its pdf is $$f(x;a,b)=\frac{1}{\beta(a,b)}x^{a-1}(1-x)^{b-1}$$, $$0<x<1$$, where $$\beta(a,b)=\frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)}$$. When $$a=b=1$$, the uniform distribution $$\mathcal{U}(0,1)$$ arises.
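That $$\beta(a,b)$$ is the right normalizing constant, i.e., that $$\int_0^1 x^{a-1}(1-x)^{b-1}\,\mathrm{d}x=\frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)}$$, can be checked numerically with a simple midpoint rule (standard library only; $$a$$ and $$b$$ are illustrative):

```python
import math

# Check int_0^1 x^(a-1) (1-x)^(b-1) dx = Gamma(a) Gamma(b) / Gamma(a + b)
# via a midpoint-rule quadrature. Parameter values are illustrative.
a, b = 2.5, 4.0
beta_ab = math.gamma(a) * math.gamma(b) / math.gamma(a + b)
m = 10**5
h = 1 / m
integral = sum(((i + 0.5) * h)**(a - 1) * (1 - (i + 0.5) * h)**(b - 1) * h
               for i in range(m))
```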

• The Poisson distribution is denoted as $$\mathrm{Pois}(\lambda)$$ and has pmf $$\mathbb{P}[X=x]=\frac{\lambda^x e^{-\lambda}}{x!}$$, $$x=0,1,2,\ldots$$. Recall that $$\mathbb{E}[\mathrm{Pois}(\lambda)]=\mathbb{V}\mathrm{ar}[\mathrm{Pois}(\lambda)]=\lambda$$.
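The equality of the Poisson mean and variance is easily checked by simulation (illustrative $$\lambda$$):

```python
import numpy as np

# Monte Carlo check that E[Pois(lambda)] = Var[Pois(lambda)] = lambda.
rng = np.random.default_rng(42)
lam = 4.0
x = rng.poisson(lam, size=10**6)
mean_mc, var_mc = np.mean(x), np.var(x)
```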