\(
\newcommand{\bm}[1]{\boldsymbol{#1}}
\newcommand{\textm}[1]{\textsf{#1}}
\def\T{{\mkern-2mu\raise-1mu\mathsf{T}}}
\newcommand{\R}{\mathbb{R}} % real numbers
\newcommand{\E}{{\rm I\kern-.2em E}}
\newcommand{\w}{\bm{w}} % bold w
\newcommand{\bmu}{\bm{\mu}} % bold mu
\newcommand{\bSigma}{\bm{\Sigma}} % bold Sigma
\newcommand{\bigO}{O} %\mathcal{O}
\renewcommand{\d}[1]{\operatorname{d}\!{#1}}
\)

Exercises

Exercise 5.1 (Graph matrices) Consider a graph described by the following adjacency matrix: \[ \bm{W}=\begin{bmatrix} 0 & 2 & 2 & 0 & 6 & 1\\ 2 & 0 & 3 & 1 & 5 & 0\\ 2 & 3 & 0 & 9 & 0 & 2\\ 0 & 1 & 9 & 0 & 7 & 3\\ 6 & 5 & 0 & 7 & 0 & 2\\ 1 & 0 & 2 & 3 & 2 & 0 \end{bmatrix}. \]

  1. Calculate the connectivity matrix.
  2. Calculate the degree matrix.
  3. Calculate the Laplacian matrix.
  4. Plot the graph showing the nodes and indicating the connectivity weights.
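
A minimal sketch of these computations in Python (assuming NumPy, NetworkX, and Matplotlib are available); here the connectivity matrix is taken to be the binary support of \(\bm{W}\) and the Laplacian is the combinatorial one, \(\bm{L} = \bm{D} - \bm{W}\).

```python
import numpy as np
import networkx as nx
import matplotlib.pyplot as plt

W = np.array([[0, 2, 2, 0, 6, 1],
              [2, 0, 3, 1, 5, 0],
              [2, 3, 0, 9, 0, 2],
              [0, 1, 9, 0, 7, 3],
              [6, 5, 0, 7, 0, 2],
              [1, 0, 2, 3, 2, 0]], dtype=float)

C = (W > 0).astype(int)        # connectivity (binary adjacency) matrix
D = np.diag(W.sum(axis=1))     # degree matrix
L = D - W                      # combinatorial Laplacian matrix

# Plot the graph with edge labels showing the connectivity weights
G = nx.from_numpy_array(W)
pos = nx.spring_layout(G, seed=42)
nx.draw(G, pos, with_labels=True, node_color="lightblue")
nx.draw_networkx_edge_labels(G, pos, edge_labels=nx.get_edge_attributes(G, "weight"))
plt.show()
```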

Exercise 5.2 (Laplacian matrix of a \(k\)-connected graph) Consider a graph described by the following adjacency matrix: \[ \bm{W}=\begin{bmatrix} 0 & 2 & 0 & 0 & 2 & 0\\ 2 & 0 & 0 & 9 & 3 & 2\\ 0 & 0 & 0 & 7 & 0 & 2\\ 0 & 9 & 7 & 0 & 0 & 3\\ 2 & 3 & 0 & 0 & 0 & 0\\ 0 & 2 & 2 & 3 & 0 & 0 \end{bmatrix}. \]

  1. Calculate the Laplacian matrix.
  2. Plot the graph and visually inspect the graph structure.
  3. Compute the eigenvalue decomposition of the Laplacian matrix. What can be concluded from its eigenvalues?
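
A minimal sketch in Python (NumPy and NetworkX assumed); the key observation for item 3 is that the multiplicity of the zero eigenvalue of the Laplacian equals the number of connected components of the graph.

```python
import numpy as np
import networkx as nx
import matplotlib.pyplot as plt

W = np.array([[0, 2, 0, 0, 2, 0],
              [2, 0, 0, 9, 3, 2],
              [0, 0, 0, 7, 0, 2],
              [0, 9, 7, 0, 0, 3],
              [2, 3, 0, 0, 0, 0],
              [0, 2, 2, 3, 0, 0]], dtype=float)

L = np.diag(W.sum(axis=1)) - W      # Laplacian matrix

# Eigenvalues of the symmetric PSD Laplacian: the number of (numerically)
# zero eigenvalues equals the number of connected components.
eigvals = np.linalg.eigvalsh(L)
print("eigenvalues:", np.round(eigvals, 4))
print("number of components:", int(np.sum(eigvals < 1e-8)))

nx.draw(nx.from_numpy_array(W), with_labels=True)
plt.show()
```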

Exercise 5.3 (Adjacency matrix of a bipartite graph) Consider a graph described by the following adjacency matrix: \[ \bm{W}=\begin{bmatrix} 0 & 0 & 0 & 2 & 6 & 1\\ 0 & 0 & 0 & 1 & 5 & 3\\ 0 & 0 & 0 & 9 & 0 & 2\\ 2 & 1 & 9 & 0 & 0 & 0\\ 6 & 5 & 0 & 0 & 0 & 0\\ 1 & 3 & 2 & 0 & 0 & 0 \end{bmatrix}. \]

  1. Calculate the Laplacian matrix.
  2. Plot the graph and visually inspect the graph structure.
  3. Compute the eigenvalue decomposition of the adjacency matrix. What can be concluded from its eigenvalues?
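
A minimal check of item 3 in Python (NumPy assumed); for a bipartite graph, the eigenvalues of the adjacency matrix are symmetric about zero, i.e., they come in \(\pm\lambda\) pairs.

```python
import numpy as np

W = np.array([[0, 0, 0, 2, 6, 1],
              [0, 0, 0, 1, 5, 3],
              [0, 0, 0, 9, 0, 2],
              [2, 1, 9, 0, 0, 0],
              [6, 5, 0, 0, 0, 0],
              [1, 3, 2, 0, 0, 0]], dtype=float)

# For a bipartite graph the adjacency spectrum is symmetric about zero.
eigvals = np.linalg.eigvalsh(W)
print("eigenvalues:", np.round(eigvals, 4))
```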

Exercise 5.4 (Learning graphs from similarity measures) Consider the following graph: \[ \bm{W}=\begin{bmatrix} 0 & 2 & 2 & 0 & 0 & 0\\ 2 & 0 & 3 & 0 & 0 & 0\\ 2 & 3 & 0 & 0 & 9 & 2\\ 0 & 0 & 0 & 0 & 7 & 2\\ 0 & 0 & 9 & 7 & 0 & 3\\ 0 & 0 & 2 & 2 & 3 & 0 \end{bmatrix}. \]

  1. Calculate the Laplacian matrix \(\bm{L}\).

  2. Generate \(T=100\) observations of a graph signal \(\bm{x}^{(t)},\; t=1,\dots,T\), by drawing each realization from a zero-mean Gaussian distribution whose covariance matrix is the Moore-Penrose pseudoinverse of the Laplacian matrix, \(\bm{L}^{\dagger}\) (its positive eigenvalues are the inverses of the positive eigenvalues of \(\bm{L}\), while the zero eigenvalues remain zero), i.e., \(\bm{x}^{(t)} \sim \mathcal{N}(\bm{0}, \bm{L}^{\dagger})\); see the sketch after this exercise.

  3. Learn the following graphs based on similarity measures:

    • thresholded distance graph
    • Gaussian graph
    • \(k\)-nearest neighbors (\(k\)-NN) graph
    • feature correlation graph
  4. Compare the graphs in terms of Laplacian matrix error and with graph plots.
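
A minimal sketch in Python (NumPy assumed); the distance threshold, the Gaussian bandwidth, and \(k=2\) are heuristic choices, and the Laplacian error is computed after normalizing each Laplacian, since the learned graphs live on arbitrary scales.

```python
import numpy as np

rng = np.random.default_rng(0)
W_true = np.array([[0, 2, 2, 0, 0, 0], [2, 0, 3, 0, 0, 0], [2, 3, 0, 0, 9, 2],
                   [0, 0, 0, 0, 7, 2], [0, 0, 9, 7, 0, 3], [0, 0, 2, 2, 3, 0]], float)
L_true = np.diag(W_true.sum(axis=1)) - W_true
N = W_true.shape[0]

# Item 2: draw T = 100 smooth graph signals x ~ N(0, pinv(L))
X = rng.multivariate_normal(np.zeros(N), np.linalg.pinv(L_true), size=100)  # T x N

# Pairwise squared distances between the node signals (columns of X)
Z = np.square(X.T[:, None, :] - X.T[None, :, :]).sum(axis=-1)

# Item 3: graphs from similarity measures (all tuning choices are heuristic)
W_thres = ((Z < np.median(Z)) & (Z > 0)).astype(float)      # thresholded distance
sigma2 = np.median(Z[Z > 0])
W_gauss = np.exp(-Z / (2 * sigma2)) * (1 - np.eye(N))       # Gaussian (RBF) weights
W_corr = np.abs(np.corrcoef(X.T)) * (1 - np.eye(N))         # feature correlation

k = 2                                                       # k-NN graph, symmetrized
W_knn = np.zeros((N, N))
for i in range(N):
    W_knn[i, np.argsort(Z[i] + 1e12 * np.eye(N)[i])[:k]] = 1.0
W_knn = np.maximum(W_knn, W_knn.T)

# Item 4: compare Laplacians (normalized, since the scales are arbitrary)
def lap_err(W):
    L = np.diag(W.sum(axis=1)) - W
    return np.linalg.norm(L / np.linalg.norm(L) - L_true / np.linalg.norm(L_true))

for name, W in [("threshold", W_thres), ("gaussian", W_gauss),
                ("kNN", W_knn), ("correlation", W_corr)]:
    print(f"{name:12s} Laplacian error: {lap_err(W):.3f}")
```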

Exercise 5.5 (Learning graphs from smooth signals) Consider the following graph: \[ \bm{W}=\begin{bmatrix} 0 & 2 & 2 & 0 & 0 & 0\\ 2 & 0 & 3 & 0 & 0 & 0\\ 2 & 3 & 0 & 0 & 9 & 2\\ 0 & 0 & 0 & 0 & 7 & 2\\ 0 & 0 & 9 & 7 & 0 & 3\\ 0 & 0 & 2 & 2 & 3 & 0 \end{bmatrix}. \]

  1. Calculate the Laplacian matrix \(\bm{L}\).

  2. Generate \(T=100\) observations of a graph signal \(\bm{x}^{(t)},\; t=1,\dots,T\), by drawing each realization from a zero-mean Gaussian distribution whose covariance matrix is the Moore-Penrose pseudoinverse of the Laplacian matrix, \(\bm{L}^{\dagger}\) (its positive eigenvalues are the inverses of the positive eigenvalues of \(\bm{L}\), while the zero eigenvalues remain zero), i.e., \(\bm{x}^{(t)} \sim \mathcal{N}(\bm{0}, \bm{L}^{\dagger})\); see the sketch after this exercise.

  3. Learn the following graphs:

    • sparse smooth graph: \[ \begin{array}{ll} \underset{\bm{W}}{\textm{minimize}} & \frac{1}{2}\textm{Tr}(\bm{W}\bm{Z}) + \gamma \|\bm{W}\|_\textm{F}^2\\ \textm{subject to} & \textm{diag}(\bm{W})=\bm{0}, \quad \bm{W}=\bm{W}^\T \ge \bm{0}, \end{array} \] where \(\bm{Z}\) is the matrix of pairwise squared distances between the node signals, \(Z_{ij} = \sum_{t=1}^T \big(x_i^{(t)} - x_j^{(t)}\big)^2\);

    • sparse smooth graph with hard degree control: same formulation but including the constraint \(\bm{W}\bm{1} = \bm{1}\) to control the degrees of the nodes;

    • sparse smooth graph with regularized degree control: the same formulation as before, but now adding the regularization term \(-\bm{1}^\T\textm{log}(\bm{W}\bm{1})\) to the objective to control the degrees of the nodes.

  4. Compare the graphs in terms of Laplacian matrix error and with graph plots.
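
A minimal sketch in Python using CVXPY for the three convex formulations above; the value \(\gamma = 0.1\) and the random seed are hypothetical choices, and the plotting and Laplacian-error comparison can be done exactly as in Exercise 5.4.

```python
import numpy as np
import cvxpy as cp

# Generate smooth signals as in Exercise 5.4 and form the pairwise
# squared-distance matrix Z between the node signals.
rng = np.random.default_rng(0)
W_true = np.array([[0, 2, 2, 0, 0, 0], [2, 0, 3, 0, 0, 0], [2, 3, 0, 0, 9, 2],
                   [0, 0, 0, 0, 7, 2], [0, 0, 9, 7, 0, 3], [0, 0, 2, 2, 3, 0]], float)
L_true = np.diag(W_true.sum(axis=1)) - W_true
N = W_true.shape[0]
X = rng.multivariate_normal(np.zeros(N), np.linalg.pinv(L_true), size=100)
Z = np.square(X.T[:, None, :] - X.T[None, :, :]).sum(axis=-1)

def learn_smooth_graph(Z, gamma=0.1, degree_constraint=False, log_degree=False):
    """Sparse smooth graph, with optional hard or log-barrier degree control."""
    n = Z.shape[0]
    W = cp.Variable((n, n), symmetric=True)
    obj = 0.5 * cp.trace(W @ Z) + gamma * cp.sum_squares(W)
    if log_degree:                        # add -1' log(W 1) to the objective
        obj = obj - cp.sum(cp.log(W @ np.ones(n)))
    cons = [cp.diag(W) == 0, W >= 0]
    if degree_constraint:                 # hard degree control: W 1 = 1
        cons.append(W @ np.ones(n) == 1)
    cp.Problem(cp.Minimize(obj), cons).solve()
    return W.value

W_sparse = learn_smooth_graph(Z)
W_degree = learn_smooth_graph(Z, degree_constraint=True)
W_logdeg = learn_smooth_graph(Z, log_degree=True)
```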

Exercise 5.6 (Learning \(k\)-component financial graphs from GMRF)

  1. Download market data corresponding to \(N\) assets (e.g., stocks or cryptocurrencies) during a period with \(T\) observations, and form the data matrix \(\bm{X}\in\R^{T\times N}\).

  2. Learn a sparse GMRF graph: \[ \begin{array}{ll} \underset{\bm{L}\succeq\bm{0}}{\textm{maximize}} & \textm{log gdet}(\bm{L}) - \textm{Tr}(\bm{L}\bm{S}) - \rho\|\bm{L}\|_{0,\textm{off}}\\ \textm{subject to} & \bm{L}\bm{1}=\bm{0}, \quad L_{ij} = L_{ji} \le 0, \quad \forall i\neq j, \end{array} \] where \(\bm{S}\) is the sample covariance matrix of the data and \(\|\bm{L}\|_{0,\textm{off}}\) denotes the number of nonzero off-diagonal elements of \(\bm{L}\).

  3. Learn a \(k\)-component sparse GMRF graph: \[ \begin{array}{ll} \underset{\bm{L}\succeq\bm{0}, \bm{F}}{\textm{maximize}} & \textm{log gdet}(\bm{L}) - \textm{Tr}(\bm{L}\bm{S}) - \rho\|\bm{L}\|_{0,\textm{off}} - \gamma \textm{Tr}\left(\bm{F}^\T\bm{L}\bm{F}\right)\\ \textm{subject to} & \bm{L}\bm{1}=\bm{0}, \quad L_{ij} = L_{ji} \le 0, \quad \forall i\neq j\\ & \textm{diag}(\bm{L})=\bm{1}\\ & \bm{F}^\T\bm{F}=\bm{I}, \end{array} \] where \(\bm{F}\in\R^{N\times k}\); at the optimum, \(\bm{F}\) contains the eigenvectors of \(\bm{L}\) associated with its \(k\) smallest eigenvalues, so the term \(\textm{Tr}\left(\bm{F}^\T\bm{L}\bm{F}\right)\) pushes those eigenvalues toward zero and enforces a \(k\)-component structure.

  4. Plot the graphs and compare them visually.
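
A minimal sketch in Python of steps 1 and 2, with hypothetical tickers, dates, and \(\rho\). Two standard simplifications are assumed: the nonconvex \(\ell_0\) penalty is replaced by its convex \(\ell_1\) surrogate (which, under the constraints \(\bm{L}\bm{1}=\bm{0}\) and nonpositive off-diagonals, reduces to \(\rho\,\textm{Tr}(\bm{L})\)), and \(\textm{log gdet}(\bm{L})\) is evaluated as \(\textm{log det}\big(\bm{L} + \frac{1}{N}\bm{1}\bm{1}^\T\big)\), which is valid for a connected graph with \(\bm{L}\bm{1}=\bm{0}\). Dedicated solvers for the exact formulations exist (e.g., the spectralGraphTopology R package), so this is only a rough stand-in.

```python
import numpy as np
import cvxpy as cp
import yfinance as yf

# Step 1: hypothetical tickers and dates; any N assets over T periods will do.
tickers = ["AAPL", "MSFT", "AMZN", "GOOG", "META", "JPM", "BAC", "XOM"]
prices = yf.download(tickers, start="2022-01-01", end="2023-12-31",
                     auto_adjust=True)["Close"]
X = np.log(prices).diff().dropna().values        # T x N matrix of log-returns
N = X.shape[1]
S = np.cov(X, rowvar=False)                      # sample covariance matrix

# Step 2: convex surrogate of the sparse GMRF problem.
rho = 0.05                                       # hypothetical sparsity level
L = cp.Variable((N, N), symmetric=True)
obj = cp.log_det(L + np.ones((N, N)) / N) - cp.trace(L @ S) - rho * cp.trace(L)
cons = [L @ np.ones(N) == 0,                     # Laplacian row sums are zero
        L - cp.diag(cp.diag(L)) <= 0]            # nonpositive off-diagonals
cp.Problem(cp.Maximize(obj), cons).solve()

W_learned = -(L.value - np.diag(np.diag(L.value)))   # adjacency from Laplacian
```

For step 3, one would alternate between updating \(\bm{F}\) (the \(k\) eigenvectors of the current \(\bm{L}\) with smallest eigenvalues) and re-solving the problem above with \(\bm{S}\) replaced by \(\bm{S} + \gamma\bm{F}\bm{F}^\T\) (since \(\textm{Tr}(\bm{F}^\T\bm{L}\bm{F}) = \textm{Tr}(\bm{L}\bm{F}\bm{F}^\T)\)) together with the additional constraint \(\textm{diag}(\bm{L})=\bm{1}\).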

Exercise 5.7 (Learning heavy-tailed financial graphs)

  1. Download market data corresponding to \(N\) assets (e.g., stocks or cryptocurrencies) during a period with \(T\) observations, and form the data matrix \(\bm{X}\in\R^{T\times N}\).

  2. Learn a sparse GMRF graph: \[ \begin{array}{ll} \underset{\bm{L}\succeq\bm{0}}{\textm{maximize}} & \textm{log gdet}(\bm{L}) - \textm{Tr}(\bm{L}\bm{S}) - \rho\|\bm{L}\|_{0,\textm{off}}\\ \textm{subject to} & \bm{L}\bm{1}=\bm{0}, \quad L_{ij} = L_{ji} \le 0, \quad \forall i\neq j, \end{array} \] where \(\bm{S}\) is the sample covariance matrix of the data and \(\|\bm{L}\|_{0,\textm{off}}\) denotes the number of nonzero off-diagonal elements of \(\bm{L}\).

  3. Learn a heavy-tailed MRF graph by solving the sequence of Gaussianized problems for \(k=1,2,\dots\): \[ \begin{array}{ll} \underset{\bm{L}\succeq\bm{0}}{\textm{maximize}} & \textm{log gdet}(\bm{L}) - \textm{Tr}(\bm{L}\bm{S}^k)\\ \textm{subject to} & \bm{L}\bm{1}=\bm{0}, \quad L_{ij} = L_{ji} \le 0, \quad \forall i\neq j, \end{array} \] where \(\bm{S}^k\) is the weighted sample covariance matrix \[ \bm{S}^k = \frac{1}{T}\sum_{t=1}^T w_t^k\, \bm{x}^{(t)}(\bm{x}^{(t)})^\T, \] with weights \(w_t^k = \frac{p + \nu}{\nu + (\bm{x}^{(t)})^\T\bm{L}^k\bm{x}^{(t)}}\), in which \(p=N\) is the dimension of the observations \(\bm{x}^{(t)}\) (the rows of \(\bm{X}\)), \(\nu\) is the degrees of freedom of the assumed heavy-tailed (Student \(t\)) distribution, and \(\bm{L}^k\) is the estimate obtained at iteration \(k\).

  4. Plot the graphs and compare them visually.
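
A minimal sketch in Python, reusing the same convex surrogate from Exercise 5.6 as the inner Gaussianized step; the data here are a synthetic Student-\(t\) stand-in for the downloaded returns of step 1, and \(\nu = 4\), \(\rho\), and the number of iterations are hypothetical choices.

```python
import numpy as np
import cvxpy as cp

def fit_gmrf(S, rho=0.05):
    """Gaussian(ized) MLE step: same convex surrogate as in Exercise 5.6."""
    N = S.shape[0]
    L = cp.Variable((N, N), symmetric=True)
    obj = cp.log_det(L + np.ones((N, N)) / N) - cp.trace(L @ S) - rho * cp.trace(L)
    cons = [L @ np.ones(N) == 0, L - cp.diag(cp.diag(L)) <= 0]
    cp.Problem(cp.Maximize(obj), cons).solve()
    return L.value

# X: T x N matrix of log-returns from step 1 (here a synthetic heavy-tailed
# stand-in so that the sketch runs on its own); nu is an assumed value.
rng = np.random.default_rng(0)
T, N, nu = 500, 8, 4.0
X = rng.standard_t(df=nu, size=(T, N))

L_gauss = fit_gmrf(np.cov(X, rowvar=False))      # step 2: sparse GMRF graph
L_k = L_gauss                                    # initialization for step 3
for k in range(10):                              # step 3: Gaussianized iterations
    w = (N + nu) / (nu + np.einsum("ti,ij,tj->t", X, L_k, X))   # weights w_t^k
    S_k = (X * w[:, None]).T @ X / T             # weighted sample covariance S^k
    L_k = fit_gmrf(S_k, rho=0.0)                 # re-solve the Gaussianized problem
```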