Exercises
Exercise 5.1 (Graph matrices) Consider a graph described by the following adjacency matrix: \[ \bm{W}=\begin{bmatrix} 0 & 2 & 2 & 0 & 6 & 1\\ 2 & 0 & 3 & 1 & 5 & 0\\ 2 & 3 & 0 & 9 & 0 & 2\\ 0 & 1 & 9 & 0 & 7 & 3\\ 6 & 5 & 0 & 7 & 0 & 2\\ 1 & 0 & 2 & 3 & 2 & 0 \end{bmatrix}. \]
- Calculate the connectivity matrix.
- Calculate the degree matrix.
- Calculate the Laplacian matrix.
- Plot the graph, showing the nodes and indicating the edge weights (a code sketch is given below).
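A minimal sketch of these computations, assuming NumPy, NetworkX, and Matplotlib are available:

```python
import matplotlib.pyplot as plt
import networkx as nx
import numpy as np

W = np.array([[0, 2, 2, 0, 6, 1],
              [2, 0, 3, 1, 5, 0],
              [2, 3, 0, 9, 0, 2],
              [0, 1, 9, 0, 7, 3],
              [6, 5, 0, 7, 0, 2],
              [1, 0, 2, 3, 2, 0]], dtype=float)

C = (W > 0).astype(int)      # connectivity matrix: 1 where an edge exists
D = np.diag(W.sum(axis=1))   # degree matrix: node degrees on the diagonal
L = D - W                    # Laplacian matrix

# Plot the graph with the edge weights as labels
G = nx.from_numpy_array(W)   # weights stored in the 'weight' edge attribute
pos = nx.spring_layout(G, seed=42)
nx.draw(G, pos, with_labels=True, node_color="lightblue")
nx.draw_networkx_edge_labels(G, pos, edge_labels=nx.get_edge_attributes(G, "weight"))
plt.show()
```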
Exercise 5.2 (Laplacian matrix of a \(k\)-connected graph) Consider a graph described by the following adjacency matrix: \[ \bm{W}=\begin{bmatrix} 0 & 2 & 0 & 0 & 2 & 0\\ 2 & 0 & 0 & 9 & 3 & 2\\ 0 & 0 & 0 & 7 & 0 & 2\\ 0 & 9 & 7 & 0 & 0 & 3\\ 2 & 3 & 0 & 0 & 0 & 0\\ 0 & 2 & 2 & 3 & 0 & 0 \end{bmatrix}. \]
- Calculate the Laplacian matrix.
- Plot the graph and visually inspect the graph structure.
- Compute the eigenvalue decomposition of the Laplacian matrix. What can be concluded from its eigenvalues? (See the sketch below.)
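A short sketch of the spectral check, assuming `L` has been computed as in Exercise 5.1; recall that the multiplicity of the zero eigenvalue of the Laplacian equals the number of connected components of the graph:

```python
import numpy as np

eigvals, eigvecs = np.linalg.eigh(L)  # L is symmetric PSD, so eigh is appropriate
print(np.round(eigvals, 6))           # eigenvalues in ascending order

# Number of (numerically) zero eigenvalues = number of connected components
n_components = int(np.sum(np.isclose(eigvals, 0.0)))
```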
Exercise 5.3 (Adjacency matrix of a bipartite graph) Consider a graph described by the following adjacency matrix: \[ \bm{W}=\begin{bmatrix} 0 & 0 & 0 & 2 & 6 & 1\\ 0 & 0 & 0 & 1 & 5 & 3\\ 0 & 0 & 0 & 9 & 0 & 2\\ 2 & 1 & 9 & 0 & 0 & 0\\ 6 & 5 & 0 & 0 & 0 & 0\\ 1 & 3 & 2 & 0 & 0 & 0 \end{bmatrix}. \]
- Calculate the Laplacian matrix.
- Plot the graph and visually inspect the graph structure.
- Compute the eigenvalue decomposition of the adjacency matrix. What can be concluded from its eigenvalues? (See the sketch below.)
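A sketch of the check, assuming `W` holds the adjacency matrix above; for a bipartite graph the adjacency spectrum is symmetric about zero, i.e., the eigenvalues occur in \(\pm\lambda\) pairs:

```python
import numpy as np

eigvals = np.linalg.eigvalsh(W)  # W is symmetric
print(np.round(eigvals, 4))      # for a bipartite graph: symmetric about zero
```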
Exercise 5.4 (Learning graphs from similarity measures) Consider the following graph: \[ \bm{W}=\begin{bmatrix} 0 & 2 & 2 & 0 & 0 & 0\\ 2 & 0 & 3 & 0 & 0 & 0\\ 2 & 3 & 0 & 0 & 9 & 2\\ 0 & 0 & 0 & 0 & 7 & 2\\ 0 & 0 & 9 & 7 & 0 & 3\\ 0 & 0 & 2 & 2 & 3 & 0 \end{bmatrix}. \]
Calculate the Laplacian matrix \(\bm{L}\).
Generate \(T=100\) observations of a graph signal \(\bm{x}^{(t)},\; t=1,\dots,T\), by drawing each realization from a zero-mean Gaussian distribution whose covariance matrix is the Moore-Penrose pseudo-inverse \(\bm{L}^{\dagger}\) of the Laplacian matrix (its positive eigenvalues are the reciprocals of those of \(\bm{L}\), while the zero eigenvalues are preserved), i.e., \(\bm{x}^{(t)} \sim \mathcal{N}(\bm{0}, \bm{L}^{\dagger})\).
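A minimal sketch of the signal generation, assuming `L` is the Laplacian computed above:

```python
import numpy as np

rng = np.random.default_rng(42)
T, N = 100, L.shape[0]
L_pinv = np.linalg.pinv(L)  # Moore-Penrose pseudo-inverse of the Laplacian
# method="eigh" factorizes the singular (rank-deficient) covariance matrix
X = rng.multivariate_normal(np.zeros(N), L_pinv, size=T, method="eigh")  # T x N
```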
Learn the following graphs based on similarity measures:
- thresholded distance graph
- Gaussian graph
- \(k\)-nearest neighbors (\(k\)-NN) graph
- feature correlation graph
Compare the learned graphs with the true graph, both in terms of the Laplacian matrix error and visually with graph plots.
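A sketch of the four similarity-based constructions, assuming the samples are stacked rowwise in the \(T\times N\) matrix `X` generated above (so column \(i\) holds the signal at node \(i\)); the threshold, bandwidth, and neighborhood size are illustrative choices:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

dist = squareform(pdist(X.T))              # N x N pairwise distances between nodes

# 1) Thresholded distance graph
eps = np.median(dist[dist > 0])            # illustrative threshold
W_thr = (dist < eps).astype(float)
np.fill_diagonal(W_thr, 0.0)

# 2) Gaussian (RBF) graph
sigma = np.median(dist[dist > 0])          # illustrative bandwidth
W_rbf = np.exp(-dist**2 / (2 * sigma**2))
np.fill_diagonal(W_rbf, 0.0)

# 3) k-NN graph (symmetrized)
k = 2
W_knn = np.zeros_like(dist)
for i in range(dist.shape[0]):
    nn = np.argsort(dist[i])[1:k + 1]      # k nearest neighbors, skipping self
    W_knn[i, nn] = 1.0
W_knn = np.maximum(W_knn, W_knn.T)

# 4) Feature correlation graph
W_cor = np.abs(np.corrcoef(X.T))
np.fill_diagonal(W_cor, 0.0)
```

Each candidate \(\bm{W}\) can then be mapped to a Laplacian \(\textm{Diag}(\bm{W}\bm{1})-\bm{W}\) and compared with the true \(\bm{L}\) via the Frobenius norm of the difference (after a common normalization, since the similarity scales differ).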
Exercise 5.5 (Learning graphs from smooth signals) Consider the following graph: \[ \bm{W}=\begin{bmatrix} 0 & 2 & 2 & 0 & 0 & 0\\ 2 & 0 & 3 & 0 & 0 & 0\\ 2 & 3 & 0 & 0 & 9 & 2\\ 0 & 0 & 0 & 0 & 7 & 2\\ 0 & 0 & 9 & 7 & 0 & 3\\ 0 & 0 & 2 & 2 & 3 & 0 \end{bmatrix}. \]
Calculate the Laplacian matrix \(\bm{L}\).
Generate \(T=100\) observations of a graph signal \(\bm{x}^{(t)},\; t=1,\dots,T\), by drawing each realization from a zero-mean Gaussian distribution whose covariance matrix is the Moore-Penrose pseudo-inverse \(\bm{L}^{\dagger}\) of the Laplacian matrix (its positive eigenvalues are the reciprocals of those of \(\bm{L}\), while the zero eigenvalues are preserved), i.e., \(\bm{x}^{(t)} \sim \mathcal{N}(\bm{0}, \bm{L}^{\dagger})\).
Learn the following graphs:
- sparse smooth graph: \[ \begin{array}{ll} \underset{\bm{W}}{\textm{minimize}} & \frac{1}{2}\textm{Tr}(\bm{W}\bm{Z}) + \gamma \|\bm{W}\|_\textm{F}^2\\ \textm{subject to} & \textm{diag}(\bm{W})=\bm{0}, \quad \bm{W}=\bm{W}^\T \ge \bm{0}, \end{array} \] where \(\bm{Z}\) denotes the matrix of pairwise squared distances between the node signals, \(Z_{ij}=\|\bm{x}_i-\bm{x}_j\|^2\);
- sparse smooth graph with hard degree control: the same formulation, with the additional constraint \(\bm{W}\bm{1} = \bm{1}\) that fixes the node degrees;
- sparse smooth graph with regularized degree control: the same formulation, now with the additional regularization term \(-\bm{1}^\T\textm{log}(\bm{W}\bm{1})\) in the objective, which penalizes small node degrees and prevents isolated nodes (a sketch of the three variants is given after this exercise).
Compare the learned graphs with the true graph, both in terms of the Laplacian matrix error and visually with graph plots.
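A possible CVXPY sketch of the three variants, assuming `X` is the \(T\times N\) sample matrix generated above; the regularization weight `gamma` is an illustrative choice:

```python
import cvxpy as cp
import numpy as np
from scipy.spatial.distance import pdist, squareform

Z = squareform(pdist(X.T, metric="sqeuclidean"))  # Z_ij = ||x_i - x_j||^2
N = Z.shape[0]
gamma = 0.5                                       # illustrative value

W = cp.Variable((N, N), symmetric=True)
obj = 0.5 * cp.trace(W @ Z) + gamma * cp.sum_squares(W)
cons = [cp.diag(W) == 0, W >= 0]

# (a) sparse smooth graph
cp.Problem(cp.Minimize(obj), cons).solve()
W_a = W.value

# (b) hard degree control: add the constraint W 1 = 1
cp.Problem(cp.Minimize(obj), cons + [cp.sum(W, axis=1) == 1]).solve()
W_b = W.value

# (c) regularized degree control: log-barrier on the node degrees
obj_c = obj - cp.sum(cp.log(cp.sum(W, axis=1)))
cp.Problem(cp.Minimize(obj_c), cons).solve()
W_c = W.value
```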
Exercise 5.6 (Learning \(k\)-component financial graphs from GMRF)
Download market data corresponding to \(N\) assets (e.g., stocks or cryptocurrencies) during a period with \(T\) observations, and form the data matrix \(\bm{X}\in\R^{T\times N}\).
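One possible way to form the data matrix, assuming the `yfinance` package is available; the tickers and dates are illustrative, and daily log-returns are used as the graph signals:

```python
import numpy as np
import yfinance as yf

tickers = ["AAPL", "MSFT", "AMZN", "GOOG", "JPM", "XOM", "PG", "NVDA"]  # illustrative
prices = yf.download(tickers, start="2020-01-01", end="2022-12-31")["Close"]
X = np.log(prices).diff().dropna().to_numpy()  # T x N matrix of log-returns
S = np.cov(X, rowvar=False)                    # N x N sample covariance matrix
```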
Learn a sparse GMRF graph: \[ \begin{array}{ll} \underset{\bm{L}\succeq\bm{0}}{\textm{maximize}} & \textm{log gdet}(\bm{L}) - \textm{Tr}(\bm{L}\bm{S}) - \rho\|\bm{L}\|_{0,\textm{off}}\\ \textm{subject to} & \bm{L}\bm{1}=\bm{0}, \quad L_{ij} = L_{ji} \le 0, \quad \forall i\neq j. \end{array} \]
Learn a \(k\)-component sparse GMRF graph: \[ \begin{array}{ll} \underset{\bm{L}\succeq\bm{0}, \bm{F}}{\textm{maximize}} & \textm{log gdet}(\bm{L}) - \textm{Tr}(\bm{L}\bm{S}) - \rho\|\bm{L}\|_{0,\textm{off}} - \gamma \textm{Tr}\left(\bm{F}^\T\bm{L}\bm{F}\right)\\ \textm{subject to} & \bm{L}\bm{1}=\bm{0}, \quad L_{ij} = L_{ji} \le 0, \quad \forall i\neq j\\ & \textm{diag}(\bm{L})=\bm{1}\\ & \bm{F}^\T\bm{F}=\bm{I}, \end{array} \] where \(\bm{F}\in\R^{N\times k}\) (the term \(\textm{Tr}(\bm{F}^\T\bm{L}\bm{F})\), minimized over semi-orthogonal \(\bm{F}\), equals the sum of the \(k\) smallest eigenvalues of \(\bm{L}\), so this regularization pushes them toward zero and promotes \(k\) connected components).
Plot the graphs and compare them visually.
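A convex sketch of the first step, in which the nonconvex \(\ell_0\) penalty is replaced by its \(\ell_1\) surrogate and \(\textm{gdet}(\bm{L})\) is handled through the identity \(\textm{gdet}(\bm{L})=\textm{det}\big(\bm{L}+\frac{1}{N}\bm{1}\bm{1}^\T\big)\), valid for the Laplacian of a connected graph; `rho` is an illustrative choice:

```python
import cvxpy as cp
import numpy as np

N = S.shape[0]
rho = 0.05                        # illustrative sparsity weight
J = np.ones((N, N)) / N           # rank-one correction so that gdet(L) = det(L + J)
mask = 1.0 - np.eye(N)            # selects the off-diagonal entries

L_var = cp.Variable((N, N), symmetric=True)
off = cp.multiply(mask, L_var)    # off-diagonal part of the Laplacian
obj = cp.log_det(L_var + J) - cp.trace(L_var @ S) - rho * cp.sum(cp.abs(off))
cons = [L_var @ np.ones(N) == 0, off <= 0]

cp.Problem(cp.Maximize(obj), cons).solve()
L_hat = L_var.value
```

For the \(k\)-component variant, a natural approach is to alternate: with \(\bm{F}\) fixed, the extra term \(-\gamma\,\textm{Tr}(\bm{F}^\T\bm{L}\bm{F})\) is linear in \(\bm{L}\) and can simply be added to the objective above; with \(\bm{L}\) fixed, the optimal \(\bm{F}\) collects the \(k\) eigenvectors of \(\bm{L}\) associated with its smallest eigenvalues.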
Exercise 5.7 (Learning heavy-tailed financial graphs)
Download market data corresponding to \(N\) assets (e.g., stocks or cryptocurrencies) during a period with \(T\) observations, and form the data matrix \(\bm{X}\in\R^{T\times N}\).
Learn a sparse GMRF graph: \[ \begin{array}{ll} \underset{\bm{L}\succeq\bm{0}}{\textm{maximize}} & \textm{log gdet}(\bm{L}) - \textm{Tr}(\bm{L}\bm{S}) - \rho\|\bm{L}\|_{0,\textm{off}}\\ \textm{subject to} & \bm{L}\bm{1}=\bm{0}, \quad L_{ij} = L_{ji} \le 0, \quad \forall i\neq j. \end{array} \]
Learn a heavy-tailed MRF graph by solving the sequence of Gaussianized problems, for \(k=1,2,\dots\), \[ \begin{array}{ll} \underset{\bm{L}\succeq\bm{0}}{\textm{maximize}} & \textm{log gdet}(\bm{L}) - \textm{Tr}(\bm{L}\bm{S}^k)\\ \textm{subject to} & \bm{L}\bm{1}=\bm{0}, \quad L_{ij} = L_{ji} \le 0, \quad \forall i\neq j, \end{array} \] where \(\bm{S}^k\) is the weighted sample covariance matrix \[ \bm{S}^k = \frac{1}{T}\sum_{t=1}^T w_t^k \, \bm{x}^{(t)}(\bm{x}^{(t)})^\T, \] with weights \(w_t^k = \frac{p + \nu}{\nu + (\bm{x}^{(t)})^\T\bm{L}^k\bm{x}^{(t)}}\), where \(\bm{L}^k\) denotes the solution from the previous iteration, \(p\) is the dimension of \(\bm{x}^{(t)}\) (here \(p=N\)), and \(\nu\) is the degrees of freedom of the assumed heavy-tailed (Student \(t\)) distribution.
Plot the graphs and compare them visually.
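A sketch of the reweighting loop, where `solve_gmrf(S)` is a hypothetical helper standing for a solver of the Gaussianized problem (e.g., the CVXPY sketch from Exercise 5.6), and `nu` is an illustrative choice of the degrees of freedom:

```python
import numpy as np

nu = 4.0                                     # illustrative degrees of freedom
T, p = X.shape
L_k = solve_gmrf(np.cov(X, rowvar=False))    # hypothetical helper: plain GMRF fit
for _ in range(20):                          # fixed number of reweighting iterations
    q = np.einsum("ti,ij,tj->t", X, L_k, X)  # quadratic forms (x_t)^T L^k x_t
    w = (p + nu) / (nu + q)                  # Student-t weights
    S_k = (X * w[:, None]).T @ X / T         # weighted sample covariance S^k
    L_k = solve_gmrf(S_k)                    # Gaussianized problem with S^k
```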