Exercises
Exercise B.1 (Euclidean norm approximation)
- Randomly generate the parameters \(\bm{A}\in\R^{10\times5}\) and \(\bm{b}\in\R^{10}\).
- Formulate a regression problem to approximate \(\bm{A}\bm{x} \approx \bm{b}\) based on the \(\ell_2\)-norm.
- Solve it directly with the least squares closed-form solution.
- Solve it using a modeling framework (e.g., CVX).
- Solve it invoking a QP solver.
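A minimal sketch of the first two items in Python, assuming NumPy and CVXPY (neither is prescribed by the exercise); the QP route amounts to passing \(\bm{P}=\bm{A}^\T\bm{A}\) and \(\bm{q}=-\bm{A}^\T\bm{b}\) to any QP solver.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(42)     # arbitrary seed, for reproducibility only
A = rng.standard_normal((10, 5))
b = rng.standard_normal(10)

# Closed-form least squares solution: x = (A^T A)^{-1} A^T b via a linear solve
x_ls = np.linalg.solve(A.T @ A, A.T @ b)

# The same l2-norm problem through a modeling framework
x = cp.Variable(5)
cp.Problem(cp.Minimize(cp.sum_squares(A @ x - b))).solve()

print(np.allclose(x_ls, x.value, atol=1e-5))   # both routes should agree
```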
Exercise B.2 (Manhattan norm approximation)
- Randomly generate the parameters \(\bm{A}\in\R^{10\times5}\) and \(\bm{b}\in\R^{10}\).
- Formulate a regression problem to approximate \(\bm{A}\bm{x} \approx \bm{b}\) based on the \(\ell_1\)-norm.
- Solve it using a modeling framework (e.g., CVX).
- Rewrite it as an LP and solve it invoking an LP solver.
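One standard LP rewriting introduces an epigraph vector \(\bm{t}\) and minimizes \(\bm{1}^\T\bm{t}\) subject to \(-\bm{t} \leq \bm{A}\bm{x}-\bm{b} \leq \bm{t}\). A sketch using SciPy's linprog (one possible LP solver, not the only choice):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)      # arbitrary seed
A = rng.standard_normal((10, 5))
b = rng.standard_normal(10)
m, n = A.shape

# Stacked variable z = [x; t]: minimize 1^T t  s.t.  A x - t <= b, -A x - t <= -b
c = np.concatenate([np.zeros(n), np.ones(m)])
A_ub = np.block([[A, -np.eye(m)],
                 [-A, -np.eye(m)]])
b_ub = np.concatenate([b, -b])
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * n + [(0, None)] * m)  # x free, t >= 0
x_l1 = res.x[:n]
```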
Exercise B.3 (Chebyshev norm approximation)
- Randomly generate the parameters \(\bm{A}\in\R^{10\times5}\) and \(\bm{b}\in\R^{10}\).
- Formulate a regression problem to approximate \(\bm{A}\bm{x} \approx \bm{b}\) based on the \(\ell_\infty\)-norm.
- Solve it using a modeling framework (e.g., CVX).
- Rewrite it as an LP and solve it invoking an LP solver.
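The \(\ell_\infty\) case needs only a single scalar epigraph variable \(t\), since one bound must dominate all residuals. A sketch, again with SciPy's linprog as an assumed solver:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)      # arbitrary seed
A = rng.standard_normal((10, 5))
b = rng.standard_normal(10)
m, n = A.shape

# Stacked variable z = [x; t]: minimize t  s.t.  -t 1 <= A x - b <= t 1
ones = np.ones((m, 1))
c = np.concatenate([np.zeros(n), [1.0]])
A_ub = np.block([[A, -ones],
                 [-A, -ones]])
b_ub = np.concatenate([b, -b])
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * n + [(0, None)])  # x free, t >= 0
x_linf = res.x[:n]
```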
Exercise B.4 (Solving an LP) Consider the following LP: \[ \begin{array}{ll} \underset{x_1,x_2}{\textm{maximize}} & 3x_1 + x_2\\ \textm{subject to} & x_1 + 2x_2 \leq 4\\ & 4x_1 + 2x_2 \leq 12\\ & x_1, x_2 \geq 0. \end{array} \]
- Solve it using a modeling framework (e.g., CVX).
- Solve it directly invoking an LP solver.
- Solve it invoking a general-purpose nonlinear solver.
- Implement the projected gradient method to solve the problem.
- Implement the constrained Newton’s method to solve the problem.
- Implement the log-barrier interior-point method to solve the problem (use \((1,1)\) as the initial point).
- Compare all the solutions and the computation times.
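A sketch of the first two items (CVXPY and SciPy are assumptions, not requirements); the iterative methods of the later items can be validated against these answers:

```python
import cvxpy as cp
from scipy.optimize import linprog

# Modeling-framework route
x = cp.Variable(2, nonneg=True)
prob = cp.Problem(cp.Maximize(3 * x[0] + x[1]),
                  [x[0] + 2 * x[1] <= 4, 4 * x[0] + 2 * x[1] <= 12])
prob.solve()

# Direct LP-solver route (linprog minimizes, so the objective is negated)
res = linprog(c=[-3, -1], A_ub=[[1, 2], [4, 2]], b_ub=[4, 12],
              bounds=[(0, None), (0, None)])
print(x.value, res.x)   # both routes should return the same maximizer
```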
Exercise B.5 (Central path) Formulate the log-barrier problem corresponding to the LP in Exercise B.4 and plot the central path as the parameter \(t\) varies.
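As a starting point, the log-barrier problem for this LP takes the standard form (the objective is negated to obtain a minimization): \[ \underset{x_1,x_2}{\textm{minimize}} \quad -t\left(3x_1 + x_2\right) - \log\left(4 - x_1 - 2x_2\right) - \log\left(12 - 4x_1 - 2x_2\right) - \log x_1 - \log x_2, \] whose minimizer \(\bm{x}^\star(t)\), traced as \(t>0\) grows, is the central path.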
Exercise B.6 (Phase I method) Design a phase I method to find a feasible point for the LP in Exercise B.4, which can then be used as starting point for the barrier method.
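One standard phase I construction relaxes every inequality with a common slack \(s\): \[ \begin{array}{ll} \underset{x_1,x_2,s}{\textm{minimize}} & s\\ \textm{subject to} & x_1 + 2x_2 - 4 \leq s\\ & 4x_1 + 2x_2 - 12 \leq s\\ & -x_1 \leq s, \quad -x_2 \leq s. \end{array} \] Any point attaining \(s<0\) is strictly feasible for the original LP.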
Exercise B.7 (Dual problem) Formulate the dual problem corresponding to the LP in Exercise B.4 and solve it using a solver of your choice.
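As a checkpoint, recall the standard pairing: the dual of \(\textm{maximize}\;\bm{c}^\T\bm{x}\) subject to \(\bm{A}\bm{x}\leq\bm{b}\), \(\bm{x}\geq\bm{0}\) is \[ \begin{array}{ll} \underset{\bm{y}}{\textm{minimize}} & \bm{b}^\T\bm{y}\\ \textm{subject to} & \bm{A}^\T\bm{y} \geq \bm{c}\\ & \bm{y} \geq \bm{0}. \end{array} \]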
Exercise B.8 (KKT conditions) Write down the Karush-Kuhn-Tucker (KKT) conditions for the LP in Exercise B.4 and discuss their role in determining the optimality of a solution.
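For the same generic pairing, the KKT conditions combine primal feasibility, dual feasibility, and complementary slackness: \[ \bm{A}\bm{x}\leq\bm{b}, \quad \bm{x}\geq\bm{0}, \qquad \bm{A}^\T\bm{y}\geq\bm{c}, \quad \bm{y}\geq\bm{0}, \qquad \bm{y}^\T\left(\bm{b}-\bm{A}\bm{x}\right)=0, \quad \bm{x}^\T\left(\bm{A}^\T\bm{y}-\bm{c}\right)=0. \] Since an LP satisfies strong duality, these conditions are both necessary and sufficient for optimality.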
Exercise B.9 (Solving a QP) Consider the following QP: \[ \begin{array}{ll} \underset{x_1,x_2}{\textm{minimize}} & x_1^2 + x_2^2\\ \textm{subject to} & x_1 + x_2 = 1\\ & x_1, x_2 \geq 0. \end{array} \]
- Solve it using a modeling framework (e.g., CVX).
- Solve it directly invoking a QP solver.
- Solve it invoking a general-purpose nonlinear solver.
- Implement the projected gradient method to solve the problem.
- Implement the constrained Newton’s method to solve the problem.
- Implement the log-barrier interior-point method to solve the problem (use \((0.5,0.5)\) as the initial point).
- Compare all the solutions and the computation times.
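A sketch of the modeling-framework route (CVXPY assumed); by symmetry the solution should be \((0.5,0.5)\), a convenient reference for the iterative methods:

```python
import cvxpy as cp

x = cp.Variable(2, nonneg=True)
prob = cp.Problem(cp.Minimize(cp.sum_squares(x)), [cp.sum(x) == 1])
prob.solve()
print(prob.value, x.value)   # expect 0.5 and (0.5, 0.5) by symmetry
```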
Exercise B.10 (Fractional programming) Consider the following fractional program: \[ \begin{array}{ll} \underset{\w}{\textm{maximize}} & \dfrac{\w^\T\bm{1}}{\sqrt{\w^\T\bSigma\w}}\\ \textm{subject to} & \bm{1}^\T\w=1, \quad \w\ge\bm{0}, \end{array} \] where \(\bSigma \succ \bm{0}\).
- Solve it with a general-purpose nonlinear solver.
- Solve it via bisection.
- Solve it via the Dinkelbach method as a sequence of SOCPs.
- Develop a modified algorithm that solves the problem as a sequence of QPs instead.
- Solve it via the Schaible transform method.
- Reformulate the problem as a minimization and then solve it via the Schaible transform method.
- Compare all the previous approaches in terms of accuracy of the solution and computation time.
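A sketch of the Dinkelbach item, assuming NumPy and CVXPY: each iteration maximizes \(f(\w)-\lambda_k\,g(\w)\), which is an SOCP since \(g(\w)=\sqrt{\w^\T\bSigma\w}=\|\bm{L}^\T\w\|_2\) with \(\bSigma=\bm{L}\bm{L}^\T\), and then updates \(\lambda\) to the current ratio:

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)          # arbitrary seed and problem size
n = 5
S = rng.standard_normal((n, n))
Sigma = S @ S.T + n * np.eye(n)         # random positive definite Sigma
L = np.linalg.cholesky(Sigma)           # Sigma = L L^T, so w' Sigma w = ||L^T w||^2

w = cp.Variable(n, nonneg=True)
lam = 0.0
for _ in range(20):
    # Dinkelbach subproblem: maximize f(w) - lam * g(w), an SOCP
    cp.Problem(cp.Maximize(cp.sum(w) - lam * cp.norm(L.T @ w, 2)),
               [cp.sum(w) == 1]).solve()
    lam_new = np.sum(w.value) / np.sqrt(w.value @ Sigma @ w.value)
    if abs(lam_new - lam) < 1e-9:       # stop once the ratio stabilizes
        break
    lam = lam_new
print("optimal ratio:", lam)
```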
Exercise B.11 (Soft-thresholding operator) Consider the following convex optimization problem: \[ \begin{array}{ll} \underset{x}{\textm{minimize}} & \frac{1}{2}\|\bm{a}x - \bm{b}\|_2^2 + \lambda|x|, \end{array} \] with \(\lambda\geq0\). Derive the solution and show that it can be written as \[ x = \frac{1}{\|\bm{a}\|_2^2} \mathcal{S}_\lambda\left(\bm{a}^\T\bm{b}\right), \] where \(\mathcal{S}_\lambda(\cdot)\) is the so-called soft-thresholding operator defined as \[\mathcal{S}_\lambda(u) = \textm{sign}(u)(|u| - \lambda)^+,\] with \(\textm{sign}(\cdot)\) denoting the sign function and \((\cdot)^+=\textm{max}(0,\cdot)\).
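The derivation reduces to the scalar optimality condition \(0 \in \|\bm{a}\|_2^2\,x - \bm{a}^\T\bm{b} + \lambda\,\partial|x|\), examined case by case on the sign of \(x\). A NumPy sketch of the operator, with a brute-force check (sizes, seed, and \(\lambda\) are arbitrary choices):

```python
import numpy as np

def soft_threshold(u, lam):
    """Soft-thresholding operator: S_lam(u) = sign(u) * max(|u| - lam, 0)."""
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

rng = np.random.default_rng(0)
a, b, lam = rng.standard_normal(10), rng.standard_normal(10), 0.5

x_closed = soft_threshold(a @ b, lam) / (a @ a)   # claimed closed-form solution

# Brute-force verification on a fine grid
grid = np.linspace(-3.0, 3.0, 60001)
obj = 0.5 * np.sum((np.outer(a, grid) - b[:, None]) ** 2, axis=0) + lam * np.abs(grid)
print(x_closed, grid[np.argmin(obj)])   # should agree to grid resolution
```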
Exercise B.12 (\(\ell_2 - \ell_1\)-norm minimization) Consider the following \(\ell_2 - \ell_1\)-norm minimization problem (with \(\bm{A}\in\R^{10\times5}\) and \(\bm{b}\in\R^{10}\) randomly generated): \[ \begin{array}{ll} \underset{\bm{x}}{\textm{minimize}} & \frac{1}{2}\|\bm{A}\bm{x} - \bm{b}\|_2^2 + \lambda\|\bm{x}\|_1. \end{array} \]
- Solve it using a modeling framework (e.g., CVX).
- Rewrite the problem as a QP and solve it invoking a QP solver.
- Solve it with an ad-hoc LASSO solver.
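A sketch of the first two items, assuming CVXPY and an arbitrary \(\lambda\); the QP rewriting splits \(\bm{x}=\bm{x}^+-\bm{x}^-\) with \(\bm{x}^+,\bm{x}^-\geq\bm{0}\), so that \(\|\bm{x}\|_1=\bm{1}^\T(\bm{x}^++\bm{x}^-)\):

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
A = rng.standard_normal((10, 5))
b = rng.standard_normal(10)
lam = 0.5                               # arbitrary regularization level

# Direct modeling-framework route
x = cp.Variable(5)
cp.Problem(cp.Minimize(0.5 * cp.sum_squares(A @ x - b) + lam * cp.norm1(x))).solve()

# QP route via the positive/negative split
xp, xm = cp.Variable(5, nonneg=True), cp.Variable(5, nonneg=True)
cp.Problem(cp.Minimize(0.5 * cp.sum_squares(A @ (xp - xm) - b)
                       + lam * cp.sum(xp + xm))).solve()
print(np.allclose(x.value, xp.value - xm.value, atol=1e-4))
```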
Exercise B.13 (BCD for \(\ell_2 - \ell_1\)-norm minimization) Solve the \(\ell_2 - \ell_1\)-norm minimization problem in Exercise B.12 via BCD. Plot the convergence versus iterations and CPU time.
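A sketch of the BCD loop in NumPy: cycling over coordinates, each scalar update is exactly the soft-thresholding problem of Exercise B.11 applied to the partial residual (the iteration count is an arbitrary choice):

```python
import numpy as np

def soft_threshold(u, lam):
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

def lasso_bcd(A, b, lam, n_iter=100):
    m, n = A.shape
    x = np.zeros(n)
    col_norms = np.sum(A ** 2, axis=0)
    r = b - A @ x                       # running residual b - A x
    for _ in range(n_iter):
        for j in range(n):
            r += A[:, j] * x[j]         # remove coordinate j from the residual
            x[j] = soft_threshold(A[:, j] @ r, lam) / col_norms[j]
            r -= A[:, j] * x[j]         # put the updated coordinate back
    return x
```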
Exercise B.14 (MM for \(\ell_2 - \ell_1\)-norm minimization) Solve the \(\ell_2 - \ell_1\)-norm minimization problem in Exercise B.12 via MM and its accelerated version. Plot the convergence versus iterations and CPU time.
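For MM, majorizing the smooth term at \(\bm{x}^k\) by a quadratic with curvature \(L=\lambda_{\max}(\bm{A}^\T\bm{A})\) gives ISTA-type updates; the accelerated version adds Nesterov momentum (FISTA). A sketch of the plain version in NumPy:

```python
import numpy as np

def soft_threshold(u, lam):
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

def lasso_mm(A, b, lam, n_iter=500):
    L = np.linalg.eigvalsh(A.T @ A).max()   # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        # minimize the majorizer: a soft-thresholded gradient step
        x = soft_threshold(x - A.T @ (A @ x - b) / L, lam / L)
    return x
```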
Exercise B.15 (SCA for \(\ell_2 - \ell_1\)-norm minimization) Solve the \(\ell_2 - \ell_1\)-norm minimization problem in Exercise B.12 via SCA. Plot the convergence versus iterations and CPU time.
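One common SCA instance computes all coordinate-wise best responses in parallel and then takes a convex-combination step; a fixed step size is used below for simplicity, whereas diminishing rules are typical in the literature:

```python
import numpy as np

def soft_threshold(u, lam):
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

def lasso_sca(A, b, lam, n_iter=500, gamma=0.9):
    m, n = A.shape
    col_norms = np.sum(A ** 2, axis=0)
    x = np.zeros(n)
    for _ in range(n_iter):
        r = b - A @ x
        # best response of every coordinate, holding the others fixed at x
        x_hat = soft_threshold(A.T @ r + col_norms * x, lam) / col_norms
        x = x + gamma * (x_hat - x)     # move toward the best responses
    return x
```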
Exercise B.16 (ADMM for \(\ell_2 - \ell_1\)-norm minimization) Solve the \(\ell_2 - \ell_1\)-norm minimization problem in Exercise B.12 via ADMM. Plot the convergence versus iterations and CPU time.
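A sketch of ADMM on the standard splitting \(\textm{minimize}\;\frac{1}{2}\|\bm{A}\bm{x}-\bm{b}\|_2^2+\lambda\|\bm{z}\|_1\) subject to \(\bm{x}=\bm{z}\) (the penalty \(\rho\) and iteration count are arbitrary choices):

```python
import numpy as np

def soft_threshold(u, lam):
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

def lasso_admm(A, b, lam, rho=1.0, n_iter=200):
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    M = np.linalg.inv(AtA + rho * np.eye(n))   # factor once, reuse every iteration
    for _ in range(n_iter):
        x = M @ (Atb + rho * (z - u))          # quadratic x-update
        z = soft_threshold(x + u, lam / rho)   # prox of the l1 term
        u = u + x - z                          # scaled dual update
    return z
```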