\( \newcommand{\bm}[1]{\boldsymbol{#1}} \newcommand{\textm}[1]{\textsf{#1}} \def\T{{\mkern-2mu\raise-1mu\mathsf{T}}} \newcommand{\R}{\mathbb{R}} % real numbers \newcommand{\E}{{\rm I\kern-.2em E}} \newcommand{\w}{\bm{w}} % bold w \newcommand{\bmu}{\bm{\mu}} % bold mu \newcommand{\bSigma}{\bm{\Sigma}} % bold Sigma \newcommand{\bigO}{O} %\mathcal{O} \renewcommand{\d}[1]{\operatorname{d}\!{#1}} \)

Appendix A Convex Optimization Theory

“Mathematics, rightly viewed, possesses not only truth, but supreme beauty—a beauty cold and austere, like that of sculpture, without appeal to any part of our weaker nature…”

— Bertrand Russell

Over the past few decades, numerous fundamental and practical advancements have been made in the field of convex optimization theory. These developments have not only found applications in fields such as engineering, finance, and machine learning, but have also stimulated progress in both the underlying mathematical theory and the design of efficient algorithms. Some notable textbook references in this field include (Bertsekas, 1999; Bertsekas et al., 2003; S. P. Boyd and Vandenberghe, 2004; Nemirovski, 2001; Nesterov, 2018). Two classic references are (Luenberger, 1969; Rockafellar, 1970). A range of applications of optimization in engineering can be found in (Palomar and Eldar, 2009).

Traditionally, there was a common belief that linear problems were easier to solve compared to nonlinear problems. However, as Rockafellar pointed out in a 1993 survey (Rockafellar, 1993), “the great watershed in optimization isn’t between linearity and nonlinearity, but between convexity and nonconvexity.” In essence, convex problems can be optimally solved either in closed form—by applying the optimality conditions derived from Lagrange duality—or numerically—using highly efficient algorithms that exhibit polynomial convergence rates (S. P. Boyd and Vandenberghe, 2004). Consequently, it is often said that once a problem is formulated as a convex problem, it is essentially solved.
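As a simple illustration of the closed-form route (a standard textbook example included here only for concreteness, not developed further in this appendix), consider the equality-constrained minimum-norm problem, where the matrix \(\bm{A}\) is assumed to have full row rank:
\[
\begin{array}{ll}
\underset{\bm{x}}{\textm{minimize}} & \frac{1}{2}\bm{x}^\T\bm{x}\\
\textm{subject to} & \bm{A}\bm{x} = \bm{b}.
\end{array}
\]
Setting the gradient of the Lagrangian \(L(\bm{x},\bm{\lambda}) = \frac{1}{2}\bm{x}^\T\bm{x} + \bm{\lambda}^\T(\bm{A}\bm{x} - \bm{b})\) with respect to \(\bm{x}\) to zero gives \(\bm{x} = -\bm{A}^\T\bm{\lambda}\); enforcing the constraint \(\bm{A}\bm{x} = \bm{b}\) then yields \(\bm{\lambda} = -(\bm{A}\bm{A}^\T)^{-1}\bm{b}\), so the optimal solution is \(\bm{x}^\star = \bm{A}^\T(\bm{A}\bm{A}^\T)^{-1}\bm{b}\).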

Unfortunately, most practical problems do not exhibit convexity in their initial formulation. However, many of these problems may possess a hidden convexity that practitioners need to uncover to effectively utilize the tools provided by convex optimization theory.
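To give a flavor of such hidden convexity (a classical example along the lines of those in (S. P. Boyd and Vandenberghe, 2004), included here purely for illustration), consider the linear-fractional problem
\[
\begin{array}{ll}
\underset{\bm{x}}{\textm{minimize}} & \dfrac{\bm{c}^\T\bm{x} + d}{\bm{e}^\T\bm{x} + f}\\
\textm{subject to} & \bm{A}\bm{x} \le \bm{b},
\end{array}
\]
which is not convex as stated (the objective is a ratio of affine functions, assumed to have a positive denominator over the feasible set). The change of variables \(\bm{y} = \bm{x}/(\bm{e}^\T\bm{x} + f)\) and \(t = 1/(\bm{e}^\T\bm{x} + f)\) uncovers the hidden convexity, producing the equivalent linear program
\[
\begin{array}{ll}
\underset{\bm{y},\,t}{\textm{minimize}} & \bm{c}^\T\bm{y} + d\,t\\
\textm{subject to} & \bm{A}\bm{y} \le \bm{b}\,t,\quad \bm{e}^\T\bm{y} + f\,t = 1,\quad t \ge 0.
\end{array}
\]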

Generally, certain manipulations are necessary to transform a problem into a convex one. The advantage of formulating a problem in convex form is that, even in the absence of a closed-form solution and despite the problem’s complexity (which may include hundreds of variables and a nonlinear, nondifferentiable objective function), it can still be solved numerically with high efficiency, both theoretically and practically (S. P. Boyd and Vandenberghe, 2004). Another appealing aspect of expressing the problem in a convex form is that additional constraints can be easily incorporated, provided they are convex.
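As a minimal sketch of what this looks like in practice (using the Python modeling package CVXPY purely for illustration; it is not referenced in this appendix, and the data are random), the following code solves a small convex problem with a nondifferentiable objective and then incorporates an additional convex constraint by simply appending it to the constraint list:

```python
# A minimal sketch (not taken from this appendix) using the open-source
# Python modeling package CVXPY; problem data are random and purely
# illustrative.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(42)
m, n = 30, 10
A = rng.standard_normal((m, n))   # illustrative data matrix
b = rng.standard_normal(m)        # illustrative target vector

# Convex but nondifferentiable objective: least absolute deviations.
x = cp.Variable(n)
objective = cp.Minimize(cp.norm(A @ x - b, 1))
constraints = [x >= 0]                        # baseline convex constraints
prob = cp.Problem(objective, constraints)
prob.solve()
print("optimal value:", prob.value)

# Additional convex constraints are easily incorporated: just append
# them to the list and re-solve.
constraints += [cp.sum(x) == 1]               # e.g., a normalization constraint
prob = cp.Problem(objective, constraints)
prob.solve()
print("optimal value with extra constraint:", prob.value)
```

The key point is that neither the nondifferentiability of the \(\ell_1\) objective nor the extra constraint requires any change to the solution method; the underlying solver handles both automatically.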

This appendix provides a concise introduction to convex optimization theory, drawing from the textbook (S. P. Boyd and Vandenberghe, 2004). For more detailed information, the reader is encouraged to consult this comprehensive source.


References

Bertsekas, D. P. (1999). Nonlinear programming. Athena Scientific.
Bertsekas, D. P., Nedić, A., and Ozdaglar, A. E. (2003). Convex analysis and optimization. Athena Scientific.
Boyd, S. P., and Vandenberghe, L. (2004). Convex optimization. Cambridge University Press.
Luenberger, D. G. (1969). Optimization by vector space methods. Wiley.
Nemirovski, A. (2001). Lectures on modern convex optimization. Society for Industrial and Applied Mathematics (SIAM).
Nesterov, Y. (2018). Lectures on convex optimization. Springer.
Palomar, D. P., and Eldar, Y. C. (2009). Convex optimization in signal processing and communications. Cambridge University Press.
Rockafellar, R. T. (1970). Convex analysis. Princeton University Press.
Rockafellar, R. T. (1993). Lagrange multipliers and optimality. SIAM Review, 35(2), 183–238.