Appendix A Convex Optimization Theory
“Mathematics, rightly viewed, possesses not only truth, but supreme beauty – a beauty cold and austere, like that of sculpture, without appeal to any part of our weaker nature …”
— Bertrand Russell
Over the past few decades, numerous fundamental and practical advances have been made in the field of convex optimization theory. These advances have not only found applications in fields such as engineering, finance, and machine learning, but have also spurred progress in both the underlying theory and the development of efficient algorithms. Some notable textbook references in this field include Boyd and Vandenberghe (2004), Luenberger and Ye (2021), Bertsekas (1999), Bertsekas et al. (2003), Nemirovski (2000), Ben-Tal and Nemirovski (2001), and Nesterov (2018). Two classic references are Luenberger (1969) and Rockafellar (1970). A range of applications of optimization in engineering can be found in Palomar and Eldar (2009).
Traditionally, there was a common belief that linear problems are easier to solve compared to nonlinear problems. However, as Rockafellar pointed out in a 1993 survey (Rockafellar, 1993), “the great watershed in optimization isn’t between linearity and nonlinearity, but between convexity and nonconvexity.” In essence, convex problems can be optimally solved either in closed form – by applying the optimality conditions derived from Lagrange duality – or numerically – using highly efficient algorithms that exhibit polynomial convergence rates (Boyd and Vandenberghe, 2004). Consequently, it is often said that once a problem is formulated as a convex problem, it is essentially solved.
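To make this concrete, the following is a minimal sketch, not taken from the text, of how a small convex problem (here an ordinary least-squares fit with made-up data) can be handed to an off-the-shelf modeling package and solved numerically; the choice of the CVXPY package and the random problem data are assumptions made purely for illustration.

```python
# Minimal illustrative sketch (assumed setup, not from the text):
# solve a small convex problem numerically with the CVXPY modeling package.
import numpy as np
import cvxpy as cp

np.random.seed(42)
A = np.random.randn(20, 5)   # hypothetical data matrix
b = np.random.randn(20)      # hypothetical observations

x = cp.Variable(5)
objective = cp.Minimize(cp.sum_squares(A @ x - b))   # convex objective
prob = cp.Problem(objective)
prob.solve()                 # a convex solver chosen by CVXPY does the work

print(prob.status, prob.value)   # "optimal" and the minimum objective value
print(x.value)                   # the optimal solution
```

Once the problem is recognized as convex and written in this form, the numerical solution requires no further derivation, which is the sense in which the problem is "essentially solved."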
Unfortunately, most practical problems do not exhibit convexity in their initial formulation. However, many of these problems may possess a hidden convexity that practitioners need to uncover to effectively utilize the tools provided by convex optimization theory.
Generally, certain manipulations are necessary to transform a problem into a convex one. The advantage of formulating a problem in convex form is that, even in the absence of a closed-form solution and despite the problem’s complexity (which may include hundreds of variables and a nonlinear, nondifferentiable objective function), it can still be solved numerically with high efficiency, both theoretically and practically (Boyd and Vandenberghe, 2004). Another appealing aspect of expressing the problem in a convex form is that additional constraints can be easily incorporated, provided they are convex.
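As an illustration of this last point, and continuing the hedged CVXPY sketch above, additional convex constraints (the nonnegativity and normalization constraints below are illustrative assumptions) can simply be appended to the problem without re-deriving anything.

```python
# Continuing the illustrative sketch: append extra convex constraints
# to the same objective and re-solve; no new derivation is needed.
constraints = [x >= 0, cp.sum(x) == 1]   # illustrative convex constraints
prob_constrained = cp.Problem(objective, constraints)
prob_constrained.solve()

print(prob_constrained.status, x.value)  # still solved efficiently
```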
This appendix provides a concise introduction to convex optimization theory, drawing on Boyd and Vandenberghe (2004). For more detailed information, the reader is encouraged to consult this comprehensive source.