Appendix B Optimization Algorithms
“Waste no more time arguing what a good man should be. Be one.”
— Marcus Aurelius
Over the past century, there has been remarkable progress in the development of efficient algorithms for solving a broad range of convex optimization problems. In 1947, Dantzig introduced the simplex method for linear programming (LP), which remains widely used and efficient in practice despite its exponential worst-case complexity. In 1984, Karmarkar published a groundbreaking paper (Karmarkar, 1984) proposing an interior-point method for LPs with polynomial worst-case complexity. Numerous researchers subsequently extended interior-point methods to quadratic programming (QP) and linear complementarity problems. In 1994, Nesterov and Nemirovskii further advanced the field with the theory of self-concordant functions, which extended algorithms based on the log-barrier function to a much wider class of convex problems, notably semidefinite programming (SDP) and second-order cone programming (SOCP) (Nesterov and Nemirovskii, 1994).
In addition to these general-purpose algorithms for broad classes of convex problems, there are other highly useful techniques and algorithmic frameworks, such as block-coordinate descent, majorization-minimization, and successive convex approximation. These can be employed to develop simple, customized algorithms tailored to specific (potentially nonconvex) problems, often with improved complexity and convergence rates, as illustrated in the sketch below. This appendix explores a broad array of such practical algorithms.
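As a minimal illustration of the flavor of such tailored algorithms, the following Python sketch applies block-coordinate descent to a toy strictly convex quadratic, f(x, y) = (x - 1)^2 + (y + 2)^2 + xy, alternately minimizing over each variable in closed form. The objective, starting point, and stopping rule are illustrative choices only, not a problem treated in this book.

```python
# A minimal sketch of block-coordinate descent (BCD) on a toy convex
# quadratic, f(x, y) = (x - 1)^2 + (y + 2)^2 + x*y.
# The objective and stopping rule are purely illustrative.

def bcd_toy(num_iters: int = 50, tol: float = 1e-8):
    x, y = 0.0, 0.0  # arbitrary starting point
    for _ in range(num_iters):
        x_prev, y_prev = x, y
        # Minimize over x with y fixed: 2*(x - 1) + y = 0  =>  x = 1 - y/2
        x = 1.0 - y / 2.0
        # Minimize over y with x fixed: 2*(y + 2) + x = 0  =>  y = -2 - x/2
        y = -2.0 - x / 2.0
        # Stop when neither block changes appreciably
        if max(abs(x - x_prev), abs(y - y_prev)) < tol:
            break
    return x, y

print(bcd_toy())  # approaches (8/3, -10/3)
```

Because each block update solves its subproblem exactly and the objective is strictly convex, the iterates converge to the global minimizer (8/3, -10/3); the same alternating template underlies the more elaborate block-coordinate methods discussed later.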