\( \newcommand{\bm}[1]{\boldsymbol{#1}} \newcommand{\textm}[1]{\textsf{#1}} \newcommand{\textnormal}[1]{\textsf{#1}} \def\T{{\mkern-2mu\raise-1mu\mathsf{T}}} \newcommand{\R}{\mathbb{R}} % real numbers \newcommand{\E}{{\rm I\kern-.2em E}} \newcommand{\w}{\bm{w}} % bold w \newcommand{\bmu}{\bm{\mu}} % bold mu \newcommand{\bSigma}{\bm{\Sigma}} % bold Sigma \newcommand{\bigO}{O} %\mathcal{O} \renewcommand{\d}[1]{\operatorname{d}\!{#1}} \)

B.11 Summary

  • A broad spectrum of algorithms is available to solve optimization problems. The specific selection depends on the problem’s requirements and the user’s technical expertise.

  • Solvers are available for most types of convex and nonconvex formulations in most programming languages. Most users will simply invoke a solver directly or, more conveniently, use a modeling framework to specify the problem at a high level; the framework then calls an appropriate solver, as sketched below.
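
    For illustration, here is a minimal sketch of this workflow using the CVXPY modeling framework in Python (one framework among many; the problem data below are made up for the example):

    ```python
    import cvxpy as cp
    import numpy as np

    # Illustrative data: a small least-squares problem with a simplex
    # constraint (e.g., long-only portfolio weights summing to one).
    np.random.seed(0)
    A = np.random.randn(10, 5)
    b = np.random.randn(10)

    w = cp.Variable(5)
    problem = cp.Problem(cp.Minimize(cp.sum_squares(A @ w - b)),
                         [cp.sum(w) == 1, w >= 0])
    problem.solve()  # the framework selects and calls a suitable solver
    print(problem.status, w.value)
    ```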

  • Inside a solver, various methods can be employed, such as gradient descent, Newton’s method, and interior-point methods. However, most users do not need to delve into these details; a bare-bones sketch of one such method follows.
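
    As a taste of what happens inside a solver, the following is an illustrative sketch of fixed-step gradient descent (practical solvers add line searches, Newton or quasi-Newton steps, or interior-point machinery; the example objective is made up):

    ```python
    import numpy as np

    def gradient_descent(grad, x0, step=0.01, tol=1e-8, max_iter=10_000):
        """Plain fixed-step gradient descent (illustrative sketch only)."""
        x = x0
        for _ in range(max_iter):
            g = grad(x)
            if np.linalg.norm(g) < tol:  # stop when the gradient is (near) zero
                break
            x = x - step * g
        return x

    # Example: minimize f(x) = ||x - c||^2, whose gradient is 2 (x - c).
    c = np.array([1.0, -2.0])
    x_star = gradient_descent(lambda x: 2 * (x - c), x0=np.zeros(2))
    print(x_star)  # approximately [1.0, -2.0]
    ```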

  • More advanced users may want to develop improved algorithms tailored to their specific problem. These can be faster, more efficient, simpler to implement, or capable of handling larger problems. This advanced approach demands more effort and expertise from the user, such as familiarity with the Dinkelbach method or the Charnes–Cooper–Schaible transform for fractional problems; a Dinkelbach sketch is given below.
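
    As a taste of such tailored algorithms, here is a sketch of Dinkelbach’s method applied to a linear fractional program over a box. The data, box bounds, and tolerance are made up for illustration, and the denominator is assumed positive on the box:

    ```python
    import numpy as np

    def dinkelbach(a, b, c, d, lb, ub, tol=1e-10, max_iter=100):
        """Dinkelbach's method for maximizing (a^T x + b) / (c^T x + d)
        over the box lb <= x <= ub, assuming the denominator is positive
        on the box (illustrative sketch).

        Each iteration solves the parametric subproblem
            maximize  f(x) - lam * g(x),
        which is linear here, so its box maximizer is found coordinate-wise.
        """
        x = (lb + ub) / 2
        for _ in range(max_iter):
            lam = (a @ x + b) / (c @ x + d)  # current ratio
            coef = a - lam * c               # slope of the subproblem objective
            x = np.where(coef > 0, ub, lb)   # maximizer of a linear function on a box
            if (a @ x + b) - lam * (c @ x + d) < tol:  # F(lam) ~ 0 => lam optimal
                break
        return x, lam

    x_opt, ratio = dinkelbach(a=np.array([1.0, 2.0]), b=0.0,
                              c=np.array([1.0, 1.0]), d=1.0,
                              lb=np.zeros(2), ub=np.ones(2))
    print(x_opt, ratio)  # ratio converges to the maximal value of the fraction
    ```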

  • Complex problems can be addressed using iterative algorithmic frameworks. These frameworks break the original, intricate problem down into more manageable subproblems, at the cost of requiring iterations. Some of the most popular and widely used methods include the following (a bisection sketch is given after the list):

    • bisection
    • block coordinate descent (BCD)
    • majorization–minimization (MM)
    • successive convex approximation (SCA)
    • alternating direction method of multipliers (ADMM).
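
    As a concrete instance of the first item, here is an illustrative sketch of bisection on a scalar target value, as used, for example, in quasiconvex optimization; the `feasible` oracle and the toy example are made up (in practice, `feasible` would solve a convex feasibility problem):

    ```python
    def bisection(feasible, lo, hi, tol=1e-6):
        """Bisection on a scalar target t: `feasible(t)` returns True iff the
        constraint "objective <= t" is achievable, and is monotone in t.
        Illustrative sketch only."""
        while hi - lo > tol:
            mid = (lo + hi) / 2
            if feasible(mid):
                hi = mid   # t = mid is achievable: the optimal value is <= mid
            else:
                lo = mid   # t = mid is not achievable: the optimal value is > mid
        return hi

    # Toy example: minimize x^2 over x in [1, 3]; "objective <= t" is feasible
    # exactly when t >= 1, so bisection returns (approximately) 1.
    opt = bisection(lambda t: t >= 1.0, lo=0.0, hi=10.0)
    print(opt)  # ~1.0
    ```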