B.11 Summary
A broad spectrum of algorithms is available to solve optimization problems. The specific selection depends on the problem’s requirements and the user’s technical expertise.
Solvers are available for most types of convex and nonconvex formulations in virtually all popular programming languages. Most users will simply invoke a solver directly or, more conveniently, use a modeling framework to define the problem; the framework then calls an appropriate solver.
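As a minimal sketch of this workflow, the snippet below uses the CVXPY modeling framework in Python to define and solve a small constrained least-squares problem; the framework transforms the problem and selects a suitable solver automatically. The data are arbitrary placeholders chosen only for illustration.

```python
import cvxpy as cp
import numpy as np

# Placeholder problem data (illustrative only).
np.random.seed(0)
A = np.random.randn(20, 5)
b = np.random.randn(20)

# Define the variable and the problem in natural mathematical form.
x = cp.Variable(5)
objective = cp.Minimize(cp.sum_squares(A @ x - b))
constraints = [x >= 0, cp.sum(x) == 1]
problem = cp.Problem(objective, constraints)

# The modeling framework calls an appropriate underlying solver.
problem.solve()
print("Status:", problem.status)
print("Optimal x:", x.value)
```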
Inside a solver, various methods can be employed, such as gradient descent, Newton’s method, and interior-point methods. However, most users do not need to delve into these details.
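For readers curious about what such internal methods look like, here is a minimal sketch of gradient descent and a Newton step on an unconstrained quadratic; the objective, step size, and iteration count are illustrative assumptions, not what any particular solver implements.

```python
import numpy as np

# Illustrative quadratic: f(x) = 0.5 * x^T Q x - b^T x, with Q positive definite.
Q = np.array([[3.0, 0.5], [0.5, 2.0]])
b = np.array([1.0, -1.0])

def grad(x):
    return Q @ x - b

# Gradient descent with a fixed step size (assumed small enough to converge).
x = np.zeros(2)
step = 0.1
for _ in range(200):
    x = x - step * grad(x)

# Newton's method: for a quadratic, a single Newton step solves the problem exactly.
x_newton = np.linalg.solve(Q, b)
print("Gradient descent:", x, "Newton:", x_newton)
```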
More advanced users may seek to develop improved algorithms tailored to their specific problem. These can be faster, more efficient, simpler to implement, or capable of handling larger problems. This advanced approach demands more effort and expertise from the user, for example familiarity with the Dinkelbach method or the Charnes-Cooper-Schaible transform for fractional problems.
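As a concrete illustration of one such technique, the sketch below applies the Dinkelbach method to a toy fractional problem, maximizing f(x)/g(x) by repeatedly solving the subproblem max f(x) - λ g(x) and updating λ. The specific objective and its closed-form subproblem solution are assumptions made for readability.

```python
import numpy as np

# Toy fractional problem: maximize f(x)/g(x) = sqrt(x) / (1 + x) over x >= 0.
f = lambda x: np.sqrt(x)
g = lambda x: 1.0 + x

lam = 0.1  # initial guess for the ratio (assumed positive)
for _ in range(20):
    # Dinkelbach subproblem: maximize sqrt(x) - lam * (1 + x) over x >= 0.
    # For this toy objective the maximizer is x = 1 / (4 * lam**2) in closed form.
    x = 1.0 / (4.0 * lam ** 2)
    lam = f(x) / g(x)  # update the ratio estimate

print("x* ≈", x, "optimal ratio ≈", lam)  # converges to x = 1, ratio = 0.5
```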
Complex problems can be addressed using iterative algorithmic frameworks. These frameworks break down the original, intricate problem into more manageable subproblems, at the cost of requiring iterations. Some of the most widely used methods are listed below, followed by a small illustrative sketch of one of them:
- Bisection
- Block Coordinate Descent (BCD)
- Majorization-Minimization (MM)
- Successive Convex Approximation (SCA)
- Alternating Direction Method of Multipliers (ADMM)
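To make the flavor of these frameworks concrete, the following minimal sketch applies block coordinate descent (BCD) to a toy two-variable convex quadratic, alternately minimizing over each variable in closed form; the objective and update formulas are illustrative assumptions rather than a general-purpose implementation.

```python
# Toy convex objective: f(x, y) = (x - 1)^2 + (y - 2)^2 + x * y.
def f(x, y):
    return (x - 1.0) ** 2 + (y - 2.0) ** 2 + x * y

x, y = 0.0, 0.0
for _ in range(50):
    # Minimize over x with y fixed: 2(x - 1) + y = 0  ->  x = 1 - y / 2.
    x = 1.0 - y / 2.0
    # Minimize over y with x fixed: 2(y - 2) + x = 0  ->  y = 2 - x / 2.
    y = 2.0 - x / 2.0

print("BCD solution:", x, y, "objective:", f(x, y))  # converges to (0, 2)
```

The other frameworks in the list follow the same spirit of solving a sequence of simpler subproblems, differing in how those subproblems are constructed (e.g., surrogate majorizers in MM, convex approximations in SCA, or augmented Lagrangian splits in ADMM).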