\( \newcommand{\bm}[1]{\boldsymbol{#1}} \newcommand{\textm}[1]{\textsf{#1}} \def\T{{\mkern-2mu\raise-1mu\mathsf{T}}} \newcommand{\R}{\mathbb{R}} % real numbers \newcommand{\E}{{\rm I\kern-.2em E}} \newcommand{\w}{\bm{w}} % bold w \newcommand{\bmu}{\bm{\mu}} % bold mu \newcommand{\bSigma}{\bm{\Sigma}} % bold Sigma \newcommand{\bigO}{O} %\mathcal{O} \renewcommand{\d}[1]{\operatorname{d}\!{#1}} \)

A.5 Taxonomy of convex problems

A convex problem, such as the one in (A.6), can be further classified into more specific problem types, identified by abbreviations such as LP, QP, QCQP, SOCP, SDP, CP, FP, LFP, and GP, which we will briefly explore next. For more detailed information, readers are referred to (S. P. Boyd and Vandenberghe, 2004, Chapter 4). In the traditional optimization literature, a problem is also referred to as a program; for example, a linear program is the same as a linear problem.

This classification is beneficial for both theoretical and algorithmic purposes. For instance, solvers are designed to handle specific types of problems (see Section B.1 in Appendix B for more details on solvers).

A.5.1 Linear programming

When the objective and constraint functions are all affine, a problem is called a linear program or linear problem (LP): \[ \begin{array}{ll} \underset{\bm{x}}{\textm{minimize}} & \begin{array}{c}\bm{c}^\T\bm{x} + d\end{array}\\ \textm{subject to} & \begin{array}[t]{l} \bm{G}\bm{x}\leq \bm{h}\\ \bm{A}\bm{x}=\bm{b}, \end{array} \end{array} \] where the parameters \(\bm{A}\), \(\bm{b}\), \(\bm{c}\), \(d\), \(\bm{G}\), and \(\bm{h}\) are of appropriate size. Linear programs are, of course, convex optimization problems.
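To make this concrete, the following is a minimal sketch of how an LP of this form can be specified in a modeling language, here assuming the CVXPY package; the problem data are random placeholders of appropriate size, not anything from the text, and the constant \(d\) is omitted since it does not affect the minimizer.

```python
import cvxpy as cp
import numpy as np

# Random placeholder data standing in for c, G, h, A, b (not from the text).
np.random.seed(0)
n, m, p = 5, 8, 2
c = np.random.randn(n)
G, h = np.random.randn(m, n), np.ones(m)
A, b = np.random.randn(p, n), np.zeros(p)

x = cp.Variable(n)
prob = cp.Problem(cp.Minimize(c @ x),
                  [G @ x <= h, A @ x == b])
prob.solve()                     # dispatched to an LP-capable solver
print(prob.status, prob.value)
```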

Geometrically, an LP can be visualized as a flat (affine) objective tilted over a polyhedral feasible set; whenever an optimal solution exists, one always lies at a vertex of the polyhedron. This observation forms the basis of the popular simplex method, developed by Dantzig in 1947, for solving LPs.

While problems involving only linear functions are easily recognizable as LPs, some formulations involving \(\ell_{\infty}\)-norm and \(\ell_{1}\)-norm minimization can also be rewritten as LPs, as shown next.

Example A.10 (\(\ell_{\infty}\)-norm minimization as an LP) The problem \[ \begin{array}{ll} \underset{\bm{x}}{\textm{minimize}} & \begin{array}{c}\left\Vert\bm{x}\right\Vert _{\infty}\end{array}\\ \textm{subject to} & \begin{array}[t]{l} \bm{G}\bm{x}\leq \bm{h}\\ \bm{A}\bm{x}=\bm{b}, \end{array} \end{array} \] is equivalent to the LP \[ \begin{array}{ll} \underset{t,\bm{x}}{\textm{minimize}} & \begin{array}{c} t\end{array}\\ \textm{subject to} & \begin{array}[t]{l} -t\bm{1}\leq \bm{x}\leq t\bm{1}\\ \bm{G}\bm{x}\leq \bm{h}\\ \bm{A}\bm{x}=\bm{b}. \end{array} \end{array} \]

Example A.11 (\(\ell_1\)-norm minimization as an LP) The problem \[ \begin{array}{ll} \underset{\bm{x}}{\textm{minimize}} & \begin{array}{c}\left\Vert\bm{x}\right\Vert _{1}\end{array}\\ \textm{subject to} & \begin{array}[t]{l} \bm{G}\bm{x}\leq \bm{h}\\ \bm{A}\bm{x}=\bm{b} \end{array} \end{array} \] is equivalent to the LP \[ \begin{array}{ll} \underset{\bm{t},\bm{x}}{\textm{minimize}} & \begin{array}{c} \sum_{i}t_{i}\end{array}\\ \textm{subject to} & \begin{array}[t]{l} -\bm{t}\leq \bm{x}\leq \bm{t}\\ \bm{G}\bm{x}\leq \bm{h}\\ \bm{A}\bm{x}=\bm{b}. \end{array} \end{array} \]
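As a sanity check of Example A.11, the epigraph reformulation can be written out explicitly and compared against a direct norm formulation. The sketch below assumes CVXPY (which performs this kind of rewrite internally) and uses placeholder data, with the inequality constraints omitted for brevity.

```python
import cvxpy as cp
import numpy as np

# Placeholder data (not from the text).
np.random.seed(1)
n, p = 5, 2
A = np.random.randn(p, n)
b = np.ones(p)

x, t = cp.Variable(n), cp.Variable(n)
# Explicit epigraph LP of Example A.11: minimize sum(t) with -t <= x <= t.
lp = cp.Problem(cp.Minimize(cp.sum(t)), [-t <= x, x <= t, A @ x == b])
lp.solve()

# Direct formulation: the modeling layer performs the same rewrite internally.
direct = cp.Problem(cp.Minimize(cp.norm(x, 1)), [A @ x == b])
direct.solve()
print(lp.value, direct.value)   # agree up to solver tolerance
```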

A.5.2 Linear-fractional programming

The problem of minimizing a ratio of affine functions over a polyhedron is called a linear-fractional program (LFP): \[\begin{equation} \begin{array}{ll} \underset{\bm{x}}{\textm{minimize}} & \begin{array}{c}\left(\bm{c}^\T\bm{x}+d\right)/\left(\bm{e}^\T\bm{x}+f\right)\end{array}\\ \textm{subject to} & \begin{array}[t]{l} \bm{G}\bm{x}\leq \bm{h}\\ \bm{A}\bm{x}=\bm{b} \end{array} \end{array} \tag{A.11} \end{equation}\] with \(\textm{dom}\,f_{0}=\left\{\bm{x}\mid \bm{e}^\T\bm{x}+f>0\right\}.\)

An LFP is not a convex problem, but it is quasiconvex. Therefore, problem (A.11) can be solved via bisection (see Algorithm A.1) by sequentially solving a series of LP feasibility problems of the form: \[ \begin{array}{ll} \underset{\bm{x}}{\textm{find}} & \begin{array}{c}\bm{x}\end{array}\\ \textm{subject to} & \begin{array}[t]{l} t \left(\bm{e}^\T\bm{x}+f\right) \geq \bm{c}^\T\bm{x}+d\\ \bm{G}\bm{x}\leq \bm{h}\\ \bm{A}\bm{x}=\bm{b}. \end{array} \end{array} \]
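A minimal sketch of this bisection loop, again assuming CVXPY, could look as follows; the function name is hypothetical, the data are passed in as arrays, and an initial bracket \([t_{\textm{lo}}, t_{\textm{hi}}]\) containing the optimal value is assumed to be known.

```python
import cvxpy as cp

def lfp_bisection(c, d, e, f, G, h, A, b, t_lo, t_hi, tol=1e-6):
    """Bisection for the LFP (A.11): halve [t_lo, t_hi] on LP feasibility."""
    x = cp.Variable(c.size)
    while t_hi - t_lo > tol:
        t = 0.5 * (t_lo + t_hi)
        feas = cp.Problem(cp.Minimize(0),
                          [c @ x + d <= t * (e @ x + f),  # f0(x) <= t (denominator > 0)
                           G @ x <= h,
                           A @ x == b])
        feas.solve()
        if feas.status == cp.OPTIMAL:
            t_hi = t          # feasible: the optimal value is at most t
        else:
            t_lo = t          # infeasible: the optimal value exceeds t
    return x.value, t_hi
```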

Alternatively, the LFP in (A.11) can be transformed into the following LP via the Charnes-Cooper transform (Bajalinov, 2003; Charnes and Cooper, 1962): \[ \begin{array}{ll} \underset{\bm{y},t}{\textm{minimize}} & \begin{array}{c} \bm{c}^\T\bm{y}+dt\end{array}\\ \textm{subject to} & \begin{array}[t]{l} \bm{G}\bm{y}\leq \bm{h}t\\ \bm{A}\bm{y}=\bm{b}t\\ \bm{e}^\T \bm{y} + ft = 1\\ t>0, \end{array} \end{array} \] with variables \(\bm{y},t\), related to the original variable \(\bm{x}\) as \[ \bm{y} = \frac{\bm{x}}{\bm{e}^\T \bm{x}+f}\quad\textm{and} \quad t = \frac{1}{\bm{e}^\T \bm{x}+f}. \] The original variable can be easily recovered from \(\bm{y},t\) as \(\bm{x} = \bm{y}/t.\) For details on the Charnes-Cooper transform, the reader is referred to Section B.5.3 in Appendix B.
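Under the same assumptions, the Charnes-Cooper LP can be written down directly; the sketch below (hypothetical function name) relaxes the open constraint \(t>0\) to \(t\geq0\), which is standard practice when the feasible set of the original LFP is bounded.

```python
import cvxpy as cp

def lfp_charnes_cooper(c, d, e, f, G, h, A, b):
    """Solve the LFP (A.11) exactly via a single LP (Charnes-Cooper transform)."""
    n = c.size
    y, t = cp.Variable(n), cp.Variable(nonneg=True)   # t > 0 relaxed to t >= 0
    lp = cp.Problem(cp.Minimize(c @ y + d * t),
                    [G @ y <= h * t,
                     A @ y == b * t,
                     e @ y + f * t == 1])
    lp.solve()
    return y.value / t.value                          # recover x = y / t
```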

A.5.3 Quadratic programming

The convex optimization problem (A.6) is called a quadratic program (QP) if the objective function is (convex) quadratic and the constraint functions are affine: \[ \begin{array}{ll} \underset{\bm{x}}{\textm{minimize}} & \begin{array}{c}\frac{1}{2}\bm{x}^\T\bm{P}\bm{x}+\bm{q}^\T\bm{x}+r\end{array}\\ \textm{subject to} & \begin{array}[t]{l} \bm{G}\bm{x}\leq \bm{h}\\ \bm{A}\bm{x}=\bm{b}, \end{array} \end{array} \] where \(\bm{P}\succeq\bm{0}\). QPs include LPs as a special case when \(\bm{P}=\bm{0}.\)

Geometrically, a QP can be visualized as the elliptical contours of the objective laid over a polyhedron; in contrast to an LP, the optimal solution does not necessarily coincide with a vertex of the polyhedron.

Example A.12 (Least squares) The LS problem \[ \begin{array}{ll} \underset{\bm{x}}{\textm{minimize}} & \begin{array}{c}\left\Vert \bm{A}\bm{x} - \bm{b}\right\Vert_2^2\end{array}\\ \end{array} \] is an unconstrained QP.

Example A.13 (Box-constrained LS) The following regression problem with upper and lower bounds on the variables \[ \begin{array}{ll} \underset{\bm{x}}{\textm{minimize}} & \begin{array}{c}\left\Vert \bm{A}\bm{x} - \bm{b}\right\Vert_2^2\end{array}\\ \textm{subject to} & \begin{array}[t]{l} l_i \leq x_i \leq u_i,\quad i=1,\dots,n\\ \end{array} \end{array} \] is a QP.
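A quick sketch of Example A.13 in CVXPY, with random placeholder data (nothing here is from the text):

```python
import cvxpy as cp
import numpy as np

# Box-constrained least squares as a QP.
np.random.seed(3)
m, n = 20, 5
A = np.random.randn(m, n)
b = np.random.randn(m)
l, u = -np.ones(n), np.ones(n)

x = cp.Variable(n)
qp = cp.Problem(cp.Minimize(cp.sum_squares(A @ x - b)),  # convex quadratic objective
                [l <= x, x <= u])                         # box constraints
qp.solve()
print(qp.value, x.value)
```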

If the objective function in (A.6) as well as the inequality constraint functions are (convex) quadratic, then the problem is called a quadratically constrained quadratic program (QCQP): \[ \begin{array}{ll} \underset{\bm{x}}{\textm{minimize}} & \begin{array}{c} \frac{1}{2}\bm{x}^\T\bm{P}_{0}\bm{x} + \bm{q}_{0}^\T\bm{x} + r_{0}\end{array}\\ \textm{subject to} & \begin{array}[t]{l} \frac{1}{2}\bm{x}^\T\bm{P}_{i}\bm{x} + \bm{q}_{i}^\T\bm{x} + r_{i}\leq0,\quad i=1,\ldots,m\\ \bm{A}\bm{x}=\bm{b}, \end{array} \end{array} \] where \(\bm{P}_i\succeq\bm{0}\). In this case, the feasible region is the intersection of ellipsoids (when \(\bm{P}_i\succ\bm{0}\)). QCQPs include QPs as a special case when \(\bm{P}_i=\bm{0}\) for \(i=1,\dots,m.\)
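The QP sketch above extends directly to a QCQP; in the placeholder example below, \(\bm{P}_0\) and \(\bm{P}_1\) are built as Gram matrices so that they are guaranteed positive semidefinite and the problem is convex.

```python
import cvxpy as cp
import numpy as np

# A QCQP with one quadratic constraint; Gram matrices ensure P0, P1 are PSD.
np.random.seed(4)
n = 4
F0, F1 = np.random.randn(n, n), np.random.randn(n, n)
P0, P1 = F0.T @ F0, F1.T @ F1
q0, q1 = np.random.randn(n), np.random.randn(n)

x = cp.Variable(n)
qcqp = cp.Problem(cp.Minimize(0.5 * cp.quad_form(x, P0) + q0 @ x),
                  [0.5 * cp.quad_form(x, P1) + q1 @ x - 1.0 <= 0])  # r1 = -1
qcqp.solve()
print(qcqp.value)
```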

A.5.4 Second-order cone programming

A convex problem that is closely related to quadratic programming is the second-order cone program (SOCP): \[ \begin{array}{ll} \underset{\bm{x}}{\textm{minimize}} & \begin{array}{c}\bm{f}^\T \bm{x}\end{array}\\ \textm{subject to} & \begin{array}[t]{l} \left\Vert \bm{A}_{i}\bm{x}+\bm{b}_{i}\right\Vert \leq \bm{c}_{i}^\T\bm{x}+d_{i}, \quad i=1,\ldots,m\\ \bm{F}\bm{x}=\bm{g}, \end{array} \end{array} \] where the constraints \(\left\Vert \bm{A}_{i}\bm{x}+\bm{b}_{i}\right\Vert \leq \bm{c}_{i}^\T\bm{x}+d_{i}\) are called second-order cone (SOC) constraints since they are affine compositions of the (convex) SOC \[\begin{equation} \left\{\left( \bm{x},t\right) \in \mathbb{R}^{n+1}\mid\left\Vert \bm{x}\right\Vert \leq t\right\}. \tag{A.12} \end{equation}\]

An SOCP reduces to a QCQP when \(\bm{c}_{i}=\bm{0}\) for \(i=1,\dots,m\) (by squaring both sides of the inequalities). If each \(\bm{A}_{i}\) is a row vector (or \(\bm{A}_{i}=\bm{0}\)), then an SOCP reduces to an LP.

A comprehensive survey of SOCPs and their applications can be found in (Lobo et al., 1998).

Example A.14 (Robust LP) Consider the linear program \[ \begin{array}{ll} \underset{\bm{x}}{\textm{minimize}} & \begin{array}{c} \bm{c}^\T\bm{x} \end{array}\\ \textm{subject to} & \begin{array}[t]{l} \bm{a}_i^\T\bm{x}\leq b_i,\quad i=1,\ldots,m. \end{array} \end{array} \]

Now suppose there is some uncertainty in the parameters \(\bm{a}_i\) and they are known to lie in given ellipsoids: \[ \mathcal{E}_{i}=\left\{ \bar{\bm{a}}_i + \bm{P}_i \bm{u}\mid\left\Vert\bm{u}\right\Vert \leq1\right\}, \] where \(\bm{P}_i\in\R^{n\times n}\). If we require that the constraints be satisfied for all possible values of the parameters \(\bm{a}_i\), we obtain a robust LP: \[ \begin{array}{ll} \underset{\bm{x}}{\textm{minimize}} & \begin{array}{c} \bm{c}^\T\bm{x} \end{array}\\ \textm{subject to} & \begin{array}[t]{l} \bm{a}_i^\T\bm{x}\leq b_i, \textm{ for all }\bm{a}_{i}\in\mathcal{E}_{i},\quad i=1,\ldots,m. \end{array} \end{array} \] The robust constraint \(\bm{a}_i^\T\bm{x}\leq b_i\) for all \(\bm{a}_i\in\mathcal{E}_i\) can be equivalently expressed as \[ \textm{sup}\left\{\bm{a}_i^\T\bm{x} \mid \bm{a}_i\in\mathcal{E}_i\right\} = \bar{\bm{a}}_i^\T\bm{x} + \left\Vert \bm{P}_i^\T\bm{x} \right\Vert_2 \leq b_i. \] Hence, the robust LP can be expressed as the SOCP \[ \begin{array}{ll} \underset{\bm{x}}{\textm{minimize}} & \begin{array}{c} \bm{c}^\T\bm{x} \end{array}\\ \textm{subject to} & \begin{array}[t]{l} \bar{\bm{a}}_i^\T\bm{x} + \left\Vert \bm{P}_i^\T\bm{x} \right\Vert_2\leq b_i,\quad i=1,\ldots,m. \end{array} \end{array} \]
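A sketch of this robust LP as an SOCP, assuming CVXPY and placeholder ellipsoid data; a norm ball on \(\bm{x}\) is added only to keep the toy instance bounded.

```python
import cvxpy as cp
import numpy as np

# Robust LP -> SOCP: each a_i lies in an ellipsoid abar_i + P_i u, ||u|| <= 1.
np.random.seed(5)
m, n = 6, 3
abar = np.random.randn(m, n)
P = [0.1 * np.random.randn(n, n) for _ in range(m)]
b = np.ones(m)
c = np.random.randn(n)

x = cp.Variable(n)
robust_cons = [abar[i] @ x + cp.norm(P[i].T @ x, 2) <= b[i] for i in range(m)]
socp = cp.Problem(cp.Minimize(c @ x),
                  robust_cons + [cp.norm(x, 2) <= 10])   # keeps the sketch bounded
socp.solve()
print(socp.value)
```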

A.5.5 Semidefinite programming

A more general convex problem than an SOCP is the semidefinite program (SDP), formulated as \[ \begin{array}{ll} \underset{\bm{x}}{\textm{minimize}} & \begin{array}{c}\bm{c}^\T\bm{x}\end{array}\\ \textm{subject to} & \begin{array}[t]{l} x_{1}\bm{F}_{1}+x_{2}\bm{F}_{2} + \cdots + x_{n}\bm{F}_{n} + \bm{G} \preceq \bm{0}\\ \bm{A}\bm{x}=\bm{b}, \end{array} \end{array} \] which has linear matrix inequality (LMI) (convex) constraints of the form \[\begin{equation} x_{1}\bm{F}_{1}+\ldots+x_{n}\bm{F}_{n}+\bm{G} \preceq \bm{0}, \tag{A.13} \end{equation}\] where \(\bm{F}_{1},\cdots,\bm{F}_{n},\bm{G}\in\mathbb{S}^{k}\) (\(\mathbb{S}^{k}\) is the set of symmetric \(k\times k\) matrices) and \(\bm{A}\succeq\bm{B}\) means that \(\bm{A}-\bm{B}\) is positive semidefinite. Note that multiple LMI constraints can always be written as a single one by using block diagonal matrices.

An SDP reduces to an LP when the matrix in the LMI inequality is diagonal, that is, the SDP \[ \begin{array}{ll} \underset{\bm{x}}{\textm{minimize}} & \begin{array}{c}\bm{c}^\T\bm{x}\end{array}\\ \textm{subject to} & \begin{array}[t]{l} \textm{Diag}(\bm{A}\bm{x}-\bm{b})\preceq\bm{0}\end{array} \end{array} \] is equivalent to the LP \[ \begin{array}{ll} \underset{\bm{x}}{\textm{minimize}} & \begin{array}{c}\bm{c}^\T\bm{x}\end{array}\\ \textm{subject to} & \begin{array}[t]{l} \bm{A}\bm{x}\leq \bm{b}.\end{array} \end{array} \]

An SDP can also be reduced to an SOCP when the matrix in the LMI has a specific \(2\times2\) block structure. In this case, the SDP \[ \begin{array}{ll} \underset{\bm{x}}{\textm{minimize}} & \begin{array}{c}\bm{f}^\T\bm{x}\end{array}\\ \textm{subject to} & \begin{array}[t]{l} \begin{bmatrix} \left(\bm{c}_{i}^\T\bm{x}+d_{i}\right)\bm{I} & \bm{A}_{i}\bm{x}+\bm{b}_{i}\\ \left(\bm{A}_{i}\bm{x}+\bm{b}_{i}\right)^\T & \bm{c}_{i}^\T\bm{x}+d_{i} \end{bmatrix}\succeq\bm{0},\quad i=1,\ldots,m\end{array} \end{array} \] is equivalent to the SOCP \[ \begin{array}{ll} \underset{\bm{x}}{\textm{minimize}} & \begin{array}{c}\bm{f}^\T\bm{x}\end{array}\\ \textm{subject to} & \begin{array}[t]{l} \left\Vert \bm{A}_{i}\bm{x}+\bm{b}_{i}\right\Vert \leq \bm{c}_{i}^\T\bm{x}+d_{i},\quad i=1,\ldots,m.\end{array} \end{array} \] The equivalence can be shown via the Schur complement: \[ \bm{X} = \begin{bmatrix} \bm{A} & \bm{B}\\ \bm{B}^\T & \bm{C} \end{bmatrix}\succeq\bm{0} \quad\Longleftrightarrow\quad \bm{S} = \bm{C} - \bm{B}^\T\bm{A}^{-1}\bm{B} \succeq\bm{0}, \] where we have tacitly assumed \(\bm{A}\succ\bm{0}\).

A comprehensive survey of SDPs can be found in (Vandenberghe and Boyd, 1996).

Example A.15 (Eigenvalue minimization) The maximum eigenvalue minimization problem \[ \begin{array}{ll} \underset{\bm{x}}{\textm{minimize}} & \begin{array}{c} \lambda_{\textm{max}}\left(\bm{A}(\bm{x})\right), \end{array}\end{array} \] where \(\bm{A}(\bm{x})=\bm{A}_{0}+x_{1}\bm{A}_{1}+\cdots+x_{n}\bm{A}_{n}\), is equivalent to the SDP \[ \begin{array}{ll} \underset{\bm{x}}{\textm{minimize}} & \begin{array}{c} t\end{array}\\ \textm{subject to} & \begin{array}[t]{l} \bm{A}(\bm{x})\preceq t\bm{I}.\end{array} \end{array} \]

This follows from \[ \begin{array}[t]{l} \lambda_{\textm{max}}\left(\bm{A}(\bm{x})\right)\leq t\end{array} \quad\Longleftrightarrow\quad \begin{array}[t]{l} \bm{A}(\bm{x})\preceq t\bm{I}. \end{array} \]
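Example A.15 can be coded either with the LMI written explicitly or with a built-in \(\lambda_{\max}\) atom. Both forms appear in the sketch below, assuming CVXPY and random symmetric data; a box on \(\bm{x}\) is added because, for arbitrary \(\bm{A}_i\), the problem may be unbounded below.

```python
import cvxpy as cp
import numpy as np

# Minimize lambda_max(A0 + x1*A1 + ... + xn*An) as an SDP (random symmetric data).
np.random.seed(6)
k, n = 4, 3
sym = lambda M: (M + M.T) / 2
A0 = sym(np.random.randn(k, k))
As = [sym(np.random.randn(k, k)) for _ in range(n)]

x, t = cp.Variable(n), cp.Variable()
Ax = A0 + sum(x[i] * As[i] for i in range(n))
box = [cp.norm(x, 'inf') <= 1]            # keeps this toy instance bounded
sdp = cp.Problem(cp.Minimize(t), [Ax << t * np.eye(k)] + box)  # A(x) <= t*I
sdp.solve()

# Equivalent formulation via the lambda_max atom:
direct = cp.Problem(cp.Minimize(cp.lambda_max(Ax)), box)
direct.solve()
print(sdp.value, direct.value)            # agree up to solver tolerance
```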

A.5.6 Conic programming

A useful generalization of the standard convex optimization problem (A.6) can be achieved by allowing the inequality constraints to be vector-valued and incorporating generalized inequalities into the constraints: \[ \begin{array}{ll} \underset{\bm{x}}{\textm{minimize}} & \begin{array}[t]{c} f_{0}(\bm{x}) \end{array} \\ \textm{subject to} & \begin{array}[t]{l} \bm{f}_{i}(\bm{x})\preceq_{\mathcal{K}_{i}}\bm{0},\\ \bm{h}_{i}(\bm{x})=\bm{0}, \end{array} \quad \begin{array}[t]{l} 1\leq i\leq m\\ 1\leq i\leq p, \end{array} \end{array} \] where the generalized inequalities\(^{1}\) ‘\(\preceq_{\mathcal{K}_{i}}\)’ are defined by the proper cones \(\mathcal{K}_{i}\) (note that \(\bm{a}\preceq_{\mathcal{K}}\bm{b}\) \(\Leftrightarrow\) \(\bm{b} - \bm{a}\in\mathcal{K}\)) and the \(\bm{f}_{i}\) are \(\mathcal{K}_{i}\)-convex\(^{2}\) (see Section A.7 for more details on generalized inequalities).

Among the simplest convex optimization problems with generalized inequalities are cone programs (CP) (or conic-form problems), which have a linear objective and one inequality constraint function (Ben-Tal and Nemirovski, 2001; S. P. Boyd and Vandenberghe, 2004): \[\begin{equation} \begin{array}{ll} \underset{\bm{x}}{\textm{minimize}} & \begin{array}[t]{c} \bm{c}^\T\bm{x} \end{array} \\ \textm{subject to} & \begin{array}[t]{l} \bm{Fx}+\bm{g}\preceq_{\mathcal{K}}\bm{0}\\ \bm{Ax}=\bm{b}. \end{array} \end{array} \tag{A.14} \end{equation}\]

CPs particularize nicely to LPs, SOCPs, and SDPs as follows (a compact sketch follows the list):

  • If \(\mathcal{K}=\R_{+}^{n}\) (nonnegative orthant), the partial ordering \(\preceq_{\mathcal{K}}\) is the usual componentwise inequality between vectors and the CP (A.14) reduces to an LP.

  • If \(\mathcal{K}=\mathcal{C}^{n}\) (second-order cone), \(\preceq_{\mathcal{K}}\) corresponds to a constraint of the form (A.12) and the CP (A.14) becomes an SOCP.

  • If \(\mathcal{K}=\mathbb{S}_{+}^{n}\) (positive semidefinite cone), the generalized inequality \(\preceq_{\mathcal{K}}\) reduces to the usual matrix inequality as in (A.13) and the CP (A.14) simplifies to an SDP.
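To make the three specializations tangible, here is a compact sketch in which essentially only the cone changes; the data are placeholders, and the SDP instance uses a matrix variable directly rather than the vectorized form of (A.14).

```python
import cvxpy as cp
import numpy as np

np.random.seed(7)
n = 3
c = np.random.randn(n)
x = cp.Variable(n)

# K = R_+^n: componentwise inequalities give an LP (a box keeps it bounded).
lp = cp.Problem(cp.Minimize(c @ x), [x >= -1, x <= 1])

# K = second-order cone: a constraint of the form (A.12) gives an SOCP.
socp = cp.Problem(cp.Minimize(c @ x), [cp.norm(x, 2) <= 1])

# K = PSD cone: a matrix inequality as in (A.13) gives an SDP.
C = np.random.randn(n, n); C = (C + C.T) / 2
X = cp.Variable((n, n), symmetric=True)
sdp = cp.Problem(cp.Minimize(cp.trace(C @ X)), [X >> 0, cp.diag(X) == 1])

for p in (lp, socp, sdp):
    p.solve()
    print(p.value)
```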

A.5.7 Fractional programming

Fractional programming is a family of optimization problems that involve ratios. Its origins can be traced back to a 1937 paper on economic expansion (von Neumann, 1937). Since then, it has inspired research in various fields such as economics, management science, optics, information theory, communication systems, graph theory, and computer science.

The simplest form of a fractional program (FP) is \[\begin{equation} \begin{array}{ll} \underset{\bm{x}}{\textm{maximize}} & \dfrac{f(\bm{x})}{g(\bm{x})}\\ \textm{subject to} & \bm{x}\in\mathcal{X}, \end{array} \tag{A.15} \end{equation}\] where \(f(\bm{x})\ge0\), \(g(\bm{x})>0\), and \(\mathcal{X}\) denotes the feasible set. One particular case is the LFP in (A.11), when both \(f\) and \(g\) are affine functions.

FPs have been widely studied and extended to deal with multiple ratios such as \[ \begin{array}{ll} \underset{\bm{x}}{\textm{maximize}} & \underset{i}{\textm{min}} \; \dfrac{f_i(\bm{x})}{g_i(\bm{x})}\\ \textm{subject to} & \bm{x}\in\mathcal{X}. \end{array} \]

FPs are nonconvex problems, which in principle makes them challenging to solve (Stancu-Minasian, 1992). Fortunately, in the case known as concave-convex FP, where \(f\) is a concave function and \(g\) is a convex function, they can be solved relatively easily using different methods. The three main approaches, covered in detail in Section B.5 of Appendix B, are:

  • Bisection method: Similar to the linear case of LFP, the bisection method (see Algorithm A.1) involves solving a sequence of convex feasibility problems of the form \[ \begin{array}{ll} \underset{\bm{x}}{\textm{find}} & \begin{array}{c} \bm{x} \end{array}\\ \textm{subject to} & \begin{array}[t]{l} t g(\bm{x}) \leq f(\bm{x})\\ \bm{x}\in\mathcal{X}. \end{array} \end{array} \]

  • Dinkelbach’s transform: This approach eliminates the fractional objective by solving a sequence of simpler convex problems of the form \[\begin{equation} \begin{array}{ll} \underset{\bm{x}}{\textm{maximize}} & f(\bm{x}) - y^k g(\bm{x})\\ \textm{subject to} & \bm{x}\in\mathcal{X}, \end{array} \tag{A.16} \end{equation}\] where the weight \(y^k\) is updated as \(y^{k} = f(\bm{x}^{k})/g(\bm{x}^{k})\) (see Section B.5.2; a numerical sketch of this iteration is given after the list).

  • Schaible transform: This is a generalization of the Charnes-Cooper transform (used for LFPs), proposed by Schaible (1974). The original concave-convex FP is thus transformed into an equivalent convex problem \[ \begin{array}{ll} \underset{\bm{y},t}{\textm{maximize}} & t f\left(\frac{\bm{y}}{t}\right)\\ \textm{subject to} & t g\left(\frac{\bm{y}}{t}\right) \le 1\\ & t > 0\\ & \bm{y}/t\in\mathcal{X}, \end{array} \] with variables \(\bm{y},t\), related to the original variable \(\bm{x}\) by \[ \bm{y} = \frac{\bm{x}}{g(\bm{x})}\quad\textm{and} \quad t = \frac{1}{g(\bm{x})}. \] The original variable can be easily recovered from \(\bm{y},t\) by \(\bm{x} = \bm{y}/t\) (see Section B.5.3).
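The following is a small numerical sketch of Dinkelbach's iteration (A.16), assuming CVXPY; the particular \(f\), \(g\), and \(\mathcal{X}\) below are made up for illustration and chosen so that \(f\geq0\) is concave and \(g>0\) is convex on \(\mathcal{X}\).

```python
import cvxpy as cp
import numpy as np

# Dinkelbach's transform (A.16) on a made-up concave-convex FP:
#   f(x) = a'x + 10  (affine, hence concave; positive on X),
#   g(x) = ||x||^2 + 1  (convex, > 0),  X = {x : ||x||_inf <= 1}.
a = np.array([1.0, 2.0, 3.0])
x = cp.Variable(3)
y = 0.0                                   # y^0: initial guess of the ratio
for _ in range(50):
    sub = cp.Problem(cp.Maximize(a @ x + 10 - y * (cp.sum_squares(x) + 1)),
                     [cp.norm(x, 'inf') <= 1])
    sub.solve()
    y_new = (a @ x.value + 10) / (np.sum(x.value ** 2) + 1)  # y^{k+1} = f(x^k)/g(x^k)
    if abs(y_new - y) < 1e-9:
        break
    y = y_new
print(y, x.value)                         # y converges to the optimal ratio
```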

A.5.8 Geometric programming

We will now examine a family of optimization problems that are not naturally convex, but can be converted into convex optimization problems through a change of variables and a transformation of the objective and constraint functions.

A monomial function, or simply a monomial, is a function \(f: \R^n \rightarrow \R\) with \(\text{dom}\,f = \R_{++}^n\) defined as \[ f(\bm{x}) = c x_1^{a_1} x_2^{a_2}\dots x_n^{a_n}, \] where \(c>0\) and \(a_i\in\R\).

A posynomial function, or simply a posynomial, is a sum of monomials: \[ f(\bm{x}) = \sum_{k=1}^{K} c_k x_1^{a_{1k}} x_2^{a_{2k}}\dots x_n^{a_{nk}}, \] where \(c_k>0\).

A geometric program (GP) is a (nonconvex) problem of the form \[ \begin{array}{ll} \underset{\bm{x}}{\textm{minimize}} & \begin{array}[t]{l} f_0(\bm{x}) \end{array}\\ \textm{subject to} & \begin{array}[t]{l} f_{i}(\bm{x})\leq1,\\ h_{i}(\bm{x})=1, \end{array} \quad \begin{array}[t]{l} i=1,\ldots,m\\ i=1,\ldots,p, \end{array} \end{array} \] where \(f_0,\dots,f_m\) are posynomials and \(h_1,\dots,h_p\) are monomials. The domain of this problem is \(\mathcal{D}=\R_{++}^{n}\), i.e., the constraint \(\bm{x}>\bm{0}\) is implicit.

Suppose we apply the change of variables \(y_i = \textm{log}\,x_i\) (that is, \(x_i = e^{y_i}\)) and \(b=\textm{log}\,c\) to the monomial \(f(\bm{x}) = c x_1^{a_1} x_2^{a_2}\dots x_n^{a_n}\). Then the log of the monomial is \[ \tilde{f}(\bm{y}) = \textm{log}\, f(e^{\bm{y}}) = b + a_1 y_1 + a_2 y_2 + \dots + a_n y_n = b + \bm{a}^\T\bm{y}, \] which is an affine function.

Similarly, for a posynomial, we obtain \[ \tilde{f}(\bm{y}) = \textm{log}\,\sum_{k=1}^{K}e^{b_{k} + \bm{a}_k^\T\bm{y}}, \] which is the so-called log-sum-exp function, a convex function.

Finally, the resulting transformed GP in convex form is \[ \begin{array}{ll} \underset{\bm{y}}{\textm{minimize}} & \begin{array}[t]{l} \textm{log}\,\sum_{k=1}^{K_0}e^{b_{0k} + \bm{a}_{0k}^\T\bm{y}} \end{array}\\ \textm{subject to} & \begin{array}[t]{l} \textm{log}\,\sum_{k=1}^{K_i}e^{b_{ik} + \bm{a}_{ik}^\T\bm{y}}\leq0,\\ h_{i} + \bm{g}_{i}^\T\bm{y} = 0, \end{array} \quad \begin{array}[t]{l} i=1,\ldots,m\\ i=1,\ldots,p. \end{array} \end{array} \]
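Modeling tools can carry out this log transformation automatically. The sketch below, assuming CVXPY's geometric-programming mode, states a toy GP directly in posynomial form; constraints with positive constants other than 1 are trivially rescaled to the standard form above.

```python
import cvxpy as cp

# A toy GP in posynomial form; solve(gp=True) applies the log change of variables.
x = cp.Variable(pos=True)
y = cp.Variable(pos=True)
z = cp.Variable(pos=True)
gp = cp.Problem(cp.Minimize(1 / (x * y * z)),            # monomial objective
                [4 * x * y * z + 2 * x * z <= 10,        # posynomial inequality
                 x <= 2 * y, y <= 2 * x,                 # monomial inequalities
                 z >= 1])                                # monomial lower bound
gp.solve(gp=True)
print(gp.value, x.value, y.value, z.value)
```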

Some comprehensive monographs on GP include (S. Boyd et al., 2007) and (Chiang, 2005).

References

Bajalinov, E. B. (2003). Linear-fractional programming: Theory, methods, applications and software. Kluwer Academic Publishers.
Ben-Tal, A., and Nemirovski, A. (2001). Lectures on modern convex optimization: Analysis, algorithms, and engineering applications. Society for Industrial and Applied Mathematics (SIAM).
Boyd, S. P., and Vandenberghe, L. (2004). Convex optimization. Cambridge University Press.
Boyd, S., Kim, S. J., Vandenberghe, L., and Hassibi, A. (2007). A tutorial on geometric programming. Optimization and Engineering, 8(1), 67–127.
Charnes, A., and Cooper, W. W. (1962). Programming with linear fractional functionals. Naval Research Logistics Quarterly, 9(3-4), 181–186.
Chiang, M. (2005). Geometric programming for communication systems. Foundations and Trends in Communications and Information Theory, 2(1–2), 1–154.
Lobo, M. S., Vandenberghe, L., Boyd, S., and Lebret, H. (1998). Applications of second-order cone programming. Linear Algebra and its Applications, 284(1–3), 193–228.
Schaible, S. (1974). Parameter-free convex equivalent and dual programs of fractional programming problems. Zeitschrift für Operations Research, 18(5), 187–196.
Stancu-Minasian, I. M. (1992). Fractional programming: Theory, methods and applications. Kluwer Academic Publishers.
Vandenberghe, L., and Boyd, S. (1996). Semidefinite programming. SIAM Review, 38(1), 49–95.
von Neumann, J. (1937). Über ein ökonomisches Gleichgewichtssystem und eine Verallgemeinerung des Brouwerschen Fixpunktsatzes. Ergebnisse eines Mathematischen Kolloquiums, 8, 73–83.

  1. A generalized inequality is a partial ordering on \(\mathbb{R}^{n}\) that has many of the properties of the standard ordering on \(\R\). A common example is the matrix inequality defined by the cone of positive semidefinite \(n\times n\) matrices \(\mathbb{S}_{+}^{n}\).↩︎

  2. A function \(\bm{f}:\mathbb{R}^{n}\rightarrow\mathbb{R}^{k_{i}}\) is \(\mathcal{K}_{i}\)-convex if its domain is a convex set and, for all \(\bm{x},\bm{y}\in\operatorname*{dom}\bm{f}\) and \(\theta\in\left[0,1\right]\), \(\bm{f}\left(\theta\bm{x}+(1-\theta)\bm{y}\right)\preceq_{\mathcal{K}_{i}}\theta\bm{f}\left( \bm{x}\right)+(1-\theta)\bm{f}\left( \bm{y}\right)\).↩︎