5.3 KKT Conditions

In mathematical optimisation, the Karush–Kuhn–Tucker (KKT) conditions, also known as the Kuhn–Tucker conditions, are first derivative tests (sometimes called first-order necessary conditions) for a solution in nonlinear programming to be optimal, provided that some regularity conditions are satisfied.

Allowing inequality constraints, the KKT approach to nonlinear programming generalises the method of Lagrange multipliers, which allows only equality constraints. Similar to the Lagrange approach, the constrained maximisation (minimisation) problem is rewritten as a Lagrange function whose optimal point is a saddle point.

The KKT conditions are necessary conditions for optimality in a general constrained problem.

For a given nonlinear programming problem:

$$\begin{aligned}
\max \quad & f(x) \\
\text{s.t.} \quad & g_i(x) \le b_i \quad \text{for } i = 1, 2, \ldots, m \\
& x \ge 0
\end{aligned}$$

where $x = (x_1, x_2, \ldots, x_n)$, the necessary condition for $x$ being a critical point is that $x$ satisfies all of the following KKT conditions:

$$\begin{aligned}
\frac{\partial h}{\partial x_j} &\le 0 && \text{for } j = 1, 2, \ldots, n \\
x_j \frac{\partial h}{\partial x_j} &= 0 && \text{for } j = 1, 2, \ldots, n \\
\frac{\partial h}{\partial \lambda_i} &\ge 0 && \text{for } i = 1, 2, \ldots, m \\
\lambda_i \frac{\partial h}{\partial \lambda_i} &= 0 && \text{for } i = 1, 2, \ldots, m \\
x &\ge 0 \\
\lambda &\ge 0
\end{aligned}$$

where the Lagrangian function $h(x, \lambda)$ and its derivatives are:

$$\begin{aligned}
h(x, \lambda) &= f(x) - \sum_{i=1}^{m} \lambda_i \left[ g_i(x) - b_i \right] \\
\frac{\partial h}{\partial x_j} &= \frac{\partial f}{\partial x_j} - \sum_{i=1}^{m} \lambda_i \frac{\partial g_i}{\partial x_j} && \text{for } j = 1, 2, \ldots, n \\
\frac{\partial h}{\partial \lambda_i} &= -g_i(x) + b_i && \text{for } i = 1, 2, \ldots, m
\end{aligned}$$
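As a concrete illustration, the six conditions above can be checked mechanically at a candidate point. The small quadratic problem below is an assumed example (not from the text), chosen so that the optimum $x^* = (1, 2)$ with $\lambda^* = 2$ can be verified by hand:

```python
# A minimal sketch (assumed example): checking the KKT conditions at a
# candidate point for
#   max f(x) = 4*x1 + 6*x2 - x1^2 - x2^2   s.t.  x1 + x2 <= 3,  x >= 0.
# The analytic optimum is x* = (1, 2) with multiplier lambda* = 2.

def dh_dx(x, lam):
    """Partial derivatives of h(x, lambda) = f(x) - lambda*(g(x) - b)."""
    x1, x2 = x
    return [4 - 2 * x1 - lam,    # df/dx1 - lambda * dg/dx1
            6 - 2 * x2 - lam]    # df/dx2 - lambda * dg/dx2

def dh_dlam(x, b=3):
    x1, x2 = x
    return -(x[0] + x[1]) + b    # -g(x) + b

def kkt_satisfied(x, lam, tol=1e-9):
    grads = dh_dx(x, lam)
    slack = dh_dlam(x)
    return (all(g <= tol for g in grads)                            # dh/dx_j <= 0
            and all(abs(xj * g) <= tol for xj, g in zip(x, grads))  # x_j * dh/dx_j = 0
            and slack >= -tol                                       # dh/dlambda >= 0
            and abs(lam * slack) <= tol                             # lambda * dh/dlambda = 0
            and all(xj >= -tol for xj in x) and lam >= -tol)        # x >= 0, lambda >= 0

print(kkt_satisfied([1.0, 2.0], 2.0))   # True: all six conditions hold
print(kkt_satisfied([2.0, 3.0], 0.0))   # False: the unconstrained maximiser is infeasible
```

The second call fails because $x_1 + x_2 = 5 > 3$, i.e. $\partial h / \partial \lambda < 0$.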

5.3.1 Critical Point

A critical point for a real-valued and differentiable function f(x1,,xn) is a point at which the function’s slope is zero in all of the x1,,xn directions. (Morgan 2015)
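This definition can be checked numerically with central finite differences. The quadratic below is an assumed illustration, with its only critical point at $(3, -1)$:

```python
# A small sketch (assumed example): a critical point has zero slope in every
# coordinate direction, which can be verified with central finite differences.

def grad(f, x, eps=1e-6):
    """Central-difference gradient of f at point x."""
    g = []
    for j in range(len(x)):
        xp, xm = list(x), list(x)
        xp[j] += eps
        xm[j] -= eps
        g.append((f(xp) - f(xm)) / (2 * eps))
    return g

f = lambda x: (x[0] - 3) ** 2 + (x[1] + 1) ** 2

print(grad(f, [3.0, -1.0]))  # ~[0, 0]: a critical point
print(grad(f, [0.0, 0.0]))   # ~[-6, 2]: nonzero slope, not critical
```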

5.3.2 KKT Conditions for Constrained Optimization with Inequality Constraints

Satisfying these conditions does not guarantee that the solution is optimal. Certain additional convexity assumptions are needed to obtain this guarantee. (Hillier 2012)

Thus, the following corollary can be used instead:

Corollary 5.1 Assume that f(x) is a concave function and that g1(x),g2(x),,gm(x) are convex functions (i.e., this problem is a convex programming problem), where all these functions satisfy the regularity conditions. Then x=(x1,x2,,xn) is an optimal solution if and only if all the KKT conditions are satisfied.

To illustrate the point, the following two-variable problem is analysed.

Example 5.1 (Beer Production) The aim is to maximise the utility associated with beer production.

The decision variables are defined as follows:

Variable   Definition                    Type
--------   ---------------------------   ----------
x1         hours spent producing IPA     continuous
x2         hours spent producing Lager   continuous

The problem can be formulated as:

$$\begin{aligned}
\max \quad & 15\sqrt{x_1} + 16\sqrt{x_2} \\
\text{s.t.} \quad & x_1 + x_2 \le 120 \\
& x_1, x_2 \in \mathbb{R}_+
\end{aligned}$$

The two square-root terms in the objective function are concave, so the objective is strictly concave. (Its Hessian matrix is negative definite for all positive values of the decision variables.) The feasible region, defined by a linear constraint and nonnegativity bounds, is a convex set, so this is a convex programming problem.
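As a quick sanity check (a sketch, with the diagonal Hessian entries worked out by hand from the second derivatives of the square-root terms), the Hessian of $15\sqrt{x_1} + 16\sqrt{x_2}$ can be sampled at a few positive points:

```python
# Spot-check (sketch): the Hessian of f(x1, x2) = 15*sqrt(x1) + 16*sqrt(x2)
# is diagonal with entries -15/(4*x1^1.5) and -4/x2^1.5, both negative
# whenever x1, x2 > 0, so the objective is strictly concave there.

def hessian_diag(x1, x2):
    return (-15 / (4 * x1 ** 1.5), -4 / x2 ** 1.5)

for point in [(1, 1), (56.133, 63.867), (120, 120)]:
    d1, d2 = hessian_diag(*point)
    assert d1 < 0 and d2 < 0  # negative definite at each sampled point
print("Hessian negative definite at all sampled points")
```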

The point (1, 1) is strictly feasible (1 + 1 < 120), i.e. a Slater point, so the problem satisfies Slater's condition and strong duality holds.

The Lagrangian function is:

$$L(x_1, x_2, \lambda) = 15\sqrt{x_1} + 16\sqrt{x_2} - \lambda (x_1 + x_2 - 120)$$

whose derivatives are:

$$\begin{aligned}
\frac{\partial L}{\partial x_1} &= \frac{15}{2\sqrt{x_1}} - \lambda \\
\frac{\partial L}{\partial x_2} &= \frac{8}{\sqrt{x_2}} - \lambda \\
\frac{\partial L}{\partial \lambda} &= -(x_1 + x_2 - 120)
\end{aligned}$$

Also: $x_1, x_2 \ge 0$ and $\lambda \ge 0$.

Critical points can be calculated by the symbolic math toolbox in MATLAB:

syms x1 x2 lbd
% Complementary slackness on the capacity constraint: lambda * (x1 + x2 - 120) = 0
eq(1) = lbd * (x1 + x2 - 120) == 0;
% x_j * dL/dx_j = 0 for each decision variable
eq(2) = x1 * (15/2/sqrt(x1) - lbd) == 0;
eq(3) = x2 * (8/sqrt(x2) - lbd) == 0;
sol = solve(eq)

Results are (0, 0, 0), (120, 0, 0.6847), (0, 120, 0.7303), and (56.133, 63.867, 1.0010), and the corresponding values of the objective function are 0, 164.3168, 175.2712, and 240.2499. So the point (56.133, 63.867, 1.0010) is chosen as the optimal solution:

$$\begin{aligned}
x_1^* &= 56.133 && \text{so that } 3\sqrt{x_1^*} \approx 22.5 \\
x_2^* &= 63.867 && \text{so that } 4\sqrt{x_2^*} \approx 32 \\
\lambda^* &\approx 1 \\
15\sqrt{x_1^*} + 16\sqrt{x_2^*} &= 240.250
\end{aligned}$$
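The interior candidate can also be cross-checked by hand: stationarity gives $\sqrt{x_1} = 15/(2\lambda)$ and $\sqrt{x_2} = 8/\lambda$, and substituting into the active constraint $x_1 + x_2 = 120$ yields $\lambda^2 = 120.25/120$. A sketch of this check:

```python
# Cross-check of the symbolic result (sketch): from dL/dx1 = dL/dx2 = 0,
#   sqrt(x1) = 15/(2*lambda)  =>  x1 = 225/(4*lambda^2)
#   sqrt(x2) = 8/lambda       =>  x2 = 64/lambda^2
# Substituting into the active constraint x1 + x2 = 120 gives
#   lambda^2 = (225/4 + 64) / 120 = 120.25 / 120.

import math

lam = math.sqrt(120.25 / 120)
x1 = 225 / (4 * lam ** 2)
x2 = 64 / lam ** 2
objective = 15 * math.sqrt(x1) + 16 * math.sqrt(x2)

print(round(lam, 4), round(x1, 3), round(x2, 3), round(objective, 4))
```

The printed values agree with the candidate point (56.133, 63.867, 1.0010) and objective value 240.2499 found above.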

So I would allocate 56.133 hours in total to produce 22.5 bottles of IPA, and 63.867 hours in total to produce 32 bottles of Lager. The maximum utility obtained is 240.250.

5.3.3 Validation

The results obtained from modern optimisation algorithms can be validated using the duality gap and KKT conditions.
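For the beer-production example, the Lagrangian dual $g(\lambda) = \max_{x \ge 0} L(x, \lambda)$ has a closed form that can be derived by maximising each concave term separately (an assumption worked out below, not stated in the text), which lets the duality gap at the KKT point be checked directly:

```python
# Sketch of a duality-gap check for the beer-production example.
# Maximising L(x, lambda) over x >= 0 term by term (derived by hand):
#   max_x1 [15*sqrt(x1) - lambda*x1] = 225/(4*lambda)
#   max_x2 [16*sqrt(x2) - lambda*x2] = 64/lambda
# so the dual function is g(lambda) = 120.25/lambda + 120*lambda.

import math

def primal(x1, x2):
    return 15 * math.sqrt(x1) + 16 * math.sqrt(x2)

def dual(lam):
    return 120.25 / lam + 120 * lam

lam_star = math.sqrt(120.25 / 120)   # KKT multiplier from the example
gap = dual(lam_star) - primal(56.133, 63.867)
print(gap)  # approximately zero: strong duality holds
```

A near-zero gap is consistent with Slater's condition holding for this convex problem.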

For complicated problems, it may be difficult, if not essentially impossible, to derive an optimal solution directly from the KKT conditions. Nevertheless, these conditions still provide valuable clues as to the identity of an optimal solution, and they also permit us to check whether a proposed solution may be optimal. (Hillier 2012)

References

Hillier, Frederick S. 2012. Introduction to Operations Research. Tata McGraw-Hill Education.

Morgan, Peter B. 2015. An Explanation of Constrained Optimization for Economists. University of Toronto Press.