In the case of unconstrained nonlinear programs, we may determine whether the objective function is convex and then use the first-order condition (FOC) to find all local minima. In practice, however, most nonlinear programs are **constrained**. We cannot simply ignore the constraints, treat the program as unconstrained, and then check whether the resulting solution is feasible (or look for a nearby solution that is). We need other tools to deal with constrained nonlinear programs.

## Lagrange Relaxation

Relaxing a nonlinear program directly (by ignoring all constraints) is overkill, since it may produce infeasible solutions, which is something we really don't want. However, we can design a new program that encourages feasibility. Consider a nonlinear program like this:

```
max_{x∈Rn} f(x)
s.t. g_{i}(x) ≤ b_{i}, ∀i = 1, ..., m
```
We could take these hard constraints `g_{i}(x)` and move them into the objective function `f(x)`, turning them into soft constraints. They are called soft because you are allowed to violate them, but any violation results in a penalty. For constraint `i`, we associate a unit reward for feasibility `λ_{i} ≥ 0` (or penalty for infeasibility). If a solution `x^{-}` satisfies constraint `i` (so `b_{i} - g_{i}(x^{-}) ≥ 0`), we reward the solution by `λ_{i} [b_{i} - g_{i}(x^{-})]`. We can add this reward to the relaxed nonlinear program.

```
Original NLP:
z^{*} = max_{x∈Rn} { f(x) | g_{i}(x) ≤ b_{i}, ∀i = 1, ..., m }

Lagrange relaxed NLP:
z^{L}(λ) = max_{x∈Rn} f(x) + Σ^{m}_{i=1} λ_{i} [b_{i} - g_{i}(x)]
where λ_{i} ≥ 0
m: number of constraints
```
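To make this concrete, here is a minimal numerical sketch. The one-variable problem `max -(x-2)^2 s.t. x ≤ 1` is my own toy example, not from the text; the code evaluates the relaxed objective `z^{L}(λ)` by maximizing the Lagrangian over a grid of x values.

```python
import numpy as np

# Toy problem (an assumption for illustration): max f(x) = -(x - 2)^2  s.t.  g(x) = x <= 1.
# Its true optimum is x* = 1 with z* = -1.

def lagrangian(x, lam):
    # L(x|lambda) = f(x) + lambda * (b - g(x)), here with b = 1
    return -(x - 2) ** 2 + lam * (1 - x)

def z_L(lam, xs=np.linspace(-5, 5, 100001)):
    # z^L(lambda): maximize the now-unconstrained Lagrangian over x (grid approximation)
    return np.max(lagrangian(xs, lam))

print(z_L(0.0))  # lambda = 0 ignores the constraint entirely: gives 0, a loose bound
print(z_L(2.0))  # lambda = 2 gives -1, which matches z* for this toy problem
```

With `λ = 0` the relaxation is the plain unconstrained problem; raising `λ` penalizes infeasibility and tightens the bound.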

The new objective function `L(x|λ) = f(x) + Σ^{m}_{i=1} λ_{i} [b_{i} - g_{i}(x)]` is called the Lagrangian, and the `λ_{i}` are called Lagrange multipliers.

### Convexity

The function `z^{L}(λ)` is convex over `λ ∈ [0, ∞)^{m}`. This is amazing, isn't it? Your primal nonlinear program can be non-convex, and you might even have no idea how to solve it due to its non-convexity. However, Lagrange relaxation gives you a dual program that is always convex. In most practical applications, a Lagrange dual program is solved by **numerical** algorithms.

### Lagrange Weak Duality

Recall that in linear programming, if you have a primal linear program but no way to solve it directly, you can try to solve its dual program, which provides some useful information:

- Any feasible dual solution gives a bound on the primal linear program.
- The dual optimal solution gives a tight bound (the smallest upper bound, or the largest lower bound).

Similarly, Lagrange relaxation provides a bound for the original NLP. For example, when we are solving a maximization problem, Lagrange relaxation gives us an upper bound. For the two NLPs (the original `z^{*}` and the Lagrange relaxed `z^{L}(λ)`), we have `z^{L}(λ) ≥ z^{*}` for all `λ ≥ 0`. This is called weak duality.
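Weak duality is easy to verify numerically on a toy problem (again `max -(x-2)^2 s.t. x ≤ 1`, an assumed example rather than one from the text):

```python
import numpy as np

# Assumed toy problem: max f(x) = -(x - 2)^2  s.t.  x <= 1, with z* = -1 at x* = 1.
z_star = -1.0

def z_L(lam, xs=np.linspace(-5, 5, 100001)):
    return np.max(-(xs - 2) ** 2 + lam * (1 - xs))

# Weak duality: z^L(lambda) >= z* for every lambda >= 0
for lam in np.linspace(0, 10, 101):
    assert z_L(lam) >= z_star - 1e-9
print("weak duality verified on a grid of lambda values")
```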

So, in the case of constrained nonlinear programming, when you cannot solve the problem directly, Lagrange relaxation can provide some useful information (an upper bound for maximization, or a lower bound for minimization). It is then natural to define:

```
min_{λ≥0} z^{L}(λ) = min_{λ≥0} { max_{x∈Rn} f(x) + Σ^{m}_{i=1} λ_{i} [b_{i} - g_{i}(x)] }
```

So we can try to find the best value of λ (the optimal solution of the Lagrange dual program), which in turn gives us the tightest bound on the original NLP.
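One simple (if crude) way to solve the dual is a grid search over λ. A sketch under the same assumed toy problem (`max -(x-2)^2 s.t. x ≤ 1`):

```python
import numpy as np

# Assumed toy problem: max f(x) = -(x - 2)^2  s.t.  x <= 1, so z* = -1.
def z_L(lam, xs=np.linspace(-5, 5, 100001)):
    return np.max(-(xs - 2) ** 2 + lam * (1 - xs))

# Solve the dual  min_{lambda >= 0} z^L(lambda)  by a coarse grid search over lambda
lams = np.linspace(0, 10, 1001)
vals = np.array([z_L(l) for l in lams])
lam_best = lams[np.argmin(vals)]
print(lam_best, vals.min())  # best multiplier and the tightest upper bound found
```

In practice one would use a proper convex solver instead of a grid, since `z^{L}(λ)` is convex; the grid just keeps the sketch self-contained.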

### Lagrange Strong Duality

Let `w^{*} = min_{λ≥0} z^{L}(λ)` be the optimal objective value of the Lagrange dual program. Then `w^{*} = z^{*}` if the primal NLP is a “regular” **convex** program.

Linear programming duality is actually a special case of Lagrange duality. If you apply Lagrange duality to a linear program, its Lagrange dual program will be the same as its dual linear program.
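As a sketch of why this holds, apply the Lagrangian recipe above to a linear program with a free variable x (this derivation is mine, not quoted from the text):

```
Primal LP:   max_{x} c^T x   s.t.  Ax ≤ b
Lagrangian:  L(x|λ) = c^T x + λ^T (b - Ax) = λ^T b + (c - A^T λ)^T x
z^{L}(λ) = max_x L(x|λ) = λ^T b   if A^T λ = c,   +∞ otherwise
Dual:        min_{λ≥0} λ^T b   s.t.  A^T λ = c   — the familiar LP dual
```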

## The KKT Condition

The KKT condition is named after three scholars: Karush, Kuhn, and Tucker. It is analogous to the first-order condition (FOC) for unconstrained problems; you may consider the KKT condition as the “first-order condition” for constrained optimization. For a regular NLP:

```
max_{x∈Rn} f(x)
s.t. g_{i}(x) ≤ b_{i}, ∀i = 1, ..., m
```

if `x^{-}` is a **local** max, then there exists `λ ∈ R^{m}` such that:

- `g_{i}(x^{-}) ≤ b_{i}` for all i = 1, …, m
- `λ ≥ 0`, and `∇f(x^{-}) = Σ^{m}_{i=1} λ_{i} ∇g_{i}(x^{-})`
- `λ_{i} [b_{i} - g_{i}(x^{-})] = 0` for all i = 1, …, m. This is complementary slackness, where `λ_{i}` is the dual variable and `b_{i} - g_{i}(x^{-})` is the slack. It means that if a constraint is nonbinding, its corresponding Lagrange multiplier must be zero, or the other way around: if `λ_{i}` is positive, its corresponding constraint must be binding.

The KKT condition is *necessary* for “`x^{-}` is a **local** max” in all NLPs, but it is also *sufficient* for convex programs (where both the objective function `f(x)` and the feasible region defined by `g_{i}(x) ≤ b_{i}` are convex). So if we want `x^{-}` to be a local maximum, there are 3 conditions that must be met:

- Primal feasibility: `g_{i}(x^{-}) ≤ b_{i}` for all i = 1, …, m. It means `x^{-}` must be feasible.
- Dual feasibility: `λ ≥ 0`, and `∇f(x^{-}) = Σ^{m}_{i=1} λ_{i} ∇g_{i}(x^{-})`, which is the first-order condition (FOC) for the Lagrangian `L(x^{-}|λ)`.
- Complementary slackness: `λ_{i} [b_{i} - g_{i}(x^{-})] = 0` for all i = 1, …, m.
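Here is a small sketch that checks all three conditions at a candidate point, for an assumed toy problem `max -(x-2)^2 s.t. x ≤ 1` (where `x^{-} = 1` and `λ = 2` happen to work out):

```python
# Assumed toy problem: max f(x) = -(x - 2)^2  s.t.  g(x) = x <= 1  (b = 1).
# Candidate local max: x_bar = 1 with multiplier lam = 2.
x_bar, lam, b = 1.0, 2.0, 1.0

def g(x): return x
def grad_f(x): return -2 * (x - 2)   # f'(x)
def grad_g(x): return 1.0            # g'(x)

# 1) Primal feasibility: g(x_bar) <= b
assert g(x_bar) <= b
# 2) Dual feasibility: lam >= 0, and f'(x_bar) = lam * g'(x_bar)
assert lam >= 0 and abs(grad_f(x_bar) - lam * grad_g(x_bar)) < 1e-9
# 3) Complementary slackness: lam * (b - g(x_bar)) = 0
assert abs(lam * (b - g(x_bar))) < 1e-9
print("KKT conditions hold at x_bar = 1")
```

Since this toy problem is convex, satisfying the KKT conditions is sufficient: `x^{-} = 1` really is the global max.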

Still, the KKT condition is not a silver bullet; it cannot be used to solve every problem you like, because finding all local maxima can be time-consuming: each additional constraint (with its multiplier `λ_{i}`) multiplies the number of cases to check by 2.
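The doubling can be sketched directly: complementary slackness forces each constraint into one of two cases (either its multiplier is zero or the constraint is binding), so m constraints produce 2^m candidate systems to solve:

```python
from itertools import product

# For each constraint i, complementary slackness says either
#   lambda_i = 0   or   g_i(x) = b_i (binding).
# With m constraints there are 2^m such case combinations.
m = 3
cases = list(product(("lam_i = 0", "g_i binding"), repeat=m))
print(len(cases))  # 2**3 = 8 candidate systems
```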

## My Certificate

For more on **Lagrangian Duality and KKT Condition**, please refer to the wonderful course here: https://www.coursera.org/learn/operations-research-theory


*I am Kesler Zhu, thank you for visiting my website. Check out more course reviews at https://KZHU.ai*
