When you are given a linear program, in many cases we call it the **primal** linear program. From it, we can always find another program, called the **dual** linear program. The primary reason why we want to use duality is that the computation time of the simplex method is roughly proportional to `m`^{3}, where `m` is the number of functional constraints of the primal linear program. Let `n` be the number of variables in the primal linear program. If `m` is much larger than `n`, solving the primal directly may take a lot of time. On the other hand, solving the dual can take significantly less time, because in the dual the roles are swapped: it has `m` variables but only `n` functional constraints.

The primal and the dual programs have many interesting properties, for instance:

- Given a primal formulation, the dual formulation is unique, and the dual of the dual is the primal again.
- The objective value of the primal’s optimal solution is the same as the objective value of the dual’s optimal solution.
- If you can solve the primal problem, then you can solve the dual problem immediately: in the matrix representation of the simplex method, the dual’s optimal solution is `c`_{B}^{T} A_{B}^{-1}.

## Primal-Dual Pairs

Suppose you are given a linear program which is hard, or even impossible, to solve directly. One of your friends tells you that he has a solution `x`^{^}, with which you will get the objective value `z`^{^}. If we knew the optimal objective value `z`^{*}, we could compare `z`^{^} and `z`^{*}. However, we have no way to know `z`^{*}, since it is impossible to solve the linear program directly.

But if we can instead find an **upper bound** of `z`^{*}, denoted as `z`^{**}, that will work. If `z`^{^} can be as close as possible to the upper bound `z`^{**}, we can verify that `x`^{^} is quite good:

`z`^{^} ≤ z^{*} ≤ z^{**}

For example, suppose this is the linear program we want to solve:

```
The primal linear program
z^{*} = max 3x_{1} + 4x_{2} + 8x_{3}
s.t. x_{1} + 2x_{2} + 3x_{3} ≤ 6
     2x_{1} + x_{2} + 2x_{3} ≤ 4
     x_{1} ≥ 0, x_{2} ≥ 0, x_{3} ≥ 0
```

We may look for some set of coefficients `y`_{1} and `y`_{2} to combine these two constraints, so that the resulting upper bound is as close as possible to the objective function. Essentially, we are trying different linear combinations:

`(y_{1} + 2y_{2}) x_{1} + (2y_{1} + y_{2}) x_{2} + (3y_{1} + 2y_{2}) x_{3} ≤ 6y_{1} + 4y_{2}`
`y_{1} ≥ 0, y_{2} ≥ 0`

In order to have an upper bound for `z`^{*}, i.e. `z`^{*} ≤ 6y_{1} + 4y_{2}, we look for two variables `y`_{1} and `y`_{2} such that:

`3 ≤ y_{1} + 2y_{2}`
`4 ≤ 2y_{1} + y_{2}`
`8 ≤ 3y_{1} + 2y_{2}`
`y_{1} ≥ 0, y_{2} ≥ 0`

This is actually another linear program, which helps find the upper bound of `z`^{*}.

```
The dual linear program
min 6y_{1} + 4y_{2}
s.t. y_{1} + 2y_{2} ≥ 3
     2y_{1} + y_{2} ≥ 4
     3y_{1} + 2y_{2} ≥ 8
     y_{1} ≥ 0, y_{2} ≥ 0
```
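As a quick sanity check, both programs can be solved numerically and should report the same optimal objective value. A minimal sketch using `scipy.optimize.linprog` (which minimizes by default, so the maximization objective is negated, and `≥` constraints are flipped to `≤`):

```python
from scipy.optimize import linprog

# Primal: max 3x1 + 4x2 + 8x3  ->  min -(3x1 + 4x2 + 8x3)
primal = linprog(c=[-3, -4, -8],
                 A_ub=[[1, 2, 3], [2, 1, 2]],
                 b_ub=[6, 4])

# Dual: min 6y1 + 4y2, with y1 + 2y2 >= 3 etc. flipped to <= form
dual = linprog(c=[6, 4],
               A_ub=[[-1, -2], [-2, -1], [-3, -2]],
               b_ub=[-3, -4, -8])

z_primal = -primal.fun   # undo the sign flip
z_dual = dual.fun
print(z_primal, z_dual)  # both 16.0
```

Variables are non-negative by `linprog`'s default bounds, matching both programs.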

Based on the previous example, we have some observations:

- If your primal linear program is a maximization, then you try to minimize the upper bound, so the dual linear program is a minimization problem.
  - Primal max ⟹ Dual min
- The coefficients in the objective of the primal linear program become the right-hand side values of the constraints of the dual linear program.
  - Primal objective ⟹ Dual constraint RHS
- The right-hand side values of the constraints of the primal linear program become the coefficients in the objective of the dual linear program.
  - Primal constraint RHS ⟹ Dual objective

### Non-Positive or Free Variables in Primal

According to the signs of the variables `x`_{1}, x_{2}, x_{3} in the primal, we know how to compare the outcome of the linear combination of the dual variables `y`_{1}, y_{2} with the objective coefficients `c`^{T} of the primal.

| Variables in primal | Constraints in dual |
| --- | --- |
| x ≥ 0 | ≥ |
| x ≤ 0 | ≤ |
| x is unrestricted | = |

For instance, consider the example above, with the only exception that `x`_{1} ≥ 0, x_{2} ≤ 0, and x_{3} is free. The dual linear program becomes:

```
The dual linear program
min 6y_{1} + 4y_{2}
s.t. y_{1} + 2y_{2} ≥ 3 (because x_{1} ≥ 0)
     2y_{1} + y_{2} ≤ 4 (because x_{2} ≤ 0)
     3y_{1} + 2y_{2} = 8 (because x_{3} is free)
     y_{1} ≥ 0, y_{2} ≥ 0
```

### Greater-Than-Or-Equal-To or Equal-To Constraints in Primal

In the previous examples, the constraints in the primal have the less-than-or-equal-to `≤` relationship, in which case the variables in the dual are non-negative. What happens if the constraints in the primal are greater-than-or-equal-to `≥` or equal-to `=`? The restrictions on the variables in the dual change accordingly.

| Constraints in primal | Variables in dual |
| --- | --- |
| ≤ | y ≥ 0 |
| ≥ | y ≤ 0 |
| = | y is free |

### Minimization Linear Program

For a minimization linear program, its dual linear program is to **maximize** the **lower** bound. The rules for the directions of variables and constraints are **reversed**.

| Variables in primal | Constraints in dual |
| --- | --- |
| x ≥ 0 | ≤ |
| x ≤ 0 | ≥ |
| x is unrestricted | = |

| Constraints in primal | Variables in dual |
| --- | --- |
| ≤ | y ≤ 0 |
| ≥ | y ≥ 0 |
| = | y is free |

## Duality Theorems

Duality comes with a number of interesting and useful properties.

### Matrix Presentation of Standard Form

The standard form of a primal linear program can be written in matrix form; according to the rules we discussed above, we can get its dual linear program.

| Primal | Dual |
| --- | --- |
| `max c`^{T} x | `min y`^{T} b |
| `s.t. Ax = b` | `s.t. y`^{T} A ≥ c^{T} |
| `x ≥ 0` | `y` free |

### Proposition 1 – Uniqueness and symmetry of duality

For any primal linear program, there is a unique dual linear program, whose dual is the primal.
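For a symmetric inequality-form pair (max `c`^{T} x, Ax ≤ b, x ≥ 0), taking the dual amounts to swapping the objective vector with the RHS vector and transposing `A`, so applying the operation twice returns the original program. A minimal sketch (the helper `dual_of` is illustrative, not from the course):

```python
import numpy as np

def dual_of(kind, c, A, b):
    """Dual of (max c^T x, Ax <= b, x >= 0) is (min b^T y, A^T y >= c, y >= 0),
    and vice versa: swap objective with RHS and transpose A."""
    return ("min" if kind == "max" else "max"), b, A.T, c

# the example primal
A = np.array([[1, 2, 3], [2, 1, 2]])
c = np.array([3, 4, 8])
b = np.array([6, 4])

once = dual_of("max", c, A, b)    # the dual: min 6y1 + 4y2, A^T y >= c
twice = dual_of(*once)            # the dual of the dual

# the dual of the dual is the primal again
assert twice[0] == "max"
assert np.array_equal(twice[1], c)
assert np.array_equal(twice[2], A)
assert np.array_equal(twice[3], b)
```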

### Proposition 2 – Weak duality

If the primal is a **maximization** problem, its dual provides an **upper bound** of the primal. Similarly, if the primal is a **minimization** problem, its dual provides a **lower bound** of the primal.

For the linear program defined in the standard form, if `x` and `y` are primal and dual feasible, then we have the relationship between the objective value of the primal and the objective value of the dual: `c`^{T} x ≤ y^{T} b. The proof is quick: because in the standard form we already have `x ≥ 0`, `Ax = b`, and `y`^{T} A ≥ c^{T}, we can derive

`c`^{T} x ≤ y^{T} A x = y^{T} b
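The same bound can be spot-checked numerically on the inequality-form example above, where the chain becomes `c`^{T} x ≤ y^{T} A x ≤ y^{T} b (using Ax ≤ b and y ≥ 0). A sketch with `numpy`, using arbitrary feasible points (not optima):

```python
import numpy as np

A = np.array([[1, 2, 3], [2, 1, 2]], dtype=float)
b = np.array([6, 4], dtype=float)
c = np.array([3, 4, 8], dtype=float)

x = np.array([1.0, 0.5, 0.5])   # primal feasible: A @ x = [3.5, 3.5] <= b
y = np.array([1.0, 3.0])        # dual feasible: y @ A = [7, 5, 9] >= c

assert np.all(A @ x <= b) and np.all(x >= 0)
assert np.all(y @ A >= c) and np.all(y >= 0)

# weak duality: primal objective <= dual objective
assert c @ x <= y @ A @ x <= y @ b
print(c @ x, y @ b)   # 9.0 18.0
```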

### Proposition 3 – Sufficiency of optimality

If `x`^{-} and `y`^{-} are primal and dual feasible, and `c`^{T} x^{-} = y^{-T} b, then `x`^{-} and `y`^{-} are primal and dual optimal.

This proposition gives us a way to show that a solution of the primal is optimal. Given a primal feasible solution `x`^{-}, if we can find a dual feasible solution `y`^{-} such that their objective values are **identical**, then `x`^{-} is optimal.

### Proposition 4 – Dual optimal solution

For the linear program defined in the standard form, if `x`^{-} is primal optimal with basis `B`, then the dual optimal solution `y`^{-T} is `c`_{B}^{T} A_{B}^{-1}.

This proposition tells us that once we know the optimal solution of the primal with basis `B`, we can derive (one of) the optimal solutions for the dual easily, without solving the dual. Recall that `c`_{B}^{T} A_{B}^{-1} is part of the reduced cost `c`_{B}^{T} A_{B}^{-1} A_{N} - c_{N}^{T}; when `B` is optimal, the reduced cost is non-negative:

`c`_{B}^{T} A_{B}^{-1} A_{N} - c_{N}^{T} ≥ 0

As `c`_{B}^{T} = c_{B}^{T} A_{B}^{-1} A_{B}, and `A` is composed of two parts `A`_{B} and `A`_{N}, we have:

`y`^{-T} = c_{B}^{T} A_{B}^{-1}
⟹ y^{-T} A = c_{B}^{T} A_{B}^{-1} A = c_{B}^{T} A_{B}^{-1} [A_{B} | A_{N}] = [ c_{B}^{T} A_{B}^{-1} A_{B} | c_{B}^{T} A_{B}^{-1} A_{N} ] ≥ [ c_{B}^{T} | c_{N}^{T} ] = c^{T}

Note that this is exactly the condition `y`^{T} A ≥ c^{T} needed for dual feasibility. So `y`^{-} can be shown to be dual feasible if you define `y`^{-} in this way.

Also, from another perspective:

`y`^{-T} = c_{B}^{T} A_{B}^{-1}
⟹ y^{-T} b = c_{B}^{T} A_{B}^{-1} b = c_{B}^{T} x^{-}_{B} = c^{T} x^{-}

This immediately tells us that if `y`^{-} is defined in this way, the dual will have the same objective value as the primal. According to Proposition 3, both `x`^{-} and `y`^{-} are optimal.
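To make this concrete with the running example, put the primal into standard form by adding slack variables x_{4}, x_{5}. One optimal basis (worked out by hand, and degenerate since the basic slack x_{4} equals 0) is B = {x_{3}, x_{4}}, and `c`_{B}^{T} A_{B}^{-1} directly yields a dual optimal solution. A sketch with `numpy`:

```python
import numpy as np

# Standard form of the example: max c^T x, Ax = b, x >= 0 (slacks x4, x5 appended)
A = np.array([[1, 2, 3, 1, 0],
              [2, 1, 2, 0, 1]], dtype=float)
b = np.array([6, 4], dtype=float)
c = np.array([3, 4, 8, 0, 0], dtype=float)

basis = [2, 3]                  # columns of x3 and x4 (0-indexed)
A_B = A[:, basis]
c_B = c[basis]

y = c_B @ np.linalg.inv(A_B)    # dual solution: y^T = c_B^T A_B^{-1}
print(y)                        # [0. 4.]

# dual feasibility and equal objective values (both 16)
assert np.all(y @ A >= c - 1e-9)
x_B = np.linalg.solve(A_B, b)   # primal basic solution: x3 = 2, x4 = 0
assert np.isclose(c_B @ x_B, y @ b)
```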

### Proposition 5 – Strong duality

The fact that `c`_{B}^{T} A_{B}^{-1} is dual optimal implies strong duality.

For the linear program defined in the standard form, `x`^{-} and `y`^{-} are primal and dual optimal *if and only if* `x`^{-} and `y`^{-} are primal and dual feasible and `c`^{T} x^{-} = y^{-T} b.

Proposition 3 proves the if-and-only-if statement from right to left. To prove the other direction (from left to right): since `c`_{B}^{T} A_{B}^{-1} is a dual optimal solution, the dual’s objective value is `y`^{-T} b = c_{B}^{T} A_{B}^{-1} b, which equals the primal’s optimal objective value `c`^{T} x^{-}.

`y`^{-T} b = c_{B}^{T} A_{B}^{-1} b = c^{T} x^{-}

As the dual linear program may or may not have a unique optimal solution, `y`^{-} and `c`_{B}^{T} A_{B}^{-1} may or may not be identical. In either case, the statement holds.

Strong duality implies weak duality:

- Weak duality says that the dual linear program provides a bound.
- Strong duality says that the bound is *tight*: there is no way to get a better bound.

The primal and dual linear programs are **equivalent**. When we know what happens to the primal, we have a way to predict what happens to its dual, according to this table:

| Primal \ Dual | Infeasible | Unbounded | Finitely optimal |
| --- | --- | --- | --- |
| Infeasible | possible (3) | possible (3) | impossible |
| Unbounded | possible (2) | impossible | impossible |
| Finitely optimal | impossible | impossible | possible (1) |

1. If the primal has a finite optimal solution, the dual must also have a finite optimal solution.
2. If the primal is unbounded (maximizing the objective leads to infinity), it is *not* possible for the dual to have any feasible solution: a dual feasible solution would provide an upper bound, but the primal has no upper bound.
3. If the primal is infeasible, you do not have enough information about its dual; you only know the dual cannot have a finite optimal solution.
   - The dual may be infeasible.
   - The dual may also be unbounded.
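Case (2) can be illustrated with a tiny pair where the primal is unbounded and the dual is infeasible. A sketch using `scipy.optimize.linprog`, whose result statuses 3 and 2 mean "unbounded" and "infeasible" respectively:

```python
from scipy.optimize import linprog

# Primal: max x1  s.t.  x1 - x2 <= 1,  x >= 0
# Unbounded: set x1 = x2 + 1 and push x2 to infinity.
primal = linprog(c=[-1, 0], A_ub=[[1, -1]], b_ub=[1])
print(primal.status)   # 3 -> unbounded

# Dual: min y  s.t.  y >= 1 (from x1),  -y >= 0 (from x2),  y >= 0
# Infeasible: y >= 1 and y <= 0 cannot both hold.
dual = linprog(c=[1], A_ub=[[-1], [1]], b_ub=[-1, 0])
print(dual.status)     # 2 -> infeasible
```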

### Proposition 6 – Complementary slackness

Recall that in the dual we have the constraint `y`^{T} A ≥ c^{T}. We can put a slack variable `v` into the dual, so the dual becomes:

`min y`^{T} b
s.t. y^{T} A - v^{T} = c^{T}
v ≥ 0

The slack variable `v` and the primal solution `x` are complementary. For the linear program defined in the standard form, whose dual is defined with the slack variable `v`, `x`^{-} and `(y`^{-}, v^{-}) are primal and dual optimal **if and only if** they are feasible and `v`^{-T} x^{-} = 0.

`c`^{T} x^{-} = (y^{-T} A - v^{-T}) x^{-} = y^{-T} A x^{-} - v^{-T} x^{-} = y^{-T} b - v^{-T} x^{-}

Therefore, `v`^{-T} x^{-} = 0 if and only if `c`^{T} x^{-} = y^{-T} b, i.e. `x`^{-} and `(y`^{-}, v^{-}) are primal and dual optimal according to strong duality. Note that

`v`^{-T} x^{-} = Σ^{n}_{i=1} v_{i}^{-} x_{i}^{-} = 0

if and only if `v`_{i}^{-} x_{i}^{-} = 0 for all i, as `x`^{-} ≥ 0 and `v`^{-} ≥ 0.

- If a dual (or primal) constraint is **nonbinding**, which means `v`_{i} ≠ 0, the corresponding primal (or dual) variable is **zero**, i.e. `x`_{i} = 0.

This is useful if you want to construct a dual optimal solution without solving the dual linear program, given a primal optimal solution.
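Checking complementary slackness on the running example with `numpy` (a sketch; the primal optimum x^{-} = (0, 0, 2) and dual optimum y^{-} = (0, 4) used here were computed by hand for the example pair):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [2, 1, 2]], dtype=float)
c = np.array([3, 4, 8], dtype=float)

x = np.array([0, 0, 2], dtype=float)   # primal optimal solution
y = np.array([0, 4], dtype=float)      # dual optimal solution

v = y @ A - c                           # dual slacks: [5, 0, 0]
print(v, v @ x)                         # complementary: v^T x = 0

# wherever a dual constraint is nonbinding (v_i != 0), x_i = 0
assert all(vi == 0 or xi == 0 for vi, xi in zip(v, x))
```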

## Shadow Prices

In practice, people often ask “what-if” questions, because parameters of the model may fluctuate, estimates of parameters may be inaccurate, and people are always looking for ways to improve the results. However, answering these “what-if” questions can be hard and time-consuming if we formulate and solve a new optimization problem from scratch.

The tool called **sensitivity analysis** comes to the rescue. The original optimal basic feasible solution (tableau, basis, solution, etc.) can be utilized: typically we then do just a few iterations to reach the new optimal basic feasible solution. Duality provides the theoretical background.

For each kind of resource, there is a **maximum price** we are willing to pay for **one additional unit**. It depends on the net benefit of that one additional unit. This motivates us to define shadow prices:

Given a linear program that has an optimal solution, the **shadow price** of a constraint is the amount by which the objective value increases when the right-hand side (RHS) of that constraint is increased by 1, **assuming** the current optimal basis remains optimal.

Typically, in practice, the optimal basis does not change under such a small modification, so this assumption holds when you do sensitivity analysis.

### Proposition 7 – Signs of shadow prices

The sign of a shadow price is determined by how the feasible region changes when the RHS is increased. For any linear program, the sign of a shadow price follows this rule:

| Objective \ Constraint | ≤ | ≥ | = |
| --- | --- | --- | --- |
| max | feasible region becomes larger, objective value increases: non-negative ( ≥ 0 ) | feasible region becomes tighter, objective value decreases: non-positive ( ≤ 0 ) | no way to predict: free |
| min | feasible region becomes larger, objective value decreases: non-positive ( ≤ 0 ) | feasible region becomes tighter, objective value increases: non-negative ( ≥ 0 ) | no way to predict: free |

The sign of shadow price actually has connections with duality.

### Proposition 8 – Nonbinding constraints’ shadow prices

Shadow prices are zero for constraints that are nonbinding at the optimal solution.

Finding shadow prices allows us to answer questions regarding additional units of resources. We want to check that for all our resources, but there is no way to do that easily without duality, particularly when the number of constraints is large. Applying duality can help.

### Proposition 9 – Dual optimal solution provides shadow prices

For any linear program, shadow prices equal the values of dual variables in the dual optimal solution.

In other words, given a linear program, if you want to find all the shadow prices, you can find its dual, solve the dual to get the dual optimal solution, and then you will have all the shadow prices. You do not need to solve `m` programs, where `m` is the number of constraints; you just need to solve one program – the dual.

The shadow price of the i-th constraint is the i-th element of `c`_{B}^{T} A_{B}^{-1}, which is the dual optimal solution.
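As a numerical illustration with the running example (a sketch using `scipy.optimize.linprog`): one dual optimal solution, found by solving the dual by hand, is y^{-} = (0, 4), so increasing the first RHS by one unit (6 → 7) should leave the optimal value unchanged. Note the example is degenerate, so only the first constraint is checked here:

```python
from scipy.optimize import linprog

A_ub = [[1, 2, 3], [2, 1, 2]]
c = [-3, -4, -8]                      # max -> min by negating the objective

z_base = -linprog(c, A_ub=A_ub, b_ub=[6, 4]).fun   # 16.0
z_bump = -linprog(c, A_ub=A_ub, b_ub=[7, 4]).fun   # still 16.0

# shadow price of constraint 1 = change in optimal value = dual value y1 = 0
print(z_bump - z_base)
```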

For more on **Linear Programming Duality**, please refer to the wonderful course here https://www.coursera.org/learn/operations-research-theory


*I am Kesler Zhu, thank you for visiting my website. Check out more course reviews at https://KZHU.ai*
