When we are given a linear program, we often call it the primal linear program. From it we can always derive another program, called the dual linear program. One primary reason to care about duality is that the computation time of the simplex method is roughly proportional to m³, where m is the number of functional constraints of the primal linear program.



Let n be the number of variables in the primal linear program. If m is much larger than n, solving the primal directly may take a long time. Solving the dual instead can be significantly faster, because in the dual the roles are swapped: it has m variables but only n constraints, so its number of constraints is much smaller.

The primal and dual programs have many interesting properties, for instance:

  1. Given a primal formulation, the dual formulation is unique, and the dual of the dual is the primal again.
  2. If the primal has an optimal solution, the objective value of the primal's optimal solution is the same as the objective value of the dual's optimal solution.
  3. If you can solve the primal problem, then you can solve the dual problem immediately: the dual's optimal solution is c_B^T A_B^{-1} in the matrix representation of the simplex method.

Primal-Dual Pairs

Suppose you are given a linear program that is hard, even impossible, to solve directly. One of your friends tells you that he has a solution x̂, which attains the objective value ẑ. If we knew the optimal objective value z*, we could compare ẑ and z*. However, we have no way to know z*, since the linear program cannot be solved directly.

But if we can instead find an upper bound on z*, denoted z**, that will do: if ẑ is close to the upper bound z**, we can conclude that x̂ is quite good.

ẑ ≤ z* ≤ z**

For example, suppose this is the linear program we want to solve:

The primal linear program

z* = max 3x1 + 4x2 + 8x3
s.t.  x1 + 2x2 + 3x3 ≤ 6
     2x1 +  x2 + 2x3 ≤ 4
      x1 ≥ 0, x2 ≥ 0, x3 ≥ 0

We look for coefficients y1 and y2 with which to combine these two constraints, so that the resulting upper bound is as close as possible to the objective function. Essentially, we are trying different non-negative linear combinations: multiply the first constraint by y1 ≥ 0, the second by y2 ≥ 0, and add them up.

(y1 + 2y2) x1 + (2y1 + y2) x2 + (3y1 + 2y2) x3 ≤ 6y1 + 4y2
y1 ≥ 0, y2 ≥ 0

To make 6y1 + 4y2 an upper bound for z*, i.e. z* ≤ 6y1 + 4y2, each coefficient on the left-hand side must be at least the corresponding objective coefficient (recall that x1, x2, x3 ≥ 0). So we look for two variables y1 and y2 such that:

3 ≤  y1 + 2y2
4 ≤ 2y1 +  y2
8 ≤ 3y1 + 2y2
y1 ≥ 0, y2 ≥ 0

Minimizing 6y1 + 4y2 subject to these conditions is itself another linear program, which finds the tightest such upper bound on z*.

The dual linear program

min 6y1 + 4y2
s.t.  y1 + 2y2 ≥ 3
     2y1 +  y2 ≥ 4
     3y1 + 2y2 ≥ 8
      y1 ≥ 0, y2 ≥ 0
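
If you want to verify this pair numerically, here is a minimal sketch using scipy.optimize.linprog (assuming SciPy is available; linprog minimizes, so the maximization objective is negated):

```python
import numpy as np
from scipy.optimize import linprog

# Primal: max 3x1 + 4x2 + 8x3
#         s.t. x1 + 2x2 + 3x3 <= 6,  2x1 + x2 + 2x3 <= 4,  x >= 0
c = np.array([3.0, 4.0, 8.0])
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 1.0, 2.0]])
b = np.array([6.0, 4.0])

# linprog minimizes, so maximize c^T x by minimizing -c^T x.
primal = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 3)

# Dual: min 6y1 + 4y2  s.t.  A^T y >= c,  y >= 0
# (rewrite A^T y >= c as -A^T y <= -c for the A_ub/b_ub interface).
dual = linprog(b, A_ub=-A.T, b_ub=-c, bounds=[(0, None)] * 2)

print("primal optimum:", -primal.fun)   # 16 for this example
print("dual   optimum:", dual.fun)      # matches the primal optimum
```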


Based on the previous example, we have some observations (a small sketch after this list applies them mechanically):

  1. If your primal linear program is a maximization, then you try to minimize the upper bound, so the dual linear program is a minimization problem.
    • Primal max ⟹ Dual min
  2. The coefficients in the objective of the primal linear program become the right-hand side values of the constraints of the dual linear program.
    • Primal objective ⟹ Dual RHS constraints
  3. The right-hand side values of the constraints of the primal linear program become the coefficients in the objective of the dual linear program.
    • Primal RHS constraints ⟹ Dual objective
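
Here is the sketch referred to above: a tiny NumPy helper (hypothetical, assuming the primal has the form max c^T x s.t. A x ≤ b, x ≥ 0) that applies the three observations mechanically.

```python
import numpy as np

def dual_of_max_leq(c, A, b):
    """Hypothetical helper: for max c^T x s.t. A x <= b, x >= 0,
    return the data of the dual  min b^T y s.t. A^T y >= c, y >= 0."""
    c, A, b = np.asarray(c), np.asarray(A), np.asarray(b)
    dual_obj = b      # rule 3: primal RHS -> dual objective (to be minimized, rule 1)
    dual_lhs = A.T    # the constraint matrix is transposed
    dual_rhs = c      # rule 2: primal objective -> dual RHS (with >= constraints)
    return dual_obj, dual_lhs, dual_rhs

# Applied to the example above, this reproduces min 6y1 + 4y2 with the three >= rows.
obj, lhs, rhs = dual_of_max_leq([3, 4, 8], [[1, 2, 3], [2, 1, 2]], [6, 4])
```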

Non-Positive or Free Variables in Primal

The signs of the variables x1, x2, x3 in the primal tell us how the linear combination formed with the new dual variables y1, y2 must compare against the primal objective coefficients c^T, i.e. the direction of each dual constraint:

Variables in primal       Constraints in dual
x ≥ 0                     ≥
x ≤ 0                     ≤
x is unrestricted         =

For instance, consider the example above, with the only exception that x1 ≥ 0, x2 ≤ 0, and x3 is free. The dual linear program becomes:

The dual linear program

min 6y1 + 4y2
s.t.  y1 + 2y2 ≥ 3   (because x1 ≥ 0)
     2y1 +  y2 ≤ 4   (because x2 ≤ 0)
     3y1 + 2y2 = 8   (because x3 is free)
      y1 ≥ 0, y2 ≥ 0
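
A minimal sketch to verify this modified pair with scipy.optimize.linprog (assuming SciPy; variable signs are expressed through the bounds argument, and the dual's ≥ and = rows go to A_ub and A_eq respectively):

```python
import numpy as np
from scipy.optimize import linprog

c = np.array([3.0, 4.0, 8.0])
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 1.0, 2.0]])
b = np.array([6.0, 4.0])

# Modified primal: x1 >= 0, x2 <= 0, x3 free (linprog minimizes, so negate c).
primal = linprog(-c, A_ub=A, b_ub=b,
                 bounds=[(0, None), (None, 0), (None, None)])

# Its dual: min 6y1 + 4y2 with y1 + 2y2 >= 3, 2y1 + y2 <= 4, 3y1 + 2y2 = 8, y >= 0.
dual = linprog(b,
               A_ub=np.array([[-1.0, -2.0],    # y1 + 2y2 >= 3  ->  -y1 - 2y2 <= -3
                              [ 2.0,  1.0]]),  # 2y1 + y2 <= 4
               b_ub=np.array([-3.0, 4.0]),
               A_eq=np.array([[3.0, 2.0]]),    # 3y1 + 2y2 = 8
               b_eq=np.array([8.0]),
               bounds=[(0, None)] * 2)

print(-primal.fun, dual.fun)   # the two optimal values should coincide
```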

Greater-Than-Or-Equal-To or Equal-To Constraints in Primal

In the previous examples, the constraints in the primal have the less-than-or-equal-to (≤) relationship, in which case the variables in the dual are non-negative. What happens if the constraints in the primal are greater-than-or-equal-to (≥) or equal-to (=)? The restrictions on the variables in the dual change accordingly.

Constraints in primal     Variables in dual
≤                         y ≥ 0
≥                         y ≤ 0
=                         y is free


Minimization Linear Program

For a minimization linear program, its dual linear program maximizes a lower bound. The rules for the directions of variables and constraints are reversed:

Variables in primal       Constraints in dual
x ≥ 0                     ≤
x ≤ 0                     ≥
x is unrestricted         =

Constraints in primal     Variables in dual
≤                         y ≤ 0
≥                         y ≥ 0
=                         y is free

Duality Theorems

Duality has a lot of interesting and useful properties.

Matrix Representation of Standard Form

The standard form of a primal linear program can be written in matrix form. According to the rules we discussed above, we can obtain its dual linear program:

Primal                     Dual
max c^T x                  min y^T b
s.t. A x = b               s.t. y^T A ≥ c^T
     x ≥ 0                      y is free

Proposition 1 – Uniqueness and symmetry of duality

For any primal linear program, there is a unique dual linear program, whose dual is the primal.

Proposition 2 – Weak duality

If the primal is a maximization problem, its dual provides an upper bound for the primal. Similarly, if the primal is a minimization problem, its dual provides a lower bound.

For the linear program defined in the standard form, if x and y are primal and dual feasible, then the primal and dual objective values satisfy c^T x ≤ y^T b. Quick proof: because in the standard form we have x ≥ 0, A x = b, and y^T A ≥ c^T, we can derive

c^T x ≤ y^T A x = y^T b
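
As a quick numeric illustration of weak duality on the ≤-form example above (a sketch; x = (1, 0, 1) and y = (2, 2) are just one arbitrarily chosen feasible pair):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [2, 1, 2]], dtype=float)
b = np.array([6, 4], dtype=float)
c = np.array([3, 4, 8], dtype=float)

x = np.array([1, 0, 1], dtype=float)   # primal feasible: A @ x = (4, 4) <= b, x >= 0
y = np.array([2, 2], dtype=float)      # dual feasible:   A.T @ y = (6, 6, 10) >= c, y >= 0

assert np.all(A @ x <= b) and np.all(x >= 0)
assert np.all(A.T @ y >= c) and np.all(y >= 0)
print(c @ x, "<=", y @ b)              # 11.0 <= 20.0, as weak duality promises
```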

Proposition 3 – Sufficiency of optimality

If x̄ and ȳ are primal and dual feasible, and c^T x̄ = ȳ^T b, then x̄ and ȳ are primal and dual optimal.

This proposition gives us a way to show that a solution of the primal is optimal: given a primal feasible solution x̄, if we can find a dual feasible solution ȳ such that their objective values are identical, then x̄ is optimal.



Proposition 4 – Dual optimal solution

For the linear program defined in the standard form, if x̄ is primal optimal with basis B, then the dual optimal solution is ȳ^T = c_B^T A_B^{-1}.

This proposition tells us that once we know an optimal solution of the primal with basis B, we can derive (one of) the optimal solutions of the dual easily, without solving the dual. Recall that c_B^T A_B^{-1} appears in the reduced costs c_B^T A_B^{-1} A_N - c_N^T; when B is optimal, the reduced costs are non-negative:

c_B^T A_B^{-1} A_N - c_N^T ≥ 0

As c_B^T = c_B^T A_B^{-1} A_B, and A is composed of the two parts A_B and A_N, we have:

ȳ^T = c_B^T A_B^{-1}
⟹ ȳ^T A = c_B^T A_B^{-1} A = c_B^T A_B^{-1} [A_B | A_N] = [c_B^T A_B^{-1} A_B | c_B^T A_B^{-1} A_N] ≥ [c_B^T | c_N^T] = c^T

Note that we have just obtained the condition y^T A ≥ c^T, which is exactly what dual feasibility requires. So ȳ defined in this way is dual feasible.

Also from another perspective:

ȳ^T = c_B^T A_B^{-1}
⟹ ȳ^T b = c_B^T A_B^{-1} b = c_B^T x̄_B = c^T x̄

This immediately tells us that if ȳ is defined in this way, the dual objective value equals the primal objective value. According to Proposition 3, both x̄ and ȳ are optimal.
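
To make this concrete with the running example: after adding slack variables s1 and s2, the standard form has A = [A_original | I]. One optimal basis contains x3 and s1 (the example is degenerate, so other optimal bases exist), and the following NumPy sketch, assuming that basis, recovers ȳ^T = c_B^T A_B^{-1} and checks that it is dual feasible with the same objective value:

```python
import numpy as np

# Standard form of the example, variables ordered (x1, x2, x3, s1, s2).
A = np.array([[1, 2, 3, 1, 0],
              [2, 1, 2, 0, 1]], dtype=float)
b = np.array([6, 4], dtype=float)
c = np.array([3, 4, 8, 0, 0], dtype=float)

basis = [2, 3]                        # columns of x3 and s1: one optimal basis here
A_B, c_B = A[:, basis], c[basis]

x_B = np.linalg.solve(A_B, b)         # basic feasible solution: x3 = 2, s1 = 0
y = c_B @ np.linalg.inv(A_B)          # y^T = c_B^T A_B^{-1}, giving y = (0, 4)

print("dual feasible:", np.all(y @ A >= c - 1e-9))   # y^T A >= c^T
print("y^T b =", y @ b, " c^T x =", c_B @ x_B)       # both equal 16
```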

Proposition 5 – Strong duality

The fact that c_B^T A_B^{-1} is dual optimal implies strong duality.

For the linear program defined in the standard form, x̄ and ȳ are primal and dual optimal if and only if x̄ and ȳ are primal and dual feasible and c^T x̄ = ȳ^T b.

Proposition 3 proves the if-and-only-if statement from right to left. For the other direction (from left to right): since c_B^T A_B^{-1} is a dual optimal solution, the dual's optimal objective value is c_B^T A_B^{-1} b, and because ȳ is also dual optimal, ȳ^T b must equal this value, which in turn equals the primal's optimal objective value c^T x̄:

ȳ^T b = c_B^T A_B^{-1} b = c^T x̄

As the dual linear program may or may not have a unique optimal solution, ȳ and c_B^T A_B^{-1} may or may not be identical. In either case, the statement holds.

Strong duality implies weak duality:

  • Weak duality says that the dual linear program provides a bound.
  • Strong duality says that the bound is tight: there is no way to get a better bound.

The primal and dual linear programs are essentially equivalent: when we know what happens to the primal, we can predict what happens to its dual according to this table:

Primal \ Dual        Infeasible      Unbounded      Finitely optimal
Infeasible           possible (3)    possible (3)   impossible
Unbounded            possible (2)    impossible     impossible
Finitely optimal     impossible      impossible     possible (1)
  1. If the primal has a finite optimal solution, the dual must also have a finite optimal solution.
  2. If the primal is unbounded (maximizing the objective function leads to infinity), the dual cannot have any feasible solution, because a dual feasible solution would provide an upper bound, yet the primal has no upper bound.
  3. If the primal is infeasible, you do not have enough information about its dual; you only know the dual cannot have a finite optimal solution (otherwise, by case 1 applied to the dual, the primal would have one too).
    • The dual may be infeasible
    • The dual may also be unbounded

Proposition 6 – Complementary slackness

Recall that in the dual we have the constraint y^T A ≥ c^T. We can introduce a vector of slack variables v into the dual, so the dual becomes:

min y^T b
s.t. y^T A - v^T = c^T
            v ≥ 0

The slack variables v and the primal solution x are complementary. For the linear program defined in the standard form, whose dual is defined with slack variables v, x̄ and (ȳ, v̄) are primal and dual optimal if and only if they are feasible and v̄^T x̄ = 0.

c^T x̄ = (ȳ^T A - v̄^T) x̄ = ȳ^T A x̄ - v̄^T x̄ = ȳ^T b - v̄^T x̄

Therefore, v̄^T x̄ = 0 if and only if c^T x̄ = ȳ^T b, i.e. x̄ and (ȳ, v̄) are primal and dual optimal according to strong duality. Note that

  • v̄^T x̄ = Σ_{i=1}^n v̄_i x̄_i = 0 if and only if v̄_i x̄_i = 0 for all i, because x̄ ≥ 0 and v̄ ≥ 0.
  • If a dual (or primal) constraint is nonbinding, which means v̄_i ≠ 0, the corresponding primal (or dual) variable is zero, i.e. x̄_i = 0.

This is useful if you want to construct a dual solution without solving the dual linear program, given a primal optimal solution.
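
For the running example this is easy to check: taking the primal optimum x̄ = (0, 0, 2) and the dual optimum ȳ = (0, 4) found above, a few lines of NumPy (a sketch) confirm v̄_i x̄_i = 0 for every i:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [2, 1, 2]], dtype=float)
b = np.array([6, 4], dtype=float)
c = np.array([3, 4, 8], dtype=float)

x = np.array([0, 0, 2], dtype=float)   # primal optimal solution
y = np.array([0, 4], dtype=float)      # dual optimal solution

v = A.T @ y - c                        # dual slacks: (5, 0, 0)
print("v_i * x_i =", v * x)            # all zeros: v^T x = 0
print("y_i * (b - A x)_i =", y * (b - A @ x))   # primal slacks are complementary too
```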



Shadow Prices

In practice, people often ask “what-if” questions, because the parameters of a model may fluctuate, parameter estimates may be inaccurate, and people are always looking for ways to improve results. However, answering these “what-if” questions can be hard and time-consuming if we formulate and solve a new optimization problem from scratch.

The tool called sensitivity analysis comes to the rescue. The original optimal basic feasible solution (tableau, basis, solution, etc.) can be reused; typically we then perform a few more iterations to reach the new optimal basic feasible solution. Duality provides the theoretical background.

For each kind of resource, there is a maximum price we are willing to pay for one additional unit, and it depends on the net benefit of that one additional unit. This motivates the definition of shadow prices:

Given a linear program that has an optimal solution, the shadow price of a constraint is the amount by which the objective value increases when the right-hand side (RHS) of that constraint is increased by 1, assuming the current optimal basis remains optimal.

Typically in practice, a small change in the RHS does not change the optimal basis, so this assumption usually holds when you do sensitivity analysis.

Proposition 7 – Signs of shadow prices

The sign of a shadow price is determined by how the feasible region changes when the RHS increases. For any linear program, the sign of a shadow price follows this rule:

Objective function \ Constraint    ≤               ≥               =
max                                non-negative    non-positive    free
min                                non-positive    non-negative    free

For a ≤ constraint, increasing the RHS makes the feasible region larger, so with a max objective the objective value can only increase (≥ 0), and with a min objective it can only decrease (≤ 0). For a ≥ constraint, increasing the RHS makes the feasible region tighter, so the effects are reversed: the objective value can only decrease (≤ 0) for max and only increase (≥ 0) for min. For an = constraint, there is no way to predict how the objective value changes, so the shadow price is free.

The sign of a shadow price actually has connections with duality.

Proposition 8 – Nonbinding constraints’ shadow prices

Shadow prices are zero for constraints that are nonbinding at the optimal solution.

Finding shadow prices lets us answer questions about additional units of resources. We want to check this for all our resources, but there is no way to do that easily without duality, particularly when the number of constraints is large. Applying duality helps.

Proposition 9 – Dual optimal solution provides shadow prices

For any linear program, shadow prices equal the values of dual variables in the dual optimal solution.

In other words, given a linear program, if you want to find all the shadow prices, you can formulate its dual, solve the dual to get the dual optimal solution, and then you have all the shadow prices. You do not need to solve m programs, where m is the number of constraints; you only need to solve one program, the dual.

The shadow price of the i-th constraint is the i-th element of c_B^T A_B^{-1}, which is the dual optimal solution.
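
A small end-to-end sketch of Proposition 9, using a hypothetical non-degenerate LP chosen purely for illustration (max 3x1 + 5x2 s.t. x1 ≤ 4, 2x2 ≤ 12, 3x1 + 2x2 ≤ 18, x ≥ 0): solve the dual once to get all shadow prices, then verify them by brute force, bumping one RHS at a time and re-solving.

```python
import numpy as np
from scipy.optimize import linprog

# A hypothetical illustrative LP: max 3x1 + 5x2
#   s.t. x1 <= 4,  2x2 <= 12,  3x1 + 2x2 <= 18,  x >= 0.
c = np.array([3.0, 5.0])
A = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [3.0, 2.0]])
b = np.array([4.0, 12.0, 18.0])

def opt_value(rhs):
    # linprog minimizes, so negate c to maximize.
    return -linprog(-c, A_ub=A, b_ub=rhs, bounds=[(0, None)] * 2).fun

# Shadow prices from the dual: min b^T y s.t. A^T y >= c, y >= 0.
dual = linprog(b, A_ub=-A.T, b_ub=-c, bounds=[(0, None)] * 3)
print("dual optimal y (shadow prices):", dual.x)     # (0, 1.5, 1) for this example

# Brute-force check: bump each RHS by 1 and re-solve
# (valid while the current optimal basis remains optimal).
base = opt_value(b)
for i in range(len(b)):
    bumped = b.copy()
    bumped[i] += 1.0
    print(f"constraint {i + 1}: objective change =", opt_value(bumped) - base)
```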



My Certificate

For more on Linear Programming Duality, please refer to the wonderful course here https://www.coursera.org/learn/operations-research-theory



I am Kesler Zhu, thank you for visiting my website. Check out more course reviews at https://KZHU.ai
