The simplex method is the most fundamental tool in linear programming: a single algorithm that can solve any linear program, regardless of its format or its number of variables and constraints. We first transform the linear program into a standard form, and then work with that standard form. The idea of the simplex method is to search the corners, or extreme points, of the feasible region (also called basic feasible solutions). If it reaches a corner where no neighboring corner is better, it declares that corner optimal.



Extreme Points

For a set S βŠ† R^n, a point x is an extreme point if there does NOT exist a triple (x1, x2, Ξ») such that x1 ∈ S \ {x}, x2 ∈ S \ {x}, Ξ» ∈ (0, 1), and x = Ξ» x1 + (1 - Ξ») x2. In other words, if x can be represented as a convex combination of two other points of S, then x is not an extreme point.

For any linear program, if there is an optimal solution, then there is an extreme point that is optimal. The simplex algorithm is built on this proposition. Warning: the converse, "if a solution is optimal, it is an extreme point," is not true.

Standard Form of Linear Programs

A linear program is in standard form if:

  1. all right hand side values are non-negative
  2. all variables are non-negative, and
  3. all constraints are equalities

There is no restriction on the objective function; all the requirements are on the constraints. Any linear program can be converted to standard form:

  1. Negative RHS: multiply the constraint by -1.
     Before: 2 x1 + 3 x2 ≀ -4. After: -2 x1 - 3 x2 β‰₯ 4.
  2. Non-positive variable (x1 ≀ 0): substitute x1 = -x1' with x1' β‰₯ 0, then fix the RHS.
     Before: 2 x1 + 3 x2 ≀ -4, x1 ≀ 0. After: 2 x1' - 3 x2 β‰₯ 4, x1' β‰₯ 0.
  3. Free variable xi: replace xi with xi' - xi'', where xi', xi'' β‰₯ 0.
  4. "≀" constraint: add a slack variable.
     Before: 2 x1 + 3 x2 ≀ 4. After: 2 x1 + 3 x2 + x3 = 4, x3 β‰₯ 0.
  5. "β‰₯" constraint: subtract a surplus variable.
     Before: 2 x1 + 3 x2 β‰₯ 4. After: 2 x1 + 3 x2 - x3 = 4, x3 β‰₯ 0.

We may express a standard form in matrices. The general form is always as follows, where A ∈ R^(mΓ—n) is the coefficient matrix, b ∈ R^m is the RHS vector, and c ∈ R^n is the objective vector:

max or min c^T x, such that A x = b, x β‰₯ 0
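As an illustration, a small standard-form LP can be held in NumPy arrays and a candidate solution checked against the definition. The numbers below are invented for the sketch:

```python
import numpy as np

# Hypothetical standard-form LP (all numbers invented for illustration):
#   min  -3*x1 - 5*x2
#   s.t.  x1          + x3      = 4     (x3, x4 are slack variables)
#               2*x2       + x4 = 12
#         x >= 0
c = np.array([-3.0, -5.0, 0.0, 0.0])
A = np.array([[1.0, 0.0, 1.0, 0.0],
              [0.0, 2.0, 0.0, 1.0]])
b = np.array([4.0, 12.0])

x = np.array([4.0, 6.0, 0.0, 0.0])              # a candidate solution
feasible = bool(np.allclose(A @ x, b) and np.all(x >= 0))
objective = float(c @ x)                        # c^T x = -42.0
```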

Basic Solutions

In a linear program with m constraints and n variables, usually m < n (i.e., A has more columns than rows), because if m β‰₯ n the system typically has either one solution or none, and there is nothing to optimize. We may assume that A has m pivots, i.e., all rows of A are linearly independent. If there are dependent rows, you need to check whether the system is consistent and eliminate the redundant constraints. From here on we assume all rows (constraints) are independent.

A basic solution to a standard-form linear program is a solution that has n - m variables equal to 0 and satisfies A x = b. First, we select n - m (non-basic) variables and set them to zero. Then we solve for the remaining m (basic) variables. This set of m variables is called a basis. As long as the corresponding m columns form a non-singular (invertible) m-by-m matrix A_B (there are at most n-choose-m ways to choose A_B), we can solve A_B x_B = b. We can then focus on the basic variables, since all non-basic variables are simply 0.

x_B = A_B^(-1) b   (basic variables)
x_N = 0            (non-basic variables)
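For instance, picking a basis and solving A_B x_B = b with NumPy looks like this (the 2-by-4 system and the chosen column indices are assumptions for the sketch):

```python
import numpy as np

# Hypothetical system A x = b with m = 2 constraints and n = 4 variables
A = np.array([[1.0, 0.0, 1.0, 0.0],
              [0.0, 2.0, 0.0, 1.0]])
b = np.array([4.0, 12.0])

basis = [0, 1]                  # choose x1 and x2 as the basic variables
A_B = A[:, basis]               # the 2-by-2 basis matrix, must be invertible
x_B = np.linalg.solve(A_B, b)   # basic variables: A_B^(-1) b
x = np.zeros(A.shape[1])
x[basis] = x_B                  # non-basic variables stay at 0
```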


Basic Feasible Solutions

Among all basic solutions, if a solution also satisfies x β‰₯ 0, then it is a basic feasible solution. So a basic feasible solution to a standard form linear program is a basic solution whose basic variables are all non-negative.

Basic feasible solutions are exactly extreme points! The two concepts are equivalent: extreme points are defined geometrically, while basic feasible solutions are defined algebraically. To search among basic feasible solutions, we keep moving from the current one to a better adjacent basic feasible solution.
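Since basic feasible solutions coincide with extreme points, for tiny instances one can even enumerate all of them by brute force. A sketch, with the system invented for illustration:

```python
import itertools
import numpy as np

A = np.array([[1.0, 0.0, 1.0, 0.0],     # hypothetical 2-by-4 system
              [0.0, 2.0, 0.0, 1.0]])
b = np.array([4.0, 12.0])
m, n = A.shape

bfs = []
for cols in itertools.combinations(range(n), m):   # n-choose-m candidate bases
    cols = list(cols)
    A_B = A[:, cols]
    if abs(np.linalg.det(A_B)) < 1e-12:            # singular: not a basis
        continue
    x = np.zeros(n)
    x[cols] = np.linalg.solve(A_B, b)
    if np.all(x >= -1e-12):                        # basic AND feasible
        bfs.append(x)
# each element of bfs is an extreme point of the feasible region
```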

Adjacent bases and adjacent basic feasible solutions

Two bases are adjacent if exactly one of their variables (not values) is different, i.e., m - 1 of the m basic variables are the same. Two basic feasible solutions are adjacent if their associated bases are adjacent. Around the current basic feasible solution there should be some improving direction; otherwise, the current basic feasible solution is optimal.

The Simplex Method

All we need is to search among basic feasible solutions, which means geometrically we search among extreme points. Algebraically, to move to an adjacent basic feasible solution, we need to replace one basic variable with a non-basic variable. We shall select two things:

  1. one non-basic variable to enter the basis
  2. one basic variable to leave the basis

Selecting a non-basic variable to enter means making it nonzero: increasing its value from zero to a positive number. While this new variable increases, we track how the existing basic variables decrease. The first one that hits zero is selected to leave the basis and becomes a non-basic variable.

We keep changing the basis, meanwhile keep track of the objective value (also called z-value), until we find an optimal solution.

System of Equalities

Besides all the constraints of the standard-form linear program, the objective function is also included in the system and called the 0th constraint. It becomes another equation that must be satisfied throughout the whole process. We keep modifying the value of x, so that we may find feasible solutions with better values of z.

In general, a linear program is solved in two stages:

  1. Find the initial basic feasible solution
  2. Keep improving the current basic feasible solution until the optimal basic feasible solution is found

Keep improving the current basic feasible solution

The 0th constraint helps us identify which non-basic variable should enter the basis: we look for a variable with a positive coefficient for a min problem, or a negative coefficient for a max problem. When no non-basic variable can further improve the objective function, the optimal solution is found.

The ratio between the right-hand-side values and the entries of the entering variable's column helps us identify which basic variable hits zero first and therefore leaves the basis.
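As a tiny illustration of this ratio test (all numbers invented), only rows with a positive entry in the entering column restrict how far the entering variable can grow:

```python
import numpy as np

rhs = np.array([6.0, 3.0, 4.0])    # hypothetical current right-hand sides
col = np.array([2.0, 1.0, -1.0])   # hypothetical entering column
# rows with non-positive entries never drive a basic variable to zero
ratios = np.where(col > 0, rhs / np.where(col > 0, col, 1.0), np.inf)
leaving = int(np.argmin(ratios))   # the basic variable that hits zero first
# ratios are [3.0, 3.0, inf]; row 0 wins (ties broken by smallest index)
```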

But this method is tedious: increasing a non-basic variable affects all basic variables and the z-value, so it does not scale to thousands of variables and constraints. We need a better way.



Keep improving with updating the system

The process becomes easy when these conditions hold:

  1. The 0th constraint contains no basic variable.
  2. The 1st to m-th constraints each contain exactly one basic variable.

In other words, for basic columns, we want these characteristics:

  1. A zero vector in the 0th constraint.
  2. An identity matrix in the 1st to m-th constraints.

Every time we get a new basis, we update the system using elementary row operations so that it again exhibits the characteristics above.
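Putting the pieces together, here is a minimal sketch of the simplex iteration for min c^T x subject to A x = b, x β‰₯ 0, assuming a feasible starting basis is known and ignoring degeneracy. It recomputes the basis inverse each round instead of accumulating the elementary row operations, but the entering/leaving logic is the same:

```python
import numpy as np

def simplex(A, b, c, basis):
    """Sketch of the simplex method for min c@x s.t. A@x = b, x >= 0.
    `basis` must index an invertible, feasible starting basis."""
    A, b, c = map(np.asarray, (A, b, c))
    basis = list(basis)
    n = A.shape[1]
    while True:
        B_inv = np.linalg.inv(A[:, basis])
        x_B = B_inv @ b                        # current basic variables
        # reduced costs (0 on basic columns); a negative entry means
        # that variable can still decrease the objective
        reduced = c - c[basis] @ (B_inv @ A)
        enter = int(np.argmin(reduced))
        if reduced[enter] >= -1e-9:            # no improving direction: optimal
            x = np.zeros(n)
            x[basis] = x_B
            return x
        d = B_inv @ A[:, enter]                # how x_B shrinks as entering variable grows
        if np.all(d <= 1e-9):                  # ratio test impossible: unbounded LP
            raise ValueError("unbounded linear program")
        ratios = np.where(d > 1e-9, x_B / np.where(d > 1e-9, d, 1.0), np.inf)
        basis[int(np.argmin(ratios))] = enter  # leaving variable swaps out

# Hypothetical example: min -3*x1 - 5*x2 with slack variables x3, x4
A = [[1.0, 0.0, 1.0, 0.0], [0.0, 2.0, 0.0, 1.0]]
b = [4.0, 12.0]
c = [-3.0, -5.0, 0.0, 0.0]
x_opt = simplex(A, b, c, basis=[2, 3])         # start from the slack basis
```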

Identifying Unboundedness

Unboundedness of a linear program is detected during the ratio test: if no entry of the entering column is positive, then no basic variable can leave the basis, and the entering variable can grow without bound while improving the objective.

Feasibility of Linear Programs

If you randomly pick a few columns as basic columns to build a matrix A_B, it is not easy to be sure the matrix is invertible before you try Gaussian elimination. Even if the matrix is invertible, you cannot guarantee A_B^(-1) b β‰₯ 0 unless you actually solve the system. So it is hard to tell whether an initial basic feasible solution exists.

The two-phase implementation

Given a standard form of linear program:

(P) min c^T x such that A x = b, x β‰₯ 0

we construct a phase-I linear program by adding artificial variables y and changing the objective function:

(Q) min 1^T y such that A x + I y = b, x β‰₯ 0, y β‰₯ 0

Immediately, (Q) has a trivial basic feasible solution (x, y) = (0, b). We then apply the simplex method to this phase-I linear program, to see whether it is possible to find an optimal solution in which no y is positive.

(P) is feasible if and only if (Q) has an optimal basic feasible solution (x, y) = (x', 0). In this case, x' is a basic feasible solution of (P).

Once all the y's are zero, x' can be treated as an initial basic feasible solution for (P). Then we run the simplex method on (P) – this is phase II.
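A sketch of the phase-I construction in NumPy (the system below is invented; it only shows how [A | I], the new objective, and the trivial starting point fit together):

```python
import numpy as np

# Hypothetical standard-form system A x = b, x >= 0 with b >= 0
A = np.array([[1.0, 1.0],
              [1.0, -1.0]])
b = np.array([2.0, 0.0])
m, n = A.shape

A1 = np.hstack([A, np.eye(m)])                   # phase-I constraints: A x + I y = b
c1 = np.concatenate([np.zeros(n), np.ones(m)])   # phase-I objective: min 1^T y
x0 = np.concatenate([np.zeros(n), b])            # trivial BFS (x, y) = (0, b)

# Simplex drives c1 @ x0 down from sum(b); if the optimum reaches 0,
# the x-part is a basic feasible solution of the original (P),
# while a positive optimum proves (P) infeasible.
```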



My Certificate

For more on The Simplex Method, please refer to the wonderful course here https://www.coursera.org/learn/operations-research-algorithms



I am Kesler Zhu, thank you for visiting my website. Check out more course reviews at https://KZHU.ai