In some cases, variables must take integer or binary values. Formulating and solving models with integer variables is integer programming. There are two main approaches to solving such models:
The branch-and-bound algorithm finds an optimal solution to an integer program. Essentially, it decomposes the integer program into multiple linear programs, solves them separately, and compares the outcomes to reach a conclusion. The process may take a long time (often unacceptably long in practice).
If branch-and-bound is too time-consuming, we often look for a near-optimal feasible solution instead. An algorithm that generates a feasible solution in a short time is a heuristic algorithm; hopefully that solution is near-optimal. A good heuristic algorithm does not always work well, but it works well most of the time.
Linear Relaxation
For an integer program, the feasible region is discrete rather than a continuous region, so there is no way to “move along edges”. The linear relaxation of an integer program is the linear program obtained by removing all the integer constraints. For example:
| Integer constraints | After linear relaxation |
| --- | --- |
| xi ∈ Z+ | xi ≥ 0 |
| xi ∈ {0, 1} | xi ∈ [0, 1], i.e. xi ≥ 0 and xi ≤ 1 |
An integer program and its linear relaxation have the same objective function, but the feasible region of the relaxation is larger than that of the integer program. Let z* and z’ be the optimal objective values of a minimization integer program and of its linear relaxation, respectively; then z’ ≤ z*, i.e. the linear relaxation provides a lower bound. Similarly, for a maximization integer program, the linear relaxation provides an upper bound.
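To make the bound concrete, here is a minimal sketch (not from the course) that compares the two values on a tiny, made-up 0-1 knapsack instance, assuming SciPy's linprog is available:

```python
# A tiny, made-up 0-1 knapsack instance (illustrative data only).
from itertools import product
from scipy.optimize import linprog

values = [10, 7, 4, 3]    # objective coefficients (maximize)
weights = [5, 4, 3, 2]    # single knapsack constraint
capacity = 8

# Linear relaxation: drop integrality, keep 0 <= x_i <= 1.
# linprog minimizes, so negate the objective to maximize.
res = linprog(c=[-v for v in values],
              A_ub=[weights], b_ub=[capacity],
              bounds=[(0, 1)] * len(values))
z_lr = -res.fun

# Integer optimum by brute force (fine for 4 binary variables).
z_ip = max(sum(v * x for v, x in zip(values, xs))
           for xs in product([0, 1], repeat=len(values))
           if sum(w * x for w, x in zip(weights, xs)) <= capacity)

print(f"relaxation bound z' = {z_lr:.2f}, integer optimum z* = {z_ip}")
# Prints z' = 15.25 and z* = 14: for maximization, z* <= z' always holds.
```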
If we are lucky:
| If the linear relaxation is | Then the original integer program is |
| --- | --- |
| Infeasible | Infeasible |
| Unbounded | Unbounded |
| Optimal with an all-integer solution | Solved: that solution is also optimal for it |
Otherwise, when the linear relaxation’s optimal solution x’ violates an integer constraint of the original integer program, we may try to round it. But this does not always work (a small example follows the list below), since we do not know:
- Round up or down? Which is better?
- Is the resulting solution always feasible?
- Is the resulting solution close to an optimal solution x* of the original integer program?
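As a small illustrative example (not from the course material), consider maximizing 21x1 + 11x2 subject to 7x1 + 4x2 ≤ 13 with x1, x2 non-negative integers. The relaxation optimum is x’ = (13/7, 0) ≈ (1.86, 0) with objective value 39. Rounding up to (2, 0) is infeasible; rounding down to (1, 0) is feasible but gives only 21, while the integer optimum is x* = (0, 3) with objective value 33. Rounding can therefore miss the optimum by a wide margin.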
Branch-and-Bound
Suppose the linear relaxation’s optimal solution is x’ = 3.6, violating an integer constraint. Rounding up (forcing x = 4) or rounding down (forcing x = 3) adds an equality constraint, which eliminates too many feasible points. Instead, we should branch with the inequality constraints x ≤ 3 and x ≥ 4.
Branch
By introducing these inequality constraints, we branch the problem into two linear programs; the integer points contained in these two programs, taken together, are exactly the integer points of the original integer program. Next, we solve the two sub-linear programs:
- If both of their optimal solutions are feasible for the integer program, we compare them and choose the better one.
- If either optimal solution violates an integer constraint, we branch that subproblem again.
Eventually we compare all the integer-program-feasible solutions we have obtained (a small sketch of one branching step is given below).
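Here is a minimal sketch of one branching step (illustrative only, not the course's code), reusing the made-up knapsack data from above and assuming SciPy is available: the fractional variable is identified and the two child linear programs are solved by tightening its bounds.

```python
# Branch on one fractional variable by tightening its bounds (made-up data).
import math
from scipy.optimize import linprog

values, weights, capacity = [10, 7, 4, 3], [5, 4, 3, 2], 8

def solve_lp(bounds):
    """Solve the relaxation max v.x s.t. w.x <= capacity with the given variable bounds."""
    res = linprog(c=[-v for v in values], A_ub=[weights], b_ub=[capacity],
                  bounds=bounds)
    return (res.x, -res.fun) if res.success else (None, None)

# Root relaxation: find a variable that violates integrality.
x, z = solve_lp([(0, 1)] * len(values))
frac = next(i for i, xi in enumerate(x) if abs(xi - round(xi)) > 1e-6)
print(f"root: x = {x.round(3)}, z = {z:.2f}, branch on x{frac + 1}")

# Child 1: x_frac <= floor(x'_frac).  Child 2: x_frac >= ceil(x'_frac).
for lo, hi in [(0, math.floor(x[frac])), (math.ceil(x[frac]), 1)]:
    bounds = [(0, 1)] * len(values)
    bounds[frac] = (lo, hi)
    cx, cz = solve_lp(bounds)
    if cx is None:
        print(f"child with {lo} <= x{frac + 1} <= {hi}: infeasible")
    else:
        print(f"child with {lo} <= x{frac + 1} <= {hi}: x = {cx.round(3)}, z = {cz:.2f}")
```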
Branching tree
The algorithm produces a branching tree. Each node represents a subproblem (a linear program). Each time we branch on a variable, we create two child nodes.
Bounding
In general, as we branch to the next level (going deeper in the tree), each child’s feasible region is contained in its parent’s, so the objective value of a subproblem’s optimal solution gets smaller for a maximization problem (or larger for a minimization problem).
Once we obtain an integer-feasible solution, we get a bound: the optimal solution cannot be worse than this value. Whenever a subproblem’s relaxation is no better than the bound, we can stop branching on that part of the tree. Bounding allows us to solve fewer subproblems; a sketch of the whole procedure follows.
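Building on the branching sketch above, here is a compact depth-first branch-and-bound for the same made-up knapsack instance (an illustrative sketch, not the course's implementation). It keeps an incumbent integer solution and prunes any subproblem whose relaxation bound is no better; that pruning is exactly the bounding step.

```python
# Depth-first branch-and-bound with bounding for a tiny 0-1 knapsack (made-up data).
import math
from scipy.optimize import linprog

values, weights, capacity = [10, 7, 4, 3], [5, 4, 3, 2], 8
best = {"z": -math.inf, "x": None}            # incumbent: best integer-feasible solution so far

def branch_and_bound(bounds):
    res = linprog(c=[-v for v in values], A_ub=[weights], b_ub=[capacity],
                  bounds=bounds)
    if not res.success:                        # subproblem infeasible: prune
        return
    z_bound = -res.fun                         # relaxation value = upper bound for this subtree
    if z_bound <= best["z"] + 1e-9:            # bound no better than incumbent: prune
        return
    frac = [i for i, xi in enumerate(res.x) if abs(xi - round(xi)) > 1e-6]
    if not frac:                               # integer-feasible: update the incumbent
        best["z"], best["x"] = z_bound, [int(round(xi)) for xi in res.x]
        return
    i = frac[0]                                # branch on the first fractional variable
    for lo, hi in [(bounds[i][0], math.floor(res.x[i])),
                   (math.ceil(res.x[i]), bounds[i][1])]:
        child = list(bounds)
        child[i] = (lo, hi)
        branch_and_bound(child)                # explore descendants first (depth-first)

branch_and_bound([(0, 1)] * len(values))
print("optimal value:", best["z"], "solution:", best["x"])
# Expected: value 14 with solution [1, 0, 1, 0].
```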
Remarks
When there are several candidate nodes to branch on, there are two common strategies:
- Branch on the node whose relaxation has the best objective value; its subtree is likely to contain the eventual optimal solution.
- Branch on descendants before non-descendants (i.e. go depth-first) to obtain an integer-program-feasible solution faster.
Branch-and-bound is an exact algorithm: it is guaranteed to find an optimal solution if one exists. The disadvantage is that it is an exponential-time algorithm: with n integer variables, the number of subproblems is roughly proportional to 2^n.
Heuristic Algorithms
For NP-hard problems (e.g. knapsack problems), no efficient (polynomial-time) exact algorithm that finds an optimal solution is known. In this case, people try to find good heuristic algorithms, which report a near-optimal solution in a short (polynomial) amount of time.
Intuitively, a greedy algorithm serves as a good example for the knapsack problem: first sort all items by their value-to-weight ratio in descending order, then go through them in that order and select each item that still fits (a sketch is given below).
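Here is a minimal sketch of this greedy heuristic for the 0-1 knapsack, using the same made-up data as in the earlier sketches (not the course's code):

```python
# Greedy heuristic for the 0-1 knapsack: take items by value-to-weight ratio.
def greedy_knapsack(values, weights, capacity):
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    chosen, total_value, remaining = [], 0, capacity
    for i in order:
        if weights[i] <= remaining:     # take the item only if it still fits
            chosen.append(i)
            remaining -= weights[i]
            total_value += values[i]
    return total_value, chosen

z_alg, items = greedy_knapsack([10, 7, 4, 3], [5, 4, 3, 2], 8)
print("greedy value:", z_alg, "items:", items)
# Expected: value 13 with items [0, 3], slightly below the optimum of 14.
```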
The optimality gap is at the heart of evaluating a heuristic algorithm: it is a natural measure of the quality / performance of a reported solution (below, z* is the optimal objective value and zALG is the objective value of the solution reported by the algorithm).
For a maximization problem:
- absolute error: z* - zALG
- percentage error: (z* - zALG) / z*

For a minimization problem:
- absolute error: zALG - z*
- percentage error: (zALG - z*) / z*
When z* is too hard to calculate, we may instead use an upper bound (for maximization) or a lower bound (for minimization) on the optimal objective value; this is easily obtained by solving the linear relaxation of the original integer program. Call this value zLR.
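For instance, continuing the made-up knapsack example: the greedy heuristic reported zALG = 13 while the linear relaxation gives zLR = 15.25. For a maximization problem with zALG ≥ 0 we have zALG ≤ z* ≤ zLR, so the true percentage error (z* - zALG) / z* is at most (zLR - zALG) / zLR. A small sketch:

```python
# Bound the optimality gap of the greedy solution without knowing z*.
z_alg = 13.0     # value reported by the greedy heuristic above
z_lr = 15.25     # optimal value of the linear relaxation (earlier sketch)

gap_bound = (z_lr - z_alg) / z_lr
print(f"the greedy solution is within {gap_bound:.1%} of optimal")
# Here the true gap is (14 - 13) / 14, about 7.1%, while the bound gives about 14.8%.
```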
My Certificate
For more on Branch-and-Bound & Heuristic Algorithms, please refer to the wonderful course here https://www.coursera.org/learn/operations-research-algorithms
I am Kesler Zhu, thank you for visiting my website. Check out more course reviews at https://KZHU.ai