The Fundamental Theorem of Linear Programming

A quick intuition and justification of why linear programming optima occur at extreme points.

Introduction

At the core of linear optimization lies a foundational principle that dramatically simplifies how we search for optimal solutions.

The Fundamental Theorem of Linear Programming
A linear programming problem asks us to optimize a linear objective function $c^\top x$ subject to linear constraints $Ax \le b$ and $x \ge 0$. If the feasible region has at least one extreme point and an optimal solution exists, then an optimal solution is attained at an extreme point (corner point) of the feasible region.
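To see the theorem in action, here is a minimal pure-Python sketch (the constraint data is illustrative): the candidate vertices of a 2D polytope are the pairwise intersections of constraint boundaries, and the best feasible one solves the LP.

```python
from itertools import combinations

# Feasible region: A x <= b, with x >= 0 folded in as extra rows.
A = [(1.0, 1.0), (1.0, 0.0), (0.0, 1.0), (-1.0, 0.0), (0.0, -1.0)]
b = [4.0, 3.0, 3.0, 0.0, 0.0]
c = (-1.0, -2.0)  # minimize c . x  (i.e. maximize x + 2y)

def intersect(i, j):
    """Solve a_i . x = b_i, a_j . x = b_j by Cramer's rule (None if parallel)."""
    (a1, a2), (a3, a4) = A[i], A[j]
    det = a1 * a4 - a2 * a3
    if abs(det) < 1e-12:
        return None
    return ((b[i] * a4 - a2 * b[j]) / det,
            (a1 * b[j] - b[i] * a3) / det)

def feasible(p):
    return all(ai[0] * p[0] + ai[1] * p[1] <= bi + 1e-9 for ai, bi in zip(A, b))

# Candidate corners: feasible intersections of constraint-boundary pairs.
vertices = [p for i, j in combinations(range(len(A)), 2)
            if (p := intersect(i, j)) is not None and feasible(p)]

best = min(vertices, key=lambda p: c[0] * p[0] + c[1] * p[1])
print(best, c[0] * best[0] + c[1] * best[1])  # → (1.0, 3.0) -7.0
```

Enumerating all vertices is exponential in general, which is why practical solvers (simplex, interior-point) search the corners far more cleverly; the sketch only illustrates where the optimum lives.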

A remarkable consequence of this theorem is its implication for non-convex sets:

Equivalence on Non-Convex Sets
Minimizing a linear objective over a compact (for instance, finite) non-convex set of points yields the same optimal value as minimizing it over the convex hull of those points, because the optimum over the hull occurs at an extreme point, and every extreme point belongs to the original set.
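A small numerical sanity check of this equivalence, with a hypothetical finite point set $S$: sampled points of $\operatorname{conv}(S)$ (random convex combinations) never beat the best point of $S$ itself.

```python
import random

random.seed(0)
# A finite (hence non-convex) set of points in the plane; values are arbitrary.
S = [(0.0, 0.0), (2.0, 0.5), (0.5, 2.0), (3.0, 3.0), (1.0, 1.0)]
c = (1.0, -2.0)

f = lambda p: c[0] * p[0] + c[1] * p[1]
best_on_S = min(f(p) for p in S)

# Sample conv(S) via random convex combinations of the points of S.
for _ in range(10_000):
    w = [random.random() for _ in S]
    t = sum(w)  # normalize so the weights sum to 1
    q = (sum(wi * p[0] for wi, p in zip(w, S)) / t,
         sum(wi * p[1] for wi, p in zip(w, S)) / t)
    assert f(q) >= best_on_S - 1e-9  # no hull point beats the best point of S

print(best_on_S)  # → -3.5, attained at (0.5, 2.0) ∈ S
```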


Intuition

Why do linear functions always push their optima to the boundary, specifically to the corners? We can build this intuition from the ground up:

1. One-Dimensional Perspective

Imagine minimizing a linear function $y = cx$ over a closed interval $[a, b]$. Because there is no curvature, the function is either strictly increasing ($c > 0$), strictly decreasing ($c < 0$), or constant ($c = 0$). Consequently, the minimum is always attained at one of the two endpoints, which are precisely the extreme points of the interval.
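A tiny sketch of the one-dimensional case (slope and interval chosen arbitrarily): sampling the interior never improves on the endpoints.

```python
# Minimizing f(x) = c*x over [a, b]: the minimizer is an endpoint.
c, a, b = 1.5, -2.0, 3.0
f = lambda x: c * x
grid = [a + k * (b - a) / 1000 for k in range(1001)]  # sample the interval
assert min(f(x) for x in grid) == min(f(a), f(b))
print(min(f(a), f(b)))  # → -3.0, attained at x = a since c > 0
```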

2. Two-Dimensional Perspective

If we move to two dimensions and take a bird’s-eye view, the level sets of $c^\top x$ are parallel lines. Sliding a level line in the direction $-c$ decreases the objective, and the last point at which the line still touches the feasible polygon is a corner (or, in the degenerate case, an entire edge, whose endpoints are still corners).
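The sliding-level-line picture can be sanity-checked numerically; the triangle below is a hypothetical feasible region, and every point of it is a convex combination of the three corners.

```python
import random

random.seed(1)
# A hypothetical triangular feasible region with corners V.
V = [(0.0, 0.0), (4.0, 0.0), (1.0, 3.0)]

def random_point_in_triangle():
    """Random convex combination of the corners (a point of the triangle)."""
    w = sorted([0.0, random.random(), random.random(), 1.0])
    lam = [w[1] - w[0], w[2] - w[1], w[3] - w[2]]  # nonnegative, sums to 1
    return (sum(l * v[0] for l, v in zip(lam, V)),
            sum(l * v[1] for l, v in zip(lam, V)))

def corner_is_optimal(c, samples=200):
    """True if no sampled triangle point beats the best corner for direction c."""
    f = lambda p: c[0] * p[0] + c[1] * p[1]
    corner_min = min(f(v) for v in V)
    return all(f(random_point_in_triangle()) >= corner_min - 1e-9
               for _ in range(samples))

# Whichever direction we slide the level line in, a corner wins.
assert all(corner_is_optimal((random.uniform(-1, 1), random.uniform(-1, 1)))
           for _ in range(100))
```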


Justification

To rigorously justify why minimizing a linear objective over a non-convex set $S$ is equivalent to minimizing over its convex hull $\operatorname{conv}(S)$, we can rely on established optimization theory.

Step 1: Definition of the Convex Hull

The convex hull $\operatorname{conv}(S)$ is the smallest convex set containing $S$; equivalently, it is the set of all convex combinations of points of $S$. Crucially, every extreme point of $\operatorname{conv}(S)$ is a point of the original set $S$.
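For a finite set $S = \{x_1, \dots, x_k\}$, the convex-combination characterization reads:

```latex
\operatorname{conv}(S) \;=\; \Bigl\{\, \sum_{i=1}^{k} \lambda_i x_i \;:\; \lambda_i \ge 0,\ \sum_{i=1}^{k} \lambda_i = 1 \,\Bigr\}
```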

Step 2: Bauer’s Maximum Principle

Bauer’s Maximum Principle states that a continuous convex function on a nonempty compact convex set attains its maximum at an extreme point of that set. Since a linear function is both convex and concave, the principle applies to its maximum and, by negating the objective, to its minimum as well.
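A one-dimensional illustration of Bauer’s principle with a deliberately non-linear convex function (the interval and function are arbitrary choices): the maximum over the interval sits at an endpoint.

```python
# Bauer's principle in one dimension: a convex function on a compact
# interval attains its maximum at an endpoint (an extreme point).
f = lambda x: x * x          # convex, deliberately non-linear
a, b = -1.0, 2.0
grid = [a + k * (b - a) / 1000 for k in range(1001)]  # sample the interval
assert max(f(x) for x in grid) == max(f(a), f(b))
print(max(f(a), f(b)))  # → 4.0, attained at x = b
```

Note the asymmetry: the *maximum* of a convex function is pushed to the boundary, while its minimum may sit in the interior (here at $x = 0$); it is only for linear functions, which are convex and concave at once, that both extremes land on extreme points.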

Step 3: Synthesis

  1. By Bauer’s Principle (and the Fundamental Theorem of Linear Programming), the optimal value of a linear objective over $\operatorname{conv}(S)$ is attained at an extreme point of $\operatorname{conv}(S)$.
  2. Because every extreme point of $\operatorname{conv}(S)$ belongs to the original set $S$, some optimizer over the convex hull is itself an element of $S$.
  3. Therefore, minimizing a linear objective over $S$ directly, or over the “easier” convex space $\operatorname{conv}(S)$, yields the exact same optimal value.

Key References