A (Hopefully) Concise Introduction to the Simplex Algorithm

This writeup, which documents my learning of the simplex algorithm, focuses on the basic theory behind the algorithm rather than the algorithm itself. It is based on the first chapter of the book by Papadimitriou and Steiglitz, which I feel leaves many subtle questions unanswered or deferred. I hope this writeup complements the book by answering some of these questions explicitly and upfront. No worked example is provided here, but the book has many good ones. This web version was produced with the LaTeX-to-WordPress Python scripts from the blog in theory.

— 1.1. Equivalence Between Forms of Linear Programming Problems —

There are three common forms of Linear Programming (LP) problems: general, standard, and canonical. This subsection introduces the formulations and shows that they are equivalent, in the sense that each form can be converted into any other without making the problem much larger.

General LP. Let {x \in \mathbb{R}^n}. Let {N, \overline{N}} be index sets of elements of {x} such that {x_j \ge 0} for {j \in N} and {x_j \gtrless 0} (that is, {x_j} can be any real number) for {j \in \overline{N}}. Let {A} be an {m \times n} integer matrix with row vectors denoted {\{a_i\}} and column vectors denoted {\{A_j\}}. Let {b \in \mathbb{R}^m}. Let {M} be the set of indices of the rows corresponding to equality constraints (that is, there is a constraint of the form {a_i^Tx = b_i} for {i \in M}), and let {\overline{M}} be the set of indices of the rows corresponding to inequality constraints ({a_i^Tx \ge b_i} for {i \in \overline{M}}). Finally, let {c} be an integer {n}-vector. An instance of general LP is defined as

\displaystyle  \begin{array}{ll} \min c^T x, \textrm{ subject to}& \begin{array}{ll} a_i^Tx = b_i & i \in M,\\ a_i^Tx \ge b_i & i \in \overline{M},\\ x_j \ge 0 & j \in N,\\ x_j \gtrless 0 & j \in \overline{N}. \end{array} \end{array} \ \ \ \ \ (1)

Here {c^Tx} denotes the inner product of {c} and {x}.

Canonical LP. An instance has the form

\displaystyle  \begin{array}{ll} \min c^T x, \textrm{ subject to}& \begin{array}{l} Ax \ge b, \;\; x \ge 0. \end{array} \end{array} \ \ \ \ \ (2)

Standard LP. An instance has the form

\displaystyle  \begin{array}{ll} \min c^T x, \textrm{ subject to}& \begin{array}{l} Ax = b, \;\; x \ge 0. \end{array} \end{array} \ \ \ \ \ (3)

General LP to canonical LP. The three common formulations of LP are equivalent in the sense that each can be turned into another with minimal modification (a polynomial-time reduction). It is clear that a canonical LP or a standard LP is a special case of a general LP. To go from general form to canonical form, we note that a constraint of the form {a_i^Tx = b_i} is equivalent to the pair {a_i^Tx \ge b_i} and {-a_i^Tx \ge -b_i}. This transformation gives us updated constraints {A'x \ge b'} with {x} intact. We then take care of each {x_j \gtrless 0} by letting {x_j = x_j' - x_j''} and adding the constraints {x_j' \ge 0, x_j'' \ge 0}. After making these substitutions ({c, x, A'} get updated in the process), we obtain the canonical form.

Canonical LP to standard LP. All we need to do here is turn {Ax \ge b} into a set of equality constraints. For each {a_i^Tx \ge b_i}, we introduce a surplus variable {s_i \ge 0} such that {a_i^Tx - s_i = b_i}. If we happen to work with {a_i^Tx \le b_i}, we instead introduce a slack variable {s_i \ge 0} such that {a_i^Tx + s_i = b_i}.

Standard LP to canonical LP. We may simply view the standard LP as a general LP and apply the conversion above to obtain a canonical one.
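
To make the canonical-to-standard step concrete, here is a minimal numpy sketch; the function name canonical_to_standard and the surplus-variable layout are mine, not from the book.

    import numpy as np

    def canonical_to_standard(A, b, c):
        # Convert min c^T x s.t. Ax >= b, x >= 0 into standard form
        # min c2^T x2 s.t. A2 x2 = b, x2 >= 0 by appending one surplus
        # variable per row: a_i^T x - s_i = b_i with s_i >= 0.
        m, n = A.shape
        A2 = np.hstack([A, -np.eye(m)])          # surplus columns carry -1
        c2 = np.concatenate([c, np.zeros(m)])    # surplus variables cost 0
        return A2, b, c2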

— 1.2. Basic Feasible Solutions —

With the equivalence between the different formulations of LP problems, an algorithm solving any one formulation solves all three; we work with the standard form from now on. For this formulation, we again assume {x \in \mathbb{R}^n} and that {A} is an {m \times n} matrix with {m < n}. Without loss of generality, we may assume that the matrix {A} has rank {m}; if not, some rows are redundant (being linear combinations of others) and can be removed. We require {m < n} since otherwise {Ax = b} either has a unique solution or is overdetermined, both trivial cases.

LP problems can be approached from two equivalent angles: algebraic and geometric. The constraints of the standard form, {Ax = b, x \ge 0}, define a feasible set {F} of possible vectors {x}; the algebraic angle works with basic feasible solutions (BFS), which are special elements of this set {F}. As we will see, a BFS {x} is always bounded, always exists when {F \ne \varnothing}, and admits a cost vector {c} for which {x} is the unique optimum among all feasible solutions in {F}.

Basic feasible solution. To obtain a basic feasible solution, we start with a basis {\mathcal B} of {m} linearly independent columns of {A}, {\mathcal B = \{A_{j_1}, \ldots, A_{j_m}\}} (we also write {\mathcal B} for the invertible {m \times m} matrix formed by these columns). Since {\mathcal B} is a basis, the equation

\displaystyle  \mathcal B (x_{j_1}, \ldots, x_{j_m})^T = b \ \ \ \ \ (4)

has a unique solution

\displaystyle  (x_{j_1}, \ldots, x_{j_m})^T = \mathcal B^{-1}b. \ \ \ \ \ (5)

The variables {x_{j_1}, \ldots, x_{j_m}} are called basic variables. For the rest of the variables, we simply set them to zero. It is easy to see that such an {x} is a solution of {Ax = b}. If also {x \ge 0}, the solution is called a basic feasible solution.
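
A minimal numpy sketch of this construction (the helper name basic_solution and the small example are illustrative):

    import numpy as np

    def basic_solution(A, b, basis):
        # Given m linearly independent column indices `basis`, solve
        # B x_B = b for the basic variables and zero out the rest.
        m, n = A.shape
        B = A[:, basis]                      # the m x m basis matrix
        x = np.zeros(n)
        x[basis] = np.linalg.solve(B, b)     # unique since B is invertible
        return x                             # a BFS iff x >= 0 everywhere

    A = np.array([[1.0, 1.0, 0.0], [0.0, 1.0, 1.0]])
    b = np.array([2.0, 1.0])
    x = basic_solution(A, b, [0, 1])         # -> [1, 1, 0], a BFS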

Property: A BFS is bounded. From (5) and Cramer's rule, each basic variable satisfies {x_{j_i} = \det(\mathcal B_i)/\det(\mathcal B)}, where {\mathcal B_i} is {\mathcal B} with its {i}-th column replaced by {b}. Since {\mathcal B} is an invertible integer matrix, {|\det(\mathcal B)| \ge 1}. Let {\alpha = \max\{|a_{ij}|\}, \beta = \max\{|b_i|\}}; expanding {\det(\mathcal B_i)} along the replaced column then gives {|x_j| \le m!\alpha^{m-1}\beta}.
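
The bound can be sanity-checked numerically on random integer bases; a small sketch (instance sizes, ranges, and tolerance are my choices):

    import numpy as np
    from math import factorial

    rng = np.random.default_rng(0)
    m = 4
    for _ in range(1000):
        B = rng.integers(-5, 6, size=(m, m)).astype(float)
        b = rng.integers(-5, 6, size=m).astype(float)
        if abs(np.linalg.det(B)) < 0.5:      # singular: not a basis, skip
            continue
        x = np.linalg.solve(B, b)
        alpha, beta = np.abs(B).max(), np.abs(b).max()
        bound = factorial(m) * alpha ** (m - 1) * beta
        assert np.all(np.abs(x) <= bound + 1e-9)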

Property: A BFS always exists when the feasible set is nonempty. Note that the feasible set {F} given by {Ax = b, x \ge 0} may well be empty in practice; here we assume {F \ne \varnothing} and show that at least one BFS then exists. First observe that if {b = 0}, then {x = 0} is a BFS; we may therefore assume {b \ne 0}. Since {F \ne \varnothing}, the system {Ax = b} has a solution {x \ge 0}. Take a feasible solution {x} with the greatest number of zero components and assume its {t} nonzero components come first:

\displaystyle \nonumber x_1, \ldots, x_t > 0, \quad x_{t+1}, \ldots, x_n = 0. \ \ \ \ \ (6)

We now show that if the first {t} columns of {A} are linearly dependent, then we can always produce an {x'} with more zero components. Since {A_1, \ldots, A_t} are linearly dependent, there is a nonzero {t}-vector {d} with

\displaystyle \nonumber \sum_{j = 1}^t d_jA_j = 0. \ \ \ \ \ (7)

On the other hand, we have

\displaystyle \nonumber \sum_{j = 1}^t x_jA_j = b. \ \ \ \ \ (8)

We may scale {d} such that for some {1 \le k \le t}, {x_k = d_k} and {x_j \ge d_j} for all {j \ne k}. This gives us

\displaystyle \nonumber \sum_{j = 1}^t (x_j - d_j)A_j = b, \ \ \ \ \ (9)

which means that the vector {(x_1 - d_1, \ldots, x_t - d_t, 0, \ldots, 0)} is a feasible solution with at least one more zero component. This contradicts the choice of {x}, so the {t} columns must be linearly independent, and hence {t \le m} since the rank of {A} is {m}. We may then take these {t} columns and, since {A} has rank {m}, add {m - t} further columns of {A} to obtain a basis {\mathcal B} for which {x} is the basic solution. Thus, a BFS always exists.

Property: For every BFS {x} there exists a cost vector {c} such that {x} is the unique optimum. Given a basic feasible solution {x}, a cost vector {c} can always be chosen so that {c^Tx} yields the lowest cost over all points of {F}. For this task, we simply pick {c_j = 0} for every {x_j} corresponding to a basis column and {c_j = 1} otherwise; therefore {c^Tx = 0}. Any feasible solution that differs from {x} must have some nonbasic component positive (if all its nonbasic components were zero, it would solve {\mathcal Bx = b} and hence equal {x}), making its cost positive as well.

Discussion on boundedness of the feasible set. Although any BFS is bounded, the feasible set itself may be unbounded. For example, {x_1 - x_2 = 1, x_1, x_2 \ge 0} has an unbounded feasible region. Such a problem has the property that there exists a nonzero {x} with {Ax = 0, x \ge 0}. Nevertheless, in such cases an optimal solution may still exist for specific {c}: with cost vector {c = (1, 1)} in the unbounded example just mentioned, {x_1 = 1, x_2 = 0} is the optimal solution. We shall later see how to detect that a feasible set is unbounded.
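
This behavior can be observed with an off-the-shelf solver; a hedged sketch using scipy.optimize.linprog (assuming SciPy's convention that status 3 means unbounded and that variables default to {x \ge 0}):

    import numpy as np
    from scipy.optimize import linprog

    # Feasible set: x1 - x2 = 1, x1, x2 >= 0 (unbounded along x1 = x2 + 1).
    A_eq = np.array([[1.0, -1.0]])
    b_eq = np.array([1.0])

    # c = (1, 1): a finite optimum exists despite the unbounded set.
    res = linprog([1, 1], A_eq=A_eq, b_eq=b_eq)
    print(res.x, res.fun)            # -> [1. 0.] 1.0

    # c = (-1, -1): the cost goes to -infinity, exposing unboundedness.
    res = linprog([-1, -1], A_eq=A_eq, b_eq=b_eq)
    print(res.status)                # -> 3 (problem is unbounded)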

— 1.3. The Geometry of Linear Programs —

Another angle of attack for LP problems is the geometric point of view. More specifically, we may view a bounded feasible set as a polytope and vice versa, as explained below. Moreover, each vertex of the polytope corresponds to a BFS.

Bounded polytope {\Leftrightarrow} bounded feasible set. We now explore the relationship between a bounded feasible set {F} and a polytope. We use polytope to mean the bounded intersection of a set of halfspaces, which is always convex, and we focus specifically on polytopes of the form {Ax \le b}, {x \ge 0}. We may add {m} slack variables to obtain a standard-form LP: {A'x' = b, x' \ge 0}. The feasible set of this LP must be bounded, since the original variables in {x} are bounded and each newly added variable depends only on the original variables and is therefore bounded as well.

Conversely, suppose we start from a bounded feasible set of a standard-form LP. We may take the last {m} columns of {A}, assume that they form a basis, and further assume that this basis is the identity matrix. Then, in terms of the first {n - m} variables, we have {x_i = b_i - \sum_{j = 1}^{n-m} a_{ij}x_j} for {n - m + 1 \le i \le n}. Since we also require {x_i \ge 0}, we obtain the halfspaces

\displaystyle \nonumber \begin{array}{ll} x_i \ge 0, & i = 1, \ldots, n - m \\ b_i - \sum_{j = 1}^{n-m} a_{ij}x_j \ge 0, & i = n - m + 1, \ldots, n \end{array} \ \ \ \ \ (10)

Since the original LP has a bounded feasible set, the corresponding polytope must also be bounded.

1-1 correspondence between vertices of the polytope and BFSs of {F}. ({\hat x \in P} is a vertex {\Rightarrow} {x} is a BFS). Given a polytope {P} and a vertex {\hat x}, the discussion above shows that the corresponding {x} is a feasible solution. Arrange {x} so that its first {t} components are nonzero (positive). For {x} to be a BFS, the corresponding columns of {A} must be linearly independent; suppose they are not. Then there exists {d}, with not all of {d_1, \ldots, d_t} zero, such that

\displaystyle  \sum_{j = 1}^{t}d_jA_j = 0. \ \ \ \ \ (11)

Since {x} is a feasible solution, we also have {\sum_{j = 1}^{t}x_jA_j = b}. Multiplying (11) by {\pm \theta} and subtracting from this equation gives

\displaystyle  \sum_{j = 1}^{t}(x_j \pm \theta d_j) A_j = b. \ \ \ \ \ (12)

This shows that for small enough {\theta > 0}, {x' = x + \theta d} and {x'' = x - \theta d} (with {d} padded by zeros in the last {n - t} components) are also feasible solutions in {F}. However, the transformation between {P} and {F} is affine, so {\hat x} must be a strict convex combination of {\hat x'} and {\hat x''}. This contradicts {\hat x} being a vertex of {P}. (To see why, note that there exists a hyperplane {H = \{h^Tx = g\}} with {h^Tp \le g} for all {p \in P} and {H \cap P = \{\hat x\}}; that is, {H} touches {P} exactly at {\hat x}, with the rest of {P} strictly on one side. Since {\hat x', \hat x'' \in P} differ from {\hat x}, we have {h^T\hat x' < g} and {h^T\hat x'' < g}, so no strict convex combination of them can satisfy {h^T\hat x = g}.)

({x} is a BFS {\Rightarrow} {\hat x} is a vertex). Since {x} is a BFS, there exists a basis {\mathcal B} for {x} such that {x} is the unique solution to {\mathcal Bx = b} with the nonbasic variables set to zero. Suppose {\hat x} is not a vertex; then it can be expressed as a strict convex combination of two other points {\hat x', \hat x'' \in P}. The corresponding feasible solutions {x', x''} must have {x_j' = x_j'' = 0} wherever {x_j = 0}, since otherwise one of {x_j', x_j''} would be negative, which is not allowed for a feasible solution. Since the nonzero components of {x} are those corresponding to {\mathcal B}, both {x'} and {x''} are solutions of {\mathcal Bx = b} and must therefore equal {x}, a contradiction.

Degenerate BFS. A BFS is called degenerate if it has more than {n - m} zero components, i.e., fewer than {m} nonzero components. In particular, when a single BFS has multiple bases, that BFS is degenerate.

Existence of an optimal BFS for an LP with bounded {F}. This is easy to see through the polytope. For a given cost vector {c}, we have {c^Tx = c'^T\hat x} (where {\hat x} is the corresponding {(n-m)}-vector in the polytope {P}) for some {c'}. For each fixed constant {d}, {c'^T\hat x = d} defines a hyperplane in {\mathbb{R}^{n-m}}. Since {P} is bounded and convex, the minimum of {d} over {P} is attained on the boundary of {P}, and in particular at some vertex {\hat x} of {P}; the corresponding {x} is an optimal BFS.

— 1.4. The Simplex Algorithm —

So far we have seen that if an LP problem has an optimal solution, then it has an optimal BFS, as well as an optimal vertex in the corresponding polytope {P} (when the feasible set is bounded). The basic idea behind the simplex algorithm is to move from one BFS to another, eventually reaching an optimal one (we defer the question of obtaining an initial BFS to the end, since the answer itself uses the simplex algorithm). The rest of this section briefly explains the important ingredients of the algorithm.

Moving from BFS to BFS. The operation of moving from one BFS to another is called pivoting. Suppose we want to move from a BFS {x} to a new BFS {x'}. Let {\mathcal B} be the basis for {x}; we may assume that {\mathcal B} is the identity matrix (via elimination, which does not change the LP problem) and that its columns are the first {m} columns of {A}. Then any column {A_j} of {A} not in {\mathcal B} can be expressed as a linear combination of {\mathcal B = \{A_1, \ldots, A_m\}}: {A_j = \sum_{i = 1}^m a_{ij}A_i}. Since {\sum_{i = 1}^m x_iA_i = b}, we have for any {\theta \in \mathbb{R}},

\displaystyle  \sum_{i = 1}^m (x_i - \theta a_{ij})A_i + \theta A_j = b. \ \ \ \ \ (13)

There are a couple of possibilities here. If some {a_{ij} > 0}, we can increase {\theta} until a first index {i} satisfies {x_i - \theta a_{ij} = 0}; that is, {\theta = \min_{a_{ij} > 0} x_i/a_{ij}} (the ratio test). Then {A_i} is the column to be replaced, and the new BFS is {x' = (x_1 - \theta a_{1j}, \ldots, 0, \ldots, x_m - \theta a_{mj}, 0, \ldots, 0, x_j' = \theta, 0, \ldots, 0)}, where the zero appears in position {i}. The new basis is linearly independent since {a_{ij} \ne 0}. The other case is that all {a_{ij} \le 0}. Then {\theta} may be made arbitrarily large while keeping a feasible solution, which means {F} is unbounded.
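
A minimal numpy sketch of this ratio-test step, under the same simplifying assumption that the basis is the identity in the first {m} columns (the helper name pivot and all variable names are mine):

    import numpy as np

    def pivot(A, x, j):
        # One pivoting step: nonbasic column j enters the basis; the
        # basic variables are assumed to sit in positions 0..m-1.
        m = A.shape[0]
        col = A[:, j]                        # the coefficients a_{ij}
        if np.all(col <= 0):
            return None                      # theta unbounded: F is unbounded
        ratios = np.where(col > 0, x[:m] / np.where(col > 0, col, 1.0), np.inf)
        i = int(np.argmin(ratios))           # first basic variable to hit zero
        theta = ratios[i]
        x_new = x.copy()
        x_new[:m] -= theta * col             # every basic variable moves
        x_new[i] = 0.0                       # A_i leaves the basis exactly
        x_new[j] = theta                     # A_j enters at value theta
        return x_new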

Choosing a profitable column {A_j}. For each candidate column {A_j}, we may compute the corresponding new BFS {x'} (when some {a_{ij} > 0}) and evaluate the cost {c^Tx'}. If the cost decreases compared with {c^Tx}, then {A_j} is profitable. As we examine all adjacent BFSs, there are three possibilities (a sketch follows the list):

  1. The cost can be driven arbitrarily low along some column {A_j}, meaning the LP problem has {-\infty} as its infimum. This can only happen if {F} is unbounded. In fact, this gives a way to detect whether a given feasible set {F} is unbounded: simply take {c = (-1, \ldots, -1)}.
  2. No adjacent BFS is profitable, so {x} is optimal.
  3. Some new BFS {x'} gives a better cost.
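
The naive search just described can be written on top of the pivot sketch above; again a hedged sketch (best_pivot is my name, A, c, x are numpy arrays, and the tolerance is arbitrary):

    def best_pivot(A, c, x):
        # Try every nonbasic column, compute the new BFS with `pivot`,
        # and keep the cheapest improvement (the three cases above).
        m, n = A.shape
        best = None
        for j in range(m, n):                # nonbasic columns only
            reduced = c[j] - c[:m] @ A[:, j]  # cost change per unit theta
            x_new = pivot(A, x, j)
            if x_new is None:
                if reduced < 0:
                    return 'unbounded', j    # cost -> -infinity (case 1)
                continue                     # F unbounded, but not profitable
            if best is None or c @ x_new < c @ best:
                best = x_new
        if best is None or c @ best >= c @ x - 1e-12:
            return 'optimal', x              # no profitable column (case 2)
        return 'moved', best                 # strictly better BFS (case 3)
        # (Iterating further would require re-expressing A in the new basis.)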

The issue of cycling. As we move from BFS to BFS, there is the potential problem of returning to a BFS visited earlier. Note that this can only happen if the BFS is degenerate: cycling can only occur while we stay at the same vertex of the corresponding polytope {P}, because from a non-optimal vertex there is always an adjacent vertex with better cost (or the cost is unbounded below along an edge from the current vertex, in which case the algorithm terminates), and once we move to a new BFS with strictly better cost we can never return to the originating BFS.

There are various methods to prevent cycling, and they are of practical importance. However, since all known deterministic pivoting rules have exponential worst-case behavior, the naive method of remembering all bases used at the current BFS is, in theory, no worse than these methods.

Getting a BFS to start with. We have now established that, given an initial BFS, the algorithm finds an optimal BFS if one exists. To begin, however, we must first obtain some BFS. To do this, we first flip the signs of equations in {Ax = b} as needed so that {b \ge 0}. We then add {m} artificial variables {x_{n+1}, \ldots, x_{n+m}} and use the temporary cost vector {c' = (0, \ldots, 0, 1, \ldots, 1)^T}, with the ones in the last {m} positions (we only want some BFS of the original {F}, so the actual cost vector {c} plays no role here). For the new system, {x' = (0, \ldots, 0, x_{n+1} = b_1, \ldots, x_{n+m} = b_m)^T} is clearly a BFS. We run the simplex algorithm to drive the cost {c'^Tx'} to zero, which is possible as long as the original {F} is nonempty; at that point {x_{n+1} = \cdots = x_{n+m} = 0}, and the first {n} components of {x'} form a BFS of the original system. We can then return to the original problem.
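
A minimal sketch of this phase-one construction (the helper phase_one and its layout are illustrative):

    import numpy as np

    def phase_one(A, b):
        # Build the auxiliary LP whose optimal BFS (of cost zero)
        # yields a starting BFS for the original problem.
        m, n = A.shape
        sign = np.where(b < 0, -1.0, 1.0)
        A2, b2 = sign[:, None] * A, sign * b     # flip rows so b2 >= 0
        A_aux = np.hstack([A2, np.eye(m)])       # m artificial columns
        c_aux = np.concatenate([np.zeros(n), np.ones(m)])
        x0 = np.concatenate([np.zeros(n), b2])   # obvious BFS: artificials = b2
        return A_aux, b2, c_aux, x0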

Geometric aspects of pivoting. The discussion above already hints at what happens in the polytope as we move from BFS to BFS. One of two things happens during a move: 1. we move from a vertex of {P} to a new vertex with better cost; 2. only the basis changes (the solution vector {x} stays the same), in which case we remain at the same vertex of {P}.

Final words on the simplex algorithm. The practical upshot of this theory is that we can use essentially any of the numerous published pseudocode versions or full-blown simplex packages to solve any of the LP variations discussed in this chapter, without worrying that they will produce different results: they are essentially the same algorithm, differing only in speed. Keep in mind that all known deterministic pivoting rules for the simplex algorithm have worst-case time complexity exponential in the number of variables.
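
For instance, a canonical-form problem can be handed to SciPy directly (a sketch; note that linprog expects {\le} rows and defaults to {x \ge 0}):

    from scipy.optimize import linprog

    # min x1 + 2 x2  subject to  x1 + x2 >= 1, x >= 0 (canonical form).
    # linprog takes <= rows, so the constraint is passed as -x1 - x2 <= -1.
    res = linprog(c=[1, 2], A_ub=[[-1, -1]], b_ub=[-1])
    print(res.x, res.fun)                    # -> [1. 0.] 1.0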

— 1.5. Primal-Dual Formulation and Its Basic Properties —

As with many mathematical problems, it is possible to define problems “dual” to an LP problem in various forms. Study of this aspect of LP problems, besides revealing interesting structural properties, has many implications and applications. The primal-dual relationship for LP in general form is defined as

\displaystyle  \begin{array}{lcccl} {\rm primal} && \textrm{(index sets)} && {\rm dual} \\ \min c^Tx && && \max y^Tb \\ a_i^Tx = b_i && i \in M && y_i \gtrless 0 \\ a_i^Tx \ge b_i && i \in \overline{M} && y_i \ge 0 \\ x_j \ge 0 && j \in N && y^T A_j \le c_j \\ x_j \gtrless 0 && j \in \overline{N} && y^T A_j = c_j \end{array} \ \ \ \ \ (14)

If we view the standard LP as a special case of general LP ({\overline{M} = \overline {N} = \varnothing}), then we have the following

\displaystyle  \begin{array}{lcccl} {\rm primal} && \textrm{(index sets)} && {\rm dual} \\ \min c^Tx && && \max y^Tb \\ a_i^Tx = b_i && i \in M && y_i \gtrless 0 \\ x_j \ge 0 && j \in N && y^T A_j \le c_j \\ \end{array} \ \ \ \ \ (15)

Equivalently, we can express it in a more familiar way

\displaystyle  \begin{array}{rl} {\rm primal:}& \min c^Tx \quad \textrm{subject to }\quad Ax = b, x \ge 0 \\ {\rm dual:}& \max y^Tb \quad \textrm{subject to }\quad y^TA \le c^T\\ \end{array} \ \ \ \ \ (16)

For canonical form ({M = \overline {N} = \varnothing}), we have

\displaystyle  \begin{array}{rl} {\rm primal:}& \min c^Tx \quad \textrm{subject to }\quad Ax \ge b, x \ge 0 \\ {\rm dual:}& \max y^Tb \quad \textrm{subject to }\quad y^TA \le c^T, y \ge 0\\ \end{array} \ \ \ \ \ (17)

Property: Dual of dual is the primal. This is clear by observing that we may express the dual problem in (14) as

\displaystyle  \begin{array}{lll} \min y^T(-b) & \\ y_i \gtrless 0 & i \in M & \\ y_i \ge 0 & i \in \overline{M} & \\ y^T(-A_j) \ge (-c_j) & j \in N & \\ y^T(-A_j) = (-c_j) & j \in \overline{N} & \end{array} \ \ \ \ \ (18)

Taking the dual of (18) according to the definition in (14) yields the primal. Since the standard- and canonical-form primal-dual formulations are special cases of the general form, the same property holds for (16) and (17). For the rest of the section we work mainly with the standard form.

Property: Weak duality. Weak duality states that for any feasible solutions {x, y} of the primal and dual LP problems respectively, the following holds:

\displaystyle  c^Tx \ge y^TAx = y^Tb. \ \ \ \ \ (19)

The first inequality is obtained by multiplying both sides of {y^TA \le c^T} by {x} on the right (the direction is preserved since {x \ge 0}), and the equality by multiplying both sides of {Ax = b} by {y^T} on the left.

Property: Strong duality. It turns out that equality holds throughout (19) when {x, y} are optimal solutions of the respective primal and dual LP problems. This is called strong duality. To show it we will need Farkas' lemma, which rests on the separating hyperplane theorem: given two convex sets {S_1, S_2} whose intersection is at most a single point, there exists a hyperplane such that no two points {s_1 \in S_1, s_2 \in S_2} lie in the same open halfspace created by the hyperplane. The variant of Farkas' lemma we use is the following.

Theorem 1 (Farkas' lemma) Let {A \in \mathbb{R}^{m \times n}} and {b \in \mathbb{R}^m}. Exactly one of the following holds:

  • (a) {\exists x \ge 0} such that {Ax = b};
  • (b) {\exists p \in \mathbb{R}^m} such that {p^TA \ge 0^T} and {p^Tb < 0}.

Proof. We need to show that (a) and (b) cannot hold simultaneously, and that {a \vee b}; for the latter, note that {a \vee b \Leftrightarrow (\neg a \Rightarrow b)}. Also note that {Ax = b} can be read as "{b} is a linear combination of the columns {A_j} of {A} with coefficients {x_j}: {\sum_{j = 1}^n x_jA_j = b}".

((a), (b) are exclusive). If both held, we would have {p^TAx = p^Tb}; but {p^TA \ge 0^T} and {x \ge 0} give {p^TAx \ge 0}, while {p^Tb < 0}, a contradiction.

({\neg a \Rightarrow b}). We are left to show that if (a) fails, then (b) holds. View the columns of {A} as {n} vectors in {\mathbb{R}^m}. Then {\{Ax : x \ge 0\}} is the cone {C} generated by these vectors, which is convex and contains the origin. By assumption, the point {b} (a single point, hence convex) does not lie in {C}. Moreover, the ray from the origin in the direction of {b} is again convex and meets {C} only at the origin (if some positive multiple of {b} were in {C}, so would be {b} itself). Thus there exists a hyperplane separating {C} from this ray. The hyperplane must pass through the origin, and we can pick its normal {p} so that {p^Tb < 0}. For this {p}, we have {p^TA_j \ge 0} for each column {j} of {A}. \Box
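
The dichotomy can be explored numerically: when alternative (a) is infeasible, a separating {p}, normalized so that {p^Tb = -1}, can itself be found by an LP. A sketch (the tiny instance and the normalization are my choices):

    import numpy as np
    from scipy.optimize import linprog

    A = np.array([[1.0, 1.0]])
    b = np.array([-1.0])                 # no x >= 0 can give x1 + x2 = -1

    # Alternative (a): is Ax = b, x >= 0 feasible? (zero objective)
    res_a = linprog(np.zeros(2), A_eq=A, b_eq=b)
    print(res_a.status)                  # -> 2, i.e. infeasible

    # Alternative (b): find p with p^T A >= 0, normalized so p^T b = -1.
    res_b = linprog(np.zeros(1), A_ub=-A.T, b_ub=np.zeros(2),
                    A_eq=b.reshape(1, -1), b_eq=[-1.0], bounds=[(None, None)])
    print(res_b.x)                       # -> [1.]: p = 1 separates b from C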

We are now ready to prove the strong duality theorem. Assume {y} is an optimal solution to the dual. Note that we may express the dual as {\min b^T(-y)} subject to {A^T(-y) \ge -c}. Let {J} be the set of (column) indices that are active at {y}, i.e., {A_j^T(-y) = -c_j}, equivalently {y^TA_j = c_j}, for {j \in J}. This set is nonempty, since otherwise {y} would be an interior point of the corresponding polyhedral representation of the feasible set and could be improved. There cannot be a vector {d \in \mathbb{R}^m} with {b^Td < 0} and {A_j^Td \ge 0} for all {j \in J}: otherwise {-y + \theta d} would be a better feasible solution for small {\theta > 0}, since the constraints {A_j^T(-y) \ge -c_j} for {j \notin J} are not binding and survive a small perturbation. By Farkas' lemma, applied to the matrix with columns {A_j}, {j \in J}, the other alternative must hold: there exist {x_j \ge 0} for {j \in J} such that {\sum_{j \in J} x_jA_j = b}. Setting {x_j = 0} for {j \notin J} gives {\sum_{j = 1}^n x_jA_j = \sum_{j \in J} x_jA_j = b}, making {x \in \mathbb{R}^n} a feasible solution to the primal problem. We then have

\displaystyle  c^Tx \overset{x_j = 0 \textrm{ for } j \notin J}{=} \sum_{j \in J} c_jx_j = \sum_{j \in J} y^TA_jx_j \overset{x_j = 0 \textrm{ for } j \notin J}{=} \sum_{j = 1}^n y^TA_jx_j = y^TAx = y^Tb. \ \ \ \ \ (20)

An immediate consequence of strong duality is that when the primal problem has an optimal solution, the dual must also have one, and can be neither unbounded nor infeasible; the same holds with the roles of primal and dual exchanged. From weak duality, it is impossible for both primal and dual to be unbounded; for the same reason, if either problem is unbounded, the other must be infeasible. It is also possible for both primal and dual to be infeasible; examples can be found in the book.

Property: Complementary slackness. Another direct consequence of the strong duality property is complementary slackness. From (19) we observe that the following holds

\displaystyle  u := (c^T - y^TA)x \ge 0, \quad v := y^T(Ax - b) \ge 0, \ \ \ \ \ (21)

with equality iff {x, y} are optimal. Indeed, {u + v = c^Tx - y^Tb}, which is nonnegative by weak duality and zero exactly at a pair of optimal solutions by strong duality; since {u, v \ge 0}, this forces {u = v = 0} (for the standard form, {v} vanishes identically because {Ax = b}). Complementary slackness holds for the other primal-dual formulations as well, as a direct consequence of strong duality.
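
As an illustration, one can solve a small standard-form primal and its dual with SciPy and check both properties numerically (the instance is mine):

    import numpy as np
    from scipy.optimize import linprog

    A = np.array([[1.0, 1.0, 0.0], [0.0, 1.0, 1.0]])
    b = np.array([2.0, 1.0])
    c = np.array([1.0, 2.0, 3.0])

    # Primal: min c^T x s.t. Ax = b, x >= 0 (standard form).
    primal = linprog(c, A_eq=A, b_eq=b)

    # Dual: max y^T b s.t. A^T y <= c, y free, posed as min of -b^T y.
    dual = linprog(-b, A_ub=A.T, b_ub=c, bounds=[(None, None)] * 2)

    print(primal.fun, -dual.fun)         # equal (3.0) by strong duality
    slack = c - A.T @ dual.x             # the vector c^T - y^T A
    print(slack @ primal.x)              # ~0.0: complementary slackness u = 0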
