Page 1: Introduction to Linear Programming (mjserna/docencia/grauA/T17/IntroLP.pdf)

Introduction to Linear Programming

Page 2

Linear Programming.

In a linear programming problem we are given a set of variables, an objective function, and a set of linear constraints, and we want to assign real values to the variables so as to:

▶ satisfy the set of linear constraints,

▶ maximize or minimize the objective function.

LP is of special interest because many combinatorial optimization problems can be reduced to LP: Max-Flow, assignment problems, matchings, shortest paths, MinST, . . .

Page 3

Example.

A company produces 2 products P1 and P2, and wishes to maximize the profit.

Each day, the company can produce x1 units of P1 and x2 units of P2. The company makes a profit of 1 for each unit of P1, and a profit of 6 for each unit of P2.

Due to supply limitations and labor constraints we have the following additional constraints: x1 ≤ 200, x2 ≤ 300 and x1 + x2 ≤ 400.

What are the best levels of production?

Page 4

We formulate this problem as a linear program:

Objective function: max(x1 + 6x2)

subject to the constraints: x1 ≤ 200

x2 ≤ 300

x1 + x2 ≤ 400

x1, x2 ≥ 0.

Recall that a linear equation in x1 and x2 defines a line in R2, and a linear inequality defines a half-space. The feasible region of this LP is the set of points (x1, x2) lying in the convex polygon defined by the linear constraints.
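This small LP can be checked with an off-the-shelf solver; the sketch below assumes SciPy is available and negates the objective, since linprog minimizes.

```python
from scipy.optimize import linprog

# Maximize x1 + 6*x2  <=>  minimize -(x1 + 6*x2).
c = [-1, -6]
A_ub = [[1, 0],   # x1 <= 200
        [0, 1],   # x2 <= 300
        [1, 1]]   # x1 + x2 <= 400
b_ub = [200, 300, 400]

res = linprog(c, A_ub=A_ub, b_ub=b_ub)  # default bounds: x1, x2 >= 0
print(res.x, -res.fun)   # optimum at (100, 300) with profit 1900
```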

Page 5

[Figure: two plots of the feasible region in the (x1, x2) plane, bounded by x1 ≤ 200, x2 ≤ 300 and x1 + x2 ≤ 400, with profit lines x1 + 6x2 = 600, 1200, 1900 sweeping toward the optimal vertex.]

Page 6

In a linear program the optimum, if it exists, is achieved at a vertex of the feasible region.

A LP may fail to have an optimum in two ways:

▶ The constraints are so tight that it is impossible to satisfy all of them; the LP is infeasible. For ex. x ≥ 2 and x ≤ 1.

▶ The constraints are so loose that the feasible region is unbounded and the objective can be made arbitrarily large; the LP is unbounded. For ex. max(x1 + x2) with x1, x2 ≥ 0.

Page 7

Higher dimensions.

The company produces products P1, P2 and P3, where each day it produces x1 units of P1, x2 units of P2 and x3 units of P3, and makes a profit of 1 for each unit of P1, a profit of 6 for each unit of P2 and a profit of 13 for each unit of P3. Due to supply limitations and labor constraints we have the following additional constraints: x1 ≤ 200, x2 ≤ 300, x1 + x2 + x3 ≤ 400 and x2 + 3x3 ≤ 600.

max(x1 + 6x2 + 13x3)

x1 ≤ 200

x2 ≤ 300

x1 + x2 + x3 ≤ 400

x2 + 3x3 ≤ 600

x1, x2, x3 ≥ 0.
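The same solver sketch (again assuming SciPy is available) extends to three variables and confirms the optimum of this LP.

```python
from scipy.optimize import linprog

# Maximize x1 + 6*x2 + 13*x3 by minimizing its negation.
c = [-1, -6, -13]
A_ub = [[1, 0, 0],   # x1 <= 200
        [0, 1, 0],   # x2 <= 300
        [1, 1, 1],   # x1 + x2 + x3 <= 400
        [0, 1, 3]]   # x2 + 3x3 <= 600
b_ub = [200, 300, 400, 600]

res = linprog(c, A_ub=A_ub, b_ub=b_ub)  # default bounds: all xi >= 0
print(res.x, -res.fun)   # best production plan (0, 300, 100), profit 3100
```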

Page 8

[Figure: the three-dimensional feasible polytope in (x1, x2, x3) space.]

Page 9

Standard form of a Linear Program.

INPUT: real numbers {ci} for 1 ≤ i ≤ n, {aji} for 1 ≤ j ≤ m and 1 ≤ i ≤ n, and {bj} for 1 ≤ j ≤ m.

OUTPUT: real values for the variables {xi} for 1 ≤ i ≤ n.

A linear programming problem is the problem of maximizing (minimizing) a linear function, the objective function P = ∑_{i=1}^n ci xi, subject to a finite set of linear constraints:

max ∑_{i=1}^n ci xi
subject to:
∑_{i=1}^n aji xi = bj for 1 ≤ j ≤ m
xi ≥ 0 for 1 ≤ i ≤ n

A LP is in standard form if the following are true:
1.- Non-negativity constraints for all variables.
2.- All remaining constraints are expressed as equality (=) constraints.
3.- All bj ≥ 0.

Page 10

Equivalent formulations of LP.

A LP has many degrees of freedom:

1. It can be a maximization or a minimization problem.

2. Its constraints could be equalities or inequalities.

3. The variables are often restricted to be non-negative, but they could also be unrestricted.

Most "real life" constraints are given as inequalities. The main reason to convert a LP into standard form is that the simplex algorithm starts from a LP in standard form. But the flexibility to change the formulation of the original LP can be useful.

Page 11

Transformations among LP forms

1.- To convert an inequality ∑_{i=1}^n ai xi ≤ b into an equality: introduce a slack variable s ≥ 0 to get ∑_{i=1}^n ai xi + s = b.
The slack variable measures the amount of "non-used resource."
Ex: x1 + x2 + x3 ≤ 40 ⇒ x1 + x2 + x3 + s1 = 40, so that s1 = 40 − (x1 + x2 + x3).

2.- To convert an inequality ∑_{i=1}^n ai xi ≥ b into an equality: introduce a surplus variable s ≥ 0 and get ∑_{i=1}^n ai xi − s = b.
The surplus variable measures the extra amount of used resource.
Ex: −x1 + x2 − x3 ≥ 4 ⇒ −x1 + x2 − x3 − s1 = 4.

Page 12

Transformations among LP forms (cont.)

3.- To deal with an unrestricted variable x (i.e. x can be positive or negative): introduce x+, x− ≥ 0, and replace all occurrences of x by x+ − x−.
Ex: x unconstrained ⇒ x = x+ − x− with x+ ≥ 0 and x− ≥ 0.

4.- To turn a max. problem into a min. problem: multiply the coefficients of the objective function by −1.
Ex: max(10x1 + 60x2 + 140x3) ⇒ min(−10x1 − 60x2 − 140x3).

Applying these transformations, we can reduce any LP into standard form, in which the variables are all non-negative, the constraints are equalities, and the objective function is to be minimized.
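These transformations are mechanical enough to script. The sketch below (Python, assuming SciPy and NumPy are available) adds one slack variable per inequality of the example LP max(10x1 + 60x2 + 140x3) and checks that the inequality form and the resulting standard (equality) form have the same optimal value.

```python
import numpy as np
from scipy.optimize import linprog

# Example (max) LP: max 10x1 + 60x2 + 140x3.
c = np.array([10.0, 60.0, 140.0])
A = np.array([[1, 0, 0],    # x1 <= 20
              [0, 1, 0],    # x2 <= 30
              [1, 1, 1],    # x1 + x2 + x3 <= 40
              [0, 1, 3]])   # x2 + 3x3 <= 60
b = np.array([20.0, 30.0, 40.0, 60.0])

m = A.shape[0]
# Standard form: negate the objective (max -> min) and add one
# zero-cost slack variable per constraint: [A | I] (x, s) = b.
c_std = np.concatenate([-c, np.zeros(m)])
A_eq = np.hstack([A, np.eye(m)])

ineq = linprog(-c, A_ub=A, b_ub=b)       # original inequality form
std = linprog(c_std, A_eq=A_eq, b_eq=b)  # equivalent standard form
print(-ineq.fun, -std.fun)               # same optimal value for both
```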

Page 13

Example:

max(10x1 + 60x2 + 140x3)

x1 ≤ 20

x2 ≤ 30

x1 + x2 + x3 ≤ 40

x2 + 3x3 ≤ 60

x1, x2, x3 ≥ 0.

min(−10x1 − 60x2 − 140x3)

x1 + s1 = 20

x2 + s2 = 30

x1 + x2 + x3 + s3 = 40

x2 + 3x3 + s4 = 60

x1, x2, x3, s1, s2, s3, s4 ≥ 0.

Page 14

Algebraic representation of LP

Let ~c = {ci}ni=1, ~x = {xi}ni=1, ~b = {bj}mj=1, and let A be the m × n matrix of the coefficients of the variables in the constraints.

A L.P. can be represented using matrices and vectors:

Page 15

Example.

max 100x1 + 600x2 + 1400x3

x1 ≤ 200

x2 ≤ 300

x1 + x2 + x3 ≤ 400

x2 + 3x3 ≤ 600

x1, x2, x3 ≥ 0.

A =
[ 1 0 0 ]
[ 0 1 0 ]
[ 1 1 1 ]
[ 0 1 3 ]

~x = (x1, x2, x3)^T; ~b = (200, 300, 400, 600)^T; ~c = (100, 600, 1400)^T.

Page 16

Given a LP

min ~c^T ~x
subject to
A~x ≤ ~b
~x ≥ 0

Any ~x that satisfies the constraints is a feasible solution. A LP is feasible if there exists a feasible solution; otherwise it is said to be infeasible. A feasible solution ~x* is an optimal solution if

~c^T ~x* = max (min) {~c^T ~x | A~x ≤ ~b, ~x ≥ ~0}

Page 17

The Geometry of LP

Consider:

max P = 2x + 5y

3x + y ≤ 9

y ≤ 3

x + y ≤ 4

x, y ≥ 0

[Figure: level lines 2x + 5y = c swept in the direction ~c = (2, 5) toward the maximizing vertex.]

Theorem: If there exists an optimal solution to max P = ~c^T ~x, then there exists one that is a vertex of the polytope.

Intuition of proof: If ~x is not a vertex, move in a non-decreasing direction until reaching a boundary. Repeat, following the boundary.

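On the small example above, the theorem can be checked by brute force: enumerate all intersections of pairs of constraint lines (including the axes x ≥ 0, y ≥ 0), keep the feasible ones, and evaluate P at each. This is an illustrative sketch, not how LPs are solved in practice.

```python
import itertools
import numpy as np

# Constraints of max P = 2x + 5y written as a_i . (x, y) <= b_i,
# including the non-negativity constraints -x <= 0 and -y <= 0.
A = np.array([[3, 1], [0, 1], [1, 1], [-1, 0], [0, -1]], dtype=float)
b = np.array([9, 3, 4, 0, 0], dtype=float)
c = np.array([2, 5], dtype=float)

vertices = []
for i, j in itertools.combinations(range(len(b)), 2):
    M = A[[i, j]]
    if abs(np.linalg.det(M)) < 1e-9:
        continue                      # parallel lines: no intersection point
    v = np.linalg.solve(M, b[[i, j]])
    if np.all(A @ v <= b + 1e-9):     # keep only feasible intersections
        vertices.append(v)

best = max(vertices, key=lambda v: c @ v)
print(best, c @ best)   # the optimum is attained at the vertex (1, 3), P = 17
```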

Page 18

The objective function vector and the cone of constraint vectors

Given a LP:

max P = ~c^T ~x
subject to
A~x ≤ ~b
~x ≥ 0

Let ~x* be an optimal solution for max P. The objective function vector is ~c = (c1, . . . , cn). The constraint cone is the set K = {∑_i λi ~ai | λi ≥ 0} generated by the row vectors {~ai} of A. Then ~c lies inside K.

[Figure: for the example P = x + 2y, the objective vector ~c lies inside the cone generated by the constraint vectors ~ai at the optimal vertex.]

Page 19

Example: Max-Flow

Given a network G = (V, ~E) with source and sink s, t ∈ V and capacities c : ~E → Z+, find the Max-Flow from s to t.

Consider the example in the figure below, where ~x^T = (x1, x2, x3) is the vector of flow amounts to maximize.

max x1 + x2 + x3

x1 ≤ 2

x1 + x2 ≤ 2

x2 ≤ 1

x2 + x3 ≤ 2

x3 ≤ 2

x1, x2, x3 ≥ 0

max ~c^T ~x
subject to
A~x ≤ ~b
~x ≥ 0

A =
[ 1 0 0 ]
[ 1 1 0 ]
[ 0 1 0 ]
[ 0 1 1 ]
[ 0 0 1 ]

~b = (2, 2, 1, 2, 2)^T; ~c = (1, 1, 1)^T.

Page 20

The polytope of feasible solutions for Max-Flow

Notice that the Max-Flow corresponds to ~x = (2, 0, 2).
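This can be confirmed with a solver (a sketch assuming SciPy is available): the optimum of the LP above is 4, attained at (2, 0, 2).

```python
from scipy.optimize import linprog

# Minimize -(x1 + x2 + x3) subject to the capacity constraints above.
c = [-1, -1, -1]
A_ub = [[1, 0, 0],   # x1 <= 2
        [1, 1, 0],   # x1 + x2 <= 2
        [0, 1, 0],   # x2 <= 1
        [0, 1, 1],   # x2 + x3 <= 2
        [0, 0, 1]]   # x3 <= 2
b_ub = [2, 2, 1, 2, 2]

res = linprog(c, A_ub=A_ub, b_ub=b_ub)  # default bounds: all xi >= 0
print(res.x, -res.fun)   # (2, 0, 2) with max flow value 4
```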

Page 21

The Simplex algorithm

LP can be solved efficiently: the simplex algorithm, George Dantzig (1947).

It uses hill-climbing: start at a vertex of the feasible polytope and move to an adjacent vertex with better objective value, until reaching a vertex that has no neighbor with a better objective value.

[Figure: the simplex algorithm walking from vertex to adjacent vertex along the edges of the feasible polytope in (x1, x2, x3).]

Page 22

Complexity of LP:

Input to LP: The number n of variables in the LP.

Simplex could be exponential in n: there exist specific inputs (the Klee-Minty cube) where the usual versions of the simplex algorithm may take an exponential number of steps on the path to the optimum (see Ch. 6 in Papadimitriou-Steiglitz, Combinatorial Optimization: Algorithms and Complexity).

In practice, the simplex algorithm is quite efficient and can find the global optimum (if certain precautions against cycling are taken).

It is known that simplex solves "typical" (random) problems in O(n^3) steps. Simplex is the main choice among engineers for solving LP.

But some software packages use interior-point algorithms, which guarantee poly-time termination.

Page 23

The Simplex algorithm: searching for the optimum can take long

Figure: from The Nature of Computation by Moore and Mertens

Page 24

LP Duality

The most important concept in LP. It allows proofs of optimality and approximation.

We want to obtain an upper bound (UB) on OPT(P):

max P = 2x1 + 3x2

4x1 + 8x2 ≤ 12

2x1 + x2 ≤ 3

3x1 + 2x2 ≤ 4

x1, x2 ≥ 0

2x1 + 3x2 ≤ 4x1 + 8x2 ≤ 12 ⇒ OPT(P) ≤ 12
2x1 + 3x2 ≤ (4x1 + 8x2)/2 ≤ 6 ⇒ OPT(P) ≤ 6
2x1 + 3x2 ≤ ((4x1 + 8x2) + (2x1 + x2))/3 ≤ 5 ⇒ OPT(P) ≤ 5

We can take linear combinations of the constraints to sharpen the bound for 2x1 + 3x2, using coefficients y1, y2, y3:

2x1 + 3x2 ≤ y1(4x1 + 8x2) + y2(2x1 + x2) + y3(3x1 + 2x2)

= (4y1 + 2y2 + 3y3)x1 + (8y1 + y2 + 2y3)x2

Page 25

LP Duality

From 2x1 + 3x2 ≤ (4y1 + 2y2 + 3y3) x1 + (8y1 + y2 + 2y3) x2, we require 4y1 + 2y2 + 3y3 ≥ 2 and 8y1 + y2 + 2y3 ≥ 3.

To get the tightest bound for max(2x1 + 3x2) it is equivalent to minimize the resulting upper bound 12y1 + 3y2 + 4y3, i.e. to solve the LP:

min 12y1 + 3y2 + 4y3

4y1 + 2y2 + 3y3 ≥ 2

8y1 + y2 + 2y3 ≥ 3

y1, y2, y3 ≥ 0,

which is called the dual D of the primal P.
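The primal and its dual can be checked numerically (a sketch assuming SciPy; the ≥ constraints of the dual are negated into ≤ form for linprog). Both optima coincide, as duality predicts.

```python
from scipy.optimize import linprog

# Primal: max 2x1 + 3x2 (negated, since linprog minimizes).
primal = linprog([-2, -3],
                 A_ub=[[4, 8], [2, 1], [3, 2]],
                 b_ub=[12, 3, 4])

# Dual: min 12y1 + 3y2 + 4y3 with the >= constraints rewritten as <=.
dual = linprog([12, 3, 4],
               A_ub=[[-4, -2, -3], [-8, -1, -2]],
               b_ub=[-2, -3])

print(-primal.fun, dual.fun)   # both equal the common optimum 4.75
```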

Page 26

Primal-Dual definition

Given a primal LP P

max ~c^T ~x
subject to
A~x ≤ ~b
~x ≥ 0

Define the dual D as

min ~b^T ~y
subject to
A^T ~y ≥ ~c
~y ≥ 0

In explicit notation:

max P = ∑_{j=1}^n cj xj
subject to
∑_{j=1}^n aij xj ≤ bi, i = 1, . . . , m
xj ≥ 0, j = 1, . . . , n

min D = ∑_{i=1}^m bi yi
subject to
∑_{i=1}^m aij yi ≥ cj, j = 1, . . . , n
yi ≥ 0, i = 1, . . . , m

Notice: the dual of the dual is the primal, so we can use either the maximization or the minimization problem as the primal.

Page 27

Example P/D for a given LP.

max P = x1 + x2

x1 ≤ 4

x2 ≤ 3

x1, x2 ≥ 0

Optimal solution: ~x* = (4, 3)

[Figure: the primal feasible region bounded by x1 = 4 and x2 = 3, with the objective direction x1 + x2 and optimum x* = (4, 3).]

min D = 4y1 + 3y2

y1 ≥ 1

y2 ≥ 1

y1, y2 ≥ 0

Optimal solution: ~y* = (1, 1)

[Figure: the dual feasible region, with the objective direction 4y1 + 3y2 and optimum y* = (1, 1).]

P-D Theorem: ~c^T ~x* = ~b^T ~y* ⇒ (1, 1) · (4, 3)^T = (4, 3) · (1, 1)^T = 7.

Page 28

Another example.

Given a LP:

min 7x1 + x2 + 5x3

x1 − x2 + 3x3 ≥ 10

5x1 + 2x2 − x3 ≥ 6

x1, x2, x3 ≥ 0.

Its dual:

max 10y1 + 6y2

y1 + 5y2 ≤ 7

−y1 + 2y2 ≤ 1

3y1 − y2 ≤ 5

y1, y2 ≥ 0.

~c^T ~x* = ~b^T ~y* ⇒ (7, 1, 5) · (x*1, x*2, x*3)^T = (10, 6) · (y*1, y*2)^T.

Page 29

Duality Theorem

The following theorem is proved in any course on OR; it says that the value of every feasible solution to the dual D bounds the optimum value of the primal P, and vice versa, with equality at the optimum.

Theorem (LP-duality theorem)

Let x* = (x*1, . . . , x*n) be a finite optimal solution for the primal P, and let y* = (y*1, . . . , y*m) be a finite optimal solution for the dual D. Then

∑_{j=1}^n cj x*j = ∑_{i=1}^m bi y*i.

Page 30

Consequences of the P-D Theorem

If P and D are the primal and the dual of a LP, then one of the four following cases occurs:

1. Both P and D are infeasible.

2. P is unbounded and D is infeasible.

3. D is unbounded and P is infeasible.

4. Both are feasible and there exist optimal solutions x∗ to Pand y∗ to D such that cT x∗ = bT y∗.

Page 31

Linear programming formulation of max-flow.

max fsa + fsb + fsc

fsa + fba = fad

fsc + fdc = fce

fsb = fbd + fba

fad + fbd = fdc + fde + fdt

fce + fde = fet

fsa ≤ 3; fsb ≤ 5; fdt ≤ 2; fsc ≤ 4

fba ≤ 9; fad ≤ 2; fbd ≤ 1; fet ≤ 5

fdc ≤ 1; fce ≤ 5; fde ≤ 1

fsa, fsb, fsc, fba, fad, fbd ≥ 0

fdc, fce, fde, fdt, fet ≥ 0.

[Figure: the flow network with vertices S, A, B, C, D, E, T; edge labels show flow/capacity: 2/2, 2/2, 5/5, 5/5, 1/1, 1/5, 0/9, 0/3, 2/4, 4/4, 1/2.]

Page 32

Example: The Max-Flow problem

The max-flow min-cut theorem is a special case of duality. Consider:

[Figure: network with vertices s, a, b, t and edge capacities c(s, a) = 3, c(s, b) = 2, c(a, b) = 1, c(a, t) = 1, c(b, t) = 3.]

max fsa + fsb

fsa ≤ 3

fsb ≤ 2

fab ≤ 1

fat ≤ 1

fbt ≤ 3

fsa − fab − fat = 0

fsb + fab − fbt = 0

fsa, fsb, fab, fat , fbt ≥ 0.
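As a numerical cross-check (assuming SciPy is available), the LP above can be fed to linprog, with the capacity constraints expressed as variable bounds and the conservation equations as equality constraints.

```python
from scipy.optimize import linprog

# Variables: (fsa, fsb, fab, fat, fbt); maximize fsa + fsb (negated).
c = [-1, -1, 0, 0, 0]
# Flow conservation at vertices a and b.
A_eq = [[1, 0, -1, -1, 0],   # fsa - fab - fat = 0
        [0, 1, 1, 0, -1]]    # fsb + fab - fbt = 0
b_eq = [0, 0]
# Capacity constraints become simple variable bounds.
bounds = [(0, 3), (0, 2), (0, 1), (0, 1), (0, 3)]

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(-res.fun)   # max flow value 4
```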

Page 33

The dual of the previous LP:

min 3ysa + 2ysb + yab + yat + 3ybt

ysa + ua ≥ 1

ysb + ub ≥ 1

yab − ua + ub ≥ 0

yat − ua ≥ 0

ybt − ub ≥ 0

ysa, ysb, yab, yat, ybt, ua, ub ≥ 0.

This dual LP defines the min-cut problem where, for x ∈ {a, b}, ux = 1 iff vertex x ∈ S, and yxz = 1 iff edge (x, z) ∈ cut (S, T).

By the LP-duality Theorem, the value of any optimal solution to max-flow equals the value of any optimal solution to min-cut.

Page 34

Integer Linear Programming (ILP)

Consider again the Min Vertex Cover problem: given an undirected G = (V, E) with |V| = n and |E| = m, we want to find S ⊆ V with minimal cardinality s.t. it covers all edges e ∈ E. This can be expressed as a linear program of the following kind. Let ~x ∈ {0, 1}^n be a vector s.t. ∀i ∈ V:

xi = 1 if i ∈ S, and xi = 0 otherwise.

Moreover we also add the constraint ∀(i, j) ∈ E: xi + xj ≥ 1 (1). Define the incidence matrix A of G as the m × n matrix, where

Aij = 1 if vertex j is an endpoint of edge i ∈ E, and Aij = 0 otherwise.

Therefore we can write (1) as: A~x ≥ ~1

Page 35

Integer Linear Programming

We can express the min VC problem as:

min ~1^T ~x
subject to
A~x ≥ ~1
~x ∈ (Z+ ∪ {0})^n,

where we have a new constraint: we require the solution to be integral.

Asking for the best possible integral solution of a LP is known as Integer Linear Programming:

Page 36

Integer Linear Programming

The ILP problem is defined: given A ∈ Z^{m×n} together with ~b ∈ Z^m and ~c ∈ Z^n, find an ~x that maximizes (minimizes) ~c^T ~x subject to:

min ~c^T ~x
subject to
A~x ≥ ~b
~x ∈ Z^n,

Big difference between LP and ILP: ellipsoid methods put LP in the class P, but ILP is NP-hard.
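A tiny example illustrates the gap between ILP and its LP relaxation for vertex cover. On the triangle K3 (a hypothetical 3-vertex instance, solved below with brute force for the ILP and SciPy's linprog for the relaxation), the LP attains 3/2 with all xi = 1/2, while the smallest integral cover has size 2.

```python
import itertools
from scipy.optimize import linprog

# Vertex cover on the triangle: vertices {0, 1, 2}, all three edges.
edges = [(0, 1), (1, 2), (0, 2)]
n = 3

# LP relaxation: min sum x_i  s.t.  x_i + x_j >= 1 per edge, 0 <= x <= 1.
A_ub = [[-(k in e) for k in range(n)] for e in edges]   # -(x_i + x_j) <= -1
lp = linprog([1] * n, A_ub=A_ub, b_ub=[-1] * len(edges),
             bounds=[(0, 1)] * n)

# ILP: brute force over all 0/1 assignments (fine for 3 vertices).
ilp = min(sum(x) for x in itertools.product([0, 1], repeat=n)
          if all(x[i] + x[j] >= 1 for i, j in edges))

print(lp.fun, ilp)   # 1.5 versus 2: the integrality gap of the relaxation
```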

Page 37

Solvers for LP

Due to the importance of LP and ILP as models for solving optimization problems, there is very active research going on to design new algorithms and heuristics to improve the running time for solving LP (algorithms) and ILP (heuristics).

There are a myriad of solver packages:

▶ GLPK: https://www.gnu.org/software/glpk/

▶ LP-SOLVE: http://www3.cs.stonybrook.edu/

▶ CPLEX: http://ampl.com/products/solvers/solvers-we-sell/cplex/

▶ GUROBI Optimizer: http://www.gurobi.com/products/gurobi-optimizer
