# Introduction to Linear Programming


• Introduction to Linear Programming

• Linear Programming.

In a linear programming problem we are given a set of variables, an objective function and a set of linear constraints, and we want to assign real values to the variables so as to:

- satisfy the set of linear constraints,

- maximize or minimize the objective function.

LP is of special interest because many combinatorial optimization problems can be reduced to LP: Max-Flow; Assignment problems; Matchings; Shortest paths; MinST; . . .

• Example.

A company produces two products, P1 and P2, and wishes to maximize its profit.

Each day the company can produce x1 units of P1 and x2 units of P2. The company makes a profit of 1 for each unit of P1 and a profit of 6 for each unit of P2.

Due to supply limitations and labor constraints we have the following additional constraints: x1 ≤ 200, x2 ≤ 300 and x1 + x2 ≤ 400.

What are the best levels of production?

• We formulate this problem as a linear program:

Objective function: max(x1 + 6x2)

subject to the constraints: x1 ≤ 200, x2 ≤ 300, x1 + x2 ≤ 400, x1, x2 ≥ 0.

Recall that a linear equation in x1 and x2 defines a line in R2, while a linear inequality defines a half-plane. The feasible region of this LP is the set of points (x1, x2) lying in the convex polygon defined by the linear constraints.
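Since the feasible region is a convex polygon, its vertices are intersections of pairs of constraint boundaries. The brute-force sketch below is my own illustration (not part of the notes): it solves every pair of boundary equations, keeps the feasible intersection points, and evaluates the objective x1 + 6x2 at each.

```python
import itertools
import numpy as np

# Constraints written as a_i . x <= b_i, including the non-negativity
# constraints x1 >= 0 and x2 >= 0 rewritten as -x1 <= 0 and -x2 <= 0.
A = np.array([[1.0, 0.0],    # x1 <= 200
              [0.0, 1.0],    # x2 <= 300
              [1.0, 1.0],    # x1 + x2 <= 400
              [-1.0, 0.0],   # x1 >= 0
              [0.0, -1.0]])  # x2 >= 0
b = np.array([200.0, 300.0, 400.0, 0.0, 0.0])
c = np.array([1.0, 6.0])     # objective: x1 + 6*x2

vertices = []
for i, j in itertools.combinations(range(len(b)), 2):
    M = A[[i, j]]
    if abs(np.linalg.det(M)) < 1e-9:
        continue                       # parallel boundaries: no vertex
    x = np.linalg.solve(M, b[[i, j]])  # intersection of the two boundaries
    if np.all(A @ x <= b + 1e-9):      # keep only feasible intersections
        vertices.append(x)

best = max(vertices, key=lambda x: c @ x)
```

Evaluating the objective over the feasible vertices recovers the optimum at (100, 300) with profit 1900.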

• Figure: the feasible region defined by x1 ≤ 200, x2 ≤ 300 and x1 + x2 ≤ 400 in the (x1, x2)-plane, with the profit lines x1 + 6x2 = 600, 1200 and 1900 swept across it; the largest profit, 1900, is attained at the vertex (100, 300).

• In a linear program the optimum is achieved at a vertex of the feasible region, except in two degenerate cases:

- The LP is infeasible: the constraints are so tight that it is impossible to satisfy all of them. For example, x ≥ 2 and x ≤ 1.

- The LP is unbounded: the constraints are so loose that the feasible region is unbounded and the objective can be made arbitrarily good. For example, max(x1 + x2) with x1, x2 ≥ 0.

• Higher dimensions.

The company produces products P1, P2 and P3: each day it produces x1 units of P1, x2 units of P2 and x3 units of P3, and makes a profit of 1 for each unit of P1, 6 for each unit of P2 and 13 for each unit of P3. Due to supply limitations and labor constraints we have the following additional constraints: x1 ≤ 200, x2 ≤ 300, x1 + x2 + x3 ≤ 400 and x2 + 3x3 ≤ 600.

max(x1 + 6x2 + 13x3)

subject to: x1 ≤ 200, x2 ≤ 300, x1 + x2 + x3 ≤ 400, x2 + 3x3 ≤ 600, x1, x2, x3 ≥ 0.
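As a sanity check, this three-variable program can be handed to an off-the-shelf solver. The snippet below is a sketch assuming SciPy is available (`scipy.optimize.linprog` is not mentioned in the notes); since `linprog` minimizes, the objective is negated.

```python
from scipy.optimize import linprog

# max x1 + 6*x2 + 13*x3  is  min -(x1 + 6*x2 + 13*x3)
c = [-1, -6, -13]
A_ub = [[1, 0, 0],   # x1 <= 200
        [0, 1, 0],   # x2 <= 300
        [1, 1, 1],   # x1 + x2 + x3 <= 400
        [0, 1, 3]]   # x2 + 3*x3 <= 600
b_ub = [200, 300, 400, 600]

# Default bounds are already x_i >= 0, stated here for clarity.
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
```

The solver finds the optimal production plan (x1, x2, x3) = (0, 300, 100) with profit 3100.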

• Figure: the feasible polytope of the three-variable program in (x1, x2, x3)-space.

• Standard form of a Linear Program.

INPUT: real numbers {ci}ni=1, {aji} for 1 ≤ j ≤ m and 1 ≤ i ≤ n, and {bj}mj=1.
OUTPUT: real values for the variables {xi}ni=1.

A linear programming problem is the problem of maximizing (or minimizing) a linear objective function P = ∑ni=1 cixi subject to a finite set of linear constraints:

max ∑ni=1 cixi, subject to:
∑ni=1 ajixi = bj for 1 ≤ j ≤ m,
xi ≥ 0 for 1 ≤ i ≤ n.

A LP is in standard form if the following are true:
1.- Non-negativity constraints for all variables.
2.- All remaining constraints are expressed as equality (=) constraints.
3.- All bj ≥ 0.

• Equivalent formulations of LP.

A LP has many degrees of freedom:

1. It can be a maximization or a minimization problem.

2. Its constraints can be equalities or inequalities.

3. The variables are often restricted to be non-negative, but they also could be unrestricted.

Most "real life" constraints are given as inequalities. The main reason to convert a LP into standard form is that the simplex algorithm starts from a LP in standard form. But the flexibility to change the formulation of the original LP can also be useful.

• Transformations among LP forms

1.- To convert an inequality ∑ni=1 aixi ≤ bi into an equality: introduce a slack variable si to get

∑ni=1 aixi + si = bi with si ≥ 0.

The slack variable si measures the amount of non-used resource. Ex: x1 + x2 + x3 ≤ 40 ⇒ x1 + x2 + x3 + s1 = 40, so that s1 = 40 − (x1 + x2 + x3).

2.- To convert an inequality ∑ni=1 aixi ≥ bi into an equality: introduce a surplus variable si and get

∑ni=1 aixi − si = bi with si ≥ 0.

The surplus variable si measures the extra amount of resource used. Ex: −x1 + x2 − x3 ≥ 4 ⇒ −x1 + x2 − x3 − s1 = 4.

• Transformations among LP forms (cont.)

3.- To deal with an unrestricted variable x (i.e. x can be positive or negative): introduce x+, x− ≥ 0 and replace all occurrences of x by x+ − x−. Ex: x unconstrained ⇒ x = x+ − x− with x+ ≥ 0 and x− ≥ 0.

4.- To turn a max. problem into a min. problem: multiply the coefficients of the objective function by −1. Ex: max(10x1 + 60x2 + 140x3) ⇒ min(−10x1 − 60x2 − 140x3).

Applying these transformations, we can reduce any LP to standard form, in which all variables are non-negative, the constraints are equalities, and the objective function is to be minimized.
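These transformations are mechanical, so they are easy to script. The helper below is my own illustrative sketch (not from the notes), covering only the "maximization with ≤ constraints and non-negative variables" case: it negates the objective and appends one slack variable per constraint.

```python
import numpy as np

def to_standard_form(c, A_ub, b_ub):
    """Turn   max c.x   s.t. A_ub x <= b_ub, x >= 0
    into      min c'.x' s.t. A_eq x' = b_ub, x' >= 0
    by negating c and appending one slack variable per row."""
    A_ub = np.asarray(A_ub, dtype=float)
    m, n = A_ub.shape
    c_std = np.concatenate([-np.asarray(c, dtype=float), np.zeros(m)])
    A_eq = np.hstack([A_ub, np.eye(m)])  # slack variables get identity columns
    return c_std, A_eq, np.asarray(b_ub, dtype=float)

# The example from the notes: max 10*x1 + 60*x2 + 140*x3.
c_std, A_eq, b = to_standard_form(
    [10, 60, 140],
    [[1, 0, 0], [0, 1, 0], [1, 1, 1], [0, 1, 3]],
    [20, 30, 40, 60])
```

With four constraints and three variables, the standard form has seven variables (x1, x2, x3, s1, s2, s3, s4) and a 4 × 7 equality system.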

• Example:

max(10x1 + 60x2 + 140x3)

subject to: x1 ≤ 20, x2 ≤ 30, x1 + x2 + x3 ≤ 40, x2 + 3x3 ≤ 60, x1, x2, x3 ≥ 0.

In standard form this becomes:

min(−10x1 − 60x2 − 140x3)

x1 + s1 = 20
x2 + s2 = 30
x1 + x2 + x3 + s3 = 40
x2 + 3x3 + s4 = 60

x1, x2, x3, s1, s2, s3, s4 ≥ 0.

• Algebraic representation of LP

Let ~c = {ci}ni=1, ~x = {xi}ni=1, ~b = {bj}mj=1 and A the m × n matrix of coefficients of the constraints.

A L.P. can be represented using matrix and vectors:

• Example.

max 100x1 + 600x2 + 1400x3

x1 ≤ 200 x2 ≤ 300

x1 + x2 + x3 ≤ 400 x2 + 3x3 ≤ 600 x1, x2, x3 ≥ 0.

A =

1 0 0
0 1 0
1 1 1
0 1 3

~x = (x1, x2, x3)T ; ~b = (200, 300, 400, 600)T ; ~c = (100, 600, 1400)T .

• Given a LP

min ~cT~x subject to

A~x ≤ ~b, ~x ≥ ~0.

Any ~x that satisfies the constraints is a feasible solution. A LP is feasible if there exists a feasible solution; otherwise it is said to be infeasible. A feasible solution ~x∗ is an optimal solution if

~cT~x∗ = min{~cT~x | A~x ≤ ~b, ~x ≥ ~0}

(with max in place of min for a maximization problem).
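In this matrix notation, checking whether a candidate ~x is feasible amounts to two NumPy comparisons. The small helper below is an illustrative sketch (names my own), tried on the constraints of the first production example.

```python
import numpy as np

def is_feasible(A, b, x, tol=1e-9):
    """True if A x <= b and x >= 0, up to a small numerical tolerance."""
    x = np.asarray(x, dtype=float)
    return bool(np.all(A @ x <= np.asarray(b) + tol) and np.all(x >= -tol))

# Constraints of the first example: x1 <= 200, x2 <= 300, x1 + x2 <= 400.
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([200.0, 300.0, 400.0])
```

For instance, (100, 300) is feasible, while (300, 300) violates x1 ≤ 200 and (−1, 0) violates non-negativity.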

• The Geometry of LP

Consider:

max P = 2x + 5y

subject to: 3x + y ≤ 9, y ≤ 3, x + y ≤ 4, x, y ≥ 0.

Figure: the feasible polygon, with the level lines 2x + 5y = c swept in the direction of increasing P towards the maximizing vertex.

Theorem: If there exists an optimal solution to P = ~cT~x , then there exists one that is a vertex of the polytope.

Intuition of proof: if ~x is not a vertex, move in a direction in which the objective does not decrease until reaching a boundary; repeat, now following the boundary, until reaching a vertex ~x∗. (In the figure, ~c = (2, 5).)

• The objective function vector and the cone of constrains vectors

Given a LP:

max P = ~cT~x subject to

A~x ≤ ~b, ~x ≥ ~0.

Let ~x∗ be an optimal solution for max P. The objective function vector is ~c = (c1, . . . , cn). The constraint cone at ~x∗ is the set K = {∑i λi~ai | λi ≥ 0} generated by the row vectors ~ai of A whose constraints are tight at ~x∗. Then ~c lies inside K.

Figure: for P = x + 2y, the objective vector ~c lies inside the cone generated by the constraint vectors ~ai at the optimal vertex ~x∗.

• Example: Max-Flow. Given a network G = (V , ~E) with source and sink s, t ∈ V and capacities c : ~E → Z+, find the Max-Flow from s to t. Consider the example in the figure below, where ~xT = (x1, x2, x3) is the amount of flow to maximize.

max x1 + x2 + x3

subject to: x1 ≤ 2, x1 + x2 ≤ 2, x2 ≤ 1, x2 + x3 ≤ 2, x3 ≤ 2, x1, x2, x3 ≥ 0.

In matrix form: max ~cT~x subject to A~x ≤ ~b, ~x ≥ ~0, with

A =

1 0 0
1 1 0
0 1 0
0 1 1
0 0 1

~b = (2, 2, 1, 2, 2)T ; ~c = (1, 1, 1)T .

• The polytope of feasible solutions for Max-Flow

Notice that the Max-Flow corresponds to ~x = (2, 0, 2), with total flow value 4.
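This can be verified by feeding the LP to a solver. The snippet below is a sketch using SciPy's `linprog` (an assumed tool, not prescribed by the notes); since `linprog` minimizes, the flow objective is negated.

```python
from scipy.optimize import linprog

# max x1 + x2 + x3  is  min -(x1 + x2 + x3)
c = [-1, -1, -1]
A_ub = [[1, 0, 0],   # x1 <= 2
        [1, 1, 0],   # x1 + x2 <= 2
        [0, 1, 0],   # x2 <= 1
        [0, 1, 1],   # x2 + x3 <= 2
        [0, 0, 1]]   # x3 <= 2
b_ub = [2, 2, 1, 2, 2]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
```

The solver confirms the maximum flow of 4 at ~x = (2, 0, 2): since x1 + x2 + x3 = (x1 + x2) + (x2 + x3) − x2 ≤ 4 − x2, any flow of value 4 must set x2 = 0.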

• The Simplex algorithm

LP can be solved efficiently: the simplex algorithm, George Dantzig (1947).

It uses hill-climbing: start at a vertex of the feasible polytope and move to an adjacent vertex with a better objective value, until reaching a vertex that has no neighbor with a better objective value.
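The hill-climbing loop above can be sketched in code. The dense-tableau implementation below is my own minimal illustration (not from the notes): it assumes the LP is given as max ~cT~x with A~x ≤ ~b, ~x ≥ 0 and ~b ≥ 0 (so the all-slack vertex ~x = 0 is feasible), uses Dantzig's most-negative-coefficient pivot rule, and omits anti-cycling and unboundedness safeguards.

```python
import numpy as np

def simplex(c, A, b):
    """Maximize c.x subject to A x <= b, x >= 0, assuming b >= 0."""
    A, b, c = np.asarray(A, float), np.asarray(b, float), np.asarray(c, float)
    m, n = A.shape
    # Tableau: constraint rows [A | I | b], objective row [-c | 0 | 0].
    T = np.zeros((m + 1, n + m + 1))
    T[:m, :n], T[:m, n:n + m], T[:m, -1] = A, np.eye(m), b
    T[-1, :n] = -c
    basis = list(range(n, n + m))          # start at the all-slack vertex
    while True:
        j = int(np.argmin(T[-1, :-1]))     # entering column (Dantzig's rule)
        if T[-1, j] >= -1e-9:
            break                          # no improving neighbor: optimal
        ratios = [T[i, -1] / T[i, j] if T[i, j] > 1e-9 else np.inf
                  for i in range(m)]
        i = int(np.argmin(ratios))         # leaving row (min-ratio test)
        T[i] /= T[i, j]                    # pivot on entry (i, j):
        for k in range(m + 1):             # move to the adjacent vertex
            if k != i:
                T[k] -= T[k, j] * T[i]
        basis[i] = j
    x = np.zeros(n)
    for row, var in enumerate(basis):
        if var < n:
            x[var] = T[row, -1]
    return x, T[-1, -1]                    # optimal vertex and value

# The production example from the beginning of the notes.
x, value = simplex([1, 6], [[1, 0], [0, 1], [1, 1]], [200, 300, 400])
```

On the two-product example it walks from (0, 0) to (0, 300) to the optimum (100, 300) with value 1900, each pivot improving the objective.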

Figure: simplex walking from vertex to adjacent vertex along the edges of a three-dimensional feasible polytope in (x1, x2, x3)-space.

• Complexity of LP:

Input to LP: The number n of variables in the LP.

Simplex can take exponential time in n: there are specific inputs (the Klee–Minty cube) on which the usual versions of the simplex algorithm visit exponentially many vertices on the path to the optimum. (See Ch. 6 in Papadimitriou–Steiglitz, Combinatorial Optimization: Algorithms and Complexity.)

In practice, the simplex algorithm is quite efficient and can find the global optimum (if certain precautions against cycling are taken).

It is known that simplex solves "typical" (random) problems in O(n3) steps, and it remains the main choice among practitioners for solving LPs.

But some software packages use interior-point algorithms, which guarantee polynomial-time termination.

• The Simplex algorithm: the search for the optimum can take long

Figure: from The Nature of Computation by Moore and Mertens
