
Chapter 2

Introduction to linear programming

2.1 Single-objective optimization problem

We study problems of the following form: Given a set $S$ and a function $f : S \to \mathbb{R}$, find, if possible, an element $x \in S$ that minimizes (or maximizes) $f$. Such a problem is called a single-objective optimization problem, or simply an optimization problem. A compact way to write down such a problem is:

$$\min \text{ (or } \max\text{)} \; f(x) \quad \text{subject to} \quad x \in S,$$

or more simply, $\min$ (or $\max$) $\{f(x) : x \in S\}$.

The set $S$ is called the feasible set. The function $f$ is called the objective function. An element $y \in S$ is called a feasible solution. The objective function value of a feasible solution $y \in S$ is the value $f(y)$. Often, the set $S$ is described as a set of elements of some other set satisfying certain conditions called constraints. For instance, if $S = \{x \in \mathbb{R} : 0 < x,\ x \le 1\}$, then the inequalities $0 < x$ and $x \le 1$ are constraints. An optimization problem with $S$ empty is said to be infeasible.

An optimization problem that minimizes the objective function is called a minimization problem. An optimization problem that maximizes the objective function is called a maximization problem. For a minimization problem, an element $x^* \in S$ is an optimal solution if $f(x^*) \le f(x)$ for all $x \in S$. (In other words, $x^*$ is an element in $S$ that minimizes $f$.) For a maximization problem, an element $x^* \in S$ is an optimal solution if $f(x^*) \ge f(x)$ for all $x \in S$. The objective function value of an optimal solution is called the optimal value of the problem.

Remark. The difference between a minimization problem and a maximization problem is essentially cosmetic, as minimizing a function is the same as maximizing the negative of the function.


Example 2.1.1. Let $S$ be the set of all four-letter English words. Given a word $w \in S$, let $f(w)$ be the number of occurrences of the letter "e" in $w$. Consider the following optimization problem:

$$\max\ f(x) \quad \text{s.t.} \quad x \in S.$$

In this problem, we want to find a four-letter English word having the maximum number of e's. What is the optimal value?

Two obvious questions one could ask about an optimization problem are:

1. How do we find an optimal solution quickly (if one exists)?

2. How do we prove optimality?

Not all optimization problems have optimal solutions. For example, $\max\{x^3 : x > 0\}$ has none.

A maximization (minimization) problem is unbounded if there exists a sequence of feasible solutions whose objective function values tend to $\infty$ ($-\infty$). An optimization problem that is not unbounded is called bounded.

Not all bounded problems have optimal solutions. For example, in $\min\{e^x : x \in \mathbb{R}\}$, the objective function values approach 0 but no feasible solution attains this value.

2.2 Definition of a linear programming problem

A (real-valued) function in variables $x_1, x_2, \ldots, x_n$ is said to be linear if it has the form

$$a_1x_1 + a_2x_2 + \cdots + a_nx_n$$

where $a_1, \ldots, a_n \in \mathbb{R}$.

A constraint on the variables $x_1, x_2, \ldots, x_n$ is said to be linear if it has the form

• $a_1x_1 + a_2x_2 + \cdots + a_nx_n \le b$, or

• $a_1x_1 + a_2x_2 + \cdots + a_nx_n \ge b$, or

• $a_1x_1 + a_2x_2 + \cdots + a_nx_n = b$,

where $b, a_1, a_2, \ldots, a_n \in \mathbb{R}$. The first two types of linear constraints are called linear inequalities while the third type is called a linear equation.

A linear programming (or linear optimization) problem is an optimization problem with finitely many variables (called decision variables) in which a linear function is minimized (or maximized) subject to a finite number of linear constraints. The feasible set of a linear programming problem is usually called the feasible region.


Example 2.2.1.

$$\begin{aligned}
\max \quad & x_1 \\
\text{subject to} \quad & x_1 + x_2 \ge 4 \\
& x_1 + x_2 \le 3 \\
& x_1 \ge 0.
\end{aligned}$$

Remark. When writing down an optimization problem, if a variable does not have its type specified, it is understood to be a real variable.

A central result in linear optimization is the following:

Theorem 2.1 (Fundamental Theorem of Linear Programming). Given a linear programming problem (P), exactly one of the following holds:

1. (P) is infeasible.

2. (P) is unbounded.

3. (P) has an optimal solution.

We will see at least one proof of Theorem 2.1.

2.3 Linear programming formulation and graphical method

Some real-life problems can be modelled as linear programming (LP) problems. In the case when the number of decision variables is at most two, it might be possible to solve the problem graphically.

We now consider an example. Say you are a vendor of lemonade and lemon juice. Each unit of lemonade requires 1 lemon and 2 litres of water. Each unit of lemon juice requires 3 lemons and 1 litre of water. Each unit of lemonade gives a profit of $3. Each unit of lemon juice gives a profit of $2. You have 6 lemons and 4 litres of water available. How many units of lemonade and lemon juice should you make to maximize profit?

Let $x$ denote the number of units of lemonade to be made and $y$ denote the number of units of lemon juice to be made. Note that $x$ and $y$ cannot be negative. Then, the number of lemons needed to make $x$ units of lemonade and $y$ units of lemon juice is $x + 3y$ and cannot exceed 6. The number of litres of water needed to make $x$ units of lemonade and $y$ units of lemon juice is $2x + y$ and cannot exceed 4. The profit you get by making $x$ units of lemonade and $y$ units of lemon juice is $3x + 2y$, which you want to maximize subject to the conditions we have listed. Hence, you want to solve the LP problem:

$$\begin{aligned}
\text{maximize} \quad & 3x + 2y \\
\text{subject to} \quad & x + 3y \le 6 \\
& 2x + y \le 4 \\
& x \ge 0 \\
& y \ge 0.
\end{aligned}$$
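An LP this small can also be checked against an off-the-shelf solver. Here is a minimal sketch (an aside, not part of the notes) using SciPy's linprog; linprog minimizes by convention, so we negate the objective:

```python
from scipy.optimize import linprog

# maximize 3x + 2y  <=>  minimize -3x - 2y
c = [-3, -2]
A_ub = [[1, 3],   # x + 3y <= 6  (lemons)
        [2, 1]]   # 2x + y <= 4  (water)
b_ub = [6, 4]

# linprog's default variable bounds (0, None) enforce x, y >= 0.
res = linprog(c, A_ub=A_ub, b_ub=b_ub)
print(res.x, -res.fun)   # approximately [1.2 1.6] and 6.8
```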


This problem can be solved graphically as follows. Take the objective function $3x + 2y$ and turn it into the equation of a line $3x + 2y = z$ where $z$ is a parameter. The normal vector of the line, $\begin{bmatrix} 3 \\ 2 \end{bmatrix}$, gives the direction in which the line moves as the value of $z$ increases. (Why?) As we are maximizing, we want the largest $z$ such that the line $3x + 2y = z$ intersects the feasible region. In Figure 2.1, the lines with $z$ taking on the values 0, 4 and 6.8 have been drawn. From the picture, one can see that if $z$ is greater than 6.8, the line defined by $3x + 2y = z$ will not intersect the feasible region. In other words, no point in the feasible region can have objective function value greater than 6.8. As the line $3x + 2y = 6.8$ does intersect the feasible region, the optimal value is 6.8. To obtain an optimal solution, one simply takes a point in the feasible region that is also on the line defined by $3x + 2y = 6.8$. There is only one such point: $\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 1.2 \\ 1.6 \end{bmatrix}$. So you want to make 1.2 units of lemonade and 1.6 units of lemon juice to maximize profit.
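A rough reconstruction of Figure 2.1 can be drawn in a few lines (a sketch assuming matplotlib is available, not the original figure); the feasible region is the part of the first quadrant lying below both constraint lines:

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 2.5, 200)
plt.plot(x, (6 - x) / 3, label="x + 3y = 6")      # lemon constraint boundary
plt.plot(x, 4 - 2 * x, label="2x + y = 4")        # water constraint boundary
for z in (0, 4, 6.8):
    plt.plot(x, (z - 3 * x) / 2, "k--", lw=0.8)   # objective lines 3x + 2y = z
plt.plot(1.2, 1.6, "ro", label="optimal (1.2, 1.6)")
plt.xlim(0, 2.5)
plt.ylim(0, 2.5)
plt.legend()
plt.show()
```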

[Figure 2.1: Graphical solution. The feasible region bounded by $x + 3y \le 6$, $2x + y \le 4$, $x \ge 0$, $y \ge 0$, with the objective lines $3x + 2y = 0$, $3x + 2y = 4$, and $3x + 2y = 6.8$, the direction of improvement, and the optimal point $(1.2, 1.6)$.]


One can in fact show algebraically that 6.8 is the optimal value. Notice that the sum of 0.2 times the first inequality and 1.4 times the second inequality is $3x + 2y \le 6.8$. Now, all feasible solutions must satisfy this inequality because they satisfy the first two inequalities. Hence, any feasible solution must have objective function value at most 6.8. So 6.8 is an upper bound on the optimal value. But $\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 1.2 \\ 1.6 \end{bmatrix}$ is a feasible solution with objective function value equal to 6.8. Hence, 6.8 must be the optimal value.
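The combination above is easy to verify numerically. A quick sketch (assuming NumPy; not part of the original notes):

```python
import numpy as np

A = np.array([[1.0, 3.0],    # x + 3y <= 6
              [2.0, 1.0]])   # 2x + y <= 4
b = np.array([6.0, 4.0])
w = np.array([0.2, 1.4])     # the multipliers used above

print(w @ A)   # [3. 2.] -- exactly the objective coefficients
print(w @ b)   # 6.8     -- hence an upper bound on 3x + 2y over the feasible region
```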

Now, one might ask if it is always possible to find an algebraic proof like the one above for any linear programming problem. If the answer is "yes", how does one find such a proof? We will see answers to this question later on.

Now, consider the following LP problem:

$$\begin{aligned}
\text{minimize} \quad & -2x + y \\
\text{subject to} \quad & -x + y \le 3 \\
& x - 2y \le 2 \\
& x \ge 0 \\
& y \ge 0.
\end{aligned}$$

Exercise. Draw the feasible region of the above problem.

Note that for any $t \ge 0$, $\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} t \\ t \end{bmatrix}$ is a feasible solution having objective function value $-t$. As $t \to \infty$, the objective function value of $\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} t \\ t \end{bmatrix}$ tends to $-\infty$. The problem is therefore unbounded. Actually, one could also show unboundedness using $\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 2t + 2 \\ t \end{bmatrix}$ for $t \ge 0$. Later in the course, we will see how to detect unboundedness algorithmically.
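For what it is worth, a solver detects this too. A minimal sketch with SciPy's linprog (an aside, not part of the notes):

```python
from scipy.optimize import linprog

# minimize -2x + y  subject to  -x + y <= 3,  x - 2y <= 2,  x, y >= 0
res = linprog([-2, 1], A_ub=[[-1, 1], [1, -2]], b_ub=[3, 2])
print(res.status)   # 3 is SciPy's status code for an unbounded problem
```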

Exercise. By inspection, find a different set of solutions that also shows unboundedness.

2.4 Exercises

1. Let (P) denote the following linear programming problem:

$$\begin{aligned}
\min \quad & 3x + 2y \\
\text{s.t.} \quad & x + 3y \le 6 \\
& -x - 2y \le -1 \\
& 2x + y \le 4 \\
& x, y \ge 0
\end{aligned}$$

(a) Sketch the feasible set (that is, the set of feasible solutions) on the x-y plane.

(b) Give an optimal solution and the optimal value.

(c) Suppose that one adds a constraint to (P) requiring that 2y be an integer. (Note that the resulting optimization problem will not be a linear programming problem.) Repeat parts (a) and (b) with this additional constraint.


2. Consider the example on lemonade and lemon juice in Section 2.3. Note that the optimal solution requires you to use a fractional number of lemons. Depending on the context, having to use a fractional number of lemons might not be realistic.

(a) Suppose you are not allowed to use a fractional number of lemons but you are still allowed to make fractional units of lemonade and lemon juice. How many units of lemonade and lemon juice should you make to maximize profit? Justify your answer.

(b) Suppose you are not allowed to make fractional units of lemonade and lemon juice. How many units of lemonade and lemon juice should you make to maximize profit? Justify your answer.

3. City A and city B have been struck by a natural disaster. City A has 1,000 people to be rescued and city B has 2,000. You are in charge of coordinating a rescue effort and the situation is as follows:

• each rescue team sent to city A must have exactly 4 rescue workers and requires 40 litres of fuel;

• each rescue team sent to city B must have exactly 5 rescue workers and requires 20 litres of fuel;

• each rescue team can rescue up to 30 people;

• you have 470 rescue workers and 2,700 litres of fuel in total.

(a) Show that given the resources that you have, not all 3,000 people can be rescued.

(b) Formulate an optimization problem using linear constraints and integer variables that maximizes the number of people rescued subject to the resources that you have. Use the following variables in your formulation:

• $x_A$ for the number of people rescued from city A;

• $x_B$ for the number of people rescued from city B;

• $z_A$ for the number of rescue teams sent to city A;

• $z_B$ for the number of rescue teams sent to city B.


Chapter 3

Systems of linear inequalities

Before we attempt to solve linear programming problems, we need to address a basic question: How does one find a solution to a system of linear constraints? Note that it is sufficient to consider systems of the form $Ax \ge b$, where $m$ and $n$ are positive integers, $A \in \mathbb{R}^{m \times n}$, $b \in \mathbb{R}^m$, and $x = [x_1, \ldots, x_n]^{\mathsf{T}}$ is a vector of real variables, because an inequality $a^{\mathsf{T}}x \le \beta$ can be replaced with $-a^{\mathsf{T}}x \ge -\beta$, and an equation $a^{\mathsf{T}}x = \beta$ can be replaced with the pair of inequalities $a^{\mathsf{T}}x \ge \beta$ and $-a^{\mathsf{T}}x \ge -\beta$, without changing the set of solutions. Another way to handle equations is as follows: Suppose that the system is

$$Ax \ge b, \qquad Bx = d$$

where $m'$ is a positive integer, $B \in \mathbb{R}^{m' \times n}$, and $d \in \mathbb{R}^{m'}$. One could first apply Gaussian elimination to row-reduce $Bx = d$ and then use the pivot rows to eliminate the pivot variables in $Ax \ge b$ to obtain a system of inequalities without any of the pivot variables. The advantage of this method is that the resulting system has fewer variables and constraints.
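To illustrate the substitution on a toy system (made up for this sketch, not from the notes): take the single equation $x_1 + x_2 = 3$ together with the inequalities $2x_1 - x_2 \ge 0$ and $x_2 \ge 0$, and use the pivot row to eliminate $x_1$ from each inequality:

```python
import numpy as np

# Inequalities A x >= b and the single equation B x = d (toy data).
A = np.array([[2.0, -1.0],
              [0.0,  1.0]])
b = np.array([0.0, 0.0])
B = np.array([[1.0, 1.0]])   # x1 + x2 = 3, pivot on x1
d = np.array([3.0])

# Subtract a multiple of the (already reduced) pivot row from each
# inequality so the coefficient of x1 becomes zero. Adding any multiple
# of an *equation* to an inequality preserves the solution set.
for i in range(A.shape[0]):
    factor = A[i, 0] / B[0, 0]
    A[i] -= factor * B[0]
    b[i] -= factor * d[0]

print(A, b)   # rows now read -3*x2 >= -6 and x2 >= 0, i.e. 0 <= x2 <= 2
```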

3.1 Fourier-Motzkin elimination

Fourier-Motzkin elimination is a classical procedure that can be used to solve a system of linear inequalities $Ax \ge b$ by eliminating one variable at a time.

We first illustrate the idea with an example. Consider the following system of linear inequalities:

$$\begin{aligned}
-2x_1 - x_2 + x_3 &\ge 4 && (1) \\
-x_1 - 2x_2 &\ge -1 && (2) \\
x_1 + x_2 - x_3 &\ge -1 && (3) \\
3x_1 - 2x_2 + 3x_3 &\ge -6. && (4)
\end{aligned}$$

The system can be rewritten as:


$$\begin{aligned}
-\tfrac{1}{2}x_2 + \tfrac{1}{2}x_3 - 2 &\ge x_1 && (5) \\
-2x_2 + 1 &\ge x_1 && (6) \\
x_1 &\ge -x_2 + x_3 - 1 && (7) \\
x_1 &\ge \tfrac{2}{3}x_2 - x_3 - 2. && (8)
\end{aligned}$$

Here, (5) was obtained from (1) by dividing both sides by 2 and rearranging the terms. (6)–(8) were obtained similarly. Clearly, this new system has the same set of solutions as the original system. The system can be written compactly as:

$$\min\left\{-\tfrac{1}{2}x_2 + \tfrac{1}{2}x_3 - 2,\ -2x_2 + 1\right\} \ge x_1 \ge \max\left\{-x_2 + x_3 - 1,\ \tfrac{2}{3}x_2 - x_3 - 2\right\}.$$

From this, one can see that the system (1)–(4) has a solution if and only if the inequality

$$\min\left\{-\tfrac{1}{2}x_2 + \tfrac{1}{2}x_3 - 2,\ -2x_2 + 1\right\} \ge \max\left\{-x_2 + x_3 - 1,\ \tfrac{2}{3}x_2 - x_3 - 2\right\},$$

or equivalently, the system

$$\begin{aligned}
-\tfrac{1}{2}x_2 + \tfrac{1}{2}x_3 - 2 &\ge -x_2 + x_3 - 1 \\
-\tfrac{1}{2}x_2 + \tfrac{1}{2}x_3 - 2 &\ge \tfrac{2}{3}x_2 - x_3 - 2 \\
-2x_2 + 1 &\ge -x_2 + x_3 - 1 \\
-2x_2 + 1 &\ge \tfrac{2}{3}x_2 - x_3 - 2,
\end{aligned}$$

has a solution. Simplifying the last system gives:

$$\begin{aligned}
\tfrac{1}{2}x_2 - \tfrac{1}{2}x_3 &\ge 1 && (9) \\
-\tfrac{7}{6}x_2 + \tfrac{3}{2}x_3 &\ge 0 && (10) \\
-x_2 - x_3 &\ge -2 && (11) \\
-\tfrac{8}{3}x_2 + x_3 &\ge -3. && (12)
\end{aligned}$$

Note that this system does not contain the variable $x_1$. The algebraic manipulations carried out ensure that the system (1)–(4) has a solution if and only if the system (9)–(12) does. Moreover, given any $x_2$ and $x_3$ satisfying (9)–(12), one can find an $x_1$ such that $x_1, x_2, x_3$ together satisfy (1)–(4).

One can generalize the example above and obtain a procedure for eliminating any variable in a system of linear inequalities. The correctness of the following algorithm is left as an exercise.

Algorithm 3.1 (Fourier-Motzkin Elimination).

Input: An integer $k \in \{1, \ldots, n\}$ and a system of linear inequalities

$$\sum_{j=1}^{n} a_{ij}x_j \ge b_i, \quad i = 1, \ldots, m.$$

Output: A system of linear inequalities

$$\sum_{j=1}^{k-1} a'_{ij}x_j + \sum_{j=k+1}^{n} a'_{ij}x_j \ge b'_i, \quad i = 1, \ldots, m'$$

such that if $x^*_1, \ldots, x^*_n$ is a solution to the system in the input, then $x^*_1, \ldots, x^*_{k-1}, x^*_{k+1}, \ldots, x^*_n$ is a solution to the system in the output, and if $x^*_1, \ldots, x^*_{k-1}, x^*_{k+1}, \ldots, x^*_n$ is a solution to the system in the output, then there exists $x^*_k$ such that $x^*_1, \ldots, x^*_n$ is a solution to the system in the input.

Steps:

1. Let $K = \{1, \ldots, n\} \setminus \{k\}$. Let $P = \{i : a_{ik} > 0\}$, $N = \{i : a_{ik} < 0\}$, and $Z = \{i : a_{ik} = 0\}$. For each $i \in P$, divide both sides of the inequality

$$\sum_{j=1}^{n} a_{ij}x_j \ge b_i$$

by $a_{ik}$ to obtain

$$\sum_{j \in K} f_{ij}x_j + x_k \ge d_i.$$

For each $i \in N$, divide both sides of the inequality

$$\sum_{j=1}^{n} a_{ij}x_j \ge b_i$$

by $|a_{ik}|$ to obtain

$$\sum_{j \in K} f_{ij}x_j - x_k \ge d_i.$$

2. Output the system

$$\sum_{j \in K} (f_{ij} + f_{i'j})x_j \ge d_i + d_{i'} \quad \text{for all } i \in P \text{ and all } i' \in N,$$

$$\sum_{j \in K} a_{ij}x_j \ge b_i \quad \text{for all } i \in Z.$$
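Algorithm 3.1 is short enough to sketch in code. Below is a minimal Python version (an illustration under floating-point arithmetic, not part of the original notes; `fm_eliminate` is a name chosen here). Applying it twice to the system (1)–(4) reproduces the computation in the example that follows.

```python
import numpy as np

def fm_eliminate(A, b, k):
    """One step of Fourier-Motzkin elimination on A x >= b:
    eliminate variable k (0-indexed) and return the new system (A', b')."""
    m = len(b)
    P = [i for i in range(m) if A[i, k] > 0]
    N = [i for i in range(m) if A[i, k] < 0]
    Z = [i for i in range(m) if A[i, k] == 0]
    rows, rhs = [], []
    # Normalize rows so x_k has coefficient +1 (i in P) or -1 (i in N),
    # then add each P-row to each N-row so that x_k cancels.
    for i in P:
        for i2 in N:
            rows.append(A[i] / A[i, k] + A[i2] / -A[i2, k])
            rhs.append(b[i] / A[i, k] + b[i2] / -A[i2, k])
    for i in Z:
        rows.append(A[i].copy())
        rhs.append(b[i])
    if not rows:   # x_k appears one-sidedly: no constraints remain
        return np.zeros((0, A.shape[1] - 1)), np.zeros(0)
    return np.delete(np.array(rows), k, axis=1), np.array(rhs)
```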

Example 3.1.1.

$$\begin{aligned}
-2x_1 - x_2 + x_3 &\ge 4 && (1) \\
-x_1 - 2x_2 &\ge -1 && (2) \\
x_1 + x_2 - x_3 &\ge -1 && (3) \\
3x_1 - 2x_2 + 3x_3 &\ge -6. && (4)
\end{aligned}$$

We first eliminate $x_1$ using Fourier-Motzkin elimination: For each linear inequality in which the coefficient of $x_1$ is nonzero, we divide by the absolute value of the coefficient of $x_1$.


$$\begin{aligned}
-x_1 - \tfrac{1}{2}x_2 + \tfrac{1}{2}x_3 &\ge 2 && (5) \\
-x_1 - 2x_2 &\ge -1 && (6) \\
x_1 + x_2 - x_3 &\ge -1 && (7) \\
x_1 - \tfrac{2}{3}x_2 + x_3 &\ge -2. && (8)
\end{aligned}$$

Adding (5) and (7) gives $\tfrac{1}{2}x_2 - \tfrac{1}{2}x_3 \ge 1$.
Adding (5) and (8) gives $-\tfrac{7}{6}x_2 + \tfrac{3}{2}x_3 \ge 0$.
Adding (6) and (7) gives $-x_2 - x_3 \ge -2$.
Adding (6) and (8) gives $-\tfrac{8}{3}x_2 + x_3 \ge -3$.

Hence, the system with $x_1$ eliminated is:

$$\begin{aligned}
\tfrac{1}{2}x_2 - \tfrac{1}{2}x_3 &\ge 1 && (9) \\
-\tfrac{7}{6}x_2 + \tfrac{3}{2}x_3 &\ge 0 && (10) \\
-x_2 - x_3 &\ge -2 && (11) \\
-\tfrac{8}{3}x_2 + x_3 &\ge -3. && (12)
\end{aligned}$$

We now eliminate $x_2$. As before, for each linear inequality in which the coefficient of $x_2$ is nonzero, we divide by the absolute value of the coefficient of $x_2$:

$$\begin{aligned}
x_2 - x_3 &\ge 2 && (13) \\
-x_2 + \tfrac{9}{7}x_3 &\ge 0 && (14) \\
-x_2 - x_3 &\ge -2 && (15) \\
-x_2 + \tfrac{3}{8}x_3 &\ge -\tfrac{9}{8}. && (16)
\end{aligned}$$

There is only one linear inequality with a positive $x_2$ coefficient and three linear inequalities with a negative $x_2$ coefficient. Hence, we derive three new linear inequalities. The new system is:

$$\begin{aligned}
\tfrac{2}{7}x_3 &\ge 2 && (17) \\
-2x_3 &\ge 0 && (18) \\
-\tfrac{5}{8}x_3 &\ge \tfrac{7}{8}. && (19)
\end{aligned}$$

Now, observe that $7 \times (17) + (18)$ gives $0 \ge 14$, which is absurd. So the original system has no solution.

One can in fact obtain a nonnegative linear combination of inequalities (1)–(4) that gives a contradiction by tracing our derivations backwards. Note that

$$\begin{aligned}
0 \ge 14 &\impliedby 7 \times (17) + (18) \\
&\impliedby 7 \times [(13) + (14)] + [(13) + (15)] \\
&\impliedby 8 \times (13) + 7 \times (14) + (15) \\
&\impliedby 16 \times (9) + 6 \times (10) + (11) \\
&\impliedby 16 \times [(5) + (7)] + 6 \times [(5) + (8)] + [(6) + (7)] \\
&\impliedby 22 \times (5) + (6) + 17 \times (7) + 6 \times (8) \\
&\impliedby 11 \times (1) + (2) + 17 \times (3) + 2 \times (4).
\end{aligned}$$
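The final combination is easy to check directly. A small sketch (assuming NumPy, with $A$ and $b$ taken from (1)–(4)):

```python
import numpy as np

A = np.array([[-2, -1,  1],
              [-1, -2,  0],
              [ 1,  1, -1],
              [ 3, -2,  3]], dtype=float)
b = np.array([4, -1, -1, -6], dtype=float)
y = np.array([11, 1, 17, 2], dtype=float)   # the multipliers traced above

print(y @ A)   # [0. 0. 0.]: all variables cancel
print(y @ b)   # 14.0: so the combination reads 0 >= 14, a contradiction
```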

Remark. In the previous example, each time we apply Fourier-Motzkin elimination to eliminate a variable, the variable to eliminate has a positive coefficient in some inequality and a negative coefficient in some other inequality. What if the coefficients of the variable to eliminate are either all nonnegative or all nonpositive? In this case, we simply do not derive any new inequality, and we form the new system by taking the inequalities in the original system that do not contain the variable to eliminate. For example, all the $x_1$ coefficients are nonnegative in the system

$$\begin{aligned}
x_1 + x_2 &\ge -2 \\
3x_1 - 2x_2 &\ge 0 \\
x_2 &\ge 2.
\end{aligned}$$

The new system with $x_1$ eliminated is simply $x_2 \ge 2$.

3.2 Theorems of the alternative

The previous section contains an example that has no solution because there is a nonnegative linear combination of the linear inequalities that gives a contradiction. In general, such a nonnegative linear combination exists whenever a system of linear inequalities of the form $Ax \ge b$ has no solution. The converse is also true. This is the content of the next theorem.

Theorem 3.1 (Farkas' Lemma). Let $m$ and $n$ be positive integers. Let $A \in \mathbb{R}^{m \times n}$ and $b \in \mathbb{R}^m$. A system $Ax \ge b$ of $m$ inequalities in the variables $x_1, \ldots, x_n$ has a solution if and only if there does not exist $y \in \mathbb{R}^m$ such that

$$y \ge 0, \quad y^{\mathsf{T}}A = 0, \quad y^{\mathsf{T}}b > 0.$$

Proof. Suppose that there exists $y \in \mathbb{R}^m$ such that

$$y \ge 0, \quad y^{\mathsf{T}}A = 0, \quad y^{\mathsf{T}}b > 0.$$

Suppose that there also exists $x^*$ satisfying $Ax^* \ge b$. As $y \ge 0$, we can multiply both sides of the system $Ax^* \ge b$ on the left by $y^{\mathsf{T}}$ to obtain

$$y^{\mathsf{T}}Ax^* \ge y^{\mathsf{T}}b.$$

But $y^{\mathsf{T}}Ax^* = (y^{\mathsf{T}}A)x^* = 0$ and $y^{\mathsf{T}}b > 0$ by assumption. So we have $0 > 0$, which is impossible. So there is no solution to the system $Ax \ge b$.

The converse can be proved by induction on $n$. Details of the proof are left as an exercise.
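Although we have not yet developed the machinery, one can already search for such a $y$ with an LP solver: maximize $y^{\mathsf{T}}b$ subject to $y^{\mathsf{T}}A = 0$ and $0 \le y \le 1$, where the upper bound merely keeps the search bounded; any positive optimum exhibits a certificate. A sketch using the data of Example 3.1.1 (assuming SciPy; not part of the notes):

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[-2, -1,  1],
              [-1, -2,  0],
              [ 1,  1, -1],
              [ 3, -2,  3]], dtype=float)
b = np.array([4, -1, -1, -6], dtype=float)

# Variables: y in R^4. Maximize bT y (i.e., minimize -bT y) subject to
# AT y = 0 and 0 <= y <= 1. A positive optimum is a Farkas certificate.
res = linprog(-b, A_eq=A.T, b_eq=np.zeros(3), bounds=(0, 1))
print(res.x, -res.fun)   # some y >= 0 with yT A = 0 and yT b > 0
```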

Theorem 3.1 is an important classical result in linear programming. It can be used to derive the following well-known result in linear algebra.

Corollary 3.2. A system $Ax = b$ of $m$ equations has a solution if and only if there does not exist $y \in \mathbb{R}^m$ such that

$$y^{\mathsf{T}}A = 0, \quad y^{\mathsf{T}}b \ne 0.$$

Proof. Suppose that there exists $y \in \mathbb{R}^m$ such that $y^{\mathsf{T}}A = 0$ and $y^{\mathsf{T}}b \ne 0$. Multiplying both sides of $Ax = b$ by $y^{\mathsf{T}}$, we obtain

$$y^{\mathsf{T}}Ax = y^{\mathsf{T}}b.$$

But the left-hand side is 0 while the right-hand side is not. This is impossible. So $Ax = b$ cannot have any solution.

We now prove the converse. Suppose that $Ax = b$ has no solution. Let

$$A' = \begin{bmatrix} A \\ -A \end{bmatrix} \quad \text{and} \quad b' = \begin{bmatrix} b \\ -b \end{bmatrix}.$$

Then the system $Ax = b$ is equivalent to $A'x \ge b'$, and so $A'x \ge b'$ has no solution. By Theorem 3.1, there exist $u, v \in \mathbb{R}^m$ such that

$$\begin{bmatrix} u \\ v \end{bmatrix} \ge 0, \quad [u^{\mathsf{T}}\ v^{\mathsf{T}}]A' = 0, \quad [u^{\mathsf{T}}\ v^{\mathsf{T}}]b' > 0,$$

or equivalently, $u, v \ge 0$, $(u - v)^{\mathsf{T}}A = 0$, $(u - v)^{\mathsf{T}}b > 0$. Setting $y = u - v$, we obtain

$$y^{\mathsf{T}}A = 0, \quad y^{\mathsf{T}}b \ne 0.$$

This completes the proof.

3.3 Exercises

1. Consider the following system of linear inequalities:

$$\begin{aligned}
-x_1 + x_2 &\ge 1 \\
2x_1 - x_2 - x_3 &\ge 0 \\
x_2 - x_3 &\ge 0 \\
x_3 &\ge 3.
\end{aligned}$$


(a) Use Fourier-Motzkin elimination to eliminate the variables $x_2$ and $x_3$.

(b) Find a solution to the system such that $x_1$ is as small as possible.

2. Consider the following system of linear constraints:

$$\begin{aligned}
x_1 + x_2 &= 4 \\
x_1 - x_2 + 2x_3 &= 2 \\
2x_1 - x_2 - x_3 &\ge 0 \\
x_1, x_2, x_3 &\ge 0.
\end{aligned}$$

Does the system have a solution? If so, find one. If not, give a proof.

3. Prove that the Fourier-Motzkin elimination algorithm is correct.

4. Complete the proof of Theorem 3.1.

5. Let $A \in \mathbb{R}^{m \times n}$, $b \in \mathbb{R}^m$, and $x = [x_1, \ldots, x_n]^{\mathsf{T}}$ be a vector of $n$ variables. Use Theorem 3.1 to prove that the system $Ax \ge b$, $x \ge 0$ has a solution if and only if there does not exist $y \in \mathbb{R}^m$ such that

$$y \ge 0, \quad y^{\mathsf{T}}A \le 0, \quad \text{and} \quad y^{\mathsf{T}}b > 0.$$

(Hint: Consider the system $A'x \ge b'$ where $A' = \begin{bmatrix} A \\ I \end{bmatrix}$ and $b' = \begin{bmatrix} b \\ 0 \end{bmatrix}$.)
