
Page 1: Linear Programming 2015 1 Chapter 6. Large Scale Optimization


Chapter 6. Large Scale Optimization

6.1 Delayed column generation

min c'x,  s.t. Ax = b, x ≥ 0. A is an m×n matrix with full row rank and a large number of columns. Impractical to have all columns initially.

Start with a few columns and a basic feasible solution (restricted problem). Want to generate (find) an entering nonbasic variable (column) as needed. ((delayed) column generation)

If the reduced cost c̄_j = c_j − c_B'B^{-1}A_j < 0, then x_j can enter the basis.

Hence solve min_{j=1,…,n} (c_j − c_B'B^{-1}A_j) over all j.

If the minimum is < 0, we have found an entering variable (column).

If the minimum is ≥ 0, no entering column exists, hence the current basis is optimal.

If an entering column is found, add the column to the restricted problem and solve the restricted problem again to optimality.

min Σ_{i∈I} c_i x_i,  s.t. Σ_{i∈I} A_i x_i = b, x_i ≥ 0

(I : index set of variables (columns) we have at hand)

Then continue to find entering columns.
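The pricing step above can be sketched in code. A minimal illustration (Python; the data and names are hypothetical, `p` plays the role of the simplex multipliers c_B'B^{-1}, and only a small explicit set of candidate columns is scanned, whereas in practice the minimization runs over all n columns, possibly implicitly):

```python
def price_columns(p, columns):
    """Return (minimum reduced cost, index of the best column).

    columns: dict j -> (c_j, A_j), where A_j is the j-th column of A.
    The reduced cost of column j is c_j - p'A_j.
    """
    best_j, best_rc = None, float("inf")
    for j, (c_j, A_j) in columns.items():
        rc = c_j - sum(p_i * a_i for p_i, a_i in zip(p, A_j))
        if rc < best_rc:
            best_rc, best_j = rc, j
    return best_rc, best_j

# Toy instance: with p = (1, 2), column 'u' has reduced cost 3 - (1 + 2) = 0,
# column 'v' has reduced cost 1 - (0 + 2) = -1 < 0, so 'v' may enter the basis.
p = [1.0, 2.0]
cols = {"u": (3.0, [1, 1]), "v": (1.0, [0, 1])}
rc, j = price_columns(p, cols)
print(rc, j)  # -1.0 v
```

If the returned minimum is ≥ 0, no entering column exists and the current basis is optimal.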

Page 2:

6.2. Cutting stock problem

[Figure: a raw of width W = 70 cut into three finals of width 17 and one final of width 15 (3·17 + 15 = 66), leaving a scrap of width 4.]

Page 3:

Rolls of paper of width W (called raws) are to be cut into small pieces (called finals).

b_i rolls of width w_i, i = 1, …, m, need to be produced.

How should the raws be cut to minimize the number of raws used while satisfying the orders?

ex) W = 70: then 3 finals of width 17 and 1 final of width 15 can be produced from a raw. This way of production can be represented as the pattern (3, 1, 0, 0, …, 0). A pattern a = (a_1, …, a_m)' is feasible if Σ_{i=1}^m a_i w_i ≤ W.

Page 4:

Formulation

min Σ_{j=1}^n x_j

s.t. Σ_{j=1}^n a_{ij} x_j = b_i, i = 1, …, m,

x_j ≥ 0 and integer, j = 1, …, n,

where a_{ij} is the number of finals of width w_i produced in pattern j, and x_j is the number of raws to be cut using cutting pattern j.

n : total number of possible cutting patterns, which can be very large.

(The j-th column of the constraint matrix is a_j = (a_{1j}, …, a_{mj})', the vector denoting the j-th cutting pattern.)

We need an integer solution, but the LP relaxation can be used to find a good approximate solution if the solution values are large (round down the solution to obtain an integer solution).

For an initial b.f.s., for j = 1, …, m, let pattern j consist of one final of width w_j and none of the other widths.
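As a small sketch (Python; the widths and demands are hypothetical and the function names illustrative), the feasibility test for a pattern and the initial identity patterns can be written as:

```python
def is_feasible_pattern(a, w, W):
    """Pattern a = (a_1, ..., a_m) is feasible if sum_i a_i * w_i <= W."""
    return all(ai >= 0 for ai in a) and sum(ai * wi for ai, wi in zip(a, w)) <= W

W = 70
w = [17, 15]   # final widths
b = [4, 2]     # demands (hypothetical)

# Example from the text: 3 finals of width 17 and 1 of width 15 from one raw.
assert is_feasible_pattern([3, 1], w, W)   # 3*17 + 1*15 = 66 <= 70

# Initial b.f.s.: pattern j holds one final of width w_j and nothing else,
# so the initial basis matrix is the identity and x_j = b_j.
patterns = [[1 if i == j else 0 for i in range(len(w))] for j in range(len(w))]
x = list(b)    # raws cut with each initial pattern
print(patterns, x)  # [[1, 0], [0, 1]] [4, 2]
```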

Page 5:

After computing the dual vector p' = c_B'B^{-1}, we try to find an entering nonbasic variable (column).

A candidate for the entering column (pattern) is any nonbasic variable with reduced cost 1 − p'a < 0 (each pattern has cost 1), hence solve min (1 − p'a) over all possible patterns a.

⟺ solve max p'a over all possible patterns (integer knapsack problem):

max Σ_{i=1}^m p_i a_i

s.t. Σ_{i=1}^m w_i a_i ≤ W, a_i ≥ 0 and integer, i = 1, …, m.

If the optimal value > 1, we have found a cutting pattern (nonbasic variable) that can enter the basis. Otherwise (optimal value ≤ 1), the current solution to the restricted problem is optimal. Here, p_i can be interpreted as the value of the i-th final at the current solution (current basis B).

Dynamic programming algorithm for integer knapsack problem.

(Can assume w_i, i = 1, …, m, and W are positive integers.

The knapsack problem is NP-hard, so no polynomial time algorithm is known.)

Page 6:

Let F(v) be the optimal value of the problem when the knapsack capacity is v.

For 0 ≤ v < min_i w_i : F(v) = 0.

For v ≥ min_i w_i : F(v) = max_{i : w_i ≤ v} ( F(v − w_i) + p_i ).

Suppose a is an optimal solution when the r.h.s. is v − w_j ; then a + e_j is a feasible solution when the r.h.s. is v.

Hence F(v) ≥ F(v − w_j) + p_j for all j with w_j ≤ v.

Suppose a is an optimal solution when the r.h.s. is v (≥ min_i w_i) ; then there exists some j with a_j > 0 and w_j ≤ v.

Hence a − e_j is a feasible solution when the r.h.s. is v − w_j.

So F(v) − p_j ≤ F(v − w_j), i.e., F(v) ≤ F(v − w_j) + p_j for some j.

Page 7:

Actual solution recovered by backtracking the recursion.

The running time of the algorithm is O(mW), which is not polynomial in the length of the encoding. This is called pseudopolynomial running time (polynomial in the data itself).

Note : the running time would be polynomial if it were polynomial with respect to m and log W ; but W = 2^{log W}, which is not polynomial in log W.
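The recursion and the backtracking step can be sketched as follows (Python; the dual prices p and the instance data are hypothetical):

```python
def knapsack_dp(p, w, W):
    """Integer knapsack: max p'a s.t. w'a <= W, a >= 0 integer.

    Implements F(v) = max_{j: w_j <= v} (F(v - w_j) + p_j), F(v) = 0 for
    v < min_j w_j, and recovers a maximizing pattern by backtracking.
    Runs in O(m*W) time (pseudopolynomial).
    """
    m = len(w)
    F = [0.0] * (W + 1)
    choice = [-1] * (W + 1)      # index j achieving the max at capacity v
    for v in range(1, W + 1):
        for j in range(m):
            if w[j] <= v and F[v - w[j]] + p[j] > F[v]:
                F[v] = F[v - w[j]] + p[j]
                choice[v] = j
    # Backtrack the recursion to recover the actual solution.
    a, v = [0] * m, W
    while v > 0 and choice[v] != -1:
        j = choice[v]
        a[j] += 1
        v -= w[j]
    return F[W], a

# Pricing subproblem for the cutting stock example: W = 70, widths 17 and 15,
# with hypothetical dual prices p = (1, 1).
value, a = knapsack_dp([1.0, 1.0], [17, 15], 70)
print(value, a)  # 4.0 [4, 0]
```

Since the optimal value 4 is > 1, the pattern found could enter the basis in the cutting stock problem.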

Many practical problems can be naturally formulated similarly to the cutting stock problem, especially 0-1 IPs with many columns. For the cutting stock problem, we only obtained a fractional solution. But for 0-1 IP, a fractional solution can be of little help, and we need a mechanism to find an optimal integer solution (branch-and-price approach: column generation combined with branch-and-bound).

Page 8:

6.3. Cutting plane methods

Dual of column generation (constraint generation)

Consider max p'b,  s.t. p'A_i ≤ c_i, i = 1, …, n   (1)

(n can be very large)

Solve max p'b,  s.t. p'A_i ≤ c_i, i ∈ I ⊆ {1, …, n}   (2)

and get an optimal solution p* to (2).

If p* is feasible to (1), then it is also optimal to (1).

If p* is infeasible to (1), find a violated constraint in (1) and add it to (2), then reoptimize (2). Repeat.

Page 9:

Separation problem :

Given a polyhedron P (described with possibly many inequalities) and a vector x*, determine if x* ∈ P. If x* ∉ P, find a (valid) inequality violated by x*.

For (1), solve min (c_i − p*'A_i) over all i = 1, …, n.

If the optimal value ≥ 0 : p* is feasible to (1). If the optimal value < 0 : a violated constraint has been found.
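A minimal sketch of this separation step for (1), under the simplifying assumption that the constraints are available explicitly as a list (Python; toy data, illustrative names — in practice the minimization over i is itself an optimization problem):

```python
def separate(p, constraints, tol=1e-9):
    """constraints: list of (A_i, c_i), encoding p'A_i <= c_i.

    Returns the index of a most violated constraint (the minimizer of
    c_i - p'A_i), or None if p satisfies all constraints.
    """
    best_i, best_slack = None, float("inf")
    for i, (A_i, c_i) in enumerate(constraints):
        slack = c_i - sum(pj * aj for pj, aj in zip(p, A_i))
        if slack < best_slack:
            best_slack, best_i = slack, i
    return best_i if best_slack < -tol else None

cons = [([1, 0], 1.0), ([0, 1], 1.0), ([1, 1], 1.5)]
print(separate([1.0, 1.0], cons))  # 2 : constraint p_1 + p_2 <= 1.5 is violated
print(separate([0.5, 0.5], cons))  # None : all constraints hold
```

When a violated index is returned, the corresponding constraint is added to (2) and (2) is reoptimized.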

Page 10:

6.4. Dantzig-Wolfe decomposition

Use of the decomposition theorem to represent a specially structured LP problem in a different form. Column generation is used to solve the problem. Consider an LP in the following form

min c_1'x_1 + c_2'x_2

s.t. D_1x_1 + D_2x_2 = b_0
F_1x_1 = b_1
F_2x_2 = b_2
x_1, x_2 ≥ 0   (1)

(b_0 : dimension m_0 ; b_i : dimension m_i, i = 1, 2)

Let P_i = { x_i ≥ 0 : F_ix_i = b_i }, i = 1, 2. Assume P_i ≠ ∅.

Note that the nonnegativity constraints guarantee that P_i is pointed, hence P_i = conv{ x_i^j : j ∈ J_i } + cone{ w_i^k : k ∈ K_i }, where x_i^j are the extreme points and w_i^k the extreme rays of P_i.

Page 11:

min c_1'x_1 + c_2'x_2,  s.t. D_1x_1 + D_2x_2 = b_0, x_i ∈ P_i, i = 1, 2   (2)

x_i ∈ P_i can be represented as x_i = Σ_{j∈J_i} λ_i^j x_i^j + Σ_{k∈K_i} θ_i^k w_i^k, with λ_i^j, θ_i^k ≥ 0 and Σ_{j∈J_i} λ_i^j = 1.

Plugging into (2) gives the master problem

min Σ_{j∈J_1} λ_1^j c_1'x_1^j + Σ_{k∈K_1} θ_1^k c_1'w_1^k + Σ_{j∈J_2} λ_2^j c_2'x_2^j + Σ_{k∈K_2} θ_2^k c_2'w_2^k

s.t. Σ_{j∈J_1} λ_1^j D_1x_1^j + Σ_{k∈K_1} θ_1^k D_1w_1^k + Σ_{j∈J_2} λ_2^j D_2x_2^j + Σ_{k∈K_2} θ_2^k D_2w_2^k = b_0   (dual vector q)

Σ_{j∈J_1} λ_1^j = 1   (dual variable r_1)

Σ_{j∈J_2} λ_2^j = 1   (dual variable r_2)

λ_i^j, θ_i^k ≥ 0, for all i, j, k.

Page 12:

Alternatively, its columns can be viewed as ( D_1x_1^j, 1, 0 )', ( D_1w_1^k, 0, 0 )', ( D_2x_2^j, 0, 1 )', ( D_2w_2^k, 0, 0 )'.

The new formulation has many variables (columns), but it can be solved by column generation technique.

The actual solution x_i can be recovered from the λ_i^j and θ_i^k :

x_i is expressed as a convex combination of extreme points of P_i plus a conical combination of extreme rays of P_i.

Page 13:

Decomposition algorithm

Suppose we have a b.f.s. to the master problem with dual vector (q, r_1, r_2). Then the reduced costs are (for λ_1^j, θ_1^k)

(c_1' − q'D_1)x_1^j − r_1 for λ_1^j,  (c_1' − q'D_1)w_1^k for θ_1^k.

A variable may enter if its reduced cost < 0.

Hence solve min (c_1' − q'D_1)x_1,  s.t. x_1 ∈ P_1 (called the subproblem).

Page 14:

(a) Optimal cost is −∞ : the subproblem returns an extreme ray w_1^k with (c_1' − q'D_1)w_1^k < 0.

Generate a column for θ_1^k, i.e. ( D_1w_1^k, 0, 0 )'.

(b) Optimal cost finite and < r_1 : the subproblem returns an extreme point x_1^j with (c_1' − q'D_1)x_1^j < r_1.

Generate a column for λ_1^j, i.e. ( D_1x_1^j, 1, 0 )'.

(c) Optimal cost ≥ r_1 :

no entering variable among λ_1^j, θ_1^k. Perform the same for x_2.

The method can also be used when there are more than two blocks, or just one block, in the constraints.

(see ex. 6.2.)
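The case analysis for one block can be sketched as follows, under the simplifying assumption that P_1 is bounded and given explicitly by its extreme points, so case (a) cannot occur (Python; all names and data are illustrative — in practice the subproblem is solved as an LP over P_1):

```python
def price_block(c, D, q, r, extreme_points, tol=1e-9):
    """Pricing for one block: minimize (c' - q'D)x over the given extreme
    points. Returns ('enter', x^j) if the minimum z_1 < r (case (b)),
    else ('optimal', None) (case (c))."""
    def mod_cost(x):
        # modified cost (c' - q'D)x
        qD = [sum(q[i] * D[i][k] for i in range(len(q))) for k in range(len(x))]
        return sum((ck - qdk) * xk for ck, qdk, xk in zip(c, qD, x))
    best = min(extreme_points, key=mod_cost)
    z1 = mod_cost(best)
    return ("enter", best) if z1 < r - tol else ("optimal", None)

# Toy block: c_1 = (1, 2), one coupling row D_1 = [1, 1], dual q = (0.5),
# convexity dual r_1 = 1. Modified costs of the points are 0, 2.0, 4.5,
# so the minimum 0 < r_1 and the first point prices out (case (b)).
c1, D1, q = [1.0, 2.0], [[1.0, 1.0]], [0.5]
ext = [[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]]
print(price_block(c1, D1, q, 1.0, ext))  # ('enter', [0.0, 0.0])
```

For the returned extreme point x_1^j, the column ( D_1x_1^j, 1, 0 )' would be added to the master problem.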

Page 15:

Starting the algorithm

Find extreme points x_1^1 of P_1 and x_2^1 of P_2.

May assume that b_0 − D_1x_1^1 − D_2x_2^1 ≥ 0, then solve the phase-I problem with artificial variables z :

min Σ_{i=1}^{m_0} z_i,  s.t. Σ_{j∈J_1} λ_1^j D_1x_1^j + Σ_{k∈K_1} θ_1^k D_1w_1^k + Σ_{j∈J_2} λ_2^j D_2x_2^j + Σ_{k∈K_2} θ_2^k D_2w_2^k + z = b_0, the two convexity constraints, λ, θ, z ≥ 0.

Initial b.f.s.: λ_1^1 = 1, λ_2^1 = 1, all other λ_i^j = 0, θ_i^k = 0,

z = b_0 − D_1x_1^1 − D_2x_2^1.

Page 16:

Termination and computational experience

Fast improvement in early iterations, but convergence becomes slow in the tail of the sequence.

Revised simplex is more competitive in terms of running time.

Suitable for large, structured problems.

Research on improving the convergence speed: stabilized column generation. Think in the dual space: how can a dual optimal solution be obtained fast?

An advantage of the decomposition approach also lies in the capability to handle (isolate) difficult structures in the subproblem when we consider large integer programs (e.g., constrained shortest path, robust knapsack problem types).

Page 17:

Bounds on the optimal cost

Thm 6.1 : Suppose the optimal cost z* is finite. Let z be the cost of the current best solution (an upper bound on z*), r_i the dual variable value for the i-th convexity constraint, and z_i the finite optimal cost of the i-th subproblem. Then z + Σ_i (z_i − r_i) ≤ z* ≤ z.

pf) Modify the current dual solution to a dual feasible solution by decreasing the value of r_i to z_i.

The dual of the master problem is

max q'b_0 + r_1 + r_2

s.t. q'D_1x_1^j + r_1 ≤ c_1'x_1^j for all j ∈ J_1, q'D_1w_1^k ≤ c_1'w_1^k for all k ∈ K_1 (and similarly for block 2).

Page 18:

(continued)

Suppose we have a b.f.s. to the master problem with cost z and dual solution (q, r_1, r_2).

Have z = q'b_0 + r_1 + r_2.

The optimal cost z_1 of the first subproblem is finite ⇒ q'D_1w_1^k ≤ c_1'w_1^k for all extreme rays w_1^k, and q'D_1x_1^j + z_1 ≤ c_1'x_1^j for all extreme points x_1^j.

Note that currently z_1 − r_1 is the minimum reduced cost among the variables λ_1^j (the candidate entering variables).

If we use z_1 in place of r_1, we get dual feasibility for the first two sets of dual constraints.

Similarly, use z_2 in place of r_2.

The cost of this dual feasible solution is q'b_0 + z_1 + z_2 = z + (z_1 − r_1) + (z_2 − r_2), which is ≤ z* by weak duality.
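A numeric illustration of the bound in Thm 6.1 (all values hypothetical):

```python
# Current master cost z = q'b_0 + r_1 + r_2 is an upper bound on z*; with
# finite subproblem optima z_1, z_2, the modified dual solution yields the
# lower bound z + (z_1 - r_1) + (z_2 - r_2) <= z*.
z, r1, r2 = 10.0, 3.0, 2.0   # current cost and convexity duals (hypothetical)
z1, z2 = 2.0, 1.5            # finite subproblem optimal costs (hypothetical)
lower = z + (z1 - r1) + (z2 - r2)
print(lower)  # 8.5, so 8.5 <= z* <= 10.0
```

The gap between the two bounds can be used as a stopping criterion for the decomposition algorithm.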