L21 Numerical Methods part 1
• Homework
• Review
• Search problem
• Line Search methods
• Summary

Test 4: Wednesday
Problem 8.95

Min f(x) = 20x1 - 6x2
s.t.
  3x1 - x2 >= 3
  4x1 - 3x2 = 8
  xi >= 0

Standard form, with surplus variable x3 and artificial variables x4 and x5:

Min f = 20x1 - 6x2
s.t.
  3x1 - x2 - x3 + x4 = 3
  4x1 - 3x2 + x5 = 8
  xi >= 0, i = 1, ..., 5

Artificial variables and artificial cost function:

  x4 = 3 - (3x1 - x2 - x3)
  x5 = 8 - (4x1 - 3x2)
  w = x4 + x5 = 11 - 7x1 + 4x2 + x3

H20 cont'd
row  basic     x1  x2        x3        x4        x5  b    b/a_pivot
a    x4        3   -1        -1        1         0   3    3/3
b    x5        4   -3        0         0         1   8    8/4
c    cost      20  -6        0         0         0   0
d    art cost  -7  4         1         0         0   -11

row  basic     x1  x2        x3        x4        x5  b    b/a_pivot
e    x1        1   -0.33333  -0.33333  0.333333  0   1    neg
f    x5        0   -1.66667  1.333333  -1.33333  1   4    4/1.333 = 3
g    cost      0   0.666667  6.666667  -6.66667  0   -20  neg
h    art cost  0   1.666667  -1.33333  2.333333  0   -4

row  basic     x1  x2     x3  x4  x5    b
j    x1        1   -0.75  0   0   0.25  2
k    x3        0   -1.25  1   -1  0.75  3
l    cost      0   9      0   0   -5    -40   f = 40
m    art cost  0   0      0   1   1     0     w = 0

Lagrange multipliers: y1 = 0, y2 = -5
Optimum: x1* = 2, x2* = 0, f* = 40
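As a cross-check on the hand simplex result, the same LP can be handed to an off-the-shelf solver. A minimal sketch, assuming scipy is installed (this is an independent check, not the course's tableau method):

```python
# Cross-check the tableau result with scipy's linprog.
from scipy.optimize import linprog

# Min f = 20*x1 - 6*x2
# s.t. 3*x1 - x2 >= 3   (rewritten as -3*x1 + x2 <= -3)
#      4*x1 - 3*x2 = 8
#      x1, x2 >= 0
res = linprog(c=[20, -6],
              A_ub=[[-3, 1]], b_ub=[-3],
              A_eq=[[4, -3]], b_eq=[8],
              bounds=[(0, None), (0, None)],
              method="highs")
print(res.x, res.fun)   # expect x* = (2, 0), f* = 40
```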
H20 cont’d
Design Variables
  Symbol  Description             Value  Units
  x1      logs from F1 to Mill A  0      logs
  x2      logs from F2 to Mill A  0      logs
  x3      logs from F1 to Mill B  200    logs
  x4      logs from F2 to Mill B  100    logs

Cost f(x) = 52400 dollars

Constraints               LHS       RHS
  g1  mill A capacity     0    <=   240
  g2  mill B capacity     300  <=   270
  g3  forest 1 yield      200  <=   200
  g4  forest 2 yield      100  <=   200
  g5  demand              300  >=   350
Min Cost f(x) = 240x1 + 205x2 + 172x3 + 180x4
s.t.
  g1: x1 + x2 <= 240            (mill A capacity)
  g2: x3 + x4 <= 300            (mill B capacity)
  g3: x1 + x3 <= 200            (forest 1 yield)
  g4: x2 + x4 <= 200            (forest 2 yield)
  g5: x1 + x2 + x3 + x4 >= 300  (demand)
  xi >= 0

a. Increase cost "by" $0.16: fnew = $53,238, an increase of $838
b. Reduce mill A capacity to 200 logs/day: changes nothing
c. Reduce mill B capacity to 270 logs/day: increases cost by $750; the new optimal solution is x1 = 0, x2 = 30, x3 = 200, x4 = 70
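The logging model is small enough to verify directly. A sketch with scipy's linprog (assuming scipy is available; variable order x1..x4 as in the table above):

```python
# Verify the logging-problem optimum with scipy's linprog.
from scipy.optimize import linprog

c = [240, 205, 172, 180]            # cost per log on each route
A_ub = [[ 1,  1,  0,  0],           # g1: x1 + x2 <= 240 (mill A capacity)
        [ 0,  0,  1,  1],           # g2: x3 + x4 <= 300 (mill B capacity)
        [ 1,  0,  1,  0],           # g3: x1 + x3 <= 200 (forest 1 yield)
        [ 0,  1,  0,  1],           # g4: x2 + x4 <= 200 (forest 2 yield)
        [-1, -1, -1, -1]]           # g5: x1+x2+x3+x4 >= 300 (demand, negated)
b_ub = [240, 300, 200, 200, -300]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 4, method="highs")
print(res.x, res.fun)   # expect (0, 0, 200, 100) and 52400
```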
H20 cont’d
Objective Cell (Min)
  Cell  Name        Original Value  Final Value
  $C$9  f(x) Value  79700.00        52400.00

Variable Cells
  Cell  Name      Original Value  Final Value  Integer
  $C$4  x1 Value  100             0            Contin
  $C$5  x2 Value  100             0            Contin
  $C$6  x3 Value  100             200          Contin
  $C$7  x4 Value  100             100          Contin

Constraints
  Cell   Name    Cell Value  Formula       Status       Slack
  $C$12  g1 LHS  0           $C$12<=$E$12  Not Binding  240
  $C$13  g2 LHS  300         $C$13<=$E$13  Binding      0
  $C$14  g3 LHS  200         $C$14<=$E$14  Binding      0
  $C$15  g4 LHS  100         $C$15<=$E$15  Not Binding  100
  $C$16  g5 LHS  300         $C$16>=$E$16  Binding      0

Constraints (sensitivity)
  Cell   Name    Final Value  Shadow Price  Constraint R.H. Side  Allowable Increase  Allowable Decrease
  $C$12  g1 LHS  0            0             240                   1E+30               240
  $C$13  g2 LHS  300          -25           300                   0                   100
  $C$14  g3 LHS  200          -8            200                   100                 100
  $C$15  g4 LHS  100          0             200                   1E+30               100
  $C$16  g5 LHS  300          205           300                   100                 0
Constraint variation sensitivity at x*:

  ei = right-hand side of the ith constraint
  yi = Lagrange multiplier of the ith constraint

  yi = -∂f(x*)/∂ei
  Δf = f(new) - f(old) = -yi Δei = -yi (ei,new - ei,old)

Lagrange multiplier vs. Excel shadow price: the shadow price reports the rate of change of f directly. For mill B capacity:

  Δf = (-25)(270 - 300) = $750
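The shadow-price prediction can be checked numerically by re-solving the logging LP with the perturbed capacity; a sketch assuming scipy is available:

```python
# Re-solve the logging LP with mill B capacity cut from 300 to 270 and
# compare the cost change with shadow_price * delta_e = (-25)*(-30) = 750.
from scipy.optimize import linprog

c = [240, 205, 172, 180]

def total_cost(mill_b_cap):
    A_ub = [[1, 1, 0, 0], [0, 0, 1, 1], [1, 0, 1, 0], [0, 1, 0, 1],
            [-1, -1, -1, -1]]
    b_ub = [240, mill_b_cap, 200, 200, -300]
    return linprog(c, A_ub=A_ub, b_ub=b_ub,
                   bounds=[(0, None)] * 4, method="highs").fun

delta_f = total_cost(270) - total_cost(300)
print(delta_f)   # expect 750.0
```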
Sensitivity Analyses
How sensitive are:
a. the optimal value, f(x*), and
b. the optimal solution, x*,
… to the parameters (i.e. assumptions) in our model?
Model parameters
Min f = c1x1 + c2x2 + ... + cnxn
s.t.
  a11x1 + ... + a1nxn = b1
  a21x1 + ... + a2nxn = b2
  ...
  am1x1 + ... + amnxn = bm
  bi >= 0, i = 1 to m
  xj >= 0, j = 1 to n

In matrix form:

Min f = cᵀx
s.t. Ax = b, b >= 0, x >= 0

Consider your abc's, i.e. A, b and c
Simplex LaGrange Multipliers
  ei = the right-side parameter of the ith constraint
  yi = the Lagrange multiplier of the ith constraint

  yi = -∂f(x*)/∂ei
  Δf = f(x*)new - f(x*)old = -yi Δei = -yi (ei,new - ei,old)

  Constraint type:  <=         =           >=
  Extra variable:   slack      either      surplus
  c' column:        "regular"  artificial  artificial
  Multiplier sign:  yi >= 0    yi free     yi <= 0

Find the multipliers in the final tableau (right side).
Let’s minimize f even further
With multipliers y1 = 0, y2 = 5/3, y3 = -7/3:

  Δf = y1 Δe1 + y2 Δe2 + y3 Δe3
     = (0)(1) + (5/3)(1) + (-7/3)(-1)
     = 12/3 = 4

Increase/decrease ei to reduce f(x).
Is there more to Optimization?
• Simplex is great… but…
• Many problems are non-linear
• Many of these cannot be "linearized"

Need other methods!
General Optimization Algorithms:
• Sub-problem A: Which direction to head next?
• Sub-problem B: How far to go in that direction?
Magnitude and direction
Let u be a unit vector of length 1, parallel to a. Then

  a = mag(a) u = ||a|| u,  e.g.  a = 4u

Alpha = magnitude or step size (i.e. scalar)
Unit vector = direction (i.e. vector)
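In code, the magnitude/direction split is just a norm and a division. A minimal numpy sketch (the vector a = (3, 4) is an assumed example):

```python
# Split a vector into magnitude (scalar) and direction (unit vector).
import numpy as np

a = np.array([3.0, 4.0])
mag = np.linalg.norm(a)    # magnitude ||a||
u = a / mag                # unit vector parallel to a, length 1
# a is recovered as (magnitude) * (direction), like the a = 4u example
print(mag, u)              # 5.0 [0.6 0.8]
```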
Figure 10.2 Conceptual diagram for iterative steps of an optimization method.

We are here. Which direction should we head?
Minimize f(x): let's go downhill!
Expand f in a Taylor series about x*:

  f(x) = f(x*) + ∇f(x*)ᵀ(x - x*) + (1/2)(x - x*)ᵀ H(x*) (x - x*) + R

Let d = x - x*, i.e. x = x* + d, or x(new) = x(old) + d; then

  f(x* + d) = f(x*) + ∇f(x*)ᵀd + (1/2) dᵀ H d + R

Descent condition:

  ∇f(x*)ᵀ d < 0

Let c = ∇f(x*); then the condition is

  c · d < 0   (a scalar)
Dot Product

  a · u = ||a|| ||u|| cos(θ)

  a · u = a scalar (i.e. a number)

At what angle does the dot product become most negative? That angle gives maximum descent.

  ∇f(x*)ᵀ d = c · d < 0
Desirable Direction
  c · d = ||c|| ||d|| cos(θ)

Let d = -c:

  c · d = c · (-c) = ||c||² cos(180°) = ||c||² (-1) = -||c||² < 0

Descent is guaranteed!
Ex: Using the “descent condition”
Given f(x) = 3x1² + 2x1 + 2x2² + 7, determine whether d = (-1, 1) is a descent direction at x = (2, 1).

Gradient:

  c = ∇f(x) = [6x1 + 2, 4x2] = [6(2) + 2, 4(1)] = [14, 4]

Descent check (c · d = Σ ci di = cᵀd):

  c · d = c1 d1 + c2 d2 = 14(-1) + 4(1) = -10 < 0

The descent condition holds, so d is a descent direction.
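The same check is one dot product in numpy; the quadratic below is the example's f as reconstructed from the numbers shown:

```python
# Descent-condition check: c . d < 0 for f(x) = 3*x1**2 + 2*x1 + 2*x2**2 + 7.
import numpy as np

def grad_f(x):
    # gradient: c = (6*x1 + 2, 4*x2)
    return np.array([6 * x[0] + 2, 4 * x[1]])

x = np.array([2.0, 1.0])
d = np.array([-1.0, 1.0])
c = grad_f(x)          # c = (14, 4)
print(c @ d)           # 14*(-1) + 4*1 = -10 < 0, so d is a descent direction
```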
Step Size?
How big should we make alpha? Can we step too "far"?
i.e. can our step size be chosen so big that we step over the "minimum"?
Figure 10.5 Nonunimodal function f(α) for α >= 0.

Nonunimodal functions
Unimodal if we stay in the local neighborhood?
Monotonic Increasing Functions

Monotonic Decreasing Functions

Figure 10.4 Unimodal function f(α).

Unimodal functions:
• monotonic increasing then monotonic decreasing, or
• monotonic decreasing then monotonic increasing
Some Step Size Methods
• "Analytical"
  - Search direction = (-) gradient (i.e. line search)
  - Form line search function f(α)
  - Find f'(α) = 0
• Region Elimination ("interval reducing")
  - Equal interval
  - Alternate equal interval
  - Golden Section
Figure 10.3 Graph of f(α) versus α.
Analytical Step size
Given d(k) and x(old) = x(k), let

  x(new) = x(old) + α d,  i.e.  x(k+1) = x(k) + α(k) d(k)

Then the line search function is

  f(α) = f(x(k+1)) = f(x(k) + α d(k))

Set f'(α) = 0. The slope of the line search is

  f'(α) = c · d
Analytical Step Size Example
Given f(x) = (x1 - 2)² + (x2 - 1)², at x(0) = (4, 4) with d = -c:
find the optimal step size α* and f(α*).

Gradient and direction:

  c = ∇f(x) = [2(x1 - 2), 2(x2 - 1)] = [2(4 - 2), 2(4 - 1)] = [4, 6]
  d = -c = [-4, -6]

Line search function:

  f(α) = ((4 - 4α) - 2)² + ((4 - 6α) - 1)²
       = 52α² - 52α + 13

  f'(α) = 2(52α) - 52 = 0  →  α* = 1/2

New point, x(1) = x(0) + α* d:

  x1 = 4 + (1/2)(-4) = 2
  x2 = 4 + (1/2)(-6) = 1

  f(x(1)) = (2 - 2)² + (1 - 1)² = 0
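For a quadratic like this one, the exact step also comes out in closed form as α* = cᵀc / (cᵀHc), where H is the Hessian; a numpy sketch of the example:

```python
# Analytical step size for a quadratic: alpha* = (c.c)/(c.H.c).
import numpy as np

def f(x):
    return (x[0] - 2) ** 2 + (x[1] - 1) ** 2

def grad_f(x):
    return np.array([2 * (x[0] - 2), 2 * (x[1] - 1)])

x0 = np.array([4.0, 4.0])
c = grad_f(x0)                    # (4, 6)
d = -c                            # steepest-descent direction (-4, -6)
H = 2 * np.eye(2)                 # Hessian of this quadratic
alpha = (c @ c) / (c @ H @ c)     # 52 / 104 = 0.5
x1 = x0 + alpha * d
print(alpha, x1, f(x1))           # 0.5 [2. 1.] 0.0
```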
Alternative Analytical Step Size
Start from the same line search:

  f(α) = f(x(k) + α d(k)),  set f'(α) = 0

By the chain rule,

  f'(α) = ∇f(x(k+1))ᵀ ∂x(k+1)/∂α

Since x(k+1) = x(k) + α d(k),

  ∂x(k+1)/∂α = d(k)

so

  f'(α) = ∇f(x(k+1))ᵀ d(k) = c(k+1) · d(k) = 0

Example (same f, x(0) = (4, 4), d = (-4, -6)):

  x(k+1): x1 = 4 + α(-4) = 4 - 4α,  x2 = 4 + α(-6) = 4 - 6α

  c(k+1) = [2(x1 - 2), 2(x2 - 1)]
         = [2(4 - 4α - 2), 2(4 - 6α - 1)]
         = [4 - 8α, 6 - 12α]

Set c(k+1) · d(k) = 0:

  (4 - 8α)(-4) + (6 - 12α)(-6) = 0
  4(4 - 8α) + 6(6 - 12α) = 0
  16 - 32α + 36 - 72α = 0
  52 - 104α = 0
  α* = 52/104 = 1/2

The new gradient must be orthogonal to d for f'(α) = 0.
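The orthogonality claim is easy to verify numerically: the slope c(α)·d of the line search is -52 at α = 0 and vanishes at α* = 1/2. A numpy sketch:

```python
# Verify that grad f(x0 + alpha*d) . d = 0 at the optimal step size.
import numpy as np

def grad_f(x):
    return np.array([2 * (x[0] - 2), 2 * (x[1] - 1)])

x0 = np.array([4.0, 4.0])
d = -grad_f(x0)                       # (-4, -6)

def slope(alpha):
    # f'(alpha) = grad f(x0 + alpha*d) . d
    return grad_f(x0 + alpha * d) @ d

print(slope(0.0), slope(0.5))         # -52.0 at alpha = 0, 0.0 at alpha*
```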
Some Step Size Methods
• "Analytical"
  - Search direction = (-) gradient (i.e. line search)
  - Form line search function f(α)
  - Find f'(α) = 0
• Region Elimination ("interval reducing")
  - Equal interval
  - Alternate equal interval
  - Golden Section
Figure 10.6 Equal-interval search process. (a) Phase I: initial bracketing of minimum. (b) Phase II: reducing the interval of uncertainty.

"Interval Reducing" (region elimination):
• Phase I: "bounding phase"
• Phase II: "interval reduction phase"

The interval of uncertainty is I = α_u - α_l; each equal-interval pass leaves an interval of width 2δ ("2 delta!").
Successive-Equal Interval Algorithm
f(x) = 2 - 4x + exp(x), starting with x_lower = -5, x_upper = 5.

Pass 1 (δ = 1):

  x        f(x)
  -5.0000  22.0067
  -4.0000  18.0183
  -3.0000  14.0498
  -2.0000  10.1353
  -1.0000  6.3679
  0.0000   3.0000
  1.0000   0.7183
  2.0000   1.3891
  3.0000   10.0855
  4.0000   40.5982
  5.0000   130.4132

  New interval of uncertainty: [0, 2]

Pass 2 (δ = 0.2):

  x       f(x)
  0.0000  3.0000
  0.2000  2.4214
  0.4000  1.8918
  0.6000  1.4221
  0.8000  1.0255
  1.0000  0.7183
  1.2000  0.5201
  1.4000  0.4552
  1.6000  0.5530
  1.8000  0.8496
  2.0000  1.3891

  New interval of uncertainty: [1.2, 1.6]

Pass 3 (δ = 0.04):

  x       f(x)
  1.2000  0.5201
  1.2400  0.4956
  1.2800  0.4766
  1.3200  0.4634
  1.3600  0.4562
  1.4000  0.4552
  1.4400  0.4607
  1.4800  0.4729
  1.5200  0.4922
  1.5600  0.5188
  1.6000  0.5530

  New interval of uncertainty: [1.36, 1.44]

Pass 4 (δ = 0.008):

  x       f(x)
  1.3600  0.456193
  1.3680  0.455488
  1.3760  0.455034
  1.3840  0.454833
  1.3920  0.454888
  1.4000  0.455200
  1.4080  0.455772
  1.4160  0.456605
  1.4240  0.457702
  1.4320  0.459065
  1.4400  0.460696
Successive Equal Interval Search
• Very robust
• Works for continuous and discrete functions
• Lots of f(x) evaluations!!!
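The passes above can be generated by a short routine; a sketch assuming a shrink factor of 5 for δ (which reproduces 1 → 0.2 → 0.04 → 0.008) and an assumed stopping tolerance:

```python
# Successive equal-interval search for f(x) = 2 - 4x + exp(x).
import math

def f(x):
    return 2 - 4 * x + math.exp(x)

def successive_equal_interval(f, x_low, x_high, delta, tol=1e-4):
    best = x_low
    while delta > tol:
        n = int(round((x_high - x_low) / delta))
        xs = [x_low + i * delta for i in range(n + 1)]
        best = min(xs, key=f)                        # grid point with smallest f
        x_low, x_high = best - delta, best + delta   # new interval is 2*delta wide
        delta /= 5                                   # shrink the step and rescan
    return best

x_star = successive_equal_interval(f, -5.0, 5.0, 1.0)
print(x_star)      # close to the true minimizer ln 4 ≈ 1.3863
```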
Figure 10.7 Graphic of an alternate equal-interval solution process.

Alternate equal interval
Which region to reject?
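Golden-section search, the last method in the list, answers "which region to reject?" by comparing two interior points; a minimal sketch on the same f(x) = 2 - 4x + exp(x):

```python
# Golden-section search: reject the region beyond the worse interior point.
import math

def golden_section(f, a, b, tol=1e-6):
    inv_phi = (math.sqrt(5) - 1) / 2        # ~0.618, golden-ratio fraction
    x1, x2 = b - inv_phi * (b - a), a + inv_phi * (b - a)
    f1, f2 = f(x1), f(x2)
    while b - a > tol:
        if f1 < f2:            # minimum cannot lie in (x2, b]: reject it
            b, x2, f2 = x2, x1, f1
            x1 = b - inv_phi * (b - a)
            f1 = f(x1)
        else:                  # minimum cannot lie in [a, x1): reject it
            a, x1, f1 = x1, x2, f2
            x2 = a + inv_phi * (b - a)
            f2 = f(x2)
    return (a + b) / 2

x_star = golden_section(lambda x: 2 - 4 * x + math.exp(x), 0.0, 2.0)
print(x_star)      # converges to ln 4 ≈ 1.3863
```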
33
Summary• Sensitivity Analyses add value to your solutions• Sensitivity is as simple as Abc’s• Constraint variation sensitivity theorem can
answer simple resource limits questions• General Opt Alg’ms have two sub problems:
search direction, and step size• In local neighborhood.. Assume uimodal!• Descent condition assures correct direction• Step size methods: analytical, region elimin.
34