Karmarkar Algorithm
Anup Das, Aissan Dalvandi, Narmada Sambaturu, Seung Min, and Le Tuan Anh
Contents
• Overview
• Projective transformation
• Orthogonal projection
• Complexity analysis
• Transformation to Karmarkar’s canonical form
LP Solutions
• Simplex — Dantzig, 1947
• Ellipsoid — Khachiyan, 1979
• Interior point
  • Affine method, 1967
  • Log barrier method, 1977
  • Projective method, 1984 — Narendra Karmarkar, from AT&T Bell Labs
Linear Programming

General LP:
Minimize c^T x
Subject to Ax <= b, x >= 0

Karmarkar's Canonical Form:
Minimize c^T x
Subject to Ax = 0, e^T x = 1, x >= 0
• Minimum value of c^T x is 0
Karmarkar's Algorithm
• Step 1: Take an initial point x^(0) = e/n
• Step 2: While c^T x^(k) > epsilon:
  • 2.1 Transformation: T such that x'^(k) = T(x^(k)) is the center e/n of the simplex
    • This gives us the LP problem in transformed space
  • 2.2 Orthogonal projection and movement: Project c' to the feasible region and move in the steepest descent direction.
    • This gives us the point x'^(k+1)
  • 2.3 Inverse transform: x^(k+1) = T^-1(x'^(k+1))
    • This gives us x^(k+1)
  • 2.4 Use x^(k+1) as the start point of the next iteration
    • Set k = k + 1
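The loop above can be sketched in Python/NumPy. This is a minimal illustration (not the Matlab demo mentioned later in the deck): the canonical-form instance at the bottom is made up, and alpha = 1/4 follows Karmarkar's original choice.

```python
import numpy as np

def karmarkar(A, c, alpha=0.25, iters=300):
    """Sketch of Karmarkar's iteration for the canonical form
    min c^T x  s.t.  A x = 0,  e^T x = 1,  x >= 0  (optimal value 0)."""
    n = A.shape[1]
    x = np.full(n, 1.0 / n)            # Step 1: start at the simplex center e/n
    r = 1.0 / np.sqrt(n * (n - 1))     # inradius of the simplex
    for _ in range(iters):
        D = np.diag(x)                 # Step 2.1: D = diag(x^(k)) defines T
        B = np.vstack([A @ D, np.ones(n)])
        c_prime = D @ c                # objective direction in transformed space
        # Step 2.2: project c' onto the null space of B, then step from e/n
        c_p = c_prime - B.T @ np.linalg.solve(B @ B.T, B @ c_prime)
        x_prime = np.full(n, 1.0 / n) - alpha * r * c_p / np.linalg.norm(c_p)
        # Step 2.3: inverse projective transformation (e^T D x' = x . x')
        x = D @ x_prime / (x @ x_prime)
    return x

# toy instance: minimize x3 subject to x1 - x2 = 0 on the simplex
A = np.array([[1.0, -1.0, 0.0]])
c = np.array([0.0, 0.0, 1.0])
x = karmarkar(A, c)
print(x)   # approaches the optimum (0.5, 0.5, 0)
```

The center e/n is feasible for any canonical-form instance, which is why Step 1 needs no search for a starting point.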
Step 2.1
• Transformation: T such that x'^(k) = T(x^(k)) is the center of the simplex — why?
• From the center, a ball of radius r (the inradius) lies entirely inside the feasible region, so a step of length up to r in any direction stays feasible.
[Figure: x^(k) is mapped to x'^(k), the simplex center, at distance r from every facet]
Step 2.2
• Projection and movement: Project c' to the feasible region and move in the steepest descent direction — why?
• Moving along -c_p (the projection of c' onto the feasible directions) decreases the objective as fast as possible without leaving the feasible region.
[Figure: c' and -c'; the projection of c' onto the feasible region]
Big Picture
[Figure: x^(k) is transformed to the center x'^(k); a step gives x'^(k+1); the inverse transform gives x^(k+1); repeating the process yields x^(k+2), ...]
Karmarkar's Algorithm
• Step 1: Take an initial point x^(0) = e/n
• Step 2: While c^T x^(k) > epsilon:
  • 2.1 Transformation: T such that x'^(k) = T(x^(k)) is the center e/n of the simplex
    • This gives us the LP problem in transformed space
  • 2.2 Orthogonal projection and movement: Project c' to the feasible region and move in the steepest descent direction.
    • This gives us the point x'^(k+1)
  • 2.3 Inverse transform: x^(k+1) = T^-1(x'^(k+1))
    • This gives us x^(k+1)
  • 2.4 Use x^(k+1) as the start point of the next iteration
    • Set k = k + 1
Projective Transformation
• Transformation: T such that x'^(k) = T(x^(k)) is the center of the simplex

Transform: x^(k) -> x'^(k) = e/n

x^(k) = [x_1^(k), x_2^(k), ..., x_n^(k)]^T maps to x'^(k) = [1/n, 1/n, ..., 1/n]^T = e/n

where e = [1 1 1 ... 1]^T
Projective transformation
• We define the transform such that T(x^(k)) = e/n
• We transform every other point x with respect to the point x^(k)
Mathematically:
• Defining D = diag(x_1^(k), ..., x_n^(k))
• projective transformation:
  • x' = T(x) = D^-1 x / (e^T D^-1 x)
• inverse transformation:
  • x = T^-1(x') = D x' / (e^T D x')
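These two maps can be sketched in a few lines of NumPy (the interior point xk and the test point y below are made-up data; the round trip T_inv(T(y)) = y is what the slides rely on):

```python
import numpy as np

def T(x, xk):
    """Projective transformation x' = D^-1 x / (e^T D^-1 x), D = diag(xk)."""
    z = x / xk                 # D^-1 x
    return z / z.sum()

def T_inv(x_prime, xk):
    """Inverse transformation x = D x' / (e^T D x')."""
    z = x_prime * xk           # D x'
    return z / z.sum()

xk = np.array([0.2, 0.3, 0.5])   # a strictly interior point of the simplex
print(T(xk, xk))                 # x^(k) itself maps to the center e/n

y = np.array([0.1, 0.6, 0.3])
print(T_inv(T(y, xk), xk))       # round trip recovers y
```

Note that both maps require x^(k) to be strictly positive, which the algorithm maintains by never stepping all the way to the simplex boundary.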
Problem transformation:
• projective transformation: x' = T(x) = D^-1 x / (e^T D^-1 x)
• inverse transformation: x = T^-1(x') = D x' / (e^T D x')

Transform:
Minimize c^T x
s.t. Ax = 0, e^T x = 1, x >= 0
becomes
Minimize c^T D x' / (e^T D x')
s.t. A D x' = 0, e^T x' = 1, x' >= 0

The new objective function is not a linear function.
Problem transformation:
Minimize c^T D x' / (e^T D x')
s.t. A D x' = 0, e^T x' = 1, x' >= 0

Convert (the optimal value is 0 and the denominator e^T D x' is positive, so minimizing the ratio is equivalent to minimizing the numerator):

Minimize c'^T x', with c' = Dc
s.t. A' x' = 0 with A' = AD, e^T x' = 1, x' >= 0
Karmarkar's Algorithm
• Step 1: Take an initial point x^(0) = e/n
• Step 2: While c^T x^(k) > epsilon:
  • 2.1 Transformation: T such that x'^(k) = T(x^(k)) is the center e/n of the simplex
    • This gives us the LP problem in transformed space
  • 2.2 Orthogonal projection and movement: Project c' to the feasible region and move in the steepest descent direction.
    • This gives us the point x'^(k+1)
  • 2.3 Inverse transform: x^(k+1) = T^-1(x'^(k+1))
    • This gives us x^(k+1)
  • 2.4 Use x^(k+1) as the start point of the next iteration
    • Set k = k + 1
Orthogonal Projection
Minimize c'^T x'
s.t. A' x' = 0, e^T x' = 1, x' >= 0
• Projecting c' onto the null space of B = [A'; e^T]
• projection: c_p = (I - B^T (B B^T)^-1 B) c'
• normalization: c_p / ||c_p||
Orthogonal Projection
• Given a plane and a point outside the plane, which direction should we move from the point to guarantee the fastest descent towards the plane?
• Answer: the perpendicular direction
• Consider the plane A'x' = 0, i.e. the null space N(A'), and the vector c'
[Figure: c' decomposed into its projection c_p onto N(A') and a component s perpendicular to N(A')]
Orthogonal Projection
• Let v_1 and v_2 be a basis of the plane N(A')
• The plane is spanned by the column space of V = [v_1 v_2]
• Write c' = c_p + s with c_p = Vw, so s = c' - Vw
• s is perpendicular to v_1 and v_2, so V^T (c' - Vw) = 0
• Solving: w = (V^T V)^-1 V^T c', hence c_p = V (V^T V)^-1 V^T c'
Orthogonal Projection
• P = V (V^T V)^-1 V^T = projection matrix with respect to the plane spanned by V
• We need to consider the vector c' = c_p + s; remember that s lies in the row space of A'
• I - A'^T (A' A'^T)^-1 A' is the projection matrix with respect to N(A')
Orthogonal Projection
• What is the direction of the movement?
• Steepest descent = -c_p
• We get the direction of the movement by projecting c' onto N(B), where B = [A'; e^T] (the row e^T keeps the step inside the hyperplane e^T x' = 1)
• Projection onto N(B): c_p = (I - B^T (B B^T)^-1 B) c'
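A quick NumPy check of this projection (A' and c' below are made-up data): the projected direction lies in N(B), so a step along -c_p preserves both A'x' = 0 and e^T x' = 1.

```python
import numpy as np

A_prime = np.array([[1.0, -2.0,  1.0, 0.0],
                    [0.0,  1.0, -1.0, 1.0]])
n = A_prime.shape[1]
B = np.vstack([A_prime, np.ones(n)])       # append the row e^T
c_prime = np.array([1.0, 2.0, -1.0, 0.5])

# projection matrix onto the null space N(B)
P = np.eye(n) - B.T @ np.linalg.inv(B @ B.T) @ B
c_p = P @ c_prime

print(np.allclose(B @ c_p, 0))    # c_p is a feasible direction
print(np.allclose(P @ c_p, c_p))  # projecting twice changes nothing (P^2 = P)
```

In practice one solves the system (B B^T) z = B c' instead of forming the inverse explicitly, but the explicit projection matrix matches the slides' notation.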
Calculate step size
• x'^(k+1) = e/n - alpha * r * c_p / ||c_p||, with step parameter 0 < alpha < 1 (Karmarkar used alpha = 1/4)
• r = radius of the incircle of the simplex = 1 / sqrt(n(n-1))
applied to:
Minimize c'^T x'
s.t. A' x' = 0, e^T x' = 1, x' >= 0
Movement and inverse transformation

Transform: x^(k) -> x'^(k) = e/n
Move: x'^(k) -> x'^(k+1) = e/n - alpha * r * c_p / ||c_p||
Inverse transform: x^(k+1) = D x'^(k+1) / (e^T D x'^(k+1))
Big Picture
[Figure: x^(k) is transformed to the center x'^(k); a step gives x'^(k+1); the inverse transform gives x^(k+1); repeating the process yields x^(k+2), ...]
Matlab Demo
Contents
• Overview
• Projective transformation
• Orthogonal projection
• Complexity analysis
• Transformation to Karmarkar's canonical form
Running Time
• Total complexity of an iterative algorithm =
(# of iterations) x (operations in each iteration)
• We will prove that the # of iterations = O(nL)
• Operations in each iteration = O(n^2.5) (amortized, using rank-one modifications)
• Therefore the running time of Karmarkar's algorithm = O(n^3.5 L)
# of iterations
• Let us calculate the change in the objective function value at the end of each iteration
• The objective function changes upon transformation
• Therefore use Karmarkar's potential function: f(x) = sum_j ln(c^T x / x_j)
• The potential function is invariant under transformations (up to some additive constants)
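A small sketch of the potential function. The toy instance and the pair of iterates below are illustrative: the second point is one projective-scaling step (alpha = 1/4) from the center on the instance min x3 s.t. x1 = x2, worked by hand.

```python
import numpy as np

def potential(c, x):
    """Karmarkar's potential: f(x) = sum_j ln(c^T x / x_j)
                                   = n ln(c^T x) - sum_j ln(x_j)."""
    return len(x) * np.log(c @ x) - np.log(x).sum()

c = np.array([0.0, 0.0, 1.0])
x0 = np.array([1/3, 1/3, 1/3])        # the simplex center
x1 = np.array([0.375, 0.375, 0.25])   # one Karmarkar step later

print(potential(c, x0))               # ~0: the two sums cancel at the center
print(potential(c, x1) < potential(c, x0))  # the potential decreased
```

The objective value only dropped from 1/3 to 1/4 here, but the potential combines that drop with the distance from the boundary, which is what makes the per-iteration decrease provable.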
# of iterations
• Improvement in the potential function per iteration: f(x^(k)) - f(x^(k+1)) >= delta, for some constant delta > 0
• Rearranging, f(x^(k+1)) <= f(x^(k)) - delta
# of iterations
• Since the potential function is invariant under transformations, we can use
f'(x'^(k)) - f'(x'^(k+1)),
where x'^(k) and x'^(k+1) are points in the transformed space.
• At each step in the transformed space, x'^(k+1) = e/n - alpha * r * c_p / ||c_p||
# of iterations
• Simplifying (using x'^(k) = e/n), we get
f'(x'^(k)) - f'(x'^(k+1)) = n ln(c'^T x'^(k) / c'^T x'^(k+1)) + sum_j ln(n x_j'^(k+1))
• For small t, ln(1 + t) ≈ t
• So suppose both alpha and the per-coordinate perturbations n x_j'^(k+1) - 1 are small
# of iterations
• It can be shown that the first term is at least alpha, while the second-order error terms are bounded by alpha^2 / (1 - alpha)
• The worst case would be if the projected direction makes the least possible progress on the objective
• Even in this case the improvement is at least delta = alpha - alpha^2 / (1 - alpha), a positive constant for suitable alpha (e.g. delta >= 1/8 for alpha = 1/4)
# of iterations
• Thus we have proved that f(x^(k+1)) <= f(x^(k)) - delta
• Thus f(x^(k)) <= f(x^(0)) - k*delta,
where k is the number of iterations to converge
• Plugging in the definition of the potential function,
sum_j ln(c^T x^(k) / x_j^(k)) <= sum_j ln(c^T x^(0) / x_j^(0)) - k*delta
# of iterations
• Rearranging, we get
n ln(c^T x^(k)) <= n ln(c^T x^(0)) - k*delta + sum_j ln(x_j^(k) / x_j^(0))
• Since x^(0) = e/n and x_j^(k) <= 1, the last sum is at most n ln n, so
c^T x^(k) <= n * exp(-k*delta/n) * c^T x^(0)
# of iterations
• Therefore there exists some constant C such that after k = C*n*L iterations,
c^T x^(k) <= 2^(-L) * c^T x^(0),
where L is the number of bits in the problem input
• 2^(-L) is sufficiently small to cause the algorithm to converge (the exact optimal vertex can then be recovered by rounding)
• Thus the number of iterations is in O(nL)
Rank-one modification
• The computational effort is dominated by forming and inverting B B^T, in particular the block A' A'^T
• Substituting A' = AD, we have A' A'^T = A D^2 A^T
• The only quantity that changes from step to step is D
• Intuition:
  • Let D_1 and D_2 be diagonal matrices
  • If D_1 and D_2 are close, the calculation of (A D_2^2 A^T)^-1 given (A D_1^2 A^T)^-1 should be cheap!
Method
• Define a diagonal matrix D', a "working approximation" to D in step k, such that
(d'_i / d_i)^2 stays within [1/2, 2] for every i
• We update D' by the following strategy (we will explain this scaling factor in later slides):
• For i = 1 to n do:
  if (d'_i / d_i)^2 falls outside [1/2, 2], then let d'_i = d_i and update (A D'^2 A^T)^-1
Rank-one modification (cont)
• We have A D'^2 A^T = (previous working matrix) + sum over changed entries i of (change in d'_i^2) * a_i a_i^T, where a_i is the i-th column of A
• Given the previous inverse, computation of the new inverse can be done in O(n^2) steps if D' and its predecessor differ in only one entry
  • => rank-one modification (Sherman-Morrison formula)
• If they differ in m entries, the complexity is O(m n^2)
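The rank-one update can be sketched with the Sherman-Morrison formula (the matrices below are random, for illustration only): changing one diagonal entry of D changes M = A D^2 A^T by a rank-one term, and the inverse can be patched in O(n^2) instead of recomputed in O(n^3).

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 6))
d = rng.uniform(1.0, 2.0, size=6)

M = A @ np.diag(d**2) @ A.T
M_inv = np.linalg.inv(M)

# change one diagonal entry: d[2] -> 3.0, so M_new = M + delta * u u^T
d2_new = 3.0
u = A[:, 2]                       # the column of A touched by the change
delta = d2_new**2 - d[2]**2

# Sherman-Morrison: (M + delta u u^T)^-1
#   = M^-1 - delta (M^-1 u)(M^-1 u)^T / (1 + delta u^T M^-1 u)
w = M_inv @ u
M_inv_new = M_inv - delta * np.outer(w, w) / (1.0 + delta * (u @ w))

d_new = d.copy(); d_new[2] = d2_new
direct = np.linalg.inv(A @ np.diag(d_new**2) @ A.T)
print(np.allclose(M_inv_new, direct))   # the cheap update matches
```

Only matrix-vector products appear in the update, which is where the O(n^2) per modified entry comes from.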
Performance Analysis
• In each step, the cost is (number of updated entries) x O(n^2) for the rank-one modifications, plus the remaining per-step work
• Substituting the update criterion, the task is to bound the total number of updating operations over all steps
Performance analysis - 2
• First we scale D by a factor so that it is comparable to D'
• Then for any entry i such that (d'_i / d_i)^2 falls outside [1/2, 2]
• We reset d'_i = d_i
• Define the discrepancy between d_i and d'_i as |ln(d'_i / d_i)|
• Then just after an updating operation for index i, the discrepancy for i is 0
Performance Analysis - 3
• Just before an updating operation for index i, the discrepancy for i is at least (1/2) ln 2
• Consider two successive updates for index i, say at steps k1 and k2, and assume that no updating operation was performed for index i in between
• Then the total movement of ln(d_i) over steps k1..k2 is at least (1/2) ln 2
Performance Analysis - 4
• Let m_i be the number of updating operations corresponding to index i in steps 1..k, and M = sum_i m_i be the total number of updating operations in those steps.
• We have M * (1/2) ln 2 <= total movement of all entries of ln(D) over the k steps
• Because the movement of D in each step is bounded (the step in the transformed space has length at most alpha * r), the total movement over k steps is O(k * sqrt(n)), so M = O(k * sqrt(n))
Performance Analysis - 5
• Hence the average number of updating operations per step is O(sqrt(n))
• So the amortized cost per step is O(sqrt(n)) x O(n^2) = O(n^2.5)
• Finally, over O(nL) iterations the total running time is O(nL) x O(n^2.5) = O(n^3.5 L)
Contents
• Overview
• Transformation to Karmarkar's canonical form
• Projective transformation
• Orthogonal projection
• Complexity analysis
Transform to canonical form

General LP:
Minimize c^T x
Subject to Ax >= b, x >= 0

Karmarkar's Canonical Form:
Minimize c^T x
Subject to Ax = 0, e^T x = 1, x >= 0
Step 1: Convert LP to a feasibility problem
• Combine the primal and dual problems
• The LP becomes a feasibility problem

Primal:
Minimize c^T x
Subject to Ax >= b, x >= 0

Dual:
Maximize b^T y
Subject to A^T y <= c, y >= 0

Combined:
c^T x = b^T y, Ax >= b, A^T y <= c, x >= 0, y >= 0
(by strong duality, any solution of the combined system is optimal for both problems)
Step 2: Convert inequality to equality
• Introduce slack and surplus variables:
Ax - u = b, u >= 0 (surplus)
A^T y + v = c, v >= 0 (slack)
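A minimal sketch of this step with made-up data: a >= constraint gets a surplus variable, a <= constraint gets a slack variable, and both systems become equalities over nonnegative vectors.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([4.0, 5.0])
c = np.array([3.0, 4.0])

x = np.array([2.0, 2.0])   # satisfies Ax >= b strictly
u = A @ x - b              # surplus variables, u >= 0
# equality form of the primal constraints: [A | -I] [x; u] = b
A_eq = np.hstack([A, -np.eye(2)])
print(np.allclose(A_eq @ np.concatenate([x, u]), b))

y = np.array([0.5, 0.5])   # satisfies A^T y <= c strictly
v = c - A.T @ y            # slack variables, v >= 0
print(np.allclose(A.T @ y + v, c))
```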
Step 3: Convert feasibility problem to LP
• Introduce an artificial variable lambda to create an interior starting point.
• Let x^0, y^0, u^0, v^0 be strictly interior points in the positive orthant.
• Minimize lambda
• Subject to
  • Ax - u + lambda (b - A x^0 + u^0) = b
  • A^T y + v + lambda (c - A^T y^0 - v^0) = c
  • c^T x - b^T y + lambda (b^T y^0 - c^T x^0) = 0
  • x, y, u, v, lambda >= 0
• (x^0, y^0, u^0, v^0, lambda = 1) is a strictly interior feasible point, and the minimum value of lambda is 0 exactly when the original system is feasible
Step 3: Convert feasibility problem to LP
• Change of notation: collect all variables into one vector and rename, so the problem reads
Minimize c^T x
Subject to Ax = b, x >= 0
(where the only nonzero entry of c corresponds to the artificial variable lambda)
Step 4: Introduce the constraint e^T y = 1
• Let a = [a_1, ..., a_n]^T be a strictly interior feasible point in the LP above
• Define a transformation:
  y_i = (x_i / a_i) / (1 + sum_j x_j / a_j) for i = 1, ..., n, and y_{n+1} = 1 - sum_{i=1}^n y_i
• Define the inverse transformation:
  x_i = a_i y_i / y_{n+1} for i = 1, ..., n
• The image satisfies e^T y = 1, y >= 0
Step 4: Introduce the constraint e^T y = 1 (cont)
• Let A_j denote the j-th column of A
• If Ax = b, then substituting x_j = a_j y_j / y_{n+1} and multiplying through by y_{n+1}:
  sum_j a_j y_j A_j - b y_{n+1} = 0
• We define the columns of a matrix A' by these equations:
  A'_j = a_j A_j for j = 1, ..., n
  A'_{n+1} = -b
• Then Ax = b becomes A' y = 0
Step 5: Get the minimum value of 0 (canonical form)
• Substituting x_j = a_j y_j / y_{n+1} into the objective c^T x
• We get (sum_j c_j a_j y_j) / y_{n+1}
• Define c' by c'_j = c_j a_j for j = 1, ..., n and c'_{n+1} = 0
• Then, since y_{n+1} > 0, c^T x = 0 implies c'^T y = 0
Step 5: Get the minimum value of 0 (canonical form)
• The transformed problem is
Minimize c'^T y
Subject to A' y = 0, e^T y = 1, y >= 0
References
• Narendra Karmarkar (1984). "A New Polynomial Time Algorithm for Linear Programming". Combinatorica 4(4): 373–395.
• Strang, Gilbert (1987). "Karmarkar's algorithm and its place in applied mathematics". The Mathematical Intelligencer 9(2): 4–10. doi:10.1007/BF03025891. ISSN 0343-6993. MR 883185.
• Vanderbei, Robert J.; Meketon, Marc; Freedman, Barry (1986). "A Modification of Karmarkar's Linear Programming Algorithm". Algorithmica 1: 395–407. doi:10.1007/BF01840454.
• Kolata, Gina (1989-03-12). "IDEAS & TRENDS; Mathematicians Are Troubled by Claims on Their Recipes". The New York Times.
• Gill, Philip E.; Murray, Walter; Saunders, Michael A.; Tomlin, J. A.; Wright, Margaret H. (1986). "On projected Newton barrier methods for linear programming and an equivalence to Karmarkar's projective method". Mathematical Programming 36(2): 183–209. doi:10.1007/BF02592025.
• Fiacco, Anthony V.; McCormick, Garth P. (1968). Nonlinear Programming: Sequential Unconstrained Minimization Techniques. New York: Wiley. ISBN 978-0-471-25810-0. Reprinted by SIAM in 1990 as ISBN 978-0-89871-254-4.
• Chin, Andrew (2009). "On Abstraction and Equivalence in Software Patent Doctrine: A Response to Bessen, Meurer and Klemens". Journal of Intellectual Property Law 16: 214–223.
Q&A