
Approximation Algorithms (TUNI) - elomaa/teach/ApprAl-17-1.pdf


Approximation Algorithms

Prof. Tapio Elomaa

[email protected]

Course Basics

• A 4 credit unit course
• Part of Theoretical Computer Science courses at the Laboratory of Mathematics
• There will be 4 hours of lectures per week
• Weekly exercises start next week
• We will assume familiarity with
  – Necessary mathematics
  – Basic programming

8-Mar-17 MAT-72606 ApprAl, Spring 2017


Organization & Timetable

• Lectures: Prof. Tapio Elomaa
  – Mon & Wed 12–14 in TB224
  – Mar. 6 – Apr. 26, 2017
  – Easter break: Apr. 12–18
• Exercises: Ph.D. Juho Lauri, Thu 12–14 in TB216
• Exam: Fri May 5, 2017 @ 13–16

Course Grading

• Exam: Maximum of 30 points
• Weekly exercises yield extra points
  – 40% of questions answered: 1 point
  – 80% answered: 6 points
  – In between: linear scale (so that decimals are possible)
• Hands-on project


Material

• The textbook of the course is
  – David P. Williamson & David B. Shmoys: The Design of Approximation Algorithms, Cambridge University Press, 2011
• There is no prepared material; the slides appear on the web as the lectures proceed
  – http://math.tut.fi/~elomaa/teach/72606.html
• The exam is based on the lectures (i.e., not only on the slides)

Content (Plan)

1. An introduction to approximation algorithms
2. Greedy algorithms and local search
3. Rounding data and dynamic programming
4. Deterministic rounding of linear programs
5. Random sampling and randomized rounding of linear programs
6. Randomized rounding of semidefinite programs
7. The primal-dual method
8. Cuts and metrics


1. Introduction

The whats and whys
The set cover problem
A deterministic rounding algorithm
Rounding a dual solution
The primal-dual method
A greedy algorithm
A randomized rounding algorithm

1.1 The whats and whys

• Many interesting discrete optimization problems are NP-hard
• If P ≠ NP, we can't have algorithms that
  1) find optimal solutions
  2) in polynomial time
  3) for any instance
• At least one of these requirements must be relaxed in any approach to dealing with an NP-hard optimization problem


Definition 1.1: An α-approximation algorithm for an optimization problem is
• a polynomial-time algorithm that
• for all instances of the problem produces a solution
• whose value is within a factor of α of the value of an optimal solution.

• We call α the performance guarantee of the algorithm
• It is also often called the approximation ratio or approximation factor of the algorithm

We will follow the convention that α > 1 for minimization problems, while α < 1 for maximization problems.

Definition 1.2: A polynomial-time approximation scheme (PTAS) is a family of algorithms {A_ε}, where
• there is an algorithm A_ε for each ε > 0, such that
• A_ε is a (1 + ε)-approximation algorithm (for minimization problems) or
• a (1 − ε)-approximation algorithm (for maximization problems).


• E.g., the knapsack problem and the Euclidean traveling salesman problem both have a PTAS
• A class of problems that is not so easy is MAX SNP
• It contains problems such as the maximum satisfiability problem and the maximum cut problem

Theorem 1.3: For any MAX SNP-hard problem, there does not exist a PTAS, unless P = NP.

• E.g., the maximum clique problem is very hard:
  – Input: an undirected graph G = (V, E)
  – Goal: find a maximum-size clique; i.e., find S ⊆ V that maximizes |S| so that for each pair i, j ∈ S, it must be the case that (i, j) ∈ E
• Almost any nontrivial approximation guarantee is most likely unattainable:

Theorem 1.4: Let n be the number of vertices in an input graph, and consider any constant ε > 0. Then there does not exist an O(n^(ε−1))-approximation algorithm for the maximum clique problem, unless P = NP.


• It is very easy to get a (1/n)-approximation algorithm for the problem:
  – just output a single vertex
• This gives a clique of size 1, whereas the size of the largest clique can be at most n, the number of vertices in the input
• Theorem 1.4 states that finding something only slightly better than this completely trivial approximation algorithm implies that P = NP

1.2 Linear programs: The set cover problem

• Given: a ground set of elements E = {e_1, …, e_n}, some subsets of those elements S_1, …, S_m where each S_j ⊆ E, and a nonnegative weight w_j ≥ 0 for each subset S_j
• Find: a minimum-weight collection of subsets that covers all of E;
  – i.e., we wish to find an I ⊆ {1, …, m} that minimizes Σ_{j∈I} w_j subject to ∪_{j∈I} S_j = E
• If w_j = 1 for each subset j, the problem is unweighted
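As a concrete illustration of the definition above, here is a tiny instance in Python. The instance data and helper names are made up for illustration, not taken from the lecture:

```python
def cover_weight(chosen, weights):
    """Total weight of the chosen index set I: sum of w_j over j in I."""
    return sum(weights[j] for j in chosen)

def is_cover(chosen, subsets, ground_set):
    """Check the covering constraint: the union of the chosen subsets is E."""
    covered = set()
    for j in chosen:
        covered |= subsets[j]
    return covered == ground_set

E = {1, 2, 3, 4}                              # ground set of elements
subsets = [{1, 2}, {2, 3}, {3, 4}, {1, 4}]    # S_1..S_4 (0-indexed here)
weights = [1.0, 1.0, 1.0, 1.0]                # unweighted case: w_j = 1

assert is_cover({0, 2}, subsets, E)           # S_1 ∪ S_3 = E
assert not is_cover({0, 1}, subsets, E)       # element 4 would be uncovered
assert cover_weight({0, 2}, weights) == 2.0   # an optimal cover here
```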


• The set cover problem was used in the development of an antivirus product that detects computer viruses
• The goal was to find salient features that occur in viruses designed for the boot sector of a computer, such that the features do not occur in typical computer applications
• These features were then incorporated into a neural network for detecting these boot sector viruses
• The elements of the set cover instance were the known boot sector viruses (about 150 at the time)

• Each set corresponded to some three-byte sequence occurring in these viruses but not in typical computer programs; there were about 21,000 such sequences
• Each set contained all the viruses that had the corresponding sequence somewhere in them
• The goal was to find a small number of such sequences (much smaller than 150) that would be useful for the neural network
• By using an approximation algorithm to solve the problem, a small set of sequences was found, and the neural network was able to detect many previously unanalyzed boot sector viruses


• The set cover problem also generalizes the vertex cover problem:
  – Given: an undirected graph G = (V, E) and a nonnegative weight w_v ≥ 0 for each vertex v ∈ V
  – Find: a minimum-weight subset of vertices C ⊆ V such that for each edge (u, v) ∈ E, either u ∈ C or v ∈ C
• Again, if w_v = 1 for each vertex v, the problem is unweighted

• To see that the vertex cover problem is a special case of the set cover problem:
• for any instance of the vertex cover problem, create an instance of the set cover problem in which
  – the ground set is the set of edges, and
  – a subset S_v of weight w_v is created for each vertex v, containing the edges incident to v
• It is not difficult to see that for any vertex cover C, there is a set cover I = C of the same weight, and vice versa
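The reduction just described can be sketched in a few lines of Python. The function name and the example graph are illustrative assumptions:

```python
def vertex_cover_to_set_cover(vertices, edges, vertex_weights):
    """Build the set cover instance described above: the ground set is the
    set of edges, and each vertex v yields a subset S_v of weight w_v
    containing the edges incident to v."""
    ground_set = set(edges)
    subsets = {v: {e for e in edges if v in e} for v in vertices}
    return ground_set, subsets, dict(vertex_weights)

vertices = [1, 2, 3, 4]
edges = [(1, 2), (1, 3), (2, 3), (2, 4)]
weights = {1: 1.0, 2: 1.0, 3: 1.0, 4: 1.0}

ground, subsets, w = vertex_cover_to_set_cover(vertices, edges, weights)
# The vertex cover {2, 3} corresponds to the set cover {S_2, S_3}
# of the same weight:
assert subsets[2] | subsets[3] == ground
```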


[Figure: an example graph with vertices 1, 2, 3, 4 and edges 1-2, 1-3, 2-3, 2-4; as a set cover instance, each vertex v yields a set S_v of its incident edges (S1–S4)]


• Linear programming plays a central role in the design and analysis of approximation algorithms
• Each linear program (LP) or integer program (IP) is formulated in terms of some number of decision variables that represent some sort of decision that needs to be made
• The variables are constrained by a number of linear inequalities and equalities called constraints
• Any assignment of real numbers to the variables such that all of the constraints are satisfied is called a feasible solution


• In the set cover problem, we need to decide which subsets to use in the solution
  – A decision variable x_j represents this choice
• We would like x_j to be 1 if the set S_j is included in the solution, and 0 otherwise
  – Thus, we introduce constraints x_j ≤ 1 and x_j ≥ 0 for all subsets S_j
• This does not guarantee that x_j ∈ {0, 1}, so we formulate the problem as an IP to exclude fractional solutions
• Requiring x_j to be an integer along with the constraints suffices to guarantee that x_j ∈ {0, 1}

• We also want to make sure that any feasible solution corresponds to a set cover, so we introduce additional constraints
• In order to ensure that every element e_i is covered, it must be the case that at least one of the subsets containing e_i is selected
• This will be the case if

  Σ_{j: e_i ∈ S_j} x_j ≥ 1,   for each e_i, i = 1, …, n


• In addition to the constraints, LPs and IPs are defined by a linear function of the decision variables called the objective function
• The LP or IP seeks to find a feasible solution that either maximizes or minimizes this objective function
  – Such a solution is called an optimal solution
• The value of the objective function for a particular feasible solution is the value of that solution
• The value of the objective function for an optimal solution is called the value of the LP or IP

• We solve the LP if we find an optimal solution
• In the case of the set cover problem, we want to find a set cover of minimum weight
• Given the decision variables and constraints described above, the weight of a set cover is Σ_{j=1}^{m} w_j x_j
• Thus, the objective function of the IP is Σ_{j=1}^{m} w_j x_j, and we wish to minimize this function


• IPs and LPs are usually written in a compact form stating first the objective function and then the constraints
• The problem of finding a minimum-weight set cover is equivalent to the following IP:

  minimize   Σ_{j=1}^{m} w_j x_j
  subject to Σ_{j: e_i ∈ S_j} x_j ≥ 1,   i = 1, …, n,
             x_j ∈ {0, 1},               j = 1, …, m

• Let Z*_IP denote the optimum value of this IP for a given instance of the set cover problem
• Since the IP exactly models the problem, we have that Z*_IP = OPT, where OPT is the value of an optimum solution to the set cover problem
• In general, IPs cannot be solved in polynomial time
• This is clear because the set cover problem is NP-hard, so solving the IP above for any set cover input in polynomial time would imply that P = NP


• However, LPs are polynomial-time solvable
• We can no longer require the variables to be integers
• Even in cases such as the set cover problem, we are still able to derive useful information from LPs
• If we replace the constraints x_j ∈ {0, 1} with the constraints x_j ≥ 0, we obtain the following LP, which can be solved in polynomial time:

  minimize   Σ_{j=1}^{m} w_j x_j
  subject to Σ_{j: e_i ∈ S_j} x_j ≥ 1,   i = 1, …, n,
             x_j ≥ 0,                    j = 1, …, m

• We could also add the constraints x_j ≤ 1 for each j = 1, …, m, but they would be redundant:
  – in any optimal solution to the problem, we can reduce any x_j > 1 to x_j = 1 without affecting the feasibility of the solution and without increasing its cost
• The LP is a relaxation of the original IP:
  1. every feasible solution for the original IP is feasible for this LP; and
  2. any feasible solution for the IP has the same value in the LP


• To see that the LP is a relaxation, note that any solution for the IP such that
  – x_j ∈ {0, 1} for each j = 1, …, m and
  – Σ_{j: e_i ∈ S_j} x_j ≥ 1 for each i = 1, …, n
  will certainly satisfy all the constraints of the LP
• Furthermore, the objective functions of both the IP and the LP are the same, so that any feasible solution for the IP has the same value for the LP
• Let Z*_LP denote the optimum value of this LP
• Any optimal solution to the IP is feasible for the LP and has value Z*_IP

• Thus, any optimal solution to the LP will have value Z*_LP ≤ Z*_IP = OPT, since this minimization LP finds a feasible solution of lowest possible value
• Using a polynomial-time solvable relaxation in order to obtain a lower bound (in minimization) or an upper bound (in maximization) on the optimum value of the problem is a common concept
• We will see that a fractional solution to the LP can be rounded to a solution to the IP of objective function value that is within a certain factor α of the value of the LP
• Thus, the integer solution will cost at most α · OPT


1.3 A deterministic rounding algorithm

• Suppose that we solve the LP relaxation of the set cover problem
• Let x* denote an optimal solution to the LP
• How then can we recover a solution to the set cover problem?
• Given the LP solution x*, we include subset S_j in our solution iff x*_j ≥ 1/f, where f is the maximum number of sets in which any element appears

• More formally, let f_i = |{j : e_i ∈ S_j}| be the number of sets in which element e_i appears, i = 1, …, n; then f = max_{i=1,…,n} f_i
• Let I denote the indices j of the subsets in this solution
• In effect, we round the fractional solution x* to an integer solution x̂ by setting x̂_j = 1 if x*_j ≥ 1/f, and x̂_j = 0 otherwise
• It is easy to prove that x̂ is a feasible solution to the IP, and I indeed indexes a set cover
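The rounding step can be sketched as follows. The fractional solution x_star is assumed to come from an LP solver; the values below are hand-picked so that they are feasible for a small made-up instance:

```python
def round_lp_solution(x_star, subsets, elements):
    """Include S_j whenever x*_j >= 1/f, where f is the maximum number
    of subsets any element appears in. Returns (I, f)."""
    f = max(sum(1 for S in subsets if e in S) for e in elements)
    return {j for j, xj in enumerate(x_star) if xj >= 1.0 / f}, f

elements = {1, 2, 3, 4}
subsets = [{1, 2}, {2, 3}, {3, 4}, {1, 4}]
# Hand-picked feasible fractional solution: every element is covered
# to a total of at least 1.
x_star = [0.5, 0.5, 0.5, 0.5]

I, f = round_lp_solution(x_star, subsets, elements)
assert f == 2                       # each element lies in exactly two sets
assert I == {0, 1, 2, 3}            # all x*_j >= 1/2, so every set is chosen
covered = set().union(*(subsets[j] for j in I))
assert covered == elements          # Lemma 1.5: I indexes a set cover
```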


Lemma 1.5: The collection of subsets S_j, j ∈ I, is a set cover.
Proof: Consider the solution specified by the lemma. An element e_i is covered if this solution contains some subset containing e_i. We show that each element e_i is covered: Because the optimal solution x* is a feasible solution to the LP, we know that Σ_{j: e_i ∈ S_j} x*_j ≥ 1 for element e_i.

By the definition of f_i and f, there are f_i ≤ f terms in the sum, so at least one term must be at least 1/f. Thus, for some j such that e_i ∈ S_j, x*_j ≥ 1/f. Therefore, j ∈ I, and element e_i is covered.

Theorem 1.6: The rounding algorithm is an f-approximation algorithm for the set cover problem.

Proof: It is clear that the algorithm runs in polynomial time. By our construction, 1 ≤ f · x*_j for each j ∈ I. From this, and the fact that each term f · w_j · x*_j is nonnegative for j = 1, …, m, we see that

  Σ_{j∈I} w_j ≤ Σ_{j=1}^{m} w_j · (f · x*_j) = f · Z*_LP ≤ f · OPT,

where the final inequality follows from the fact that Z*_LP ≤ OPT.


• In the vertex cover problem, f = 2, since each edge is incident to exactly two vertices
• Thus, the rounding algorithm gives a 2-approximation algorithm for the vertex cover problem
• This particular algorithm allows us to have an a fortiori guarantee for each input
• We know that for any input, the solution produced has cost at most a factor of f more than the cost of an optimal solution
• We can, for any input, compare the value of the solution we find with the value of the LP relaxation

• If the algorithm finds a set cover I, let

  α = Σ_{j∈I} w_j / Z*_LP

• From the proof above, we know that α ≤ f
• However, for any given input, it could be the case that α is significantly smaller than f; in this case we know that

  Σ_{j∈I} w_j = α · Z*_LP ≤ α · OPT,

  and the solution is within a factor of α of optimal
• The algorithm can easily compute α, given that it computes f and solves the LP relaxation


1.4 Rounding a dual solution

• Suppose that each element e_i is charged some nonnegative price y_i ≥ 0 for its coverage by a set cover
• Intuitively, it might be the case that some elements can be covered with low-weight subsets, while other elements might require high-weight subsets to cover them
  – we would like to be able to capture this distinction by charging low prices to the former and high prices to the latter

• In order for the prices to be reasonable, it cannot be the case that the sum of the prices of the elements in a subset S_j is more than its weight w_j, since we are able to cover all of those elements by paying weight w_j
• Thus, for each subset S_j we have the following limit on the prices:

  Σ_{i: e_i ∈ S_j} y_i ≤ w_j

• We can find the highest total price that the elements can be charged by the following LP:


  maximize   Σ_{i=1}^{n} y_i
  subject to Σ_{i: e_i ∈ S_j} y_i ≤ w_j,   j = 1, …, m,
             y_i ≥ 0,                      i = 1, …, n

• This is the dual LP of the set cover LP relaxation
• If we derive a dual for a given LP, the given program is called the primal LP
• This dual has a variable y_i for each constraint of the primal LP (i.e., for each constraint Σ_{j: e_i ∈ S_j} x_j ≥ 1), and has a constraint for each variable x_j of the primal
• This is true of dual LPs in general

Primal LP:
  minimize   Σ_{j=1}^{m} w_j x_j
  subject to Σ_{j: e_i ∈ S_j} x_j ≥ 1,   i = 1, …, n,
             x_j ≥ 0,                    j = 1, …, m

Dual LP:
  maximize   Σ_{i=1}^{n} y_i
  subject to Σ_{i: e_i ∈ S_j} y_i ≤ w_j,   j = 1, …, m,
             y_i ≥ 0,                      i = 1, …, n


• Dual LPs have a number of very interesting and useful properties
• E.g., let x be any feasible solution to the set cover LP relaxation, and let y be any feasible set of prices (any feasible solution to the dual LP)
• Then consider the value of the dual solution y:

  Σ_{i=1}^{n} y_i ≤ Σ_{i=1}^{n} y_i · Σ_{j: e_i ∈ S_j} x_j,

  since for any e_i, Σ_{j: e_i ∈ S_j} x_j ≥ 1 by the feasibility of x

• Rewriting the right-hand side of this inequality, we have

  Σ_{i=1}^{n} y_i · Σ_{j: e_i ∈ S_j} x_j = Σ_{j=1}^{m} x_j · Σ_{i: e_i ∈ S_j} y_i

• Finally, since y is a feasible solution to the dual LP, we know that Σ_{i: e_i ∈ S_j} y_i ≤ w_j for any j, so that

  Σ_{j=1}^{m} x_j · Σ_{i: e_i ∈ S_j} y_i ≤ Σ_{j=1}^{m} w_j x_j


• So we have shown that

  Σ_{i=1}^{n} y_i ≤ Σ_{j=1}^{m} w_j x_j

• I.e., any feasible solution to the dual LP has a value at most that of any feasible solution to the primal LP
• In particular, any feasible solution to the dual LP has a value at most that of the optimal solution to the primal LP, so for any feasible y, Σ_{i=1}^{n} y_i ≤ Z*_LP
• This is called the weak duality property of LPs
• Since we previously argued that Z*_LP ≤ OPT, we have that for any feasible y, Σ_{i=1}^{n} y_i ≤ OPT
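Weak duality can be checked numerically on a small made-up instance. The primal and dual solutions below are hand-picked feasible points, not solver output:

```python
elements = {1, 2, 3, 4}
subsets = [{1, 2}, {2, 3}, {3, 4}, {1, 4}]
w = [1.0, 1.0, 1.0, 1.0]

# Feasible primal solution: sum of x_j over sets containing e_i is >= 1.
x = [0.5, 0.5, 0.5, 0.5]
# Feasible dual prices: sum of y_i over elements of S_j is <= w_j.
y = {1: 0.5, 2: 0.5, 3: 0.5, 4: 0.5}

# Verify dual feasibility for every set.
for j, S in enumerate(subsets):
    assert sum(y[i] for i in S) <= w[j]

dual_value = sum(y.values())
primal_value = sum(w[j] * x[j] for j in range(len(subsets)))
assert dual_value <= primal_value    # weak duality holds
```

Here both values happen to equal 2.0, which (by the strong duality property discussed next) indicates that these hand-picked solutions are in fact optimal for this instance.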

• Additionally, there is a quite amazing strong duality property of LPs:
  – as long as there exist feasible solutions to both the primal and dual LPs, their optimal values are equal
• Thus, if x* is an optimal solution to the set cover LP relaxation, and y* is an optimal solution to the dual LP, then

  Σ_{j=1}^{m} w_j x*_j = Σ_{i=1}^{n} y*_i


• Information from a dual LP solution can sometimes be used to derive good approximation algorithms
• Let y* be an optimal solution to the set cover dual LP, and consider the solution in which we choose all subsets for which the corresponding dual inequality is tight;
  – i.e., the inequality is met with equality for subset S_j, and Σ_{i: e_i ∈ S_j} y*_i = w_j
• Let I′ denote the indices of the subsets in this solution
• This algorithm also is an f-approximation algorithm for the set cover problem
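This selection rule can be sketched as follows. The dual solution y_star and the tolerance eps are illustrative assumptions; a real implementation would take y* from an LP solver:

```python
def tight_sets(y_star, subsets, w, eps=1e-9):
    """I' = { j : sum of y*_i over elements e_i in S_j equals w_j },
    i.e., the sets whose dual constraint is met with equality."""
    return {j for j, S in enumerate(subsets)
            if abs(sum(y_star[i] for i in S) - w[j]) <= eps}

subsets = [{1, 2}, {2, 3}, {3, 4}, {1, 4}]
w = [1.0, 1.0, 1.0, 1.0]
# Hand-constructed dual solution: every constraint happens to be tight.
y_star = {1: 0.5, 2: 0.5, 3: 0.5, 4: 0.5}

I_prime = tight_sets(y_star, subsets, w)
assert I_prime == {0, 1, 2, 3}
# Lemma 1.7: the chosen sets form a cover.
assert set().union(*(subsets[j] for j in I_prime)) == {1, 2, 3, 4}
```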

Lemma 1.7: The collection of subsets S_j, j ∈ I′, is a set cover.

Proof: Suppose that there exists some uncovered element e_k. Then for each subset S_j containing e_k, it must be the case that

  Σ_{i: e_i ∈ S_j} y*_i < w_j.

Let ε be the smallest difference between the right-hand side and the left-hand side of all constraints involving e_k; i.e.,

  ε = min_{j: e_k ∈ S_j} ( w_j − Σ_{i: e_i ∈ S_j} y*_i ).

By the above inequality, we know that ε > 0.


Consider now a new dual solution y′ in which y′_k = y*_k + ε and every other component of y′ is the same as in y*. Then y′ is a dual feasible solution since for each j such that e_k ∈ S_j,

  Σ_{i: e_i ∈ S_j} y′_i = Σ_{i: e_i ∈ S_j} y*_i + ε ≤ w_j,

by the definition of ε. For each j such that e_k ∉ S_j,

  Σ_{i: e_i ∈ S_j} y′_i = Σ_{i: e_i ∈ S_j} y*_i ≤ w_j,

as before. Furthermore, Σ_i y′_i > Σ_i y*_i, which contradicts the optimality of y*. Thus, all elements are covered and I′ is a set cover.

Theorem 1.8: The dual rounding algorithm described above is an f-approximation algorithm for the set cover problem.

Proof: The central idea is the following "charging" argument: when we choose a set S_j to be in the cover, we "pay" for it by charging y*_i to each of its elements e_i; each element is charged at most once for each set that contains it (and hence at most f times), and so the total cost is at most f · Σ_{i=1}^{n} y*_i, or f times the dual objective function.

More formally, since j ∈ I′ only if w_j = Σ_{i: e_i ∈ S_j} y*_i, we have that the cost of the set cover I′ is


  Σ_{j∈I′} w_j = Σ_{j∈I′} Σ_{i: e_i ∈ S_j} y*_i
             = Σ_{i=1}^{n} |{ j ∈ I′ : e_i ∈ S_j }| · y*_i
             ≤ f · Σ_{i=1}^{n} y*_i
             ≤ f · OPT.

The second equality follows from the fact that when we interchange the order of summation, the coefficient of y*_i is equal to the number of times that this term occurs overall. The final inequality follows from the weak duality property.

• This algorithm can do no better than the algorithm of the previous section
  – We can show that if I indexes the solution returned by the primal rounding algorithm of the previous section, then I ⊆ I′
• This follows from a property of optimal LP solutions called complementary slackness
• We showed earlier the following for any feasible solution x to the set cover LP relaxation, and any feasible solution y to the dual LP:

  Σ_{i=1}^{n} y_i ≤ Σ_{i=1}^{n} y_i · Σ_{j: e_i ∈ S_j} x_j = Σ_{j=1}^{m} x_j · Σ_{i: e_i ∈ S_j} y_i ≤ Σ_{j=1}^{m} w_j x_j


• Furthermore, we claimed that strong duality implies that for optimal solutions x* and y*,

  Σ_{i=1}^{n} y*_i = Σ_{j=1}^{m} w_j x*_j

• Thus, for any optimal solutions x* and y*, the two inequalities in the chain of inequalities above must in fact be equalities
• The only way this can happen is that whenever y*_i > 0, then Σ_{j: e_i ∈ S_j} x*_j = 1, and whenever x*_j > 0, then Σ_{i: e_i ∈ S_j} y*_i = w_j
• That is, whenever an LP variable (primal or dual) is nonzero, the corresponding constraint in the dual or primal is tight

• These conditions are known as the complementary slackness conditions
• Thus, if x* and y* are optimal solutions, the complementary slackness conditions must hold
• The converse is also true:
  – if x and y are feasible primal and dual solutions, respectively, and
  – the complementary slackness conditions hold, then
  – the values of the two objective functions are equal and therefore the solutions must be optimal
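The two conditions can be checked mechanically. The sketch below uses hand-picked solutions for a small made-up instance; in general x* and y* would come from an LP solver:

```python
def complementary_slackness(x, y, subsets, w, elements, eps=1e-9):
    """Check both complementary slackness conditions."""
    # x_j > 0 implies the dual constraint for S_j is tight.
    for j, S in enumerate(subsets):
        if x[j] > eps and abs(sum(y[i] for i in S) - w[j]) > eps:
            return False
    # y_i > 0 implies the primal covering constraint for e_i is tight.
    for i in elements:
        coverage = sum(x[j] for j, S in enumerate(subsets) if i in S)
        if y[i] > eps and abs(coverage - 1.0) > eps:
            return False
    return True

elements = {1, 2, 3, 4}
subsets = [{1, 2}, {2, 3}, {3, 4}, {1, 4}]
w = [1.0, 1.0, 1.0, 1.0]
x = [0.5, 0.5, 0.5, 0.5]               # feasible primal solution
y = {1: 0.5, 2: 0.5, 3: 0.5, 4: 0.5}   # feasible dual solution

# Both conditions hold, so by the converse direction above
# this primal/dual pair is optimal for the instance.
assert complementary_slackness(x, y, subsets, w, elements)
```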


• In the case of the set cover program, if x*_j > 0 for any primal optimal solution x*, then the corresponding dual inequality for S_j must be tight for any dual optimal solution y*
• Recall that in the algorithm of the previous section, we put j ∈ I when x*_j ≥ 1/f
• Thus, j ∈ I implies that j ∈ I′, so that

  Σ_{j∈I} w_j ≤ Σ_{j∈I′} w_j