
  • Complexity Classes for Optimization Problems

    JoBSIS Binntal 2004

    Stefan Kugele, kugele@cs.tum.edu

  • JoBSIS Binntal 2004 Complexity Classes for Optimization Problems p. 2/67

    Famous cartoon (Garey & Johnson, 1979)

    "I can't find an efficient algorithm. I guess I'm just too dumb."

  • JoBSIS Binntal 2004 Complexity Classes for Optimization Problems p. 3/67

    Famous cartoon (Garey & Johnson, 1979)

    "I can't find an efficient algorithm, because no such algorithm is possible!"

  • JoBSIS Binntal 2004 Complexity Classes for Optimization Problems p. 4/67

    Famous cartoon (Garey & Johnson, 1979)

    "I can't find an efficient algorithm, but neither can all these famous people."

  • JoBSIS Binntal 2004 Complexity Classes for Optimization Problems p. 5/67

    Why use Approximation?

    We are not able to solve NP-complete problems efficiently; that is, there is no known way to solve them in polynomial time unless P = NP. Why not look for an approximate solution?

  • JoBSIS Binntal 2004 Complexity Classes for Optimization Problems p. 6/67

    Two Basic principles

    Algorithm design

      Greedy approach
      Local search
      Linear programming (LP)
      Dynamic programming (DP)
      Randomized algorithms

    Complexity Classes: that's what we are dealing with today

  • JoBSIS Binntal 2004 Complexity Classes for Optimization Problems p. 7/67

    Agenda

    Introduction

    Optimization problem

    Errors, approximation algorithm

    Classes

    NPO

    APX

    PTAS

    FPTAS

    F-APX

    Outlook

    AP-Reductions

    Max-SNP

  • JoBSIS Binntal 2004 Complexity Classes for Optimization Problems p. 8/67

    Optimization Problem

    Definition:

    O = (I, SOL, m, type)

    I: the instance set

    SOL(i): the set of feasible solutions for instance i
    (SOL(i) ≠ ∅ for i ∈ I)

    m(i, s): the measure of solution s with respect to instance i
    (a positive integer for i ∈ I and s ∈ SOL(i))

    type ∈ {min, max}

  • JoBSIS Binntal 2004 Complexity Classes for Optimization Problems p. 9/67

    Example: Knapsack

    Given is a knapsack with capacity C and a set of items
    S = {1, 2, ..., n}, where item i has weight w_i and value v_i.
    The problem is to find a subset T ⊆ S that maximizes the value
    Σ_{i∈T} v_i subject to Σ_{i∈T} w_i ≤ C; that is, all the items
    in T fit into the knapsack with capacity C.
    All sets T ⊆ S with Σ_{i∈T} w_i ≤ C are feasible solutions.
    Σ_{i∈T} v_i is the measure of the solution T with respect to the
    given instance.

    KNAPSACK = (I, SOL, m, max)
    I = {(S, w, C, v) | S = {1, 2, ..., n}, w, v : S → N}
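    As a concrete illustration of the (I, SOL, m, max) formulation, here is a minimal brute-force sketch in Python (not from the slides; function and variable names are illustrative). It enumerates all subsets T ⊆ S, keeps only the feasible ones (Σ w_i ≤ C), and returns the one with the largest measure Σ v_i:

```python
from itertools import combinations

def knapsack_bruteforce(weights, values, capacity):
    """Enumerate all subsets T of S = {0, ..., n-1}; keep the feasible
    subset (total weight <= capacity) with the largest measure."""
    n = len(weights)
    best_measure, best_subset = 0, ()
    for k in range(n + 1):
        for subset in combinations(range(n), k):
            if sum(weights[i] for i in subset) <= capacity:  # s in SOL(i)
                measure = sum(values[i] for i in subset)     # m(i, s)
                if measure > best_measure:
                    best_measure, best_subset = measure, subset
    return best_measure, best_subset

# Instance: weights [2, 3, 4], values [3, 4, 6], capacity C = 5
print(knapsack_bruteforce([2, 3, 4], [3, 4, 6], 5))  # -> (7, (0, 1))
```

    Exhaustive search takes exponential time, of course; it only serves to make the roles of I, SOL, and m concrete.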

  • JoBSIS Binntal 2004 Complexity Classes for Optimization Problems p. 10/67

    Agenda

    Introduction

    Optimization problem

    Errors, approximation algorithms

    Classes

    NPO

    APX

    PTAS

    FPTAS

    F-APX

    Outlook

    AP-Reductions

    Max-SNP

  • JoBSIS Binntal 2004 Complexity Classes for Optimization Problems p. 11/67

    Approximation Algorithm

    Definition:

    Given an optimization problem O = (I, SOL, m, type), an algorithm A is an approximation algorithm for O if, for any given instance i ∈ I, it returns an approximate solution, that is, a feasible solution A(i) ∈ SOL(i) with certain properties.

    Question: But what is an approximate solution?
    Answer: A solution whose value is not too far from the optimum.

    What's the absolute error we make by approximating the solution?

  • JoBSIS Binntal 2004 Complexity Classes for Optimization Problems p. 12/67

    Absolute error

    Definition:

    Given an optimization problem O, for any instance i ∈ I and for any feasible solution s of i, the absolute error of s with respect to i is defined as

        D(i, s) = |m*(i) − m(i, s)|,

    where m*(i) denotes the measure of an optimal solution of instance i and m(i, s) denotes the measure of solution s.

  • JoBSIS Binntal 2004 Complexity Classes for Optimization Problems p. 13/67

    Absolute approximation algorithm

    Definition:

    Given an optimization problem O and an approximation algorithm A for O, we say that A is an absolute approximation algorithm if there exists a constant k such that, for every instance i of O,

        D(i, A(i)) ≤ k.

    To express the quality of an approximate solution, commonly used
    notions are:

    the relative error and
    the performance ratio

  • JoBSIS Binntal 2004 Complexity Classes for Optimization Problems p. 14/67

    Relative error

    Definition:

    Given an optimization problem O, for any instance i of O and for any feasible solution s of i, the relative error of s with respect to i is defined as

        E(i, s) = |m*(i) − m(i, s)| / max{m*(i), m(i, s)}.

    For both maximization and minimization problems, the relative error is equal to 0 when the solution obtained is optimal, and becomes close to 1 when the approximate solution is very poor.

  • JoBSIS Binntal 2004 Complexity Classes for Optimization Problems p. 15/67

    ε-approximate algorithm

    Definition:

    Given an optimization problem O and an approximation algorithm A for O, we say that A is an ε-approximate algorithm for O if, given any input instance i of O, the relative error of the approximate solution A(i) provided by algorithm A is bounded by ε, that is,

        E(i, A(i)) ≤ ε.

    Alternatively, the quality can be expressed by means of a different, but related, measure.

  • JoBSIS Binntal 2004 Complexity Classes for Optimization Problems p. 16/67

    Performance ratio

    Definition:

    Given an optimization problem O, for any instance i of O and for any feasible solution s of i, the performance ratio of s with respect to i is defined as

        R(i, s) = max{ m(i, s) / m*(i), m*(i) / m(i, s) }.

  • JoBSIS Binntal 2004 Complexity Classes for Optimization Problems p. 17/67

    Performance ratio (cont.)

    For both minimization and maximization, the value of the performance ratio is equal to 1 in the case of an optimal solution, and can assume arbitrarily large values in the case of a poor approximate solution.

  • JoBSIS Binntal 2004 Complexity Classes for Optimization Problems p. 18/67

    Relative error vs. performance ratio

    Relative error and performance ratio are related:

        E(i, s) = 1 − 1/R(i, s),   i.e.   R(i, s) = 1/(1 − E(i, s)).
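    The definitions of relative error and performance ratio can be checked numerically. A small Python sketch (illustrative, not part of the slides) computing E(i, s) and R(i, s) for given measures, and verifying E = 1 − 1/R on a minimization instance:

```python
def relative_error(m_opt, m_sol):
    """E(i, s) = |m*(i) - m(i, s)| / max(m*(i), m(i, s))."""
    return abs(m_opt - m_sol) / max(m_opt, m_sol)

def performance_ratio(m_opt, m_sol):
    """R(i, s) = max(m(i,s)/m*(i), m*(i)/m(i,s)), always >= 1."""
    return max(m_sol / m_opt, m_opt / m_sol)

# Minimization example: optimum measure 10, approximate solution 15
E = relative_error(10, 15)      # 5/15 = 1/3
R = performance_ratio(10, 15)   # 15/10 = 1.5
assert abs(E - (1 - 1 / R)) < 1e-12  # relation E = 1 - 1/R holds
```

    The same check goes through for maximization, since both definitions are symmetric in m* and m.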

  • JoBSIS Binntal 2004 Complexity Classes for Optimization Problems p. 19/67

    Example: E(i, s), R(i, s) (MinVC)

    [Whiteboard figure: two copies of a graph with vertices A–G;
    left: approximate solution, right: optimal solution.]

  • JoBSIS Binntal 2004 Complexity Classes for Optimization Problems p. 20/67

    r-approximate algorithm

    Definition:

    Given an optimization problem O and an approximation algorithm A for O, we say that A is an r-approximate algorithm for O if, given any input instance i of O, the performance ratio of the approximate solution A(i) is bounded by r, that is,

        R(i, A(i)) ≤ r.

  • JoBSIS Binntal 2004 Complexity Classes for Optimization Problems p. 21/67

    Agenda

    Introduction

    Optimization problem

    Errors, approximation algorithm

    Classes

    NPO

    APX

    PTAS

    FPTAS

    F-APX

    Outlook

    AP-Reductions

    Max-SNP

  • JoBSIS Binntal 2004 Complexity Classes for Optimization Problems p. 22/67

    The class NPO

    Definition:

    NPO is the class of optimization problems whose decision versions are in NP.

    O = (I, SOL, m, type) ∈ NPO iff

    there exists a polynomial p such that for all i ∈ I and all
    s ∈ SOL(i): |s| ≤ p(|i|),

    deciding s ∈ SOL(i) is in P, and

    computing m(i, s) is in FP.

  • JoBSIS Binntal 2004 Complexity Classes for Optimization Problems p. 23/67

    The class APX

    Definition:

    APX is the class of all NPO problems O such that, for some r ≥ 1, there exists a polynomial-time r-approximate algorithm for O.

    Examples: MinVertexCover, MaxSat, MaxKnapsack, MaxCut, MinBinPacking, MaxPlanarGraphColoring

  • JoBSIS Binntal 2004 Complexity Classes for Optimization Problems p. 24/67

    Theorem: MinTSP cannot be r-approximated in polynomial time for
    any r ≥ 1 (unless P = NP).

    Proof:

    Idea: TSP cannot be r-approximated, no matter how large the
    performance ratio r is.

    Reduction from the NP-complete Hamiltonian Circuit (HC) decision
    problem.

    Let G = (V, E) be an instance of HC with |V| = n. Construct, for
    any r ≥ 1, a MinTSP instance such that, if we had a poly-time
    r-approximate algorithm for MinTSP, then we could decide in
    polynomial time whether the graph G has a HC. The instance of
    MinTSP is defined on the same set of nodes V and with distances

        d(u, v) = 1         if {u, v} ∈ E,
        d(u, v) = 1 + nr    otherwise.

  • JoBSIS Binntal 2004 Complexity Classes for Optimization Problems p. 25/67

    Proof: (cont.)

    This instance of MinTSP has a solution of measure n iff G has a
    HC. The next smallest approximate solution has measure at least
    (n − 1) + (1 + nr) = n + nr = n(1 + r), and its performance ratio
    is hence at least n(1 + r)/n = 1 + r > r.

    If G has no HC, then the optimal solution has measure at least
    n(1 + r). Therefore, if we had a polynomial-time r-approximate
    algorithm for MinTSP, we could use it to decide whether G has a HC
    in the following way: apply the approximation algorithm to the
    instance of MinTSP and answer YES iff it returns a solution of
    measure n.
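    The gap construction in this reduction can be sketched in Python (a hypothetical helper, not from the slides): it builds the MinTSP distance matrix with d(u, v) = 1 on edges of G and 1 + nr on non-edges:

```python
def tsp_gap_instance(n, edges, r):
    """Distance matrix for the reduction: d(u, v) = 1 if {u, v} is an
    edge of G, and 1 + n*r otherwise. G has a Hamiltonian circuit iff
    the optimal tour in this instance has measure exactly n."""
    edge_set = {frozenset(e) for e in edges}
    d = [[0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            if u != v:
                d[u][v] = 1 if frozenset((u, v)) in edge_set else 1 + n * r
    return d

# Triangle graph has a HC, so every off-diagonal distance is 1:
d = tsp_gap_instance(3, [(0, 1), (1, 2), (0, 2)], r=2)
```

    Any tour that uses even one non-edge already costs at least (n − 1) + (1 + nr) = n(1 + r), which is what forces the performance-ratio gap.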

  • JoBSIS Binntal 2004 Complexity Classes for Optimization Problems p. 26/67

    Example: MinimumVertexCover

    Instance: Graph G = (V, E)
    Query: a smallest vertex cover

    Theorem: MinimumVertexCover is 2-approximable, i.e.
    MinimumVertexCover ∈ APX.

    Proof: The corresponding decision problem is NP-complete.

    algorithm VertexCover-2-Approx (V, E)
        while E ≠ ∅ do
            pick an arbitrary edge {u, v} ∈ E
            add u and v to the vertex cover
            delete all edges covered by u or v from E
        od
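    The pseudocode above might be realized in Python as follows (a sketch, assuming edges are given as vertex pairs; not the slides' own code). The selected edges form a matching, and any vertex cover must contain at least one endpoint of each matched edge, which gives the factor-2 guarantee:

```python
def vertex_cover_2_approx(edges):
    """Matching-based 2-approximation: repeatedly pick an uncovered
    edge {u, v}, put both endpoints into the cover, and drop every
    edge covered by u or v."""
    cover = set()
    remaining = [set(e) for e in edges]
    while remaining:
        u, v = remaining[0]  # pick an arbitrary edge {u, v}
        cover |= {u, v}
        remaining = [e for e in remaining if u not in e and v not in e]
    return cover

# Edges of the example graph with vertices A-G from the slides
edges = [("A", "G"), ("A", "E"), ("A", "D"), ("A", "C"), ("B", "G"),
         ("B", "F"), ("B", "D"), ("B", "C"), ("D", "G"), ("D", "F")]
print(vertex_cover_2_approx(edges))
```

    On this instance the algorithm first picks {A, G}, then {B, F}, returning a cover of size 4.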

  • JoBSIS Binntal 2004 Complexity Classes for Optimization Problems p. 27/67

    Example: MinimumVertexCover (cont.)

    [Graph with vertices A–G, as on the earlier whiteboard slide.]

    algorithm VertexCover-2-Approx (V, E)
        while E ≠ ∅ do
            pick an arbitrary edge {u, v} ∈ E
            add u and v to the vertex cover
            delete all edges covered by u or v from E
        od

    VC = {}

    E = {{A, G}, {A, E}, {A, D}, {A, C}, {B, G}, {B, F}, {B, D},
         {B, C}, {D, G}, {D, F}}
