Generalization and Specialization of Kernelization
Daniel Lokshtanov

TRANSCRIPT

Page 1:

Generalization and Specialization of Kernelization

Daniel Lokshtanov

Page 2:

We ♥ Kernels

∃ ¬ Kernels

Why?

Page 3:

What’s Wrong with Kernels (from a practitioner’s point of view)

1. Only handles NP-hard problems.
2. Don’t combine well with heuristics.
3. Only capture size reduction.
4. Don’t analyze lossy compression.

Doing something about (1) is a different field altogether.

This talk: attacking (2).

Some preliminary work on (4): high-fidelity reductions.

Page 4:

”Kernels don’t combine with heuristics” ??

Kernel mantra: ”It never hurts to kernelize first, you don’t lose anything.”

We don’t lose anything if, after kernelization, we solve the compressed instance exactly. But kernels do not necessarily preserve approximate solutions.

Page 5:

Kernel: (I, k) → (I’, k’)

In this talk, parameter = solution size / quality.

Solution of size ≤ k ↔ Solution of size ≤ k’

Solution of size 1.2k’ → Solution of size 1.2k ??

Page 6:

Known/Unknown k

We don’t know OPT in advance.

Solutions:
- The parameter k is given and we only care whether OPT ≤ k or not.
- Try all values of k (overhead).
- Compute k ≈ OPT by an approximation algorithm.

If k > OPT, does kernelizing with k preserve OPT?

Page 7:

Buss kernel for Vertex Cover

Vertex Cover: Find S ⊆ V(G) of size ≤ k such that every edge has an endpoint in S.

- Remove isolated vertices.
- Pick neighbours of degree-1 vertices into the solution (and remove them).
- Pick degree > k vertices into the solution and remove them.

Reduction rules are independent of k. Proof of correctness transforms any solution, not only any optimal solution.
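The three rules above can be sketched in a few lines of Python; this is a minimal illustration with my own helper names (not from the talk), representing the graph as a dict from vertex to neighbour set:

```python
def _remove(adj, v):
    """Delete v and its incident edges from the adjacency structure."""
    for u in adj.pop(v, set()):
        adj[u].discard(v)

def buss_kernelize(adj, k):
    """Sketch of Buss's reduction rules for Vertex Cover.
    adj: dict mapping vertex -> set of neighbours (undirected graph).
    Returns (reduced graph, vertices the rules pick into the cover)."""
    adj = {v: set(ns) for v, ns in adj.items()}  # work on a copy
    forced = set()
    changed = True
    while changed:
        changed = False
        for v in list(adj):
            if v not in adj:                    # already removed this pass
                continue
            if not adj[v]:                      # isolated vertex: delete it
                del adj[v]
                changed = True
            elif len(adj[v]) == 1:              # degree 1: pick its neighbour
                u = next(iter(adj[v]))
                forced.add(u)
                _remove(adj, u)
                changed = True
            elif len(adj[v]) > k:               # degree > k: in any solution of size <= k
                forced.add(v)
                _remove(adj, v)
                changed = True
    return adj, forced
```

After the rules stabilize, every remaining vertex has degree between 2 and k, which is what bounds the kernel size.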

Page 8:

Degree > k rule

Any solution of size ≤ k must contain all vertices of degree > k.

We preserve all solutions of size ≤ k.

Lose information about solutions of size > k.

Page 9:

Buss’ kernel for Vertex Cover

- Find a 2-approximate solution S.
- Run the Buss kernelization with k = |S|.

(I, k) → (I’, k’)

Solution of size 1.2k’ → Solution of size 1.2k’ + (k − k’) ≤ 1.2k
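The first step needs any 2-approximation; the textbook maximal-matching algorithm suffices. A minimal sketch (hypothetical helper name, same dict-of-neighbour-sets representation):

```python
def matching_2approx(adj):
    """Standard 2-approximate vertex cover: greedily build a maximal
    matching and take both endpoints of every matched edge.
    adj: dict mapping vertex -> set of neighbours."""
    cover = set()
    for v in adj:
        if v in cover:
            continue
        for u in adj[v]:
            if u not in cover:        # edge (v, u) has both endpoints free
                cover.update((v, u))  # match it; both endpoints enter the cover
                break
    return cover
```

The matched edges are vertex-disjoint, so any optimum contains at least one endpoint of each; the cover has size at most twice the optimum.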

Page 10:

Buss’-kernel

- Same size as the Buss kernel, O(k²), up to constants.

- Preserves approximate solutions, with no loss compared to the optimum in the compression and decompression steps.

Page 11:

NT-Kernel

In fact, the Nemhauser–Trotter 2k-size kernel for Vertex Cover already has this property: the crown reduction rule is k-independent!

Proof: Exercise

Page 12:

Other problems

For many problems, applying the rules with a value of k preserves all ”nice” solutions of size ≤ k ⇒ approximation-preserving kernels.

Example 2: Feedback Vertex Set, where we adapt an O(k²) kernel of [T09].

Page 13:

Feedback Vertex Set

FVS: Is there a subset S ⊆ V(G) of size ≤ k such that G \ S is acyclic?

R1: Delete vertices of degree 0 and 1.
R2: Replace degree-2 vertices by edges.

R3: If v appears in > k cycles that intersect only in v, select v into S.

R1 & R2 preserve all reasonable solutions

R3 preserves all solutions of size ≤ k
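R1 and R2 can be sketched as follows, assuming a multigraph representation (R2 can create parallel edges and self-loops; a self-loop forces its vertex into the solution, a standard companion rule not spelled out on the slide). R3 and R4 are omitted, and the helper names are my own:

```python
from collections import defaultdict

def _delete(adj, v):
    """Remove v and all its incident edges from the multigraph."""
    for u in list(adj[v]):
        if u != v:
            adj[u].pop(v, None)
    del adj[v]

def reduce_fvs(edges):
    """Apply FVS rules R1 and R2 (plus the standard self-loop rule).
    edges: list of (u, v) pairs. The multigraph is stored as
    adj[v][u] = number of parallel v-u edges.
    Returns (reduced multigraph, vertices forced into the solution)."""
    adj = defaultdict(lambda: defaultdict(int))
    for u, v in edges:
        adj[u][v] += 1
        adj[v][u] += 1
    forced = set()
    changed = True
    while changed:
        changed = False
        for v in list(adj):
            if v not in adj:
                continue
            if adj[v].get(v, 0) > 0:         # self-loop: v is in every FVS
                forced.add(v)
                _delete(adj, v)
                changed = True
                continue
            deg = sum(adj[v].values())
            if deg <= 1:                     # R1: degree 0 or 1
                _delete(adj, v)
                changed = True
            elif deg == 2:                   # R2: bypass v with a direct edge
                a, b = [u for u, c in adj[v].items() for _ in range(c)]
                _delete(adj, v)
                adj[a][b] += 1
                adj[b][a] += 1
                changed = True
    return adj, forced
```

Note that bypassing a degree-2 vertex whose two edges go to the same neighbour creates a self-loop there, which the self-loop rule then resolves.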

Page 14:

Feedback Vertex Set

R4 (handwave): If R1–R3 can’t be applied and there is a vertex x of degree > 8k, we can identify a set X such that in any feedback vertex set S of size ≤ k, either x ∈ S or X ⊆ S.

R4 preserves all solutions of size ≤ k

Page 15:

Feedback Vertex Set Kernel

Apply a 2-approximation algorithm for Feedback Vertex Set to find a set S.

Apply the kernel with k = |S|. The kernel size is O(OPT²).

Preserves approximate solutions, with no loss compared to the optimum in the compression step.

Page 16:

Remarks;

If we don’t know OPT, we need an approximation algorithm.

Most problems that have polynomial kernels also have constant-factor or at least poly(OPT) approximations.

Using f(OPT)-approximations to set k results in larger kernel sizes for the approximation-preserving kernels.

Page 17:

Right definition?

Approximation preserving kernels for optimization problems, definition 1:

Poly-time compression: I → I’, with |I’| ≤ poly(OPT).

Poly-time decompression: a solution of I’ of size c·OPT’ is lifted to a solution of I of size c·OPT.

Page 18:

Right definition?

Approximation preserving kernels for optimization problems, definition 2:

Poly-time compression: I → I’, with |I’| ≤ poly(OPT).

Poly-time decompression: a solution of I’ of size OPT’ + t is lifted to a solution of I of size OPT + t.

Page 19:

What is the right definition?

Definition 1 captures more, but Definition 2 seems to capture most (all?) positive answers.

There exist other reasonable variants that are not necessarily equivalent.

Page 20:

What do approximation preserving kernels give you?

When do approximation preserving kernels help in terms of provable running times?

If Π has a PTAS or EPTAS, and an approximation-preserving kernel, we get (E)PTASes with running time f(ε)·poly(OPT) + poly(n) or OPT^f(ε) + poly(n).

Page 21:
Page 22:

Problems on planar (minor-free) graphs

Many problems on planar graphs and H-minor-free graphs admit EPTASs and have linear kernels.

Make the kernels approximation preserving?

These kernels have only one reduction rule: the protrusion rule.

(to rule them all)

Page 23:

Protrusions

A set S ⊆ V(G) is an r-protrusion if:
- At most r vertices in S have neighbours outside S.
- The treewidth of G[S] is at most r.
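The boundary condition is easy to test directly; the treewidth condition needs a separate algorithm (feasible for small r) and is not checked in this sketch. Helper name is my own:

```python
def protrusion_boundary(adj, S):
    """Boundary of S: vertices of S with a neighbour outside S.
    S is an r-protrusion when this boundary has size <= r AND the
    treewidth of G[S] is <= r; the treewidth test is assumed to be
    done separately. adj: dict mapping vertex -> set of neighbours."""
    S = set(S)
    return {v for v in S if any(u not in S for u in adj[v])}
```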

Page 24:

Protrusion Rule

A protrusion rule takes a graph G with an r-protrusion S of size > c, and outputs an equivalent instance G’, with |V(G’)| < |V(G)|.

Usually, the entire part G[S] is replaced by a different and smaller protrusion that ”emulates” the behaviour of S.

The constant c depends on the problem and on r.

Page 25:

Kernels on Planar Graphs

[BFLPST09]: For many problems, a protrusion rule is sufficient to give a linear kernel on planar graphs.

To make these kernels apx-preserving, we need an apx-preserving protrusion rule.

Page 26:

Apx-Preserving Protrusion Rule

Poly-time compression: I → I’, with |I’| < |I| and OPT’ ≤ OPT.

Poly-time decompression: a solution of I’ of size OPT’ + t is lifted to a solution of I of size OPT + t.

Page 27:

Kernels on Planar Graphs

[BFLPST09]:
– If a problem has finite integer index, it has a protrusion rule.
– There is a simple-to-check sufficient condition for a problem to have finite integer index.

Finite integer index is not enough for an apx-preserving protrusion rule. But the sufficient condition is!

Page 28:

t-boundaried graphs

A t-boundaried graph is a graph G with t distinguished vertices labelled from 1 to t. These vertices are called the boundary of G.

G can be colored, i.e. supplied with some vertex/edge sets C1, C2, …


Page 29:

Gluing

Gluing two colored t-boundaried graphs, (G1, C1, C2) ⊕ (G2, D1, D2) = (G3, C1 ∪ D1, C2 ∪ D2), means identifying the boundary vertices with the same label; vertices keep their colors.
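Gluing is easy to express concretely. The sketch below uses my own representation, a t-boundaried graph as (vertices, edges, boundary map from labels to vertices), and omits colors:

```python
def glue(g1, g2):
    """Glue two t-boundaried graphs. Each graph is (vertices, edges,
    boundary), where boundary maps the labels 1..t to vertices.
    Equally-labelled boundary vertices are identified; interior
    vertices of g2 are tagged to avoid accidental name clashes."""
    v1, e1, b1 = g1
    v2, e2, b2 = g2
    assert set(b1) == set(b2), "both graphs must use the same label set"
    rename = {b2[l]: b1[l] for l in b2}       # identify boundaries by label
    for v in v2:
        rename.setdefault(v, ('g2', v))       # keep g2's interior distinct
    verts = set(v1) | {rename[v] for v in v2}
    edges = ({frozenset(e) for e in e1} |
             {frozenset((rename[u], rename[v])) for u, v in e2})
    return verts, edges, dict(b1)             # boundary labels survive gluing
```

For example, gluing two triangles along a common boundary of two labelled vertices yields a 4-vertex graph with 5 edges (the shared edge is merged).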


Page 30:

Canonical Equivalence

For a property Φ of 1-colored graphs we define the equivalence relation ≣Φ on the set of t-boundaried c-colored graphs.

(G1, X1) ≣Φ (G2, X2) ⇔ for every (G’, X’): Φ(G1 ⊕ G’, X1 ∪ X’) ⇔ Φ(G2 ⊕ G’, X2 ∪ X’)

Can also define for 10-colorable problems in the same way.

Page 31:

Canonical Equivalence

(G1,X) ≣Φ (G2,Y) means ”gluing (G1,X) onto something has the same effect as gluing (G2,Y) onto it”


Page 32:

Finite State

Φ is finite state if for every integer t, ≣Φ has a finite number of equivalence classes on t-boundaried graphs.

Note: The number of equivalence classes is a function f(Φ,t) of Φ and t.

Page 33:

Variant of Courcelle’s Theorem

Finite State Theorem (FST): If Φ is CMSOL-definable, then Φ is finite state.

Quantifiers: ∃ and ∀ over variables for vertex sets and edge sets, vertices and edges.

Operators: = and ∊
Operators: inc(v,e) and adj(u,v)
Logical operators: ∧, ∨ and ¬
Size modulo fixed integers operator: eqmod_{p,q}(S)

EXAMPLE: p(G,S) = “S is an independent set of G”:
p(G,S) = ∀ u,v ∈ S: ¬adj(u,v)
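The example formula can be evaluated directly on a concrete graph; a one-line check (hypothetical helper name):

```python
def is_independent_set(adj, S):
    """Direct evaluation of p(G,S) = 'for all u,v in S: not adj(u,v)'.
    adj: dict mapping vertex -> set of neighbours."""
    return all(v not in adj[u] for u in S for v in S)
```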

Page 34:

CMSOL Optimization Problems for colored graphs

Φ-Optimization
Input: G, C1, ..., Cx
Max / Min |S| so that Φ(G, C1, ..., Cx, S) holds.

(Φ is a CMSOL-definable proposition.)

Page 35:

Sufficient Condition

[BFLPST09]:
– If a CMSO-optimization problem Π is strongly monotone ⇒ Π has finite integer index ⇒ it has a protrusion rule.

Here:
– If a CMSO-optimization problem Π is strongly monotone ⇒ Π has an apx-preserving protrusion rule.

Page 36:

Signatures (for minimization problems)

[Figure: a t-boundaried graph G glued, in turn, onto graphs H1, H2, H3 with partial solutions SH1, SH2, SH3; the smallest completions SG1, SG2, SG3 inside G have sizes 2, 5, 1.]

Choose the smallest S ⊆ V(G) to make Φ hold.

Intuition: f(H,S) returns the best way to complete in G a fixed partial solution in H.

Page 37:

Signatures (for minimization problems)

The signature of a t-boundaried graph G is a function fG with

Input: a t-boundaried graph H and SH ⊆ V(H)

Output: the size of the smallest SG ⊆ V(G) such that Φ(G ⊕ H, SG ∪ SH) holds, or ∞ if no such SG exists.
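For a concrete Φ such as “S is a vertex cover”, the signature can be evaluated by brute force on the glued graph. This exponential sketch (my own, not the talk’s machinery) only illustrates the definition:

```python
from itertools import combinations

def vc_signature(edges, vg, s_h):
    """Brute-force signature f_G(H, S_H) for the property 'S is a
    vertex cover', evaluated on an already-glued graph G + H given by
    `edges`. vg is V(G); s_h is the fixed partial solution inside H.
    Returns the size of the smallest S_G inside V(G) that, together
    with s_h, covers every edge; infinity if no such S_G exists.
    Exponential in |V(G)|, for illustration only."""
    uncovered = [e for e in edges if not (set(e) & s_h)]
    for size in range(len(vg) + 1):
        for s_g in combinations(sorted(vg), size):
            chosen = set(s_g)
            if all(set(e) & chosen for e in uncovered):
                return size
    return float('inf')
```

An edge lying entirely inside H and missed by S_H can never be covered from G, which is where the ∞ case comes from.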

Page 38:

Strong Monotonicity(for minimization problems)

A problem Π is strongly monotone if for any t-boundaried G, there is a vertex set Z ⊆ V(G) such that, for every (H, S), the set Z ∪ S is a feasible solution of G ⊕ H and |Z| ≤ fG(H,S) + g(t) for a fixed function g.

Here fG(H,S), the signature of G evaluated at (H,S), is the size of the smallest S’ ⊆ V(G) such that S’ ∪ S is a feasible solution of G ⊕ H.

Page 39:

Strong monotonicity - intuition

Intuition: A problem is strongly monotone if for any t-boundaried G there ∃ partial solution S that can be glued onto ”anything”, and S is only g(t) larger than the smallest partial solution in G.

Page 40:

Super Strong Monotonicity Theorem

Theorem: If a CMSO-optimization problem Π is strongly monotone, then it has an apx-preserving protrusion rule.

Corollary: All bidimensional’, strongly monotone CMSO-optimization problems Π have linear-size apx-preserving kernels on planar graphs.

Page 41:

Proof of SSMT

Lemma 1: Let G1 and G2 be t-boundaried graphs of constant treewidth, let f1 and f2 be the signatures of G1 and G2, and let c be an integer such that for any H, SH ⊆ V(H): f1(H,SH) + c = f2(H,SH). Then, in poly time, a feasible solution of G1 ⊕ H can be turned into a feasible solution of G2 ⊕ H, and back; one direction decreases the size by c, the other increases it by c.

Page 42:

Proof of Lemma 1

[Figure: G1 glued onto H, and G2 glued onto the same H.]

Decrease size by c in poly time? Constant treewidth!

Page 43:

Proof of SSMT

Lemma 2: If a CMSO-min problem Π is strongly monotone, then for every t there exists a finite collection F of t-boundaried graphs such that for every G1 there is a G2 ∈ F and a c ≥ 0 such that for any H, SH ⊆ V(H): f1(H,SH) + c = f2(H,SH).

Page 44:

SSMT = Lemma 1 + 2

Keep a list F of t-boundaried graphs as guaranteed by Lemma 2.

Replace large protrusions by the corresponding guy in F. Lemma 1 gives correctness.

Page 45:

Proof of Lemma 2

[Figure: signature values plotted over the pairs (H1,S1), (H2,S2), (H3,S3), …, giving one curve for G1 and one for G2; the values of G1 vary by at most g(t).]

Page 46:

Proof of Lemma 2

Only a constant number of finite integer curves satisfy max − min ≤ t (up to translation).

There are infinitely many infinite such curves.

Since Π is a min-CMSO problem, we only need to consider the signature of G on a finite number of pairs (Hi, Si).

Page 47:

Super Strong Monotonicity Theorem

Theorem: If a CMSO-optimization problem Π is strongly monotone, then it has an apx-preserving protrusion rule.

Corollary: All bidimensional’, strongly monotone CMSO-optimization problems Π have linear-size apx-preserving kernels on planar graphs.

Page 48:

Recap

Approximation preserving kernels are much closer to the kernelization ”no loss” mantra.

It looks like most kernels can be made approximation preserving at a small cost.

Is it possible to prove that some problems have smaller kernels than apx-preserving kernels?

Page 49:

What I was planning to talk about, but didn’t.

”Kernels” that do not reduce size, but rather reduce a parameter to a function of another in polynomial time.

– This IS pre-processing.
– Many, many examples exist already.
– Fits well into Mike’s ”multivariate” universe.

Page 50:

THANK YOU!