If NP languages are hard on the worst-case then it is easy to find their hard instances Danny Gutfreund, Hebrew U. Ronen Shaltiel, Haifa U. Amnon Ta-Shma, Tel-Aviv U.


Page 1:

If NP languages are hard on the worst-case then it is easy to find their hard instances

Danny Gutfreund, Hebrew U. Ronen Shaltiel, Haifa U.

Amnon Ta-Shma, Tel-Aviv U.

Page 2:

Impagliazzo’s worlds

Algorithmica: SAT ∈ P (NP = P).

Heuristica: NP is easy on average on EASY distributions.

Pessiland: NP hard on average, but no OWF.

Minicrypt: OWF, but no public-key crypto.

Cryptomania: Public-key cryptography.

Page 3:

Pseudo-P [IW98,Kab-01]

L ∈ Pseudo_p-P if there exists a polynomial-time algorithm A = A_L s.t.:

for every samplable distribution {D_n},

for every input length n,

Pr_{x←D_n}[ A(x) = L(x) ] > p

D = {D_n} is samplable if there exists S ∈ P s.t. S(1^n, U_{p(n)}) = D_n

Page 4:

Distributional complexity (Levin)

Def: L ∈ Avg_{p(n)}P,

if for every samplable distribution D

there exists A = A_{L,D} ∈ P s.t. Pr_{x←D_n}[ A(x) = L(x) ] ≥ p(n)

Page 5:

Heuristica vs. Super-Heuristica

Heuristica: every (avg-case) solution to some hard problem is tied to a specific distribution. If the distribution changes, we need to come up with a new solution.

Super-Heuristica: once a good heuristic for some NP-complete problem is developed, every new problem just needs to be reduced to it.

Natural for cryptography (and lower bounds).

Natural for algorithms.

Natural for complexity.

Naturally appears in derandomization [IW98, K01, ...].

Page 6:

Connection to cryptography

The right hardness for cryptography.

E.g., a standard assumption in cryptography is that (FACTORING, D) ∉ AvgBPP,

where D is the samplable distribution obtained by sampling primes p, q and outputting N = pq.
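To make the sampler in this definition concrete, here is a minimal sketch in Python. The helper names (is_probable_prime, sample_factoring_instance) are mine, not the talk's, and the seed argument stands in for the uniform random tape U_{p(n)} from the definition of samplability.

```python
import random

def is_probable_prime(n: int, rounds: int = 20) -> bool:
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def sample_factoring_instance(n_bits: int, seed: int) -> int:
    """Sampler S(1^n, seed): use the random bits to draw two primes p, q
    and output the factoring instance N = p * q."""
    rng = random.Random(seed)  # `seed` plays the role of the uniform tape
    def rand_prime() -> int:
        half = n_bits // 2
        while True:  # rejection-sample a `half`-bit odd candidate
            c = rng.getrandbits(half) | (1 << (half - 1)) | 1
            if is_probable_prime(c):
                return c
    p, q = rand_prime(), rand_prime()
    return p * q
```

For example, sample_factoring_instance(64, seed=0) outputs one instance of the distribution D above; fixing the seed fixes the sample, as the definition S(1^n, U_{p(n)}) = D_n requires.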

Page 7:

A remark about reductions

For the distributional setting one needs to define “approximation-preserving reductions” [Levin].

[L86, BDCGL90, Gur90, Gur91] showed complete problems in DistNP.

[IL90] showed a reduction to the uniform distribution.

For Pseudo-classes any (Cook/Karp) reduction is good:

if L reduces to L’ via R then for every samplable D, R(D) is samplable.

So SAT ∈ Pseudo-P ⇒ NP ⊆ Pseudo-P

Page 8:

A refined picture

Algorithmica: NP = P.

Super-Heuristica: NP ⊆ Pseudo-P.

Pessiland: NP hard on average, but no OWF.

Heuristica: NP ⊆ Avg-P.

Page 9:

Our main result

NP ⊄ P ⇒ NP ⊄ Pseudo_{2/3+ε}P

Worst-case hardness ⇒ weak average-case hardness.

Also, NP ⊄ BPP ⇒ NP ⊄ Pseudo_{2/3+ε}BPP

Compare with the open problem: NP ⊄ BPP ⇒? NP ⊄ Avg_{1−1/p(n)}BPP

Page 10:

Back to Impagliazzo’s worlds

Algorithmica: NP = P.

Super-Heuristica: NP ⊆ Pseudo-P.

Pessiland: NP hard on average, but no OWF.

Heuristica: NP ⊆ Avg-P.

Page 11:

In words

Super-Heuristica does not exist: if we do well on every samplable distribution, we do well on every input.

Heuristics for NP-complete problems will always have to be tied to specific samplable distributions (unless NP is easy on the worst-case).

Page 12:

Main Lemma

Given a description of a poly-time algorithm that fails to solve SAT, we can efficiently produce, on input 1^n, up to 3 formulas (of length n) s.t. at least one is hard for the algorithm.

We also have a probabilistic version.

Page 13:

Proof - main lemma

We are given a description of DSAT, and

we know that DSAT fails to solve SAT.

The idea is to use DSAT to find instances on which it fails. Think of DSAT as an adversary.

Page 14:

First question to DSAT: can you solve SAT on the worst-case?

Write as a formula: Φ(n) = ∃φ ∈ {0,1}^n [ SAT(φ) ≠ DSAT(φ) ]

Problem - not an NP formula:

Φ(n) = ∃φ ∈ {0,1}^n [ (SAT(φ) = true ∧ DSAT(φ) = 0) ∨ (SAT(φ) = false ∧ DSAT(φ) = 1) ]

(The second disjunct asserts that φ is unsatisfiable, which is a coNP statement.)

Page 15:

Search to decision

E.g., starting with a SAT sentence φ(x_1,…,x_n)

that DSAT claims is satisfiable.

For each variable try setting x_i = 0, x_i = 1.

If DSAT says neither is satisfiable, we found a contradiction.

Otherwise, choose x_i so that DSAT says the restricted formula is satisfiable.

At the end, check that the assignment is satisfying.
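A minimal sketch of this self-reduction in Python. It assumes a hypothetical formula interface with substitute and evaluate methods (my naming, for illustration); dsat is the claimed decider, treated here as a black box:

```python
def search_with_dsat(dsat, formula, variables):
    """Try to extract a satisfying assignment from a claimed SAT decider.

    Returns ("assignment", sigma) if the extracted assignment checks out,
    or ("contradiction", (sigma, x)) where dsat's answers around variable x
    (or around the final check, when x is None) cannot all be correct.
    """
    assignment = {}
    for x in variables:
        f0 = formula.substitute({**assignment, x: 0})
        f1 = formula.substitute({**assignment, x: 1})
        if dsat(f0):
            assignment[x] = 0
        elif dsat(f1):
            assignment[x] = 1
        else:
            # dsat called the current restriction satisfiable but rejects
            # both one-variable extensions: one of these 3 answers is wrong.
            return ("contradiction", (assignment, x))
    if formula.evaluate(assignment):
        return ("assignment", assignment)
    # dsat approved every step, yet the completed assignment fails the check.
    return ("contradiction", (assignment, None))
```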

Page 16:

Can SSAT solve SAT on the worst-case?

SSAT has one-sided error: it can’t say “yes” on an unsatisfiable formula.

Φ(n) = ∃φ ∈ {0,1}^n [ SAT(φ) = true ∧ SSAT(φ) = no ]

Notice that Φ(n) ∈ SAT. (It is an NP statement: a witness is φ together with a satisfying assignment for φ, so it can be written as a SAT formula.)

Notice that we use the code of DSAT.

Page 17:

First question to DSAT: can SSAT solve SAT on the worst-case? If DSAT(Φ(n)) = false, output Φ(n). [Note that Φ(n) ∈ SAT] Otherwise, run the search algorithm on

Φ(n) with DSAT. Case 1: the search algorithm fails. Output the three contradicting statements:

DSAT(Φ(n)[σ_1…σ_i x_{i+1}…x_m]) = true, DSAT(Φ(n)[σ_1…σ_i 0 x_{i+2}…x_m]) = false, and DSAT(Φ(n)[σ_1…σ_i 1 x_{i+2}…x_m]) = false.

Case 2: the search algorithm succeeds. We hold φ ∈ SAT such that SSAT(φ) = false.

(Or the final check fails: Φ(n)[σ_1…σ_m] evaluates to false although DSAT claimed it satisfiable, so it is itself an instance on which DSAT is wrong.)

Page 18:

Are we done? We hold φ on which SSAT is wrong (φ ∈ SAT but SSAT(φ) = false).

What we need is a sentence on which DSAT is wrong.

Page 19:

Now work with φ

If DSAT(φ) = false, output φ. [Note that φ ∈ SAT] Otherwise, run the search algorithm on

φ with DSAT.

Case 1: the search algorithm fails. Output the three contradicting statements. Case 2: the search algorithm succeeds: SSAT finds a satisfying assignment for φ. Case 2 never happens, since SSAT(φ) = false.
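Putting pages 17 and 19 together, here is a sketch of the whole instance generator, reusing search_with_dsat from the earlier sketch. build_phi (a SAT encoding of Φ(n) built from DSAT's code), contradicting_formulas, and extract_witness are hypothetical helpers named only for illustration:

```python
def hard_instances(dsat, n):
    """Output up to 3 formulas, at least one of them hard for dsat
    (assuming dsat fails to solve SAT, as in the Main Lemma)."""
    Phi = build_phi(dsat, n)      # hypothetical: encodes "SSAT errs on some phi"
    if not dsat(Phi):
        return [Phi]              # Phi is satisfiable, so dsat errs on Phi itself
    status, info = search_with_dsat(dsat, Phi, Phi.variables)
    if status == "contradiction":
        return contradicting_formulas(Phi, *info)  # 3 formulas, one is a mistake
    phi = extract_witness(info)   # hypothetical: phi in SAT with SSAT(phi) = no
    # Page 19: rerun the same steps on phi; here Case 2 cannot occur.
    if not dsat(phi):
        return [phi]              # phi is in SAT, so dsat errs on phi
    status, info = search_with_dsat(dsat, phi, phi.variables)
    assert status == "contradiction"  # success would mean SSAT(phi) = yes
    return contradicting_formulas(phi, *info)
```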

Page 20:

Comments about the reduction

Our reduction is non-black-box (because we use the

description of the TM, and the search-to-decision reduction), and

it is adaptive (even if we use parallel search-to-decision reductions [BDCGL90]).

So it does not fall into the categories ruled out by [Vio03, BT03] (for average-case classes).

Page 21:

Dealing with probabilistic algorithms

If we proceed as before we get:

Φ(n) = ∃φ ∈ {0,1}^n [ SAT(φ) = true ∧ Pr_r[SSAT(φ, r) = 1] < 2/3 ]

Problem: Φ(n) is an MA statement. We do not know how to derandomize it without unproven assumptions.

Solution: derandomize using Adleman (BPP ⊆ P/poly).
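For intuition, here is the standard Adleman-style calculation as a sketch; the exact amplification parameters are my choice, not the slides':

```latex
% Amplify SSAT so that on each fixed input \varphi of length n it
% deviates from its majority answer with probability below 2^{-2n}.
% A union bound over all 2^n inputs of length n then gives
\Pr_{r'}\bigl[\,\exists\, \varphi \in \{0,1\}^n :\;
  \mathrm{SSAT}(\varphi; r') \neq \text{majority answer of } \mathrm{SSAT} \text{ on } \varphi\,\bigr]
  \;\le\; 2^{n} \cdot 2^{-2n} \;=\; 2^{-n},
% so almost every fixed random string r' makes the deterministic
% algorithm SSAT(\,\cdot\,; r') behave like SSAT on all length-n inputs.
```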

Page 22:

Back to the proof

We replace the formula

Φ(n) = ∃φ ∈ {0,1}^n [ SAT(φ) = true ∧ Pr_r[SSAT(φ, r) = 1] < 2/3 ]

with a distribution over formulas:

Φ(n, r’) = ∃φ ∈ {0,1}^n [ SAT(φ) = true ∧ SSAT(φ, r’) = 0 ]

With very high probability (over r’), SSAT(·, r’) behaves like SSAT and the argument continues as before.

Page 23:

A weak Avg version

Thm: Assume NP ⊄ RP. Let f(n) = n^{ω(1)}. Then there exists a distribution D, samplable in time f(n), such that for every NP-complete language L,

(L, D) ∉ Avg_{1−1/n^3}BPP

Remark: the corresponding assertion for deterministic classes can be proved directly by diagonalization.

The distribution is more complicated than the algorithms it’s hard for.

Page 24:

Why are worst-case to avg-case reductions hard?

Here are two possible explanations:

1. An exponential search space.

2. A weak machine has to beat stronger machines.

Thm 2 says that the first is not the problem.

Page 25:

Proof of Theorem 2 – cont.

We define the distribution D = {D_n} as follows:

on input 1^n, choose uniformly a machine M from K_{log n} and run it for (say) n^{log n} steps.

If it didn’t halt, output 0^n; otherwise, output the output of M

(trimmed or padded to length n).

K_m - the set of all probabilistic TMs of description length at most m.
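A sketch of one draw from D_n, assuming a hypothetical universal simulator run_tm(desc, steps) that executes the probabilistic TM encoded by the bit string desc for at most steps steps and returns its output, or None on a timeout:

```python
import math
import random

def sample_Dn(n: int, run_tm) -> str:
    """One sample from D_n: a uniformly random machine of description
    length <= log n, run for ~n^{log n} steps, output trimmed/padded."""
    m = max(1, int(math.log2(n)))
    total = 2 ** (m + 1) - 2          # nonempty bit strings of length <= m
    idx = random.randrange(total)     # uniform machine description...
    length = 1
    while idx >= 2 ** length:         # ...decoded as (length, value)
        idx -= 2 ** length
        length += 1
    desc = format(idx, f"0{length}b")
    out = run_tm(desc, n ** m)        # simulate for roughly n^{log n} steps
    if out is None:
        return "0" * n                # machine did not halt in time
    return out[:n].ljust(n, "0")      # trim or pad the output to length n
```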

Page 26:

Proof of Theorem 2 – cont.

By Thm 1, for every algorithm A there

exists a samplable distribution D^A that outputs hard instances for it.

With probability at least n^{-2} we choose the machine that generates D^A, and then with probability > 1/3 we get a hard instance for A.
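Spelling out the estimate (a sketch; D^A denotes the distribution that Thm 1 associates with A, a name introduced here for clarity):

```latex
\Pr_{x \sim D_n}\bigl[\, x \text{ is hard for } A \,\bigr]
  \;\ge\; \underbrace{n^{-2}}_{\text{pick the sampler of } D^{A}}
  \cdot\; \underbrace{\tfrac{1}{3}}_{\text{it outputs a hard instance}}
  \;=\; \frac{1}{3n^{2}} \;>\; \frac{1}{n^{3}},
% matching the 1 - 1/n^3 success bound in the statement of Thm 2.
```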

Page 27:

Hardness amplification for Pseudo-classes

Reminiscent of hardness amplification for AvgBPP, but:

Many old techniques don’t work.

E.g., many reconstruction proofs don’t work, because the

reconstructing algorithm cannot sample the distribution.

Some techniques work better.

E.g., boosting: if for every samplable distribution the

algorithm can find a witness for a non-negligible fraction of inputs, then it finds a witness for almost

all inputs in any samplable distribution.

Page 28:

We proved

NP ⊄ BPP ⇒ P^{NP}_{||} ⊄ Pseudo_{½+ε}BPP.

Open problem: NP ⊄ BPP ⇒ NP ⊄ Pseudo_{½+ε}BPP

Using: [VV, BDCGL90] parallel search-to-decision reduction, error-correcting codes, and

boosting.

The first two are common ingredients in hardness amplification.

Boosting is a kind of replacement for the

hard-core lemma.