
Page 1:

Hardness Magnification for all Sparse NP Languages

Lijie Chen (MIT)
Ce Jin (Tsinghua U.)
Ryan Williams (MIT)

Page 2:

Minimum Circuit Size Problem

Problem: MCSP[s(m)]
· Given: f: {0,1}^m → {0,1}, as a truth table of length n = 2^m.
· Decide: does f have a circuit of size at most s(m)?

Page 3:


MCSP[s(m)] ∈ NP; solvable in n · 2^Õ(s(m)) time.
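To make the 2^Õ(s(m)) upper bound concrete, here is a minimal brute-force sketch (our illustration, not code from the talk; the gate basis {AND, OR, NOT} and the circuit encoding are our choices): enumerate all straight-line circuits with at most s gates and compare each against the truth table. There are 2^(O(s · log(s+m))) = 2^Õ(s) candidates, and each check costs O(n · s).

```python
from itertools import product

def eval_circuit(gates, x_bits):
    # Wires 0..m-1 hold the input bits; each gate appends one new wire.
    vals = list(x_bits)
    for op, i, j in gates:
        if op == "AND":
            vals.append(vals[i] & vals[j])
        elif op == "OR":
            vals.append(vals[i] | vals[j])
        else:  # "NOT" ignores its second argument
            vals.append(1 - vals[i])
    return vals[-1]  # output = last wire (with 0 gates: the last input bit)

def all_circuits(m, num_gates):
    # Every straight-line circuit with exactly num_gates gates on m inputs.
    per_gate = [[(op, i, j)
                 for op in ("AND", "OR", "NOT")
                 for i in range(m + g)
                 for j in range(m + g)]
                for g in range(num_gates)]
    return product(*per_gate)

def mcsp(truth_table, m, s):
    # Decide MCSP[s]: does the length-(2^m) truth table have a circuit
    # with at most s gates?  Time roughly n * 2^{O(s log(s+m))}.
    n = 1 << m
    assert len(truth_table) == n
    inputs = [[(x >> b) & 1 for b in range(m)] for x in range(n)]
    for k in range(s + 1):
        for gates in all_circuits(m, k):
            if all(eval_circuit(gates, inp) == truth_table[x]
                   for x, inp in enumerate(inputs)):
                return True
    return False

print(mcsp([0, 0, 0, 1], 2, 1))  # AND of two bits: True with one gate
print(mcsp([0, 1, 1, 0], 2, 1))  # XOR needs more than one such gate: False
```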

Page 4:

We believe MCSP[m^10] ∉ P/poly!
(Otherwise, no strong PRGs exist [Razborov-Rudich]: MCSP[m^10] ∈ P/poly ⇒ a "P/poly-natural property useful against P/poly".)


Page 5:


If MCSP[m^10] doesn't have circuits of n · polylog(n) size and polylog(n) depth, then NP ⊄ P/poly.

(McKay-Murray-Williams’19)

Page 6:


“Hardness Magnification”

Page 7:


Hardness Magnification for MCSP

Similar magnification results hold for MKtP (Minimum time-bounded Kolmogorov Complexity, Kt(x)).
("Gap-MKtP[a, b]": distinguish between Kt(x) ≤ a and Kt(x) ≥ b)

Page 8:


Kt(x) = "measure of how much info is needed to generate x quickly" (formally, Levin's Kt(x) = min over programs p of |p| + log t, over all p that print x within t steps).
MKtP ≈ MCSP with "EXP-oracle gates"


Page 9:


If Gap-MKtP[m^10, m^10 + O(m)] doesn't have n^3 · polylog(n)-size (De Morgan) formulas, then EXP ⊄ NC^1.

(Oliveira-Pich-Santhanam'19)


Page 10:


(Other Hardness Magnification Results)
· n^(1−ε)-approximate Clique [Sri'03]
· average-case MCSP [OS'18]
· k-Vertex-Cover [OS'18]
· low-depth circuit LBs for NC^1 [AK'10, CT'19]
· sublinear-depth circuit LBs for P [LW'13]

Page 11:

How to view Hardness Magnification?

[Diagram: Weak LB ⟹ (Magnification) ⟹ Strong LB]

Suggests new approaches to proving strong lower bounds?

Page 12:

It is argued that HM can bypass the Natural Proof Barrier [Razborov-Rudich]


Page 13:


• A heuristic argument [AK’10, OS’18]: HM seems to yield strong LBs only for certain functions, not for most of them (violating the “largeness” condition of Natural Proofs)


Page 14:


• A real theorem [CHOPRS, to appear in ITCS'20]: in some cases, the required weak LB actually implies the non-existence of natural proofs.


Page 15:

Extending Known Lower Bounds?

(Input length n = 2^m) If Gap-MKtP[m^10, m^10 + O(m)]
· doesn't have n^3 · polylog(n)-size (De Morgan) formulas, then EXP ⊄ NC^1;
· doesn't have n · polylog(n)-size Formula-⊕, then EXP ⊄ NC^1.
[OPS'19]

Page 16:


We know how to prove an n^1.99-size formula lower bound for Gap-MKtP! [OPS'19]

Page 17:


Can we improve it by a factor of n^(1+ε)?


Page 18:


Formula-⊕: De Morgan formulas where each leaf computes the XOR of a subset of the input bits.
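A quick sketch of the model (our illustration; the node encoding is hypothetical, chosen just for this example): an evaluator in which every leaf computes a parity of input bits instead of a single literal.

```python
def eval_formula_xor(node, x):
    # Node encodings (hypothetical, for illustration):
    #   ("leaf", [i1, i2, ...])  -- XOR of the listed input bits
    #   ("not", child), ("and", left, right), ("or", left, right)
    kind = node[0]
    if kind == "leaf":
        return sum(x[i] for i in node[1]) % 2
    if kind == "not":
        return 1 - eval_formula_xor(node[1], x)
    a, b = eval_formula_xor(node[1], x), eval_formula_xor(node[2], x)
    return a & b if kind == "and" else a | b

# (x0 XOR x2) AND (x1 XOR x3): a size-2 Formula-XOR.  As a plain De Morgan
# formula each XOR leaf would itself cost a small subformula.
f = ("and", ("leaf", [0, 2]), ("leaf", [1, 3]))
print(eval_formula_xor(f, [1, 0, 0, 1]))  # (1^0) AND (0^1) = 1
```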


Page 19:

Known LB against Formula-⊕ (Tal'16): F_2-Inner-Product ∉ Formula-⊕[n^2/polylog(n)]


Stronger LB than required (n^2/polylog(n) ≫ n · polylog(n)).

Page 20:


And F_2-Inner-Product seems much easier than Gap-MKtP??!

Page 21:


Can we adapt the proof techniques to Gap-MKtP?


Page 22:


Indicates that proving "weak" lower bounds is even harder than previously thought??


Page 23:


• Hardness magnification: proving almost-linear size lower bounds is already as hard as proving super-polynomial lower bounds…


Page 24:

What is special about MCSP and MKtP?

Is it because they are “compression” problems?


Page 25:

Observation: MCSP[m^10] and MKtP[m^10] are sparse languages!

MCSP[s(m)] is 2^Õ(s(m))-sparse: there are at most 2^Õ(s(m)) many circuits of size s(m)!
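Why "sparse", concretely (a standard counting step the slide leaves implicit): a circuit with s gates on m input bits can be written down using O(s log(s+m)) bits, so

\[
\#\{\text{truth tables of size-}s\text{ circuits}\} \;\le\; 2^{O(s \log (s+m))} \;=\; 2^{\tilde O(s)} .
\]

For s(m) = m^10 = polylog(n), this is 2^(polylog(n)) = 2^(n^o(1)): MCSP[m^10] has at most 2^(n^o(1)) YES-instances of each input length n.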


Page 26:

Our result: Hardness magnification holds for all sparse NP languages!


Page 27:

HM for all sparse NP languages

Theorem 1: Let L be any 2^(n^o(1))-sparse NP language.
· If L doesn't have n^1.01-size circuits, then for all k, NP ⊄ SIZE[n^k].
· If L doesn't have n^3.01-size formulas, then for all k, NP doesn't have n^k-size formulas.
· If L doesn't have n^2.01-size branching programs, then for all k, NP doesn't have n^k-size branching programs.

Page 28:


Similar results for other models!


Page 29:


Compared with [MMW'19]: our techniques yield weaker consequences (e.g., they get NP ⊄ P/poly), but apply to more restricted models.

Page 30:


(Best known formula LB: n^3/polylog(n)) [Håstad 90s, Tal]
(Best known branching program LB: n^2/polylog(n)) [Nečiporuk 60s]

Page 31:

Hardness Magnification for MCSP (Input length n = 2^m)

Theorem 2: If MCSP[m^10] doesn't have n^3 · polylog(n)-size (De Morgan) formulas, then PSPACE ⊄ (nonuniform) NC^1.

Similar results for other models!

Page 32:


Best known MCSP lower bound (Cheraghchi-Kabanets-Lu-Myrisiotis'19): MCSP[2^m/10m] requires n^(3−o(1))-size formulas. (Doesn't work for m^10…)

Page 33:

Similar results hold for MKtP[m^10] and EXP ⊄ NC^1 (improving upon [OPS'19], which required lower bounds for Gap-MKtP).


Page 34:

Algorithms with small non-uniformity

Theorem 3: If a 2^(n^o(1))-sparse NP language L is not computable by an n^1.01-time, n^0.01-space deterministic algorithm with n^0.01 bits of advice, then NP ⊄ SIZE[n^k] for all k.

Page 35:


There is a 2^(n^0.01) · n-sparse language L ∈ DTIME[Õ(n^1.01)] not computable by an n^1.01-time deterministic algorithm with n^0.01 bits of advice. (Adaptation of the time hierarchy theorem.)

The hypothesis is "close" to what we can prove!


Page 36:


Can we make it sparser?

Page 37:

Proof of Theorem 1.2

(Theorem 1.2: let L be any 2^(n^o(1))-sparse NP language; if L doesn't have n^3.01-size formulas, then for every k, NP doesn't have n^k-size formulas.)

Assume: NP has n^k-size formulas for some k.
Goal: Design n^3.01-size formulas for the 2^(n^o(1))-sparse NP language L.

Page 38:


Page 39:

Intuition


[Picture: the sparse slice L ∩ {0,1}^n drawn as a row of answers over inputs 0000, 0001, …, 1110, 1111 — mostly N, with a few Y: N N N N N Y N N N N Y N N Y N N]

Page 40:


[Picture: the same sparse row for L ∩ {0,1}^n, together with a dense row of answers (Y Y N Y) over the short inputs 00, 01, 10, 11 of a (dense) auxiliary NP language K ∩ {0,1}^(n^0.001/k) — the "kernel problem"]

Page 41:


We will construct cubic-size formulas for L, with oracle access to K.

Page 42:


Set t := n^(0.001/k) > log(sparsity of L).

Standard hashing tricks imply: there is a hash function H_s: {0,1}^n → {0,1}^(O(t)) that is
· perfect: maps YES-instances of L to distinct images,
· described by an O(t)-bit seed s,
· linear over F_2.
(There is a "correct" seed s that makes the hash function H_s perfect.)
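A toy sketch of this step (our illustration; for simplicity the seed here is a fully random matrix, whereas the talk's actual construction gets a genuinely O(t)-bit seed from an error-correcting code — see the next page):

```python
import random

def sample_seed(n, out_bits, rng):
    # Seed = an (out_bits x n) matrix over F_2; each row stored as an n-bit int.
    return [rng.getrandbits(n) for _ in range(out_bits)]

def H(seed, x):
    # H_s(x): output bit r is the parity <row_r, x> over F_2, i.e. an XOR
    # of a subset of the bits of x -- so H_s is linear over F_2.
    return tuple(bin(row & x).count("1") & 1 for row in seed)

def is_perfect(seed, yes_instances):
    # "Perfect" = injective on the YES-instances at this input length.
    return len({H(seed, y) for y in yes_instances}) == len(yes_instances)

# With at most 2^t YES-instances and roughly 2t + O(1) output bits, a random
# seed separates every pair except with small probability (union bound),
# so a "correct" seed exists and can be hardwired.  Extra bits for slack:
rng = random.Random(0)
yes = [rng.getrandbits(64) for _ in range(2 ** 3)]   # 2^t instances, t = 3
seed = sample_seed(64, 16)                           # 16 output bits
print(is_perfect(seed, yes))                         # True, w.h.p.
```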

Page 43:


(Construction: pick some coordinates of an error-correcting encoding of the input.)

Page 44:


Define an O(t)-input auxiliary NP problem K (the "kernel problem"):
· Input: hash seed s, hash value h, index i ∈ [n].
· Output: the i-th bit of some x ∈ L such that H_s(x) = h.
(For the "correct" s, this x is unique.)

Page 45:


NP has n^k-size formulas ⇒ K has formulas of size n^0.001! (K has only O(t) = O(n^(0.001/k)) input bits, and O(t)^k ≈ n^0.001.)


Page 46:


The NP algorithm for K: on input (s, h, i), guess (x, y), where y witnesses x ∈ L; accept iff x_i = 1 and H_s(x) = h.


Page 47:


Claim: for the "correct" s, the following decides L:
On input x ∈ {0,1}^n, accept iff ∀i ∈ [n]: K(s, H_s(x), i) = x_i.
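The claim as executable logic (our illustration; K below is a brute-force stand-in for the kernel oracle, not the formula implementation built on the next pages). For x ∈ L, perfectness makes x the unique member of L hashing to H_s(x), so K returns exactly x's bits; for x ∉ L, K describes some other y ∈ L (or an empty bucket), so some bit mismatches.

```python
import random

def H(seed, x_bits):
    # F_2-linear hash: each output bit is a parity of a subset of x's bits.
    return tuple(sum(r & b for r, b in zip(row, x_bits)) % 2 for row in seed)

def decides_L(x_bits, seed, K):
    # Accept iff K reconstructs x itself from x's own hash value.
    h = H(seed, x_bits)
    return all(K(seed, h, i) == x_bits[i] for i in range(len(x_bits)))

# Toy instantiation: a tiny "sparse" slice of L and a brute-force kernel.
n = 6
L = {0b101010, 0b000111, 0b110001}
rng = random.Random(7)
seed = [[rng.randint(0, 1) for _ in range(n)] for _ in range(12)]

def bits(y):
    return [(y >> (n - 1 - j)) & 1 for j in range(n)]

def K(seed, h, i):
    # NP kernel, by brute force: the i-th bit of some x in L with H_s(x) = h.
    for y in sorted(L):
        if H(seed, bits(y)) == h:
            return bits(y)[i]
    return 0  # empty hash bucket

errors = [x for x in range(2 ** n) if decides_L(bits(x), seed, K) != (x in L)]
print(errors)  # expect [] or [0]: only the all-zeros input can slip through
               # (an empty-bucket corner case this sketch elides)
```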


Page 48:

[Figure: the decision rule as a formula — an AND over i ∈ [n] of the tests "K(s, H_s(x), i) =? x_i", where each copy of K is an n^0.001-size formula]

Page 49:


The hash seed s is hardwired into the formulas.


Page 50:


Each bit of H_s(x) is an XOR function (implemented by De Morgan formulas of size O(n^2)).
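The O(n^2) size for an n-bit XOR is the classical recursion (left implicit on the slide): write the parity of x as the XOR of the parities of its two halves and expand the top XOR over the De Morgan basis,

\[
u \oplus v = (u \wedge \neg v) \vee (\neg u \wedge v)
\;\Longrightarrow\;
L(\oplus_n) \le 4\,L(\oplus_{n/2})
\;\Longrightarrow\;
L(\oplus_n) = O(n^2),
\]

where L(·) denotes formula size (number of leaves).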


Page 51:


Total size: n tests, each an n^0.001-size copy of K whose leaves are O(n^2)-size XOR formulas, giving n · n^0.001 · O(n^2) = O(n^3.001) ≤ n^3.01, as desired.


Page 52:

Open Problems

• Are there any other natural sparse NP languages for which one can prove some concrete lower bounds?

Page 53:


• Is it possible to show hardness magnification results for "denser" variants of MCSP or MKtP, such as MCSP[2^m/m^3]?


Page 54:


Thank you!