TRANSCRIPT
Social Choice,
Computational Complexity,
Gaussian Geometry,
& Boolean Functions
Ryan O’Donnell
Carnegie Mellon University
analysisofbooleanfunctions.org
f : {−1,+1}n → {−1,+1}
[Figure: the discrete cube {−1,+1}n with each vertex labeled ±1 by f]
f = ±1S for some S ⊆ {−1,+1}n
Form a “ρ-correlated” copy y of x ∈ {−1,+1}n:
For each 1 ≤ i ≤ n, independently…
with probability ρ, set yi = xi;
with probability 1−ρ, let yi be uniformly random.
f : {−1,+1}n → {−1,+1}, f = ±1S
[Figure: a uniformly random x and a .9-correlated copy of x, shown as nearby points on the cube]
ρ-Sensitivity[f] := Pr[f(x) ≠ f(y)] for ρ-correlated (x, y):
a kind of measure of the “boundary size” of S.
We’ll focus on “volume-½” sets S.
Equivalently, “balanced” f: Pr[f(x) = +1] = ½.
Which balanced f : {−1,+1}n → {−1,+1}
minimizes ρ-Sensitivity[f]?
ρ-Isoperimetric Problem on Discrete Cube
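As a concrete sanity check, assuming the definition ρ-Sens[f] = Pr[f(x) ≠ f(y)] over a uniform x and a ρ-correlated y, one can compute ρ-Sensitivity exactly for small n by enumeration (the helper names below are illustrative, not from the talk):

```python
from itertools import product

def rho_sens(f, n, rho):
    """Exact rho-Sensitivity Pr[f(x) != f(y)]: x uniform on {-1,+1}^n,
    y agrees with x in each coordinate w.p. (1+rho)/2, flips w.p. (1-rho)/2."""
    keep, flip = (1 + rho) / 2, (1 - rho) / 2
    total = 0.0
    for x in product((-1, +1), repeat=n):
        fx = f(x)
        for y in product((-1, +1), repeat=n):
            if fx != f(y):
                d = sum(a != b for a, b in zip(x, y))  # Hamming distance
                total += keep ** (n - d) * flip ** d   # Pr[y | x]
    return total / 2 ** n                              # average over x

dictator = lambda x: x[0]
majority = lambda x: 1 if sum(x) > 0 else -1  # n odd: no ties

sens_dict = rho_sens(dictator, 3, 0.9)  # (1-rho)/2 = 0.05 for any dictator
sens_maj = rho_sens(majority, 3, 0.9)   # slightly larger than 0.05
```

Even at n = 3, Majority is strictly more ρ-sensitive than a dictator, foreshadowing the answer below.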
Social Choice interpretation
Election with n voters, 2 candidates named ±1.
f : {−1,+1}n → {−1,+1} is the voting rule:
xj ∈ {−1,+1} is jth voter’s preference.
f(x) = f(x1, …, xn) = winner of the election.
E.g.: f(x) = Majority(x) = sgn(x1 + ∙∙∙ + xn)
f(x) = ElectoralCollege(x)
f(x) = +1 (not balanced)
Social Choice interpretation
Impartial Culture Assumption [GK’68]:
Voters’ preferences are uniformly random.
“Faulty voting machine twist”:
Each vote recorded correctly with prob. ρ,
changed to a random vote with prob. 1−ρ.
ρ-Sens[f] = Pr[faulty machines affect outcome]
Which balanced f : {−1,+1}n → {−1,+1}
minimizes ρ-Sensitivity[f]?
Answer:
Dictatorships, f(x) = xj
(and negated-dictatorships, f(x) = −xj)
[Figure: the cube labeled by a dictatorship f(x) = xj: f is +1 on the half-cube where xj = +1 and −1 on the other half]
Which balanced f : {−1,+1}n → {−1,+1}
minimizes ρ-Sensitivity[f]?
Theorem:
∀ balanced f : {−1,+1}n → {−1,+1},
ρ-Sens[f] ≥ ρ-Sens[±Dictators] = (1−ρ)/2
Proof:
Fourier analysis of Boolean functions.
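In outline, that Fourier proof is short (standard facts only: the noise-stability formula and Parseval, with ρ-Sens[f] = Pr[f(x) ≠ f(y)] for ρ-correlated (x, y)):

```latex
% Noise stability and its Fourier expression:
\mathrm{Stab}_\rho[f] = \mathbf{E}[f(x)f(y)]
  = \sum_{S \subseteq [n]} \rho^{|S|} \hat f(S)^2,
\qquad
\rho\text{-}\mathrm{Sens}[f] = \tfrac{1}{2}\bigl(1 - \mathrm{Stab}_\rho[f]\bigr).
% Balanced means \hat f(\emptyset) = 0; Parseval gives \sum_S \hat f(S)^2 = 1. Hence
\rho\text{-}\mathrm{Sens}[f]
  = \tfrac{1}{2} \sum_{|S| \ge 1} \bigl(1 - \rho^{|S|}\bigr) \hat f(S)^2
  \;\ge\; \tfrac{1}{2}(1 - \rho),
% since 1 - \rho^{|S|} >= 1 - \rho for |S| >= 1; equality forces all Fourier
% weight onto degree 1, i.e. f = +x_j or -x_j.
```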
One more social choice detour…
Three candidates A, B, C, ranked by n voters.
Societal ranking produced by holding 3 pairwise
elections using some f : {−1,+1}n → {−1,+1}. (Condorcet election / Independence of Irrelevant Alternatives)
Condorcet’s Paradox (1785): With f = Majority,
might obtain “A beats B, B beats C, C beats A”!
Arrow’s Theorem (1950):
Paradox never occurs ⇒ f = ±Dictator. ☹
Kalai’s Proof (2002):
Same Fourier analysis as in previous theorem.
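Condorcet's paradox is easy to observe by simulation under the Impartial Culture Assumption; a minimal Monte Carlo sketch (function name mine; the 1/18 figure for 3 voters is a standard fact, not from the talk):

```python
import random

def condorcet_cycle_prob(n_voters=3, trials=20000, seed=0):
    """Estimate Pr[Condorcet cycle] when n_voters rank A, B, C uniformly
    at random (Impartial Culture) and each pair is decided by Majority."""
    rng = random.Random(seed)
    cycles = 0
    for _ in range(trials):
        rankings = [rng.sample("ABC", 3) for _ in range(n_voters)]

        def beats(p, q):
            # p beats q if a strict majority of voters rank p above q
            return sum(r.index(p) < r.index(q) for r in rankings) * 2 > n_voters

        # n_voters odd, so no ties: a cycle is "A beats B beats C beats A"
        # in one of its two orientations.
        if (beats("A", "B") and beats("B", "C") and beats("C", "A")) or \
           (beats("B", "A") and beats("C", "B") and beats("A", "C")):
            cycles += 1
    return cycles / trials

paradox_rate = condorcet_cycle_prob()  # for 3 voters the exact value is 1/18
```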
Every mathematics talk should contain…
a joke
a proof
Influencej[f] := Pr[flipping the jth coordinate of x changes f(x)]
Examples:
Infj[ith Dictator] = 1 if j = i, 0 otherwise
Infj[Majorityn] = Θ(1/√n) → 0, ∀ j
Which balanced f : {−1,+1}n → {−1,+1}
with Influencej[f] “small” for all 1 ≤ j ≤ n
minimizes ρ-Sensitivity[f]?
Stablest voting rule problem
If f : {−1,+1}n → {−1,+1} is balanced,
and Influencej[f] ≤ δ for all 1 ≤ j ≤ n, then
ρ-Sens[f] ≥ ρ-Sens[Majority] − ϵ(δ)
(where ϵ(δ) → 0 as δ → 0)
Majority Is Stablest Conjecture [KKMO’04]
[Guilbaud’52]
Majority Is Stablest Theorem [MOO’05]
[Figure: plot of ρ-Sens vs ρ (x-axis: quality of voting machines; y-axis: probability outcome affected), showing the dictator curve (1−ρ)/2]
If f : {−1,+1}n → {−1,+1} is balanced,
and Influencej[f] ≤ δ for all 1 ≤ j ≤ n, then
ρ-Sens[f] ≥ ρ-Sens[Majority] − ϵ(δ)
(where ϵ(δ) → 0 as δ → 0)
Majority Is Stablest [KKMO’04/MOO’05]
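A numerical illustration, again assuming ρ-Sens[f] = Pr[f(x) ≠ f(y)] for ρ-correlated (x, y) and with helper names of my own: exact enumeration shows ρ-Sens[Majorityn] climbing from the dictator value (1−ρ)/2 toward (arccos ρ)/π, the Gaussian limit given by Sheppard's formula later in the talk.

```python
import math
from itertools import product

def rho_sens_majority(n, rho):
    """Exact rho-Sens[Maj_n]: enumerate x and y, weighting y by
    Pr[y | x] = ((1+rho)/2)^(#agreements) * ((1-rho)/2)^(#disagreements)."""
    keep, flip = (1 + rho) / 2, (1 - rho) / 2
    maj = lambda z: 1 if sum(z) > 0 else -1  # n odd: no ties
    total = 0.0
    for x in product((-1, +1), repeat=n):
        mx = maj(x)
        for y in product((-1, +1), repeat=n):
            if mx != maj(y):
                d = sum(a != b for a, b in zip(x, y))
                total += keep ** (n - d) * flip ** d
    return total / 2 ** n

rho = 0.5
sens = [rho_sens_majority(n, rho) for n in (1, 3, 5, 7)]
limit = math.acos(rho) / math.pi  # Sheppard's formula: 1/3 at rho = 0.5
# sens increases with n, staying below the Gaussian limit
```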
2013: New proof by De, Mossel, Neeman
[KKMO’04] motivation:
“Majority Is Stablest” is the exact statement
needed to show an optimal computational
complexity result for the algorithmic task called
Maximum-Cut.
Max-Cut
Input:
“Almost bipartite” N-vertex graph
Output:
Optimal bipartition (minimizing the fraction of “mistake edges”, i.e. edges left uncut)
Max-Cut
“Brute force” algorithm: ≈ 2^N steps.
Question:
Is there an “efficient” (= N^C steps) algorithm?
Answer:
No. (Assuming “P≠NP”. Max-Cut is “NP-hard”.)
Max-Cut
Input:
“Almost bipartite” N-vertex graph
Output:
Approximate optimal bipartition: “do your best” on the fraction of mistake edges
Theorem [GLS’88, DP’90, GW’94]:
There is an efficient algorithm s.t. ∀ ρ,
if the input graph is “ρ-bipartite”
(“optimal bipartition has ≤ (1−ρ)/2 fraction of mistake edges”),
then the algorithm outputs a bipartition
with fraction of mistake edges ≤ (arccos ρ)/π.
[Figure: fraction of mistake edges vs ρ (how bipartite the input graph is), comparing the optimal bipartition ((1−ρ)/2), the GW algorithm’s guarantee, and the previous best efficient algorithm (labeled “≥ .69”)]
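The GW algorithm is semidefinite-programming-based; solving the SDP is out of scope here, but the rounding step is simple enough to sketch. In this sketch `vectors` is assumed to be the unit-vector SDP solution (not computed here), and the key fact behind the guarantee is that a random hyperplane separates unit vectors u, v with probability arccos(u·v)/π:

```python
import math
import random

def hyperplane_round(vectors, seed=0):
    """GW rounding: assign each vertex the side of a uniformly random
    hyperplane through the origin that its SDP vector falls on."""
    rng = random.Random(seed)
    d = len(next(iter(vectors.values())))
    g = [rng.gauss(0, 1) for _ in range(d)]  # Gaussian normal vector
    return {v: 1 if sum(gi * vi for gi, vi in zip(g, vec)) >= 0 else -1
            for v, vec in vectors.items()}

# Monte Carlo check of the key fact, with u, v at angle pi/3 (so u.v = 1/2):
theta = math.pi / 3
u, v = (1.0, 0.0), (math.cos(theta), math.sin(theta))
rng = random.Random(1)
trials, separated = 100_000, 0
for _ in range(trials):
    g0, g1 = rng.gauss(0, 1), rng.gauss(0, 1)
    side_u = g0 * u[0] + g1 * u[1] >= 0
    side_v = g0 * v[0] + g1 * v[1] >= 0
    separated += side_u != side_v
sep_rate = separated / trials  # should be near theta/pi = 1/3
```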
[KKMO’04] Theorem:
“Majority Is Stablest”
⇒ “UG-hard” (Unique-Games-hard, a weaker notion than NP-hard) to beat GW’s Max-Cut algorithm
Raghavendra ’08: (see also [KKMO’04,Aus’06,Aus’07,OW’07,RS08])
∃ a generic, efficient algorithm A such that
for all “constraint satisfaction problems” M,
it’s UG-hard to approx. M better than A does.
Proving Majority Is Stablest:
enter Gaussian geometry.
f : {−1,+1}n → {−1,+1} balanced, f = ±1S
[Figure: x and a ρ-correlated copy of x on the cube]
If f : {−1,+1}n → {−1,+1} is balanced,
and Influencej[f] ≤ δ for all 1 ≤ j ≤ n, then
ρ-Sens[f] ≥ “ρ-Sens[Majority]” − ϵ(δ)
(where ϵ(δ) → 0 as δ → 0)
Majority Is Stablest Theorem
sgn : ℝ1 → {−1,+1}, sgn = ±1S with S = (0,∞) ⊆ ℝ1
(Note: S has Gaussian volume ½; i.e., sgn is “balanced”.)
“Gaussian-ρ-Sensitivity”[sgn] := Pr[sgn(z) ≠ sgn(z′)] for ρ-correlated Gaussians z, z′
= (arccos ρ)/π [exercise, Sheppard 1899]
n-dim. Boolean function Majority is the 1-dim. Gaussian function sgn in disguise!
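Sheppard's formula is easy to check numerically; below, z′ = ρz + √(1−ρ²)g with g an independent standard Gaussian is the standard way to generate a ρ-correlated pair (function name mine):

```python
import math
import random

def gaussian_rho_sens_sgn(rho, trials=200_000, seed=1):
    """Monte Carlo estimate of Pr[sgn(z) != sgn(z')] for rho-correlated
    standard Gaussians: z' = rho*z + sqrt(1-rho^2)*g, g independent of z."""
    rng = random.Random(seed)
    c = math.sqrt(1 - rho * rho)
    mismatches = 0
    for _ in range(trials):
        z = rng.gauss(0, 1)
        zp = rho * z + c * rng.gauss(0, 1)
        mismatches += (z > 0) != (zp > 0)
    return mismatches / trials

rho = 0.5
estimate = gaussian_rho_sens_sgn(rho)
sheppard = math.acos(rho) / math.pi  # = 1/3 for rho = 1/2
```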
More generally, for g : ℝd → {−1,+1}, g = ±1S, define
Gaussian-ρ-Sens[g] = Pr[g(z) ≠ g(z′)] for ρ-correlated d-dim. Gaussians z, z′.
The Gaussian function g can be “disguised” by a sequence of (small-influence) Boolean functions f.
[Figure: a set S ⊆ ℝ2]
As n → ∞:
• ρ-Sens[f] → Gaussian-ρ-Sens[g]
• if g is “balanced” (Pr[z ∈ S] = ½), f → balanced
• Influencej[f] → 0 ∀ j
(exactly the Majority Is Stablest hypotheses)
∴ Majority Is Stablest Theorem implies…
Borell’s Isoperimetric Inequality [Borell ’85] (special case):
If g : ℝd → {−1,+1} is balanced (i.e., S ⊆ ℝd has Gaussian volume ½), then
Gaussian-ρ-Sens[g] ≥ Gaussian-ρ-Sens[sgn] = (arccos ρ)/π.
Equality if S is a halfline in ℝ1, or indeed any halfspace thru 0 in ℝd.
(ρ → 1 implies the classical Gaussian Isoperimetric Inequality [Borell’74, Sudakov−Tsirelson’74].)
∴ Majority Is Stablest ⇒ Borell’s Isoperim. Ineq.
Proofs of Borell’s Isoperimetric Inequality:
• Borell ’85: Gaussian rearrangement, very hard
• Beckner ’90: Analogue on the sphere by 2-point symm., pretty easy, implies Gaussian version [CL’90]
• [KO’12]: vol.-½ case: four sentences
Every mathematics talk should contain…
a joke
a proof
First proof of Majority Is Stablest:
[MOO’05] proved “Invariance Principle” (nonlinear CLT)
to obtain Borell’s Isoperim. Ineq. ⇒ Majority Is Stablest,
whence UG-hardness of beating GW Max-Cut algorithm.
∴ Majority Is Stablest ⇒ Borell’s Isoperim. Ineq.
Proofs of Borell’s Isoperimetric Inequality:
• Borell ’85: Gaussian rearrangement, very hard
• Beckner ’90: Analogue on the sphere by 2-point symm., pretty easy, implies Gaussian version [CL’90]
• [MN’12]: Semigroup method
• [DMN’13]: Discrete proof of Majority Is Stablest (hence also Borell’s Isoperimetric Ineq.) by induction on n
• [KO’12]: vol.-½ case: four sentences
• Eldan ’13: Stochastic calculus
Conclusion: Importance of multiple proofs
[MOO] proof of Majority Is Stablest:
• Invariance Principle, reduced to Gaussian geom.
• Advantage: Invariance Principle useful elsewhere: Social Choice, Learning Theory, Comp. Complexity [Raghavendra’08]
[DMN] proof of Majority Is Stablest:
• Direct induction on n, completely discrete
• Advantage: Proof expressible in “SOS proof system”, which has algorithmic implications…
Thanks!