Analysis of Boolean Functions Ryan O’Donnell Carnegie Mellon University
Part 1:
A. Fourier expansion basics
B. Concepts: Bias, Influences, Noise Sensitivity
C. Kalai’s proof of Arrow’s Theorem
10 Minute Break
Part 2:
A. The Hypercontractive Inequality
B. Algorithmic Gaps
Sadly no time for: Learning theory, Pseudorandomness, Arithmetic combinatorics, Random graphs / percolation, Communication complexity, Metric / Banach spaces, Coding theory, etc.
1A. Fourier expansion basics
f : {0, 1}^n → {0, 1}
Proposition: Every f : {−1, +1}^n → {−1, +1} (indeed, → ℝ) can be expressed (uniquely) as a multilinear polynomial. That’s it. That’s the “Fourier expansion” of f.
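To make the Proposition concrete, here is a minimal brute-force sketch (Python; `fourier_coefficients` and `maj3` are illustrative names, not from the slides) that recovers the multilinear expansion via the standard formula f̂(S) = avg_x [ f(x) · ∏_{i∈S} x_i ]:

```python
from itertools import product
from math import prod

def fourier_coefficients(f, n):
    """All coefficients of the multilinear ("Fourier") expansion of
    f : {-1,+1}^n -> R, via f_hat(S) = avg_x [ f(x) * prod_{i in S} x_i ]."""
    cube = list(product([-1, +1], repeat=n))
    coeffs = {}
    for S in product([False, True], repeat=n):                  # S as an indicator vector
        chi = lambda x, S=S: prod(x[i] for i in range(n) if S[i])  # empty product = 1
        coeffs[S] = sum(f(x) * chi(x) for x in cube) / len(cube)
    return coeffs

maj3 = lambda x: 1 if sum(x) > 0 else -1                        # Maj(x1, x2, x3)
for S, c in fourier_coefficients(maj3, 3).items():
    if abs(c) > 1e-9:
        print({i + 1 for i in range(3) if S[i]}, c)
# Maj3(x) = (1/2) x1 + (1/2) x2 + (1/2) x3 - (1/2) x1 x2 x3; all other coefficients are 0.
```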
Why? Coefficients encode useful information. When? 1. Uniform probability involved. 2. Hamming distances relevant.
Parseval’s Theorem: Let f : {−1, +1}^n → {−1, +1}. Then Σ_{S ⊆ [n]} f̂(S)² = avg_x { f(x)² } = 1.
“Weight” of f on S ⊆ [n] = f̂(S)².
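Reusing the `fourier_coefficients` sketch above, Parseval can be checked numerically:

```python
coeffs = fourier_coefficients(maj3, 3)
print(sum(c * c for c in coeffs.values()))   # 1.0: a ±1-valued f has total Fourier weight exactly 1
```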
1B. Concepts: Bias, Influences, Noise Sensitivity
Social Choice: Candidates ±1; n voters; votes are random. f : {−1, +1}^n → {−1, +1} is the “voting rule”.
Bias of f: avg_x f(x) = Pr[+1 wins] − Pr[−1 wins]. Fact: the weight of f on ∅ is Bias(f)²; it measures “imbalance”.
Influence of i on f: Inf_i(f) = Pr[ f(x) ≠ f(x^{⊕i}) ] = Pr[voter i is a swing voter], where x^{⊕i} is x with the i-th bit flipped. Fact: Inf_i(f) = Σ_{S ∋ i} f̂(S)².
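A brute-force sketch of both the definition and the Fourier formula, reusing the `maj3` and `fourier_coefficients` helpers from above:

```python
from itertools import product

def influence(f, n, i):
    """Inf_i(f) = Pr_x[ f(x) != f(x with bit i flipped) ], x uniform on {-1,+1}^n."""
    cube = list(product([-1, +1], repeat=n))
    def flip(x):
        y = list(x); y[i] = -y[i]; return tuple(y)
    return sum(f(x) != f(flip(x)) for x in cube) / len(cube)

print([influence(maj3, 3, i) for i in range(3)])        # [0.5, 0.5, 0.5]

# Fourier check: Inf_1(Maj3) = sum of f_hat(S)^2 over S containing coordinate 1.
coeffs = fourier_coefficients(maj3, 3)
print(sum(c * c for S, c in coeffs.items() if S[0]))    # 0.5
```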
Example: the Fourier weights of Maj(x_1, x_2, x_3) on the sets {1, 2, 3}, {1, 2}, {1, 3}, {2, 3}, {1}, {2}, {3}, ∅.
avg_i Inf_i(f) = frac. of hypercube edges which are cut edges.
LMN Theorem: If f is in AC^0, then avg_i Inf_i(f) ≤ polylog(n)/n.
avg_i Inf_i(Parity_n) = 1 ⇒ Parity ∉ AC^0. avg_i Inf_i(Maj_n) = Θ(1/√n) ⇒ Majority ∉ AC^0.
KKL Theorem: If Bias(f) = 0, then max_i Inf_i(f) ≥ Ω(log n / n). Corollary: Assuming f monotone, −1 or +1 can bribe o(n) voters and win w.p. 1 − o(1).
Noise Sensitivity of f at ϵ: NS_ϵ(f) = Pr[ f(x) ≠ f(y) ] = Pr[wrong winner wins], when each vote is misrecorded (flipped) with probability ϵ.
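A Monte Carlo sketch of the definition (`maj3` as above); equivalently one could use the exact identity NS_ϵ(f) = 1/2 − 1/2 · Σ_S (1 − 2ϵ)^{|S|} f̂(S)²:

```python
import random

def noise_sensitivity(f, n, eps, trials=100_000):
    """Monte Carlo estimate of NS_eps(f) = Pr[f(x) != f(y)],
    where y flips each coordinate of x independently with prob. eps."""
    bad = 0
    for _ in range(trials):
        x = tuple(random.choice((-1, +1)) for _ in range(n))
        y = tuple(-xi if random.random() < eps else xi for xi in x)
        bad += f(x) != f(y)
    return bad / trials

print(noise_sensitivity(maj3, 3, 0.1))   # small for small eps
```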
Learning Theory principle [LMN’93, …, KKMS’05]: If all f ∈ C have small NS_ϵ(f), then C is efficiently learnable.
Proposition: NS_ϵ(Maj_n) = Θ(√ϵ) for small ϵ. [Plot of NS_ϵ over ϵ ∈ [0, 1], with the Electoral College for comparison.]
1C. Kalai’s proof of Arrow’s Theorem
Ranking 3 candidates (Condorcet [1785]). Election: A > B? B > C? C > A?
Arrow’s Impossibility Theorem [1950]: If f : {−1, +1}^n → {−1, +1} never gives an irrational outcome in Condorcet elections, then f is a Dictator or a negated-Dictator.
Gil Kalai’s Proof [2002]:
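The key identity behind the “equality” step below (a standard computation, restated here in LaTeX):

```latex
\Pr[\text{rational outcome with } f]
  \;=\; \tfrac{3}{4} \;-\; \tfrac{3}{4}\,\mathrm{Stab}_{-1/3}(f),
\qquad
\mathrm{Stab}_{\rho}(f) \;=\; \sum_{S \subseteq [n]} \rho^{|S|}\, \widehat{f}(S)^{2}.
```

Since (−1/3)^{|S|} ≥ −1/3, with equality exactly when |S| = 1, Parseval forces Pr[rational outcome] ≤ 3/4 + 1/4 = 1, with equality iff all of f’s Fourier weight sits at level 1.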
Gil Kalai’s Proof, concluded: f never gives irrational outcomes ⇒ equality ⇒ all Fourier weight “at level 1” ⇒ f(x) = ±x_j for some j (exercise).
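One route for the exercise, as a sketch: write f(x) = Σ_i a_i x_i (all weight at level 1); since f is ±1-valued,

```latex
f(x) - f(x^{\oplus i}) \;=\; 2 a_i x_i \;\in\; \{-2, 0, 2\}
  \;\Longrightarrow\; a_i \in \{-1, 0, 1\};
\qquad
\sum_i a_i^2 = 1 \ \text{(Parseval)}
  \;\Longrightarrow\; f = \pm x_j \ \text{for a single } j.
```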
Guilbaud’s Theorem [1952]: Pr[rational outcome with Maj_n] → Guilbaud’s Number ≈ .912.
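A Monte Carlo sketch of Guilbaud’s theorem (hypothetical Python; `rational_prob` and `maj` are illustrative names): draw each voter’s preference as a uniformly random NAE triple and check how often majority gives a rational (non-cyclic) outcome:

```python
import random
from itertools import product

NAE = [t for t in product([-1, +1], repeat=3) if len(set(t)) > 1]   # the 6 rational prefs

def rational_prob(f, n, trials=50_000):
    """Pr[(f(x), f(y), f(z)) is not-all-equal], each voter's (x_i, y_i, z_i) uniform NAE."""
    good = 0
    for _ in range(trials):
        cols = [random.choice(NAE) for _ in range(n)]
        x, y, z = (tuple(c[j] for c in cols) for j in range(3))
        good += len({f(x), f(y), f(z)}) > 1     # outcome is rational iff itself NAE
    return good / trials

maj = lambda x: 1 if sum(x) > 0 else -1
print(rational_prob(maj, 101))                  # roughly 0.912 for large odd n
```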
Corollary of “Majority Is Stablest” [MOO’05]: If Inf_i(f) ≤ o(1) for all i, then Pr[rational outcome with f] ≤ .912 + o(1).
10 minute break
Part 2:
A. The Hypercontractive Inequality
B. Algorithmic Gaps
2A. The Hypercontractive Inequality (AKA the Bonami-Beckner Inequality)
KKL Theorem, Friedgut’s Theorem, Talagrand’s Theorem, “Every monotone graph property has a sharp threshold”, FKN Theorem, Bourgain’s Junta Theorem, Majority Is Stablest Theorem: all use the “Hypercontractive Inequality”.
Hoeffding Inequality: Let F = c_0 + c_1 x_1 + c_2 x_2 + ··· + c_n x_n, where the x_i’s are indep., unif. random ±1.
Mean: μ = c_0. Variance: σ² = c_1² + ··· + c_n². Then Pr[ |F − μ| ≥ t·σ ] ≤ 2 exp(−t²/2).
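A quick Monte Carlo sanity check of the bound (a hypothetical snippet; n, t, and the coefficients are arbitrary choices):

```python
import math
import random

n, t, trials = 200, 2.0, 20_000
c = [random.gauss(0, 1) for _ in range(n)]      # arbitrary fixed coefficients c_1..c_n
sigma = math.sqrt(sum(ci * ci for ci in c))
tail = sum(abs(sum(ci * random.choice((-1, +1)) for ci in c)) >= t * sigma
           for _ in range(trials)) / trials
print(tail, "<=", 2 * math.exp(-t * t / 2))     # e.g. ~0.046 <= ~0.271
```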
Hypercontractive Inequality: Let F = Σ_{|S| ≤ d} c_S ∏_{i∈S} x_i be a degree-d multilinear polynomial in indep., unif. random ±1 x_i’s (mean μ = c_∅, variance σ² = Σ_{S ≠ ∅} c_S²). Then for all q ≥ 2, ‖F‖_q ≤ (q−1)^{d/2} · ‖F‖_2, where ‖F‖_q = E[|F|^q]^{1/q}.
In other words: F is a “reasonable” random variable.
“q = 4” Hypercontractive Inequality: Let F be as above. Then ‖F‖_4 ≤ √3^d · ‖F‖_2, i.e., E[F⁴] ≤ 9^d · E[F²]².
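A brute-force numerical check of the q = 4 statement (hypothetical snippet; the random degree-d polynomial is an arbitrary choice):

```python
import random
from itertools import combinations, product
from math import prod

n, d = 5, 2
terms = {S: random.gauss(0, 1)                        # random multilinear poly of degree <= d
         for k in range(d + 1) for S in combinations(range(n), k)}
F = lambda x: sum(c * prod(x[i] for i in S) for S, c in terms.items())

cube = list(product([-1, +1], repeat=n))
m2 = sum(F(x) ** 2 for x in cube) / len(cube)         # E[F^2]
m4 = sum(F(x) ** 4 for x in cube) / len(cube)         # E[F^4]
print(m4 <= 9 ** d * m2 ** 2)                         # True
```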
In fact, all of the above (KKL, Friedgut, Talagrand, sharp thresholds, FKN, Bourgain’s Junta Theorem, Majority Is Stablest) just use the “q = 4” Hypercontractive Inequality.
“q = 4” Hypercontractive Inequality: Let F be degree d over n i.i.d. ±1 r.v.’s. Then E[F⁴] ≤ 9^d · E[F²]². Proof [MOO’05]: Induction on n: obvious step; use induction hypothesis; use Cauchy-Schwarz on the obvious thing; use induction hypothesis; obvious step.
2B. Algorithmic Gaps
“Set-Cover is NP-hard to approximate to factor ln(N)”: best poly-time guarantee ≈ ln(N) · Opt.
“Factor ln(N) Algorithmic Gap for LP-Rand-Rounding”: LP-Rand-Rounding guarantee ≈ ln(N) · Opt.
“Algorithmic Gap Instance S for LP-Rand-Rounding”: LP-Rand-Rounding(S) ≈ ln(N) · Opt(S).
Algorithmic Gap instances are often “based on” {−1, +1}^n.
Sparsest-Cut: Algorithm: Arora-Rao-Vazirani SDP. Guarantee: Factor O(√log N).
Opt = 1/n, but ARV may get f(x) = sgn(r_1 x_1 + ··· + r_n x_n), a random weighted majority.
Opt = 1/n; ARV gets gap: Θ(√n) = Θ(√log N).
Algorithmic Gaps → Hardness-of-Approx: LP / SDP-rounding Alg. Gap instance
• n optimal “Dictator” solutions
• “generic mixture of Dictators” much worse
+ PCP technology = same-gap hardness-of-approximation
KKL / Talagrand Theorem: If f is balanced and Inf_i(f) ≤ 1/n^{0.01} for all i, then avg_i Inf_i(f) ≥ Ω(log n / n). Gap: Θ(log n) = Θ(log N).
[CKKRS’05]: KKL + Unique Games Conjecture ⇒ Ω(log log N) hardness-of-approx.
2-Colorable 3-Uniform Hypergraphs: Input: a 2-colorable, 3-uniform hypergraph. Output: a 2-coloring. Obj: max. fraction of legally colored hyperedges.
2-Colorable 3-Uniform Hypergraphs: Algorithm: SDP [KLP’96]. Guarantee: [Zwick’99].
Algorithmic Gap Instance: Vertices: {−1, +1}^n. 6^n hyperedges: { (x, y, z) : possible prefs in a Condorcet election } (i.e., triples s.t. (x_i, y_i, z_i) is NAE for all i).
Elts: {−1, +1}^n; Edges: Condorcet votes (x, y, z). A 2-coloring = f : {−1, +1}^n → {−1, +1}; frac. of legally colored hyperedges = Pr[“rational” outcome with f]. Instance 2-colorable? ✔ (2n optimal solutions: the ±Dictators)
Elts: {−1, +1}^n; Edges: Condorcet votes (x, y, z). The SDP rounding alg. may output f(x) = sgn(r_1 x_1 + ··· + r_n x_n). Random weighted majority is also rational with prob. ≈ .912! [same CLT arg.]
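Reusing the `rational_prob` sketch from the Guilbaud slide above, the CLT claim can be spot-checked for a random weighted majority:

```python
import random

rs = [random.gauss(0, 1) for _ in range(101)]    # random weights r_1..r_n
rwm = lambda x: 1 if sum(r * xi for r, xi in zip(rs, x)) > 0 else -1
print(rational_prob(rwm, 101))                   # also roughly 0.912, per the CLT argument
```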
Algorithmic Gaps → Hardness-of-Approx: LP / SDP-rounding Alg. Gap instance
• n optimal “Dictator” solutions
• “generic mixture of Dictators” much worse
+ PCP technology = same-gap hardness-of-approximation
Corollary of Majority Is Stablest: If Inf_i(f) ≤ o(1) for all i, then Pr[rational outcome with f] ≤ .912 + o(1). Cor: this + Unique Games Conjecture ⇒ .912 hardness-of-approx*.
2C. Future Directions
Develop the “structure vs. pseudorandomness” theory for Boolean functions.
Thanks!