
Prof. Mark Bun

CAS CS 591 B: Communication Complexity

Lecture Notes 19:

Pattern Matrix Method Continued

Fall 2019

Reading.

• Sherstov, The Pattern Matrix Method

In this lecture, we’ll prove a version of the Pattern Matrix Method lifting approximate degree to discrepancy.

Theorem 1. Let $F = f \circ g_m^n$ where $g_m : \{-1,1\}^m \times ([m] \times \{-1,1\}) \to \{-1,1\}$ is given by $g_m(x, (i, w)) = x_i w$. Then for every $\delta > 0$,

$$\mathrm{disc}(F) \le \delta + m^{-\mathrm{adeg}_{1-\delta}(f)/2}.$$

Here, we recall the definition of approximate degree and its dual characterization.

Definition 2. Let $\varepsilon > 0$ and let $f : \{-1,1\}^n \to \{-1,1\}$. A real polynomial $p : \{-1,1\}^n \to \mathbb{R}$ $\varepsilon$-approximates $f$ if $|p(x) - f(x)| \le \varepsilon$ for all $x \in \{-1,1\}^n$. The $\varepsilon$-approximate degree of $f$, denoted $\mathrm{adeg}_\varepsilon(f)$, is the least degree of a polynomial $p$ which $\varepsilon$-approximates $f$.

Theorem 3. Let $f : \{-1,1\}^n \to \{-1,1\}$ be a Boolean function. Then $\mathrm{adeg}_\varepsilon(f) > d$ if and only if there exists a function $\psi : \{-1,1\}^n \to \mathbb{R}$ such that

1. $\langle f, \psi \rangle := \sum_{x \in \{-1,1\}^n} f(x)\psi(x) > \varepsilon$,
2. $\|\psi\|_1 := \sum_{x \in \{-1,1\}^n} |\psi(x)| = 1$,
3. $\hat{\psi}(S) := 2^{-n} \sum_{x \in \{-1,1\}^n} \psi(x)\chi_S(x) = 0$ for every $S \subseteq [n]$ with $|S| \le d$. Here $\chi_S(x) = \prod_{i \in S} x_i$.

1 Pattern Matrix Method Proof
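Before using Theorem 3, here is a concrete instance of the dual characterization: for $f = \mathrm{PARITY}_n$, the function $\psi = 2^{-n}\chi_{[n]}$ satisfies all three conditions with $d = n - 1$ and any $\varepsilon < 1$, certifying $\mathrm{adeg}_\varepsilon(\mathrm{PARITY}_n) = n$. The short script below (our own example, not from the notes) checks the three conditions for $n = 4$:

```python
from itertools import product, combinations

n = 4
points = list(product([-1, 1], repeat=n))

def chi(S, x):
    """Character chi_S(x) = prod_{i in S} x_i."""
    out = 1
    for i in S:
        out *= x[i]
    return out

# f = PARITY_n; candidate dual witness psi = 2^{-n} * chi_{[n]}
f   = {x: chi(range(n), x) for x in points}
psi = {x: chi(range(n), x) / 2 ** n for x in points}

corr = sum(f[x] * psi[x] for x in points)   # condition 1: <f, psi> = 1 > eps
ell1 = sum(abs(psi[x]) for x in points)     # condition 2: ||psi||_1 = 1

# condition 3: psi-hat(S) = 0 for every S with |S| <= n - 1
low_coeffs = [sum(psi[x] * chi(S, x) for x in points) / 2 ** n
              for d in range(n) for S in combinations(range(n), d)]

print(corr, ell1, max(abs(c) for c in low_coeffs))
```

Since $\langle f, \psi \rangle = 1$, this single witness works for every $\varepsilon < 1$: parity has no nontrivial low-degree approximation.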

It will be convenient to define the notion of a pattern matrix of a function $\psi : \{-1,1\}^n \to \mathbb{R}$. In the special case where $\psi$ is Boolean, this is simply the communication matrix of $\psi \circ g_m^n$, but the definition of course also makes sense when $\psi$ is real-valued.

Definition 4. The $m$-pattern matrix of a function $\psi : \{-1,1\}^n \to \mathbb{R}$ is the $2^{nm} \times (2m)^n$ real matrix $\mathrm{PM}_m(\psi)$ given by

$$\mathrm{PM}_m(\psi)[(x^1, \dots, x^n), ((i_1, w_1), \dots, (i_n, w_n))] = \psi(g_m(x^1, (i_1, w_1)), \dots, g_m(x^n, (i_n, w_n))) = \psi(x|_I \oplus w),$$

where $I = (i_1, \dots, i_n)$, the notation $x|_I$ indicates the projection of $x = (x^1, \dots, x^n) \in (\{-1,1\}^m)^n$ to the coordinates specified by $I$, i.e., $x|_I = (x^1_{i_1}, \dots, x^n_{i_n})$, and $\oplus$ denotes entrywise multiplication of $\pm 1$ vectors.
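A brute-force construction of the pattern matrix directly from Definition 4 may help fix the indexing. The helper below is our own illustration (assuming numpy is available), enumerating rows $x \in \{-1,1\}^{nm}$ and columns $((i_1, w_1), \dots, (i_n, w_n))$:

```python
import numpy as np
from itertools import product

def pattern_matrix(psi, n, m):
    """m-pattern matrix PM_m(psi) as a dense 2^{nm} x (2m)^n array.

    Rows: x in {-1,1}^{nm}, viewed as n blocks of m bits.
    Columns: ((i_1, w_1), ..., (i_n, w_n)) with i_j in [m], w_j in {-1,1}.
    Entry: psi(x|_I (+) w), whose j-th coordinate is x^j_{i_j} * w_j.
    """
    rows = list(product([-1, 1], repeat=n * m))
    cols = list(product(product(range(m), [-1, 1]), repeat=n))
    M = np.empty((len(rows), len(cols)))
    for r, x in enumerate(rows):
        for c, iw in enumerate(cols):
            z = tuple(x[j * m + i] * w for j, (i, w) in enumerate(iw))
            M[r, c] = psi(z)
    return M

# For Boolean psi this is the communication matrix of psi o g_m^n;
# e.g. AND_2 written as a +/-1-valued function:
and2 = lambda z: 1 if z == (1, 1) else -1
M = pattern_matrix(and2, n=2, m=2)
print(M.shape)  # (16, 16) = 2^{nm} x (2m)^n
```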


Every formulation of the Pattern Matrix Method makes use of the following lemma, which relates the spectral norm of a pattern matrix to the Fourier coefficients of the underlying function.

Lemma 5. Let $\psi : \{-1,1\}^n \to \mathbb{R}$ and let $\Psi = \mathrm{PM}_m(\psi)$ be its pattern matrix. Then the spectral norm of $\Psi$ is given by

$$\|\Psi\| = \sqrt{2^{nm} \cdot (2m)^n} \cdot \max_{S \subseteq [n]} |\hat{\psi}(S)| \cdot m^{-|S|/2}.$$
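Lemma 5 can be sanity-checked numerically for small parameters: build the pattern matrix of a random $\psi$ and compare its spectral norm to the right-hand side. The script below is our own check (assuming numpy), for $n = m = 2$:

```python
import numpy as np
from itertools import product, combinations

n, m = 2, 2
points = list(product([-1, 1], repeat=n))
rng = np.random.default_rng(0)
table = dict(zip(points, rng.standard_normal(len(points))))  # random psi

# build Psi = PM_m(psi) entry by entry
rows = list(product([-1, 1], repeat=n * m))
cols = list(product(product(range(m), [-1, 1]), repeat=n))
Psi = np.array([[table[tuple(x[j * m + i] * w for j, (i, w) in enumerate(iw))]
                for iw in cols] for x in rows])

def fourier(S):
    """Fourier coefficient psi-hat(S) = 2^{-n} sum_x psi(x) chi_S(x)."""
    return sum(table[x] * np.prod([x[i] for i in S]) for x in points) / 2 ** n

lhs = np.linalg.norm(Psi, 2)  # spectral norm = largest singular value
rhs = np.sqrt(2 ** (n * m) * (2 * m) ** n) * max(
    abs(fourier(S)) * m ** (-len(S) / 2)
    for d in range(n + 1) for S in combinations(range(n), d))
print(abs(lhs - rhs))  # agreement up to floating-point error
```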

Let us see how to use Lemma 5 to prove Theorem 1.

Proof of Theorem 1. Recall from our discussion of discrepancy that for any function $F : X \times Y \to \{-1,1\}$ we have

$$\mathrm{disc}(F) \le \frac{\|F\|}{\sqrt{|X||Y|}}.$$

(Here, for convenience, we will conflate a two-party function with its sign matrix.) The proof of this actually gives an upper bound on the discrepancy of $F$ with respect to the uniform distribution. It can be generalized as follows. Let $P$ be any matrix with non-negative entries which sum to 1. Then

$$\mathrm{disc}_P(F) \le \|F \circ P\| \cdot \sqrt{|X||Y|},$$

where $F \circ P$ is the matrix obtained by taking the entrywise product of $F$ and $P$. This upper bound on discrepancy follows from the calculation

$$\begin{aligned}
\mathrm{disc}_P(F) &= \max_{S \subseteq X,\, T \subseteq Y} \Big|\sum_{x \in S} \sum_{y \in T} P[x,y] F[x,y]\Big| \\
&= \max_{S,T} |\mathbf{1}_S^T (P \circ F) \mathbf{1}_T| \\
&\le \max_{S,T} \|\mathbf{1}_S\|_2 \cdot \|P \circ F\| \cdot \|\mathbf{1}_T\|_2 \\
&\le \|P \circ F\| \cdot \sqrt{|X||Y|},
\end{aligned}$$

since $\|\mathbf{1}_S\|_2 \le \sqrt{|X|}$ and $\|\mathbf{1}_T\|_2 \le \sqrt{|Y|}$.
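On a toy example one can check this chain of inequalities end to end by enumerating all rectangles. Here $F$ is the sign matrix of the two-bit inner product function and $P$ is uniform (our choice of illustration, assuming numpy):

```python
import numpy as np
from itertools import product

# F[x, y] = (-1)^{<x, y>} for x, y in {0,1}^2 (sign matrix of IP_2)
X = list(product([0, 1], repeat=2))
F = np.array([[(-1) ** (x[0] * y[0] + x[1] * y[1]) for y in X] for x in X])
P = np.full(F.shape, 1 / F.size)  # uniform distribution over X x Y
M = P * F                         # entrywise product P o F

# brute-force disc_P(F): maximum over all 2^4 x 2^4 rectangles S x T
disc = max(abs(np.array(S) @ M @ np.array(T))
           for S in product([0, 1], repeat=len(X))
           for T in product([0, 1], repeat=len(X)))

bound = np.linalg.norm(M, 2) * np.sqrt(F.size)  # ||P o F|| * sqrt(|X||Y|)
print(disc, bound)
```

Here the brute-force discrepancy indeed stays below the spectral bound of $1/2$.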

Now suppose $f : \{-1,1\}^n \to \{-1,1\}$ is such that $\mathrm{adeg}_{1-\delta}(f) > d$, and let $\psi$ be the dual witness guaranteed by Theorem 3 with $\varepsilon = 1 - \delta$. Let $\Psi$ be the $2^{nm} \times (2m)^n$ pattern matrix of $2^{-nm} \cdot m^{-n} \cdot \psi$. Then we have $\|\Psi\|_1 = 1$ and $\langle \Psi, F \rangle > 1 - \delta$ by Theorem 3. Now we calculate $\|\Psi\|$. First observe that for every $S \subseteq [n]$,

$$|\hat{\psi}(S)| = 2^{-n} \Big|\sum_x \psi(x) \chi_S(x)\Big| \le 2^{-n} \|\psi\|_1 = 2^{-n}.$$

Using the fact that $\hat{\psi}(S) = 0$ for all $|S| \le d$, we have by Lemma 5 that

$$\|\Psi\| \le \sqrt{s} \cdot (2^{-nm} \cdot m^{-n}) \cdot 2^{-n} \cdot m^{-d/2} = s^{-1/2} \cdot m^{-d/2},$$

where $s = 2^{nm} \cdot (2m)^n$ is the number of entries of $\Psi$.

Now let us write $\Psi = P \circ H$ where $P$ is a non-negative matrix whose entries sum to 1 and $H$ is a sign matrix. We can do this because $\|\Psi\|_1 = 1$. The above discrepancy calculation then shows that

$$\mathrm{disc}_P(H) \le \|P \circ H\| \cdot \sqrt{s} \le m^{-d/2}.$$

Moreover, applying the triangle inequality to the definition of discrepancy,

$$\mathrm{disc}_P(F) \le \mathrm{disc}_P(H) + \|(F - H) \circ P\|_1.$$

Let $E = \{(x,y) : F(x,y) \ne H(x,y)\}$. We can equivalently write the error term as

$$\begin{aligned}
\|(F - H) \circ P\|_1 &= 2 \sum_{(x,y) \in E} P(x,y) \\
&= \sum_{(x,y) \in E} P(x,y) + \sum_{(x,y) \in \bar{E}} P(x,y) - \Big(\sum_{(x,y) \in \bar{E}} P(x,y) - \sum_{(x,y) \in E} P(x,y)\Big) \\
&= 1 - \langle F, H \circ P \rangle = 1 - \langle F, \Psi \rangle \\
&\le 1 - (1 - \delta) = \delta.
\end{aligned}$$

Putting everything together, we conclude

$$\mathrm{disc}_P(F) \le \mathrm{disc}_P(H) + \|(F - H) \circ P\|_1 \le m^{-d/2} + \delta.$$

2 Proof of Lemma 5

We now prove Lemma 5, relating the spectral norm of a pattern matrix $\mathrm{PM}_m(\psi)$ to the Fourier coefficients of $\psi$.

A key fact about the Fourier representation of $\psi$ is that we have

$$\psi(x) = \sum_{S \subseteq [n]} \hat{\psi}(S) \chi_S(x).$$

For each $S \subseteq [n]$, let $A_S$ be the pattern matrix $\mathrm{PM}_m(\chi_S)$. Then by linearity,

$$\Psi = \mathrm{PM}_m(\psi) = \sum_{S \subseteq [n]} \hat{\psi}(S) A_S.$$

To understand the singular values of Ψ, we invoke the following lemma relating the singular values of a sum of matrices to the singular values of the individual matrices.

Lemma 6. Let $A$ and $B$ be real matrices with $AB^T = 0$ and $A^T B = 0$. Then the multiset of nonzero singular values of $A + B$ is the union of the multiset of nonzero singular values of $A$ with that of $B$.

We won’t prove the lemma, but the idea is as follows. The singular values of $A + B$ are just the square roots of the eigenvalues of $(A+B)(A+B)^T = AA^T + BB^T$, where the cross terms vanish because $AB^T = 0$ and $BA^T = (AB^T)^T = 0$. The orthogonality of $A$ and $B$ further implies that vectors in the spectral decomposition of $AA^T$ are orthogonal to those in the spectral decomposition of $BB^T$. Hence the set of eigenvalues of $AA^T + BB^T$ is just the union of the eigenvalues of $AA^T$ and $BB^T$.
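A quick numerical illustration of Lemma 6 (our own example, assuming numpy): a block-diagonal pair $A, B$ automatically satisfies both orthogonality conditions, and the singular values combine as claimed.

```python
import numpy as np

# A acts on the first two coordinates, B on the last two, so the
# products A B^T and A^T B both vanish.
A = np.zeros((4, 4)); B = np.zeros((4, 4))
A[:2, :2] = [[3, 1], [1, 2]]
B[2:, 2:] = [[0, 5], [1, 1]]
assert np.allclose(A @ B.T, 0) and np.allclose(A.T @ B, 0)

sv_sum = np.linalg.svd(A + B, compute_uv=False)      # singular values of A+B
sv_union = np.sort(np.concatenate([
    np.linalg.svd(A, compute_uv=False),
    np.linalg.svd(B, compute_uv=False)]))[::-1][:4]  # nonzero part of the union

print(np.allclose(np.sort(sv_sum)[::-1], sv_union))  # True
```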

In order to apply the lemma, we need to show that the matrices $A_S$ are pairwise orthogonal in the sense of Lemma 6. To see this, let $S, T \subseteq [n]$ with $S \ne T$. Then for every $x, x' \in \{-1,1\}^{nm}$,


$$\begin{aligned}
A_S A_T^T[x, x'] &= \sum_{I,w} \chi_S(x|_I \oplus w) \chi_T(x'|_I \oplus w) \\
&= \sum_{I,w} \chi_S(x|_I) \chi_T(x'|_I) \chi_S(w) \chi_T(w) \\
&= \sum_I \chi_S(x|_I) \chi_T(x'|_I) \sum_w \chi_S(w) \chi_T(w) = 0
\end{aligned}$$

because $\chi_S$ and $\chi_T$ are orthogonal. A similar argument can be used to show that $A_S^T A_T = 0$.
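For $n = m = 2$ these orthogonality relations can be replayed numerically; `chi_pattern_matrix` below is our own helper building $A_S = \mathrm{PM}_m(\chi_S)$ directly from the formula $\prod_{j \in S} x^j_{i_j} w_j$ (assuming numpy):

```python
import numpy as np
from itertools import product

def chi_pattern_matrix(S, n, m):
    """A_S = PM_m(chi_S): entry at row x, column ((i_1,w_1),...,(i_n,w_n))
    is prod_{j in S} x^j_{i_j} * w_j."""
    rows = list(product([-1, 1], repeat=n * m))
    cols = list(product(product(range(m), [-1, 1]), repeat=n))
    M = np.empty((len(rows), len(cols)))
    for r, x in enumerate(rows):
        for c, iw in enumerate(cols):
            v = 1
            for j in S:
                i, w = iw[j]
                v *= x[j * m + i] * w
            M[r, c] = v
    return M

n, m = 2, 2
A = chi_pattern_matrix([0], n, m)  # S = {1} (0-indexed)
B = chi_pattern_matrix([1], n, m)  # T = {2}
print(np.abs(A @ B.T).max(), np.abs(A.T @ B).max())  # both exactly 0
```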

So by the lemma, the multiset of nonzero singular values of $\Psi$ is just the union of the nonzero singular values of the matrices $\hat{\psi}(S) A_S$. We will be done if we can show that the only nonzero eigenvalue of $A_S^T A_S$ is $2^{nm} \cdot (2m)^n \cdot m^{-|S|}$ (with multiplicity $m^{|S|}$), i.e., that the only nonzero singular value of $A_S$ is $\sqrt{2^{nm} \cdot (2m)^n} \cdot m^{-|S|/2}$.

This can be done by writing $A_S^T A_S = W \otimes V$, where $W \in \{-1,1\}^{2^n \times 2^n}$ is given by

$$W[w, w'] = \chi_S(w) \chi_S(w')$$

and $V \in \mathbb{R}^{m^n \times m^n}$ is

$$V[I, I'] = \sum_{x \in \{-1,1\}^{nm}} \chi_S(x|_I) \chi_S(x|_{I'}).$$

The first matrix $W$ has rank 1, and it is easy to see that it has $2^n$ as its only nonzero eigenvalue. The second matrix $V$ is similar to $2^{nm} \cdot \mathrm{diag}(J, \dots, J)$ where $J$ is the all-ones square matrix with $m^{n-|S|}$ rows. Hence the only nonzero eigenvalue of $V$ is $2^{nm} \cdot m^{n-|S|}$ (with multiplicity $m^{|S|}$). So the only nonzero eigenvalue of $A_S^T A_S$ is $2^n \cdot 2^{nm} \cdot m^{n-|S|} = 2^{nm} \cdot (2m)^n \cdot m^{-|S|}$.

Now Lemma 5 follows because the spectral norm of $\Psi$ is the largest singular value of any of the matrices $\hat{\psi}(S) A_S$, namely $\max_{S \subseteq [n]} |\hat{\psi}(S)| \cdot \sqrt{2^{nm} \cdot (2m)^n} \cdot m^{-|S|/2}$.
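The claimed spectrum of $W \otimes V$ can also be verified numerically for $n = m = 2$ and $S = \{1\}$, building $W$ and $V$ straight from their definitions (our own sketch, assuming numpy):

```python
import numpy as np
from itertools import product

n, m, S = 2, 2, [0]  # S = {1} in 0-indexed form
ws = list(product([-1, 1], repeat=n))      # the w-part of a column index
Is = list(product(range(m), repeat=n))     # the I-part of a column index
xs = list(product([-1, 1], repeat=n * m))  # row indices

chiS = lambda w: int(np.prod([w[j] for j in S]))
W = np.array([[chiS(w) * chiS(w2) for w2 in ws] for w in ws])
V = np.array([[sum(np.prod([x[j * m + I[j]] * x[j * m + I2[j]] for j in S])
                   for x in xs)
               for I2 in Is] for I in Is])

eigs = np.linalg.eigvalsh(np.kron(W, V))
nonzero = [e for e in eigs if abs(e) > 1e-9]
claim = 2 ** (n * m) * (2 * m) ** n * m ** (-len(S))  # = 128 here
print(sorted(nonzero), claim)  # multiplicity m^{|S|} = 2
```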
