\documentclass{article}
\usepackage{amsmath}
\usepackage{amsthm}
\usepackage{libertine-type1}
\usepackage{helvet}
\usepackage[libertine]{newtxmath}
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\usepackage{geometry}
\geometry{verbose,tmargin=1in,bmargin=1in,lmargin=1in,rmargin=1in}
\usepackage{graphicx}
\makeatletter
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Textclass specific LaTeX commands.
\theoremstyle{definition}
\newtheorem{example}{\protect\examplename}
\theoremstyle{definition}
\newtheorem{xca}{\protect\exercisename}
\theoremstyle{definition}
\newtheorem{defn}{\protect\definitionname}
\theoremstyle{remark}
\newtheorem{rem}{\protect\remarkname}
\theoremstyle{plain}
\newtheorem{thm}{\protect\theoremname}
\theoremstyle{plain}
\newtheorem{lem}{\protect\lemmaname}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% User specified LaTeX commands.
\date{}
\usepackage{MnSymbol}
\newcommand{\btimes}{\mathbin{\rotatebox[origin=c]{36}{$\pentagram$}}}
\newcommand\bleh{%
\mathrel{\ooalign{\hss$\btimes$\hss\cr%
\kern0.025ex\raise-0.88ex\hbox{\scalebox{2.5}
{$\circ$}}}}}
\renewcommand\qedsymbol{$\bleh$}
\renewcommand\labelenumi{(\roman{enumi})}
\renewcommand\theenumi\labelenumi
\DeclareMathOperator{\Ann}{Ann}
\DeclareMathOperator{\coker}{coker}
\DeclareMathOperator{\Spec}{Spec}
\DeclareMathOperator{\Hom}{Hom}
\DeclareMathOperator{\End}{End}
\DeclareMathOperator{\Supp}{Supp}
\DeclareMathOperator{\codim}{codim}
\DeclareMathOperator{\ch}{char}
\DeclareMathOperator{\Aut}{Aut}
\DeclareMathOperator{\Frob}{Frob}
\DeclareMathOperator{\Gal}{Gal}
\DeclareMathOperator{\GL}{GL}
\DeclareMathOperator{\Span}{Span}
\DeclareMathOperator{\sgn}{sgn}
\DeclareMathOperator{\tr}{tr}
\DeclareMathOperator{\Sym}{Sym}
\makeatother
\providecommand{\definitionname}{Definition}
\providecommand{\examplename}{Example}
\providecommand{\exercisename}{Exercise}
\providecommand{\lemmaname}{Lemma}
\providecommand{\remarkname}{Remark}
\providecommand{\theoremname}{Theorem}
\begin{document}
\input{preamble.tex}
\handout{CS 229r Information Theory in Computer Science}{Apr 4, 2019}{Instructor:
Madhu Sudan}{Scribe: Matthew Hase-Liu}{Lecture 1}
\global\long\def\wangle#1{\left\langle #1\right\rangle }
\global\long\def\ol#1{\overline{#1}}
\global\long\def\acts{\curvearrowright}
\global\long\def\ord#1#2{\text{ord}_{#1}(#2)}
\global\long\def\Id{\text{Id}}
\global\long\def\A{\mathbb{A}}
\global\long\def\R{\mathbb{R}}
\global\long\def\Q{\mathbb{Q}}
\global\long\def\N{\mathbb{N}}
\global\long\def\C{\mathbb{C}}
\global\long\def\P{\mathbb{P}}
\global\long\def\Z{\mathbb{Z}}
\global\long\def\mf#1{\mathfrak{#1}}
\global\long\def\ep{\varepsilon}
\global\long\def\vec#1{\overrightarrow{#1}}
\global\long\def\re#1{\text{Re}\,\left(#1\right)}
\global\long\def\im#1{\text{Im}\,\left(#1\right)}
\global\long\def\Div#1{\text{Div}\,\left(#1\right)}
\global\long\def\Res#1#2#3{\text{Res}_{#2}^{#3}\left(#1\right)}
\global\long\def\Ind#1#2#3{\text{Ind}_{#2}^{#3}\left(#1\right)}
Today, we will begin studying the parallel repetition theorem: first
2-prover games, then motivation and examples, then the repetition
problem and question, and finally the main theorem.
\subsection*{2-Prover Games}
We have three players, two of which are provers and one of which is
a verifier. In the diagram below, we denote the provers Alice and
Bob, say $A$ and $B,$ respectively. \begin{center}\includegraphics{pasted1}\end{center}In
particular, there is exactly one round of interaction. We have the following
procedure:
\begin{enumerate}
\item The verifier first selects $(x,y)$ according to some distribution
$\mu.$
\item Alice receives $x$ and Bob receives $y,$ and neither prover knows
the other's input.
\item Alice and Bob then each return to the verifier $a$ and $b,$ respectively.
\item The verifier chooses to accept or reject, depending on $x,y,a,b.$
\end{enumerate}
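The four-step procedure above can be sketched in code. This is a minimal sketch of our own (the encoding of $\mu$ as a list of (question, probability) pairs and the function names are assumptions for illustration, not from the lecture):

```python
import random

def play_round(mu, f, g, V):
    """Simulate one round of a 2-prover game.

    mu: list of ((x, y), prob) pairs -- the verifier's question distribution.
    f, g: deterministic strategies for Alice and Bob.
    V: verifier predicate V(x, y, a, b) -> 0 (reject) or 1 (accept).
    """
    questions = [q for q, _ in mu]
    weights = [p for _, p in mu]
    x, y = random.choices(questions, weights=weights, k=1)[0]  # step 1
    a, b = f(x), g(y)        # steps 2-3: each prover sees only its own input
    return V(x, y, a, b)     # step 4: accept or reject
```

For example, with a one-point distribution such as \texttt{mu = [((0, 0), 1.0)]} and constant strategies, the verifier's decision is deterministic.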
The simplest example of this is the ``odd cycle game'':
\begin{example}
[Odd Cycle Game] As a rough idea, the provers are claiming that the odd cycle $C_{n}=\left(\Z_{n},E\right)$
with $E=\{(i,i+1\bmod n):i\in\Z_{n}\}$ is 2-colorable ($n$ odd). Here is a possible protocol:
\begin{enumerate}
\item Select $(x,y)$ distributed as follows: \\
With probability $1/(2n)$ each, select $(i,i+1\bmod n)$ (over all $n$
possible values of $i$), and with probability $1/(2n)$ each, select $(i,i)$
(again over all $n$ possible values of $i$).
\item Alice then sends back to the verifier some $X_{A}(x)=a\in\{0,1\},$
and likewise Bob sends back some $X_{B}(y)=b\in\{0,1\}.$
\item The verifier then accepts the 4-tuple $(x,y,a,b)$ iff the biconditional
$a=b\iff x=y$ holds.
\end{enumerate}
\end{example}
\begin{xca}
In this scenario, when will the verifier be able to catch the lie,
i.e., in what situations will the verifier reject, given that the graph
is not actually 2-colorable?
\end{xca}
Another possible strategy is as follows: suppose $X_{A}(x)=X_{B}(x)=x\bmod2.$
Then one can easily check that the success probability is $1-1/(2n).$
\begin{xca}
Prove that this is the best you can do.
\end{xca}
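Both the claimed success probability and the exercise can be sanity-checked by brute force for a small odd $n$. The following sketch (our own, not from the lecture) enumerates all deterministic strategies for $n=3$:

```python
from fractions import Fraction
from itertools import product

n = 3  # the odd cycle C_3
# Question distribution: (i, i) and (i, i+1 mod n), each with probability 1/(2n).
questions = [(i, i) for i in range(n)] + [(i, (i + 1) % n) for i in range(n)]

def V(x, y, a, b):
    # Accept iff (a = b) <=> (x = y).
    return int((a == b) == (x == y))

def value(f, g):
    # Expected acceptance probability; every question has weight 1/(2n).
    return Fraction(sum(V(x, y, f[x], g[y]) for x, y in questions), 2 * n)

# Brute force over all deterministic strategies f, g : Z_n -> {0, 1}.
best = max(value(f, g) for f in product((0, 1), repeat=n)
                       for g in product((0, 1), repeat=n))
parity = tuple(i % 2 for i in range(n))
assert best == value(parity, parity) == Fraction(5, 6)  # 1 - 1/(2n)
```

No strategy pair can 2-color an odd cycle perfectly, so the best any pair can do is fail on exactly one of the $2n$ equally likely questions, matching the parity strategy.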
\begin{defn}
We define a 2-prover game $G=(\mu,V)$ as follows: We have two provers
$A,B$ which are given inputs $x,y$ from a verifier, where $(x,y)$
is selected according to the distribution $\mu$ supported on ${\cal X}\times{\cal Y}$
(usually finite sets). After receiving $x$ and $y$ from the
verifier, the provers $A,B$ send back $a\in{\cal A}$ and $b\in{\cal B},$
respectively. Moreover, $V:{\cal X}\times{\cal Y}\times{\cal A}\times{\cal B}\to\{0,1\}$
is a function that the verifier then uses to either accept or reject,
with the former corresponding to 1 and the latter to 0. The goal of
the provers is to send back $a,b$ so that $V(x,y,a,b)=1.$
\end{defn}
%
In our previous example, we had ${\cal X}={\cal Y}=\Z_{n}$ and ${\cal A}={\cal B}=\{0,1\},$
with $V:{\cal X}\times{\cal Y}\times{\cal A}\times{\cal B}\to\{0,1\}$
as described above.
\begin{defn}
We define a strategy $f:{\cal X}\to{\cal A},g:{\cal Y}\to{\cal B}$ as a pair
of functions used by the two provers. This gives rise to a notion
of a \textbf{game value} \textbf{dependent on} $f,g$, namely
\[
\text{val}(G,f,g)=\mathbb{E}_{(x,y)\sim\mu}\left[V(x,y,f(x),g(y))\right]
\]
is the value of $G$ given strategy $f,g.$ This moreover gives rise
to the \textbf{game value}, namely
\[
\text{val}(G)=\omega(G)=\max_{f,g}\left\{ \text{val}(G,f,g)\right\} .
\]
For now, suppose that the strategies of the provers are deterministic.
Given a game, how can we compute its value? Note that this has to
be a hard question, because it should be able to answer questions
such as whether or not a graph is 3-colorable. By '92, it was known
that there was a family of games for which it is hard to approximate
values to within $\pm10^{-10}.$ In the same spirit, are there games whose
values are hard to approximate additively to within $1-\ep$? Idea: let's
just repeat the same game a bunch of times!
\end{defn}
\subsection*{Repetition of Games}
We now leave the single-question setting: instead of just one
question, we ask a bunch of questions. It turns out that the analysis
is a function of how we ask these questions. In sequential repetition,
where we ask questions one after another, we have $\text{val}\left(G^{k\text{-seq-rep}}\right)=\text{val}(G)^{k}.$
In parallel repetition, we have $\mu^{\otimes k}=k$-fold product
of $\mu$ and
\[
V^{\otimes k}\left(\left(x_{1},\ldots,x_{k}\right),\left(y_{1},\ldots,y_{k}\right),\left(a_{1},\ldots,a_{k}\right),\left(b_{1},\ldots,b_{k}\right)\right)=\bigwedge_{i=1}^{k}V(x_{i},y_{i},a_{i},b_{i}).
\]
We moreover define $\text{val}\left(G^{\otimes k},\ol f,\ol g\right),$
where $\ol f:{\cal X}^{k}\to{\cal A}^{k},\ol g:{\cal Y}^{k}\to{\cal B}^{k}$
in the same fashion as earlier. Note that it is not necessarily true
that $\ol f=f^{\otimes k}$ or even that $\ol f$ can be written as
a $k$-fold product.
\begin{defn}
We define the \textbf{game value }as
\[
\omega(G^{\otimes k})=\max_{\ol f,\ol g}\left\{ \text{val}\left(G^{\otimes k},\ol f,\ol g\right)\right\} .
\]
\end{defn}
Question:
\[
\omega\left(G^{\otimes k}\right)=\omega(G)^{k}?
\]
Note that $\ge$ is easy, but $\le$ is elusive.
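To see the $\ge$ direction, let $f,g$ achieve $\omega(G)$ and play them independently in each coordinate, i.e., take $\ol f=f^{\otimes k}$ and $\ol g=g^{\otimes k}.$ Then
\[
\text{val}\left(G^{\otimes k},f^{\otimes k},g^{\otimes k}\right)=\mathbb{E}\left[\prod_{i=1}^{k}V(x_{i},y_{i},f(x_{i}),g(y_{i}))\right]=\prod_{i=1}^{k}\mathbb{E}_{(x_{i},y_{i})\sim\mu}\left[V(x_{i},y_{i},f(x_{i}),g(y_{i}))\right]=\omega(G)^{k},
\]
where the middle equality uses the independence of the $k$ coordinates under $\mu^{\otimes k}.$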
\begin{example}
[Feige's Counterexample] There exists some $G$ with $\omega(G^{\otimes2})=\omega(G)=1/2.$
In fact, for all $s,$ there are even games $G_{s}$ so that $\omega(G_{s}^{\otimes s})=\omega(G_{s})<1.$
This suggests that perhaps we can repeat games without decreasing the
value at all. In this example, the verifier tosses two coins.
\begin{center}\includegraphics{pasted2}\end{center}Clearly, Alice
can easily guess $x$ and Bob can easily guess $y.$ Note that the probability
of success is at most $1/2$: at least one of Alice and Bob will have
to guess the other person's coin (since we only accept if both guesses
are equal and correct). We can reformulate the game as follows: relabel
Alice's coin as $x\in\{0,1\}$ and Bob's coin as $y\in\{2,3\},$ uniform
and independent, with each prover answering a guess in $\{0,1,2,3\}$ (for
instance, guessing 3 means ``I guess $y=3$''). Then, $V(x,y,a,b)$
accepts iff $a=b$ and $b\in\{x,y\}.$ It's easy to check that $\omega(G)=1/2.$
As a general fact, note that $\omega(G^{\otimes(k-1)})\ge\omega(G^{\otimes k}).$
\end{example}
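We can confirm $\omega(G)=1/2$ by brute force. The sketch below (our own encoding, using the relabelling $x\in\{0,1\},$ $y\in\{2,3\}$ from above) enumerates all deterministic strategy pairs:

```python
from fractions import Fraction
from itertools import product

# Feige's game: Alice's coin x is uniform in {0, 1}, Bob's coin y in {2, 3}.
X, Y = (0, 1), (2, 3)
ANSWERS = (0, 1, 2, 3)  # a guess names a coin value, e.g. 3 means "I guess y = 3"

def V(x, y, a, b):
    # Accept iff the provers agree and their common guess is a correct coin.
    return int(a == b and b in (x, y))

def value(f, g):
    # f is indexed by x in {0, 1}; g is indexed by y - 2 in {0, 1}.
    return Fraction(sum(V(x, y, f[x], g[y - 2]) for x in X for y in Y), 4)

best = max(value(f, g) for f in product(ANSWERS, repeat=2)
                       for g in product(ANSWERS, repeat=2))
assert best == Fraction(1, 2)  # achieved e.g. when both always guess "x = 0"
```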
\begin{xca}
Why is this true?
\end{xca}
%
\begin{rem}
There is a modification of the game: ${\cal A}={\cal B},{\cal X}={\cal Y},$
both provers use a single strategy $f:{\cal X}\to{\cal A},$ we have
$V:{\cal X}\times{\cal X}\times{\cal A}\times{\cal A}\to\{0,1\},$
and $\mu$ is supported on ${\cal X}\times{\cal X}.$ This is called the ``PCP''
version of the game. We have $\text{val}(G,\mu)=\max_{f}\text{val}(G,f,f).$
For this case, apparently, the inequality above may not hold.
\end{rem}
\begin{example}
So clearly we have $\omega(G^{\otimes2})\le1/2.$ In the 2-fold repetition
strategy, we \emph{hope} that $x_{1}=y_{2}-2.$ If we treat ``hope'' as
an event, its probability is $1/2$ (in particular, $x_{1}$
equals $y_{2}-2$ or its complement with equal probability). Note that
$\Pr[\text{win}|\text{hope}]=1$: the strategy is $f(x_{1},x_{2})=(x_{1},x_{1}+2)$
and $g(y_{1},y_{2})=(y_{2}-2,y_{2}).$ In particular, this ensures
that Alice and Bob both win together or lose together; we are tying
the wins and losses together, which improves the win probability.
\end{example}
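The ``hoping'' strategy can be checked directly. Assuming the relabelled questions $x_{i}\in\{0,1\},y_{i}\in\{2,3\}$ as above, this sketch verifies that it wins $G^{\otimes2}$ with probability exactly $1/2$:

```python
from fractions import Fraction
from itertools import product

def V(x, y, a, b):
    # Feige's game with x in {0, 1}, y in {2, 3}.
    return int(a == b and b in (x, y))

def f(x1, x2):
    # Alice: answer her own coin in round 1; in round 2, hope that y2 = x1 + 2.
    return (x1, x1 + 2)

def g(y1, y2):
    # Bob: in round 1, hope that x1 = y2 - 2; answer his own coin in round 2.
    return (y2 - 2, y2)

wins = 0
for x1, x2, y1, y2 in product((0, 1), (0, 1), (2, 3), (2, 3)):
    a, b = f(x1, x2), g(y1, y2)
    wins += V(x1, y1, a[0], b[0]) * V(x2, y2, a[1], b[1])  # product verifier
val = Fraction(wins, 16)
assert val == Fraction(1, 2)  # win in both coordinates iff x1 = y2 - 2
```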
Verbitsky answered this question: for all $G$ with $\omega(G)<1$
and for all $\ep>0,$ there exists $k=k(G,\ep)$ such that $\omega(G^{\otimes k})<\ep.$
Essentially, every game eventually shrinks if you repeat it enough.
Then Raz came along and said the following:
\begin{thm}
For all ${\cal A},{\cal B}$ and $\ep>0,$ there exists $\delta>0$ so that
for all $G$ (with answer sets ${\cal A},{\cal B}$) and for all $k,$ if $\omega(G)\le1-\ep,$ we have $\omega(G^{\otimes k})\le(1-\delta)^{k}.$
\end{thm}
%
\subsection*{Raz's Lemma}
Now, suppose $S\subset[k]$ and fix some strategy $\ol f,\ol g.$
Let $W_{S}$ be the event that we win on all coordinates $i\in S;$ equivalently,
$V(x_{i},y_{i},\ol f(x)_{i},\ol g(y)_{i})=1$ for every $i\in S.$
It would be nice to have $\Pr\left[W_{i}|W_{\{1,\ldots,i-1\}}\right]\le\omega(G).$
\begin{xca}
Why doesn't this work? Give an example from before.
\end{xca}
Precisely, we will use the following lemma in the proof of the main
theorem next time:
\begin{lem}
For every ${\cal A},{\cal B}$ and $\ep>0,$ there exists some $\gamma>0$ such that
for all $G$ with $\omega(G)\le1-\ep,$ all strategies $\ol f,\ol g,$ all $k,$ and
every subset $S\subset[k]$ with $|S|<\gamma k,$ one of the following two holds:
\begin{enumerate}
\item There exists some $i\not\in S$ so that $\Pr\left[W_{i}|W_{S}\right]\le1-\ep/2.$
\item $\Pr[W_{S}]\le2^{-\gamma k}.$
\end{enumerate}
\end{lem}
Why is this important? Start with $S_{0}=\emptyset,$ so that $\Pr[W_{S_{0}}]=1.$
If case (i) applies, there exists some $i_{0}\not\in S_{0}$ so that $\Pr[W_{i_{0}}|W_{S_{0}}]\le1-\ep/2.$
Setting $S_{1}=S_{0}\cup\{i_{0}\},$ we have $\Pr\left[W_{S_{1}}\right]\le1-\ep/2.$
Then there is some $i_{1}\not\in S_{1}$ so that $\Pr\left[W_{i_{1}}|W_{S_{1}}\right]\le1-\ep/2,$
and with $S_{2}=S_{1}\cup\{i_{1}\}$ we get $\Pr\left[W_{S_{2}}\right]\le(1-\ep/2)^{2}.$
The only reasons we would stop are running into case (ii)
or reaching $|S|=\gamma k.$ In the former case, we have $\Pr\left[W_{[k]}\right]\le\Pr\left[W_{S}\right]\le2^{-\gamma k},$
which is exponentially small in $k.$ In the latter case, repeating the
step $\gamma k$ times gives $\Pr\left[W_{[k]}\right]\le(1-\ep/2)^{\gamma k}.$
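The stopping argument above can be summarized by one chain of inequalities: if the process stops after $t$ steps with set $S_{t}=\{i_{0},\ldots,i_{t-1}\},$ then by the chain rule of conditional probability,
\[
\Pr\left[W_{[k]}\right]\le\Pr\left[W_{S_{t}}\right]=\prod_{j=0}^{t-1}\Pr\left[W_{i_{j}}\mid W_{S_{j}}\right]\le(1-\ep/2)^{t}.
\]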
Note that currently our verifiers have randomness but our provers
do not. In the next lecture, we will use correlated sampling and provide
randomness to our provers to continue our analysis.
\end{document}