Sketches and Traces

Abstract

In this dissertation, we study two problems that share the theme of extracting information from lower-dimensional samples.

A number of recent works have studied algorithms for entrywise $\ell_p$-\emph{low rank approximation}: given an $n \times d$ matrix $A$ (with $n \geq d$), output a rank-$k$ matrix $B$ minimizing $\|A-B\|_p^p = \sum_{i,j} |A_{i,j} - B_{i,j}|^p$ when $p > 0$, or minimizing $\|A-B\|_0 = \sum_{i,j} [A_{i,j} \neq B_{i,j}]$, the number of entries $(i,j)$ for which $A_{i,j} \neq B_{i,j}$, when $p = 0$.
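To make the objective concrete, here is a minimal Python sketch (illustrative only; the matrix and the use of the truncated SVD as a $p = 2$ baseline are our own, not from the dissertation) that evaluates the entrywise $\ell_p$ error of a candidate $B$:

\begin{verbatim}
import numpy as np

def entrywise_lp_error(A, B, p):
    # p = 0: number of entries (i, j) with A_ij != B_ij
    if p == 0:
        return int(np.count_nonzero(A != B))
    # p > 0: ||A - B||_p^p = sum_{i,j} |A_ij - B_ij|^p
    return float(np.sum(np.abs(A - B) ** p))

# For p = 2 the optimal rank-k B is the truncated SVD; for other p it need not be.
A = np.array([[3.0, 0.0], [0.0, 1.0]])
U, s, Vt = np.linalg.svd(A)
B = s[0] * np.outer(U[:, 0], Vt[0, :])  # best rank-1 approximation when p = 2
print(entrywise_lp_error(A, B, 2))      # 1.0, the square of the dropped singular value
print(entrywise_lp_error(A, B, 0))      # number of disagreeing entries
\end{verbatim}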

For $p = 1$, this objective is often considered more robust than the SVD, while for $p = 0$ it corresponds to minimizing the number of disagreements, a form of robust PCA. The problem is known to be NP-hard for $p \in \{0,1\}$, already for $k = 1$, and while there are polynomial-time approximation algorithms, their approximation factor is at best $\mathrm{poly}(k)$. It was left open whether there is a polynomial-time approximation scheme (PTAS) for $\ell_p$-approximation for any $p \geq 0$. We show the following:

\begin{enumerate}

\item On the algorithmic side, for $p \in (0,2)$, we use a technique called \emph{sketching} to give the first $n^{\mathrm{poly}(k/\epsilon)}$-time $(1+\epsilon)$-approximation algorithm. For $p = 0$, there are various problem formulations; a common one is the binary setting, in which $A \in \{0,1\}^{n\times d}$ and $B = U \cdot V$ with $U \in \{0,1\}^{n \times k}$ and $V \in \{0,1\}^{k \times d}$. For this setting, we obtain an algorithm running in time $n \cdot d^{\mathrm{poly}(k/\epsilon)}$ (a brute-force baseline for this binary objective is sketched after this list).

\item On the hardness front, for $p \in (1,2)$, we show that, under the Small Set Expansion Hypothesis and the Exponential Time Hypothesis (ETH), there is no constant-factor approximation algorithm running in time $2^{k^{\delta}}$ for a constant $\delta > 0$, so an exponential dependence on $k$ is necessary. We also show that, over finite fields of constant size, under the ETH, any fixed constant-factor approximation algorithm requires time $2^{k^{\delta}}$ for a constant $\delta > 0$.

\end{enumerate}
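For a concrete point of reference on the binary $p = 0$ setting from the first item, the following exponential-time brute force (our own illustrative baseline, not the $n \cdot d^{\mathrm{poly}(k/\epsilon)}$ algorithm of the dissertation) enumerates all binary factorizations of a tiny matrix; since the abstract does not fix the semantics of the product $U \cdot V$, a flag below covers both the Boolean and the GF(2) conventions:

\begin{verbatim}
import itertools
import numpy as np

def binary_rank_k_bruteforce(A, k, gf2=False):
    # Try every U in {0,1}^{n x k} and V in {0,1}^{k x d} and count
    # disagreements between A and U.V -- feasible only for tiny n, d, k.
    n, d = A.shape
    best_err = A.size + 1
    for u_bits in itertools.product([0, 1], repeat=n * k):
        U = np.array(u_bits).reshape(n, k)
        for v_bits in itertools.product([0, 1], repeat=k * d):
            V = np.array(v_bits).reshape(k, d)
            P = U @ V
            B = P % 2 if gf2 else (P > 0).astype(int)  # GF(2) vs Boolean product
            best_err = min(best_err, int(np.count_nonzero(A != B)))
    return best_err

A = np.array([[1, 1, 0], [1, 1, 1], [0, 1, 1]])
print(binary_rank_k_bruteforce(A, k=1))  # fewest disagreements at rank 1
\end{verbatim}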

\emph{Population recovery} is the problem of learning an unknown distribution over an unknown set of $n$-bit strings, given access to independent \emph{traces} from the distribution that have been independently corrupted according to some noise channel. Recent work has intensively studied such problems both for the bit-flip noise channel and for the erasure noise channel.

In this dissertation we initiate the study of population recovery under the \emph{deletion channel}, in which each bit $b$ is independently \emph{deleted} with some fixed probability and the surviving bits are concatenated and transmitted. This is a far more challenging noise model than bit-flip noise or erasure noise; indeed, even the simplest case, in which the population is of size 1 (corresponding to a trivial probability distribution supported on a single string), corresponds to the \emph{trace reconstruction} problem, which has itself received much recent attention.
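As a concrete picture of the noise model, the following Python sketch generates traces from a population under the deletion channel (the population, its weights, and the deletion probability are hypothetical, chosen only for illustration):

\begin{verbatim}
import random

def deletion_channel(x, q, rng=random):
    # Each bit of x is deleted independently with probability q;
    # the surviving bits are concatenated in order.
    return "".join(b for b in x if rng.random() >= q)

def sample_trace(population, q, rng=random):
    # One sample for population recovery: draw a string from the
    # (unknown) distribution, then pass it through the deletion channel.
    strings, weights = zip(*population)
    x = rng.choices(strings, weights=weights)[0]
    return deletion_channel(x, q, rng)

population = [("01101001", 0.7), ("11100010", 0.3)]  # hypothetical size-2 population
print([sample_trace(population, q=0.1) for _ in range(3)])
\end{verbatim}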

In this work we give algorithms and lower bounds for population recovery under the deletion channel when the population size is some value $\ell > 1$. As our main sample complexity upper bound, we show that for any population size $\ell = o(\log n / \log \log n)$, a population of $\ell$ strings from $\{0,1\}^n$ can be learned under deletion channel noise using $2^{n^{1/2 + o(1)}}$ samples. On the lower bound side, we show that at least $n^{\Omega(\ell)}$ samples are required to perform population recovery under the deletion channel when the population size is $\ell$, for all $\ell \leq n^{1/2 - \epsilon}$.
