Doing it for the Bit: Applications of Quantization in Data Science and Signal Processing
eScholarship
Open Access Publications from the University of California

UC San Diego Electronic Theses and Dissertations


Abstract

This thesis explores quantization in the context of data science and digital signal processing. Quantization is already well known to play a critical role in signal processing, given the ubiquity of digital devices; its applications to data science have only recently begun to emerge, perhaps the most important being the quantization of neural networks. I consider quantization in three particular contexts.

The first chapter treats the problem of quantizing a pre-trained neural network with a data-driven approach. I propose a novel algorithm that quantizes the weights of a hidden unit in a given layer via a greedy path-following procedure, which is equivalent to running a dynamical system. I prove that this dynamical system is stable when quantizing a single-layer neural network, or equivalently the first layer of a multi-layer network, when the training data are Gaussian. I then give bounds on the generalization error of the learned quantization and describe how these bounds may be tightened when the feature data lie in a low-dimensional subspace.

In the second chapter, I generalize a 2016 result of Saab, Wang, and Yilmaz on recovering sparse vectors to the setting of low-rank matrices. Specifically, I show that low-rank matrices may be robustly recovered from \(\Sigma\Delta\)-quantized subgaussian linear measurements. Using these techniques, I show that the reconstruction error associated with these methods decays polynomially in the oversampling factor, and that one can leverage this result to obtain root-exponential accuracy by optimizing over the choice of quantization scheme. Additionally, if a random encoding of the quantized measurements is used, then the reconstruction bound exhibits a near-optimal exponential rate-distortion trade-off.
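To illustrate the greedy, data-driven flavor of the weight-quantization idea, here is a minimal sketch. The function name, the ternary alphabet, and the projection rule are illustrative assumptions, not the thesis's exact algorithm: each weight is quantized in turn so that the quantized neuron's output tracks the original neuron's output on the feature data, with a running residual acting as the state of the dynamical system.

```python
import numpy as np

def greedy_quantize_neuron(w, X, alphabet):
    """Sketch of greedy path-following weight quantization (illustrative).

    w        : (N,) pre-trained weights of one hidden unit.
    X        : (m, N) feature data; column t is the feature paired with w[t].
    alphabet : 1-D array of allowed quantized values (an assumption here).

    At step t we pick q[t] from the alphabet so that q[t] * X[:, t] best
    matches the accumulated residual plus w[t] * X[:, t]; the residual u
    is the state of the associated dynamical system.
    """
    N = X.shape[1]
    q = np.zeros(N)
    u = np.zeros(X.shape[0])  # running residual (dynamical-system state)
    for t in range(N):
        x = X[:, t]
        # Project the "ideal" update u + w[t]*x onto the direction x.
        proj = x @ (u + w[t] * x) / max(x @ x, 1e-12)
        # Greedily round the projection to the nearest alphabet element.
        q[t] = alphabet[np.argmin(np.abs(alphabet - proj))]
        u = u + w[t] * x - q[t] * x
    return q, u
```

Stability of the quantization, in this picture, amounts to the residual `u` remaining bounded as the layer width grows, which is what the first chapter establishes for Gaussian training data.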
In the third chapter, I generalize 2018 work of Iwen, Krahmer, Krause-Solberg, and Maly to prove that signal recovery is possible from \(\Sigma\Delta\)-quantized linear measurements of signals on a manifold, where the manifold is known only approximately through a Geometric Multi-Resolution Analysis. In addition to proving error bounds that outperform those of the Memoryless Scalar Quantization approach taken in that work, I extend the ensemble of linear measurements beyond the Gaussian case.
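Since \(\Sigma\Delta\) quantization appears in both the second and third chapters, a small sketch of the classical first-order scheme may help: each measurement is rounded to a coarse alphabet while a state variable carries the accumulated error forward, so that the error is pushed into a direction later smoothed out during reconstruction. The one-bit alphabet and function name below are illustrative choices, not the specific schemes analyzed in the thesis.

```python
import numpy as np

def sigma_delta_1bit(y, delta=2.0):
    """First-order Sigma-Delta quantization with the one-bit alphabet
    {-delta/2, +delta/2} (an illustrative choice; the thesis treats more
    general schemes). State recursion: u_i = u_{i-1} + y_i - q_i.
    Returns the quantized sequence q and the state trajectory u.
    """
    y = np.asarray(y, dtype=float)
    q = np.empty_like(y)
    u = np.empty_like(y)
    state = 0.0
    for i, yi in enumerate(y):
        # Pick q_i to cancel as much accumulated error as possible;
        # this keeps the state bounded whenever |y_i| <= delta/2.
        q[i] = delta / 2 if state + yi >= 0 else -delta / 2
        state += yi - q[i]
        u[i] = state
    return q, u
```

The boundedness of the state `u` is exactly the stability property that drives the polynomial (and, after optimizing the scheme, root-exponential) error decay in the oversampling factor discussed above.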
