Channel Estimation and Data Detection Methods for 1-bit Massive MIMO Systems

Abstract

Massive multiple-input multiple-output (MIMO) is a promising technology for next-generation communication systems. In massive MIMO, a base station (BS) is equipped with a large antenna array, potentially comprising hundreds of antenna elements, allowing many users to be served simultaneously. Unfortunately, hardware complexity and power consumption scale with the number of antennas. The use of one-bit analog-to-digital converters (ADCs) offers an attractive solution to these issues, since a one-bit ADC consumes negligible power and removes the need for complex automatic gain control (AGC).

However, the signal distortion from such severe quantization poses significant challenges to the system designer. One-bit quantization effectively removes all amplitude information, which cannot be recovered by increasing the signal strength. This places a bound on channel estimation performance. Since the resulting channel model is highly nonlinear, linear detectors are suboptimal compared to more sophisticated nonlinear techniques.
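As a rough illustration of this amplitude loss (a minimal sketch in Python/NumPy; the dimensions, channel, symbols, and noise level are illustrative assumptions, not values from the dissertation), the one-bit ADC applies a sign operation separately to the real and imaginary parts of the received signal, so scaling up the transmit power leaves the observation essentially unchanged:

import numpy as np

rng = np.random.default_rng(0)
M, K = 8, 2                                    # receive antennas, users (illustrative)
H = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)
x = np.array([1 + 1j, -1 - 1j]) / np.sqrt(2)   # QPSK symbols
n = 0.1 * (rng.standard_normal(M) + 1j * rng.standard_normal(M))

def one_bit(z):
    # element-wise one-bit ADC: keep only the signs of the real and imaginary parts
    return np.sign(z.real) + 1j * np.sign(z.imag)

y1 = one_bit(H @ x + n)         # unit transmit power
y2 = one_bit(H @ (10 * x) + n)  # ten times the amplitude
print(np.mean(y1 == y2))        # close to 1.0: the amplitude information is gone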

To reduce the impairment caused by one-bit quantization, a novel antithetic dithering scheme is developed. Antithetic dither is introduced into the system to generate negatively correlated noise. Efficient channel estimation algorithms are developed to exploit this induced negative correlation, and a statistical framework is developed to validate the noise reduction obtained from the negatively correlated quantized outputs.
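The antithetic idea can be sketched as follows (a minimal NumPy illustration under assumed values; this toy example is not the dissertation's actual dithering scheme or estimator): the same signal level is quantized once with dither +d and once with -d, the two quantization errors come out negatively correlated, and averaging the two outputs shrinks the error variance:

import numpy as np

rng = np.random.default_rng(1)
z = 0.3 * np.ones(100_000)                 # constant signal level (illustrative)
d = rng.uniform(-1, 1, z.shape)            # dither realisation

q_plus  = np.sign(z + d)                   # one-bit output with dither +d
q_minus = np.sign(z - d)                   # antithetic output with dither -d

e_plus, e_minus = q_plus - z, q_minus - z  # quantization errors
print(np.corrcoef(e_plus, e_minus)[0, 1])               # clearly negative correlation
print(np.var(e_plus), np.var((e_plus + e_minus) / 2))   # averaging reduces the error variance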

To improve the performance of data detection, feed-forward neural network based detectors are developed. The performance of these detectors is analyzed, and architectural modifications and training techniques are employed to partially resolve the issues that prevent the networks from reaching ideal maximum-likelihood performance.
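A minimal sketch of this detector family is given below (layer widths, activations, and the use of untrained random weights are assumptions for illustration; the dissertation's architectures and training procedures are not reproduced here). The network maps the stacked real-valued one-bit observations to soft symbol decisions:

import numpy as np

rng = np.random.default_rng(2)

def relu(a):
    return np.maximum(a, 0.0)

def ff_detector(y, params):
    # feed-forward detector (forward pass only): hidden ReLU layers,
    # final tanh layer producing one soft bit estimate in (-1, 1) per bit
    h = y
    for W, b in params[:-1]:
        h = relu(W @ h + b)
    W, b = params[-1]
    return np.tanh(W @ h + b)

M, K = 8, 2                               # antennas, users (illustrative)
sizes = [2 * M, 64, 64, 2 * K]            # assumed layer widths
params = [(0.1 * rng.standard_normal((o, i)), np.zeros(o))
          for i, o in zip(sizes[:-1], sizes[1:])]

y = np.sign(rng.standard_normal(2 * M))   # stand-in stacked one-bit observation
print(ff_detector(y, params))             # untrained soft outputs, shape (2K,)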

Next, model-based approaches are evaluated and the shortcomings of iterative methods that rely on the exact likelihood are identified. Iterative methods based on the exact likelihood are shown to diverge due to the increasingly large gradient at high SNR, while the constant gradient induced by the sigmoid approximation is shown to increase the robustness of these methods. A structured deep learning detector based on stochastic variational inference is proposed. A stochastic estimate of the gradient is introduced to reduce the complexity of the algorithm, damping is added to improve the performance of mean-field inference, and parallel processing is proposed to reduce the inference time. The proposed detector is shown to outperform existing methods that do not employ a second candidate-search step.
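The divergence argument can be checked numerically (a minimal sketch; the noise level sigma, the sigmoid scale c, and the test points are illustrative assumptions). For a one-bit observation whose sign disagrees with the current iterate, the gradient of the exact log-likelihood log Phi(z / sigma) keeps growing as z becomes more negative, while the gradient of the sigmoid surrogate log sigmoid(c * z) saturates at c:

import numpy as np
from scipy.stats import norm

sigma, c = 0.1, 2.0                  # noise level and sigmoid scale (illustrative)
z = np.array([-0.5, -1.0, -2.0])     # strongly mismatched sign, as occurs at high SNR

# d/dz log Phi(z / sigma): the inverse Mills ratio, which keeps growing
grad_exact = norm.pdf(z / sigma) / (sigma * norm.cdf(z / sigma))

# d/dz log sigmoid(c * z) = c * sigmoid(-c * z): never exceeds c
grad_sigmoid = c / (1.0 + np.exp(c * z))

print(grad_exact)    # grows roughly like |z| / sigma**2
print(grad_sigmoid)  # stays below c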
