eScholarship
Open Access Publications from the University of California

UCLA Electronic Theses and Dissertations

Robust Communication and Optimization over Dynamic Networks

Abstract

Many types of communication and computation networks arising in modern systems rely on resources that are fundamentally dynamic, time-varying, and ultimately unreliable. In wireless communication networks, such unreliability may manifest itself as variability in channel conditions, intermittent availability of undedicated resources (such as unlicensed spectrum), or collisions due to multiple access. In distributed computing, and specifically in large-scale distributed optimization and machine learning, it manifests itself as communication bottlenecks, straggling or failed nodes, or background processes that hamper or slow down the computational task. In this thesis, we develop information-theoretically motivated approaches that make progress toward robust and reliable communication and computation networks built on top of unreliable resources.

In the first part of the thesis, we focus on three problems in wireless networks that involve opportunistically harnessing time-varying resources while providing theoretical performance guarantees. First, we show that in full-duplex uplink-downlink cellular networks, a simple, low-overhead user scheduling scheme that exploits the variations in channel conditions can be used to optimally mitigate inter-user interference in the many-user regime. Next, we consider the use of intermittently available links over unlicensed spectral bands to enhance communication over the licensed cellular band. We show that channel output feedback over such links, combined with quantize-map-forward relaying, provides a generalized-degrees-of-freedom gain in interference networks, and we characterize the information-theoretic capacity region of this model to within a constant gap. We finally consider the use of such intermittent links in device-to-device cooperation to aid the cellular downlink.
We develop an optimal dynamic resource allocation algorithm for such networks using stochastic approximation and graph theory techniques, and show that the resulting scheme yields up to a 5-6x throughput gain for cell-edge users.

In the second part, we consider the problem of distributed optimization and machine learning over large-scale, yet unreliable clusters. Focusing on a master-worker architecture, where large-scale datasets are distributed across worker nodes that communicate with a central parameter server to optimize a global objective, we develop a framework for embedding redundancy in the dataset to combat node failures and delays. The framework consists of an efficient linear transformation (coding) of the dataset that results in an overcomplete representation, combined with a coding-oblivious application of a distributed optimization algorithm. We show that if the linear transformation is designed to satisfy certain spectral properties resembling the restricted isometry property, nodes that fail or delay their computation can be dynamically left out of the computational process while still converging to a reasonable solution with fast convergence rates, obviating the need for explicit fault-tolerance mechanisms and significantly speeding up overall computation. We implement these techniques on Amazon EC2 clusters to demonstrate their applicability to various machine learning problems, such as logistic regression, support vector machines, ridge regression, and collaborative filtering, as well as several popular optimization algorithms, including gradient descent, L-BFGS, coordinate descent, and proximal gradient methods.
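The opportunistic scheduling idea from the first part — exploiting channel variations by serving whichever user currently has the best channel — can be illustrated with a toy simulation. This is only a generic max-SNR sketch under i.i.d. Rayleigh fading, not the thesis's full-duplex uplink-downlink scheme; all parameter choices here are illustrative assumptions.

```python
# Toy illustration of opportunistic (max-SNR) user scheduling under
# Rayleigh fading. Generic sketch only; it does not reproduce the
# thesis's low-overhead full-duplex scheduling scheme.
import math
import random

def simulate(num_users, num_slots, seed=0):
    """Average per-slot rate when the best-channel user is scheduled."""
    rng = random.Random(seed)
    total_rate = 0.0
    for _ in range(num_slots):
        # i.i.d. exponential channel power gains (Rayleigh fading).
        snrs = [rng.expovariate(1.0) for _ in range(num_users)]
        best = max(snrs)                      # serve the strongest user
        total_rate += math.log2(1.0 + best)   # Shannon rate of that user
    return total_rate / num_slots

# Multiuser diversity: the average rate grows with the user pool,
# since the best of many fading channels is likely to be strong.
for n in (1, 4, 16, 64):
    print(n, round(simulate(n, 2000), 2))
```

The point of the sketch is the trend, not the absolute numbers: the scheduled rate increases with the number of users, which is the multiuser-diversity effect that the many-user scheduling result in the first part builds on.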
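The encoded-optimization framework from the second part can be sketched for least squares: transform the data with a tall random matrix S whose submatrices act as near-isometries, hand blocks of the encoded data to workers, and run plain gradient descent while simply ignoring whichever workers straggle in each round. The Gaussian choice of S, the dimensions, the straggler probability, and the step size below are all illustrative assumptions, not the thesis's exact construction.

```python
# Sketch of coded distributed gradient descent for least squares:
# encode the dataset with a tall random matrix (overcomplete
# representation), then drop straggling workers each round.
# All parameters are illustrative, not the thesis's construction.
import numpy as np

rng = np.random.default_rng(0)
n, d, workers, redundancy = 60, 5, 6, 2.0

X = rng.standard_normal((n, d)) / np.sqrt(n)   # normalized data matrix
w_true = rng.standard_normal(d)
y = X @ w_true                                  # noiseless targets

# Tall encoding matrix S (redundancy * n rows). Scaled Gaussian entries
# give S.T @ S ~ I, an RIP-like near-isometry property.
m = int(redundancy * n)
S = rng.standard_normal((m, n)) / np.sqrt(m)
Xe, ye = S @ X, S @ y                           # encoded (overcomplete) dataset

# Split the encoded rows across workers.
blocks = np.array_split(np.arange(m), workers)

w = np.zeros(d)
step = 0.5
for _ in range(300):
    # Each round a random subset of workers straggles and is ignored.
    alive = [i for i in range(workers) if rng.random() > 0.3] or [0]
    g = np.zeros(d)
    for i in alive:
        idx = blocks[i]
        g += Xe[idx].T @ (Xe[idx] @ w - ye[idx])  # partial gradient of worker i
    w -= step * g * (workers / len(alive))        # rescale for missing workers

print(np.linalg.norm(w - w_true))  # small despite dropped workers
```

Because the encoding spreads each original data point across many encoded rows, the gradient computed from any reasonably sized subset of workers still points roughly toward the true solution, which is why the iteration tolerates stragglers without any explicit fault-tolerance mechanism.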
