UC Santa Barbara Electronic Theses and Dissertations

Signal Models for Robust Deep Learning

Abstract

In this thesis, we illustrate via two case studies the utility of bottom-up signal modeling and processing for learning robust models. A key feature of machine learning is the ability to avoid detailed signal models by leveraging the large amounts of data and computational power available today. However, the resulting networks are vulnerable to small input perturbations and prone to latching onto easily spoofed features. We demonstrate in this work how insights from signal modeling can inform the design of robust neural networks.

We begin by studying small adversarial perturbations that can induce large classification errors in state-of-the-art deep networks. Here, we show that systematically exploiting the sparsity of natural data is a promising tool for defense. For linear classifiers, we show that a sparsifying front end is provably effective against L-infinity-bounded attacks, attenuating the output distortion due to the attack by a factor of roughly K/N, where N is the data dimension and K is the sparsity level. We then extend this concept to deep networks, showing that a "locally linear" model provides a theoretical foundation for crafting attacks and defenses. Experiments on the MNIST and CIFAR-10 datasets demonstrate the efficacy of the proposed sparsifying front end. Along related lines, we also investigate compressive front ends that can be implemented via binary computations in low-power hardware. Key design questions here include the impact of hardware impairments and constraints on the fidelity of information acquisition. By evaluating classification and reconstruction performance based on the acquired information, we show that a compressive approach is robust to stochastic nonlinearities and that spatially localized computations are effective.
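
To make the K/N attenuation concrete, here is a minimal NumPy sketch of a sparsifying front end protecting a linear classifier. The basis (random orthonormal, standing in for e.g. a wavelet basis), the dimensions N and K, the budget eps, and the sign-based attack are illustrative assumptions, not the thesis's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes, not taken from the thesis: data dimension N,
# sparsity level K, and an L-infinity attack budget eps.
N, K, eps = 784, 40, 0.1

# Stand-in orthonormal sparsifying basis (natural images would use e.g. wavelets).
Q, _ = np.linalg.qr(rng.standard_normal((N, N)))

def sparsifying_front_end(x, basis, k):
    """Keep only the k largest-magnitude coefficients of x in the given basis."""
    coeffs = basis.T @ x
    coeffs[np.argsort(np.abs(coeffs))[:-k]] = 0.0  # zero the N-k smallest
    return basis @ coeffs

# A K-sparse signal and a sign-based L-infinity attack on a linear classifier w.
support = rng.choice(N, size=K, replace=False)
c = np.zeros(N)
c[support] = rng.standard_normal(K)
x = Q @ c
w = rng.standard_normal(N)
w /= np.linalg.norm(w)
e = eps * np.sign(w)  # worst-case L-infinity perturbation for a linear classifier

print("output distortion, no defense:  ", abs(w @ e))
print("output distortion, with defense:", abs(w @ (sparsifying_front_end(x + e, Q, K) - x)))
```

Because the front end retains only the K coefficients carrying the signal, roughly a K/N fraction of the perturbation's effect survives, which is the attenuation the paragraph above describes.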

The second case study pertains to robustness in a radio frequency (RF) setting. We focus here on a potentially powerful tool for wireless security: RF device signatures capable of distinguishing between devices sending exactly the same message. Such signatures should be robust both to standard spoofing techniques and to different levels of noise in the data. Since the information in wireless signals resides in complex baseband, we employ complex-valued neural networks to learn these fingerprints. We demonstrate that, while there are potential benefits to using sections of the signal beyond the preamble to learn signatures, the network cheats when it can, artificially inflating performance by exploiting easily spoofed information such as the device ID. We also show that augmenting the training data with additive white Gaussian noise yields significant performance gains, indicating that this counterintuitive strategy helps in learning more robust fingerprints. We provide results for two different wireless protocols, WiFi and ADS-B, demonstrating the effectiveness of the proposed method.
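
As a sketch of the noise-augmentation strategy, the snippet below adds circularly symmetric complex AWGN at a randomly drawn SNR to each training burst before it reaches the complex-valued network. The function name, SNR range, and toy signal are hypothetical, chosen only to illustrate the idea.

```python
import numpy as np

def awgn_augment(iq, snr_db_range=(10.0, 25.0), rng=None):
    """Add complex AWGN to a baseband I/Q burst at a randomly drawn SNR.

    The SNR range is an illustrative assumption, not a value from the thesis.
    """
    rng = rng or np.random.default_rng()
    snr_db = rng.uniform(*snr_db_range)
    sig_power = np.mean(np.abs(iq) ** 2)
    noise_power = sig_power / 10.0 ** (snr_db / 10.0)
    # Circularly symmetric complex Gaussian: half the power in each of I and Q.
    noise = np.sqrt(noise_power / 2) * (
        rng.standard_normal(iq.shape) + 1j * rng.standard_normal(iq.shape)
    )
    return iq + noise

# Each training burst is perturbed independently before being fed to the
# complex-valued network (a toy tone is used here in place of a real capture).
burst = np.exp(2j * np.pi * 0.01 * np.arange(1024))
augmented = awgn_augment(burst)
```

Drawing a fresh SNR per example, rather than a fixed noise level, exposes the network to a range of channel conditions and discourages it from keying on noise-sensitive artifacts.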
