eScholarship
Open Access Publications from the University of California

UCLA Electronic Theses and Dissertations
Secure and Private Machine Learning for Smart Devices

Abstract

Machine learning models, and deep neural networks in particular, now achieve outstanding accuracy on tasks such as image understanding and speech recognition. As a result, they are widely deployed on pervasive smart connected devices, e.g., smartphones, security cameras, and digital personal assistants, to make intelligent inferences from sensor data. However, despite their high accuracy, researchers have recently shown that malicious attackers can easily fool machine learning models. These findings call into question the robustness of machine learning models under attack, especially in privacy-sensitive and safety-critical applications.

In this dissertation, we investigate the security and privacy of machine learning models. First, we consider adversarial attacks that fool machine learning models in the practical setting where the attacker has limited information about the victim model and only restricted access to it. We introduce GenAttack, an efficient method for generating adversarial examples against black-box machine learning models. GenAttack requires 235 times fewer model queries than previous state-of-the-art methods while achieving a higher success rate in targeted attacks against the large-scale Inception-v3 image classifier. We also show how GenAttack can defeat several recently proposed defense methods. Furthermore, while prior research on adversarial attacks has focused mostly on image recognition models, owing to the difficulty of attacking other data modalities such as text and speech, we show that GenAttack can be extended to attack both speech recognition and text understanding models with a high success rate: 87% against a speech command recognition model and 97% against a natural language sentiment classification model.
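As the name suggests, GenAttack searches for adversarial perturbations with a genetic algorithm rather than gradients, so it needs only query access to the victim model's output probabilities. The following is a minimal sketch of that style of gradient-free, population-based search; the `query_probs` callback, the fitness function, and all hyperparameters are illustrative assumptions, not the dissertation's implementation.

```python
import numpy as np

def genetic_blackbox_attack(query_probs, x, target, pop_size=6,
                            mutation_rate=0.05, step=0.01, eps=0.05,
                            max_queries=10000, rng=None):
    """Gradient-free targeted attack sketch: evolve a population of
    perturbations toward raising the target-class probability, using
    only black-box queries to the victim model."""
    rng = rng or np.random.default_rng(0)
    # Initialize a population of small random perturbations.
    pop = rng.uniform(-step, step, size=(pop_size,) + x.shape)
    queries = 0
    while queries < max_queries:
        # Project into the L-inf ball around x and the valid input range.
        candidates = np.clip(x + np.clip(pop, -eps, eps), 0.0, 1.0)
        probs = np.stack([query_probs(c) for c in candidates])  # one query each
        queries += pop_size
        # Fitness: log-probability of the attacker's target class.
        fitness = np.log(probs[:, target] + 1e-12)
        best = int(np.argmax(fitness))
        if probs[best].argmax() == target:       # success: model fooled
            return candidates[best], queries
        # Selection: sample parent pairs proportionally to softmax(fitness).
        weights = np.exp(fitness - fitness.max())
        weights /= weights.sum()
        elite = pop[best]
        parents = rng.choice(pop_size, size=(pop_size - 1, 2), p=weights)
        children = []
        for p1, p2 in parents:
            mask = rng.random(x.shape) < 0.5     # uniform crossover
            child = np.where(mask, pop[p1], pop[p2])
            mutate = rng.random(x.shape) < mutation_rate
            child = child + mutate * rng.uniform(-step, step, size=x.shape)
            children.append(child)
        pop = np.stack([elite] + children)       # elitism: keep the best member
    return None, queries                         # query budget exhausted
```

In a real attack, `query_probs` would wrap the victim model's prediction API; returning the query count alongside the result is what makes the method's query efficiency directly measurable.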

In the second part of this dissertation, we focus on methods for improving the robustness of machine learning models against security and privacy threats. A significant limitation of deep neural networks is that they do not explain their predictions. We therefore present NeuroMask, an algorithm for generating accurate explanations of neural network predictions. Another serious threat to voice-controlled devices is audio spoofing attacks. We present a deep residual convolutional network for detecting two kinds of such attacks: logical access attacks and physical access attacks. Our model achieves equal error rates (EER) of 6.02% and 2.78% on the ASVspoof2019 competition evaluation datasets for the detection of logical access and physical access attacks, respectively.

To alleviate privacy concerns about unwanted inferences when sharing private sensor measurements, we introduce PhysioGAN, a novel model architecture for generating high-quality synthetic datasets of physiological sensor readings. Through evaluation experiments on two datasets, an ECG classification dataset and a motion-sensor human activity recognition dataset, we show that PhysioGAN produces synthetic datasets that are both more accurate and more diverse than those of previous sensor-data generative models. Synthetic datasets generated by PhysioGAN can therefore be shared in place of the real private datasets, at only a moderate loss in utility. Finally, we show how to apply differential privacy techniques to the training of generative adversarial networks so that the resulting synthetic datasets carry formal privacy guarantees.
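For readers unfamiliar with the setting, a conditional GAN trained on labeled sensor windows is the general recipe behind generative models like PhysioGAN. The skeleton below is a generic PyTorch illustration with made-up dimensions, layers, and names; it is not PhysioGAN's actual architecture, which is described in the dissertation body.

```python
import torch
from torch import nn

# Hypothetical dimensions: 128-step univariate sensor windows, 5 activity classes.
SEQ_LEN, N_CLASSES, LATENT = 128, 5, 64

class Generator(nn.Module):
    """Maps (noise, class label) to a synthetic sensor sequence."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(N_CLASSES, N_CLASSES)
        self.net = nn.Sequential(
            nn.Linear(LATENT + N_CLASSES, 256), nn.ReLU(),
            nn.Linear(256, SEQ_LEN), nn.Tanh(),   # sequences scaled to [-1, 1]
        )
    def forward(self, z, y):
        return self.net(torch.cat([z, self.embed(y)], dim=1))

class Discriminator(nn.Module):
    """Scores (sequence, class label) pairs as real or synthetic."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(N_CLASSES, N_CLASSES)
        self.net = nn.Sequential(
            nn.Linear(SEQ_LEN + N_CLASSES, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1),
        )
    def forward(self, x, y):
        return self.net(torch.cat([x, self.embed(y)], dim=1))

def train_step(G, D, opt_g, opt_d, real_x, real_y, loss=nn.BCEWithLogitsLoss()):
    """One alternating conditional-GAN update."""
    b = real_x.size(0)
    z = torch.randn(b, LATENT)
    fake_y = torch.randint(0, N_CLASSES, (b,))
    fake_x = G(z, fake_y)
    # Discriminator: push real pairs toward 1, synthetic pairs toward 0.
    opt_d.zero_grad()
    d_loss = (loss(D(real_x, real_y), torch.ones(b, 1)) +
              loss(D(fake_x.detach(), fake_y), torch.zeros(b, 1)))
    d_loss.backward()
    opt_d.step()
    # Generator: fool the discriminator into scoring synthetic pairs as real.
    opt_g.zero_grad()
    g_loss = loss(D(fake_x, fake_y), torch.ones(b, 1))
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
```

A full training loop would simply iterate `train_step` over mini-batches of real labeled windows, with, e.g., Adam optimizers for both networks; the trained generator can then emit an entire labeled synthetic dataset for sharing.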
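The standard mechanism for obtaining such formal guarantees is DP-SGD (Abadi et al., 2016): clip each example's gradient contribution to bound its influence, then add calibrated Gaussian noise before the parameter update. When training a GAN, this is typically applied to the discriminator, since only it touches the real data. A minimal NumPy sketch of one such step, with illustrative hyperparameters:

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1, clip_norm=1.0,
                noise_multiplier=1.1, rng=None):
    """One differentially private SGD step: per-example gradient clipping
    followed by calibrated Gaussian noise on the averaged gradient."""
    rng = rng or np.random.default_rng(0)
    batch = per_example_grads.shape[0]
    # Clip each example's gradient so its L2 norm is at most clip_norm.
    norms = np.linalg.norm(per_example_grads.reshape(batch, -1), axis=1)
    scale = np.minimum(1.0, clip_norm / (norms + 1e-12))
    clipped = per_example_grads * scale.reshape(
        batch, *([1] * (per_example_grads.ndim - 1)))
    # Gaussian noise with std proportional to the clipping bound,
    # added to the gradient sum before averaging.
    noise = rng.normal(0.0, noise_multiplier * clip_norm,
                       size=per_example_grads.shape[1:])
    noisy_mean = clipped.mean(axis=0) + noise / batch
    return params - lr * noisy_mean
```

The privacy budget (epsilon, delta) actually spent depends on the noise multiplier, the sampling rate, and the number of steps, and is tracked in practice with a moments or RDP accountant.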
