Improving and Securing Machine Learning Systems

Abstract

Machine Learning (ML) models are systems that automatically learn patterns from data and make predictions on new data, without being explicitly programmed by humans. They play an integral role in a wide range of critical applications, from classification systems like facial and iris recognition, to voice interfaces for home assistants, to creating artistic images and guiding self-driving cars.

Because ML models are built from complex numerical operations, they naturally appear to humans as non-transparent boxes. The fundamental architectural difference between ML models and human brains makes it extremely difficult to understand how ML models operate internally. What patterns do ML models learn from data? How do they produce their predictions? How well do they generalize to untested inputs? These questions are among the biggest challenges in computing today. Despite intense effort from the community in recent years, progress toward fully understanding ML models remains very limited.

The non-transparent nature of ML models has severe implications for some of their most important properties, namely performance and security. First, it is hard to understand the impact of ML model design on end-to-end performance. Without understanding how ML models operate internally, it is difficult to isolate performance bottlenecks and improve upon them. Second, it is hard to measure the robustness of ML models. The lack of transparency means a model might not generalize to untested inputs, especially when inputs are adversarially crafted to trigger unexpected behavior. Third, it opens up the possibility of injecting unwanted malicious behaviors into ML models. Without tools to “translate” ML models, humans cannot verify what a model has learned and whether the learned behaviors are benign and necessary for the task. This allows an attacker to hide malicious behaviors inside ML models that trigger unexpected outputs on certain inputs. These implications reduce the performance and security of ML and greatly hinder its wide adoption, especially in security-sensitive areas.

Although making ML models fully transparent would resolve most of these implications, progress toward this ultimate goal remains unsatisfying, and recent work does not suggest a significant breakthrough in the near future. In the meantime, the issues caused by non-transparency are imminent and threaten all currently deployed ML systems. Given this conflict between imminent threats and slow progress toward full transparency, we need immediate solutions for the most important issues. By identifying and addressing these issues, we can ensure an effective and safe adoption of such opaque systems.

In this dissertation, we describe our efforts to improve the performance and security of ML models by performing end-to-end measurements and designing auxiliary systems and solutions. More specifically, the dissertation consists of three components, each targeting one of the three aforementioned implications.

First, we focus on performance and seek to understand the impact of ML model design on end-to-end performance. To achieve this goal, we adopt a data-driven approach that measures ML model performance under different high-level design choices on a large number of real datasets. By comparing design choices and their resulting performance, we quantify the high-level tradeoffs between complexity, performance, and performance variability. Beyond that, we identify which key components of ML models have the biggest impact on performance and design generalized techniques to optimize those components.
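
To make the data-driven comparison concrete, the sketch below shows one way such a sweep could look. It is not the dissertation’s actual pipeline: the dataset selection, the complexity grid, and the use of scikit-learn’s MLPClassifier are assumptions for illustration only.

```python
# A minimal sketch of a design-choice sweep: train the same model family with
# different high-level design choices on several datasets, then compare mean
# accuracy (performance) and its spread (performance variability).
import numpy as np
from sklearn.datasets import load_digits, load_wine, load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

datasets = {
    "digits": load_digits(return_X_y=True),
    "wine": load_wine(return_X_y=True),
    "breast_cancer": load_breast_cancer(return_X_y=True),
}

# High-level design choices: network depth/width as a stand-in for "complexity".
design_choices = {
    "small": (32,),
    "medium": (128, 64),
    "large": (256, 128, 64),
}

for name, hidden_layers in design_choices.items():
    scores = []
    for ds_name, (X, y) in datasets.items():
        model = make_pipeline(
            StandardScaler(),
            MLPClassifier(hidden_layer_sizes=hidden_layers,
                          max_iter=500, random_state=0),
        )
        scores.append(cross_val_score(model, X, y, cv=3).mean())
    # Mean accuracy captures performance; std captures variability across
    # datasets for a fixed design choice.
    print(f"{name:7s} mean={np.mean(scores):.3f} std={np.std(scores):.3f}")
```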

Second, we study the robustness of ML models against adversarial inputs. In particular, we focus on practical scenarios where normal users train ML models with limited data, and study the most common practice in this scenario, referred to as transfer learning. We explore new attacks that can efficiently exploit models trained using transfer learning, and propose defenses to patch insecure models.
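
For readers unfamiliar with the practice, the sketch below shows a typical transfer-learning setup. It is illustrative rather than the dissertation’s experimental configuration; the choice of pretrained model and the number of downstream classes are assumptions.

```python
# A minimal transfer-learning sketch: reuse a pretrained "teacher" model,
# freeze its feature layers, and train only a new classification head on a
# small downstream ("student") dataset.
import torch
import torch.nn as nn
from torchvision import models

NUM_STUDENT_CLASSES = 10  # hypothetical downstream task

# Start from a publicly available pretrained model.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze all pretrained layers so only the new head is updated.
for param in model.parameters():
    param.requires_grad = False

# Replace the final fully connected layer for the new task.
model.fc = nn.Linear(model.fc.in_features, NUM_STUDENT_CLASSES)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch; a real student dataset
# would be used in practice.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_STUDENT_CLASSES, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```

Because most layers are copied verbatim from a publicly available teacher model, an attacker familiar with that teacher already knows much of the student model’s internals; this shared structure is the kind of exposure that attacks on transfer learning can exploit.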

Third, we study defenses against attacks that embed hidden malicious behaviors into ML models. Such hidden behavior, referred to as a “backdoor”, does not affect the model’s performance on normal inputs, but changes the model’s behavior when a specific trigger is present in the input. In this work, we design a series of tools to detect and identify hidden backdoors in deep learning models. We then propose defenses that filter adversarial inputs and mitigate backdoors, rendering them ineffective.
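
The sketch below illustrates what a backdoor trigger might look like when stamped onto inputs during data poisoning. The trigger shape, location, and target label are assumptions for illustration, and this is not the detection or mitigation technique itself.

```python
# A minimal sketch of stamping a backdoor trigger onto an image during
# poisoning. A backdoored model classifies any input carrying the trigger as
# the attacker-chosen target label, while behaving normally on clean inputs.
import numpy as np

def apply_trigger(image: np.ndarray, target_label: int,
                  size: int = 4) -> tuple[np.ndarray, int]:
    """Stamp a small white square in the bottom-right corner and relabel.

    image: H x W x C array with values in [0, 1].
    Returns the poisoned image and the attacker-chosen target label.
    """
    poisoned = image.copy()
    poisoned[-size:, -size:, :] = 1.0  # the trigger pattern
    return poisoned, target_label

# Example: poison a random "image" with target label 7.
clean = np.random.rand(32, 32, 3)
poisoned, label = apply_trigger(clean, target_label=7)
```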

In summary, we provide immediate solutions to improve the utility and security of Machine Learning models. Even though complete transparency of ML remains out of reach today, and may remain so in the near future, we hope our work strengthens ML models as opaque systems and ensures their effective and secure adoption.
