eScholarship
Open Access Publications from the University of California

UC Berkeley Electronic Theses and Dissertations

Secure Learning and Learning for Security: Research in the Intersection

Abstract

Statistical Machine Learning is used in many real-world systems, such as web search, network and power management, online advertising, finance, and health services, in which adversaries are incentivized to attack the learner. This motivates an urgent need for a better understanding of the security vulnerabilities of adaptive systems. Conversely, research in Computer Security stands to reap great benefits by leveraging learning to build adaptive defenses and even to design intelligent attacks on existing systems. This dissertation contributes new results in the intersection of Machine Learning and Security, relating to both of these complementary research agendas.

The first part of this dissertation considers Machine Learning under the lens of Computer Security, where the goal is to learn in the presence of an adversary. Two large case studies, on email spam filtering and on network-wide anomaly detection, explore adversaries that manipulate a learner by poisoning its training data. In the first study, the False Positive Rate (FPR) of an open-source spam filter is increased to 40% by feeding the filter a training set made up of 99% regular legitimate and spam messages and 1% dictionary-attack spam messages containing legitimate words. By increasing the FPR, the adversary effects a Denial of Service attack on the filter. In the second case study, the False Negative Rate of a popular network-wide anomaly detector based on Principal Components Analysis is increased 7-fold (increasing the attacker's chance of subsequent evasion by the same amount) by a variance injection attack of chaff traffic inserted into the network at training time; this high-variance chaff traffic increases the traffic volume by only 10%. In both cases the effects of increasing the information or the control available to the adversary are explored, and effective countermeasures are thoroughly evaluated, including a method based on Robust Statistics for the network anomaly detection domain.
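
The following is a minimal sketch of the dictionary-attack poisoning experiment described above, using a toy naive Bayes filter and synthetic messages as stand-ins for the open-source filter and corpora studied in the dissertation; the vocabulary, poisoning fraction, and simple model are illustrative only and will not reproduce the 40% FPR figure.

    # Toy poisoning sketch: spam-labeled training messages stuffed with
    # "legitimate" vocabulary push the filter toward flagging ham, raising FPR.
    import numpy as np
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    ham_words = ["meeting", "budget", "report", "lunch", "project", "deadline"]
    spam_words = ["viagra", "winner", "lottery", "cheap", "pills", "prize"]
    rng = np.random.default_rng(0)

    def make_msgs(words, n, k=8):
        return [" ".join(rng.choice(words, size=k)) for _ in range(n)]

    clean_ham = make_msgs(ham_words, 500)
    clean_spam = make_msgs(spam_words, 500)
    # Attack messages: labeled spam, but drawn from the legitimate dictionary.
    attack_spam = make_msgs(ham_words + spam_words, 10)   # ~1% of training data

    def fpr(train_msgs, train_labels, test_ham):
        vec = CountVectorizer()
        clf = MultinomialNB().fit(vec.fit_transform(train_msgs), train_labels)
        return clf.predict(vec.transform(test_ham)).mean()  # fraction of ham flagged

    test_ham = make_msgs(ham_words, 200)
    baseline = fpr(clean_ham + clean_spam, [0]*500 + [1]*500, test_ham)
    poisoned = fpr(clean_ham + clean_spam + attack_spam,
                   [0]*500 + [1]*500 + [1]*10, test_ham)
    print(f"FPR clean: {baseline:.2f}  FPR poisoned: {poisoned:.2f}")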

The second class of attack explored on learning systems involves an adversary aiming to evade detection by a previously trained classifier. In the evasion problem the attacker searches for a negative instance of almost-minimal distance to some target positive instance, by submitting a small number of queries to the classifier. Efficient query algorithms are developed for almost-minimizing Lp cost over any classifier that partitions feature space into two classes, one of which is convex. For the case of a convex positive class and p ≤ 1, algorithms with linear query complexity are provided, along with nearly matching lower bounds; when p > 1 a threshold phenomenon occurs whereby exponential query complexity is necessary for good approximations. For the case of a convex negative class and p ≥ 1, a randomized Ellipsoid-based algorithm finds almost-minimizers with polynomial query complexity. These results show that learning the decision boundary is sufficient, but not necessary, for evasion, and can require much greater query complexity.
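
A hedged sketch of the query-based line-search primitive on which such evasion algorithms build (not the full convex-class algorithms above): binary-search along the segment between a known negative instance and the attacker's target, treating the classifier as a membership oracle. The linear detector used as the oracle here is a hypothetical example.

    import numpy as np

    def line_search_evasion(classify, x_target, x_neg, eps=1e-3):
        """classify(x) -> True if x is labeled positive (detected).
        Returns a negative instance close to x_target along the segment."""
        assert classify(x_target) and not classify(x_neg)
        lo, hi = 0.0, 1.0      # fraction of the way from x_neg toward x_target
        while hi - lo > eps:
            mid = (lo + hi) / 2
            if classify((1 - mid) * x_neg + mid * x_target):
                hi = mid       # crossed into the positive class; back off
            else:
                lo = mid       # still evades; move closer to the target
        return (1 - lo) * x_neg + lo * x_target

    # Example oracle: a hypothetical linear detector.
    w, b = np.array([1.0, 2.0]), -3.0
    detector = lambda x: float(w @ x + b) > 0
    x_evasive = line_search_evasion(detector, np.array([4.0, 4.0]), np.array([0.0, 0.0]))
    print(x_evasive, detector(x_evasive))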

The third class of attack aims to violate the confidentiality of the learner's training data given access to a learned hypothesis. Mechanisms for releasing Support Vector Machine (SVM) classifiers are developed. Algorithmic stability of the SVM is used to prove that the mechanisms preserve differential privacy, meaning that an attacker with knowledge of the learning map and of all but one training example can determine very little about that final unknown example from access to the trained classifier. Bounds on utility are established for the mechanisms: the privacy-preserving classifiers' predictions should approximate the SVM's predictions with high probability. In the case of learning with translation-invariant kernels corresponding to infinite-dimensional feature spaces (such as the RBF kernel), a recent result from large-scale learning is used to enable a finite encoding of the SVM while maintaining utility and privacy. Finally, lower bounds on achievable differential privacy are derived for any mechanism that well-approximates the SVM.
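
A schematic output-perturbation sketch in the spirit of the mechanisms described above: train a linear SVM, then add Laplace noise to its weight vector before release. The noise scale used here is an illustrative placeholder, not the dissertation's stability-derived sensitivity calibration, and only the finite-dimensional linear case is shown.

    import numpy as np
    from sklearn.svm import LinearSVC

    def private_linear_svm(X, y, C=1.0, epsilon=1.0, sensitivity=None):
        n, d = X.shape
        clf = LinearSVC(C=C, loss="hinge", max_iter=10_000).fit(X, y)
        w = np.concatenate([clf.coef_.ravel(), clf.intercept_])
        # Placeholder sensitivity of the weights to one training example,
        # shrinking as 1/n (stand-in for the stability-based bound).
        scale = (sensitivity if sensitivity is not None else C / n) / epsilon
        noisy_w = w + np.random.default_rng(0).laplace(scale=scale, size=w.shape)
        return noisy_w[:-1], noisy_w[-1]   # released weights and bias

    # Toy usage on synthetic data.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 5))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
    w_priv, b_priv = private_linear_svm(X, y, epsilon=0.5)
    print(np.mean(((X @ w_priv + b_priv) > 0).astype(int) == y))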

The second part of this dissertation considers Security under the lens of Machine Learning. The first application of Machine Learning is to a learning-based reactive defense. The Chief Information Security Officer's (CISO) risk management problem is modeled as a repeated game in which the defender must allocate security budget to the edges of a graph in order to minimize the additive profit or return on attack (ROA) enjoyed by an attacker. By a reduction to results from Online Learning, it is shown that the profit/ROA from attacking the reactive strategy approaches that of attacking the best fixed proactive strategy over time. This result contradicts the conventional dogma that reactive security is usually inferior to proactive risk management. Moreover, in many cases the reactive defender is shown to greatly outperform proactive approaches.
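
A hedged sketch of a multiplicative-weights reactive defender of the kind the reduction above relies on; this is a generic online-learning allocation rule, not the dissertation's exact update or loss model, and the two-edge attacker is a hypothetical example.

    import numpy as np

    def reactive_defense(n_edges, observe_attack, budget=1.0, eta=0.5, rounds=100):
        w = np.ones(n_edges)
        for _ in range(rounds):
            allocation = budget * w / w.sum()            # defense spent per edge
            profit_per_edge = observe_attack(allocation) # attacker's observed gain
            w *= np.exp(eta * profit_per_edge)           # shift budget toward attacked edges
        return budget * w / w.sum()

    # Hypothetical example: the attacker profits from edge 0 unless it is well defended.
    attack = lambda alloc: np.array([max(0.0, 1.0 - 2.0 * alloc[0]), 0.0])
    print(reactive_defense(2, attack))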

The second application of Machine Learning to Security is the construction of an attack on open-source software systems. When an open-source project releases a new version of its system, it discloses vulnerabilities in previous versions, sometimes with pointers to the patches that fixed them. Using features of diffs in the project's open-source repository, labeled by such disclosures, an attacker can train a model to discriminate between security patches and non-security patches. As new patches land in the open-source repository, before being disclosed as security-related or not and before being released to users, the attacker can use the trained model to rank the patches by their likelihood of being a security fix. The adversary can then examine the ordered patches one by one until finding a security patch. For an 8-month period of Firefox 3's development history, it is shown that an SVM-assisted attacker need only examine one or two patches per day (as selected by the SVM) in order to increase the aggregate window of vulnerability by 5 months.
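
An illustrative sketch of the patch-ranking step: train an SVM on previously disclosed patches, then score newly landed, undisclosed patches and examine them in rank order. The feature columns and toy values below are hypothetical stand-ins for the diff features described above.

    import numpy as np
    from sklearn.svm import LinearSVC

    # Each row: toy features of a diff, e.g. [files touched, lines changed,
    # bounds-check tokens added, test files touched]; label 1 = security fix.
    X_train = np.array([[2, 40, 3, 0], [1, 10, 2, 0], [8, 400, 0, 5],
                        [3, 60, 0, 2], [1, 15, 4, 0], [6, 300, 0, 4]])
    y_train = np.array([1, 1, 0, 0, 1, 0])
    model = LinearSVC(max_iter=10_000).fit(X_train, y_train)

    # Undisclosed patches landing today, scored and examined most-suspicious first.
    X_new = np.array([[2, 30, 2, 0], [7, 350, 0, 6], [1, 12, 3, 0]])
    ranking = np.argsort(-model.decision_function(X_new))
    print("Examine patches in order:", ranking)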
