Background and Occlusion Defenses Against Adversarial Examples and Adversarial Patches

Abstract

Machine learning is increasingly used to make sense of our world, in areas ranging from spam detection and recommendation systems to image classification. In each of these areas, however, it is vulnerable to adversarial manipulation. Within adversarial machine learning, we examine attacks on image classification and defenses against them. We construct spoofs of face detection, and we create defenses against two attacks on image classification: adversarial examples and adversarial patches.

We examine the Viola-Jones 2D face detection algorithm to study whether images can be created that the algorithm detects as faces yet humans do not notice as faces. We show that it is possible to construct images that Viola-Jones recognizes as containing faces but that no human would consider a face. Moreover, we show that such images can fool face detection even after they are printed and then photographed.
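As a concrete illustration of the detection side of this setup, the sketch below checks whether a candidate image triggers a Viola-Jones detector, using OpenCV's pretrained Haar cascade (an implementation of Viola-Jones). The function name and file path are illustrative; this covers only the verification step, not the construction of the spoof itself.

```python
# Sketch: check whether a candidate spoof image triggers Viola-Jones face
# detection, using OpenCV's bundled Haar cascade. This only verifies
# detection; it does not construct the spoof.
import cv2

# OpenCV's pretrained frontal-face Haar cascade (Viola-Jones detector).
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

def is_detected_as_face(image_path: str) -> bool:
    """Return True if Viola-Jones reports at least one face in the image."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(image_path)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return len(faces) > 0

# Hypothetical usage: a spoof succeeds if the detector fires even though
# no human would consider the image a face.
# print(is_detected_as_face("candidate_spoof.png"))
```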

Adversarial examples are crafted attacks against deep neural network image classifiers: they change the network's classification of an image without changing how humans classify it.
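As one illustration of how such an attack can be mounted (the dissertation's specific attacks are not reproduced here), the fast gradient sign method perturbs each pixel slightly in the direction that increases the classifier's loss:

```python
# Sketch: the fast gradient sign method (FGSM), one standard way to craft an
# adversarial example; shown only to illustrate the kind of attack being
# defended against, not necessarily the attacks used in the dissertation.
import torch
import torch.nn.functional as F

def fgsm_example(model: torch.nn.Module, x: torch.Tensor, label: torch.Tensor,
                 epsilon: float = 0.03) -> torch.Tensor:
    """Perturb x by epsilon in the direction that increases the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # Step in the sign of the gradient and clip back to the valid pixel range.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```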

We propose a defense that expands the training set with a single, large, and diverse class of background images, striving to ‘fill’ the space around the borders of the classification boundary. We find that our defense aids the detection of simple attacks on EMNIST, but not of advanced attacks. We discuss several limitations of our examination.
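A minimal sketch of the background-class idea, assuming a PyTorch-style training pipeline; the dataset wrappers, class count, and rejection rule below are illustrative placeholders rather than the dissertation's exact setup.

```python
# Sketch: train a classifier with one extra output class made up of diverse
# "background" images, and treat test inputs assigned to that class as
# rejected. Class counts and loaders are placeholders.
import torch
from torch.utils.data import ConcatDataset, Dataset

NUM_REAL_CLASSES = 47                 # e.g. the EMNIST balanced split
BACKGROUND_CLASS = NUM_REAL_CLASSES   # index of the added background class

class BackgroundDataset(Dataset):
    """Wraps diverse non-class images, all labeled as the background class."""
    def __init__(self, images):
        self.images = images
    def __len__(self):
        return len(self.images)
    def __getitem__(self, i):
        return self.images[i], BACKGROUND_CLASS

def build_training_set(emnist_train, background_images):
    # The model is then built with NUM_REAL_CLASSES + 1 outputs.
    return ConcatDataset([emnist_train, BackgroundDataset(background_images)])

def predict_with_rejection(model, x):
    pred = model(x).argmax(dim=1)
    # Inputs assigned to the background class are flagged as suspicious.
    return pred, pred == BACKGROUND_CLASS
```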

An attacker limited to changing just a small patch of an image can still deceive deep learning image classifiers. We propose a defense against such patch attacks based on multiple partial occlusions of the image, arranged so that a few of the occlusions each completely hide the patch. We provide certified accuracy for CIFAR-10, Fashion MNIST, and MNIST, with a tunable tradeoff between the false-positive rate and certified accuracy. For CIFAR-10 and a 5 × 5 patch, we can provide certified accuracy for 43.8% of images, at a cost of only 1.6% in clean-image accuracy compared to the architecture we defend (or 0.1% compared to our own training of that architecture), including a 0.2% false-positive rate.
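A minimal sketch of the occlusion idea, assuming a sliding band of occluding columns and a simple agreement vote over the occluded copies; the actual certified procedure and its parameters are not reproduced here.

```python
# Sketch: classify several partially occluded copies of one image, where the
# occlusion pattern guarantees any small patch is fully hidden in at least a
# few copies. The mask layout and voting rule are illustrative only.
import torch

def occluded_predictions(model, x, mask_width: int, stride: int):
    """x: a single-image batch of shape (1, C, H, W)."""
    preds = []
    _, _, h, w = x.shape
    for left in range(0, w - mask_width + 1, stride):
        occluded = x.clone()
        occluded[:, :, :, left:left + mask_width] = 0.0  # hide a band of columns
        preds.append(model(occluded).argmax(dim=1).item())
    return torch.tensor(preds)

def vote(preds, agreement_threshold: int):
    """Majority vote over occluded copies; weak agreement is flagged."""
    values, counts = torch.unique(preds, return_counts=True)
    best = counts.argmax()
    # Flagging weak agreement is the source of false positives in this scheme.
    return values[best].item(), counts[best].item() < agreement_threshold
```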
