Defending Against Adversarial Attacks With Low-Rank Factorization

Abstract

Despite their popularity and success in recent years, deep learning architectures have been shown to be vulnerable to adversarial attacks. Deploying machine learning methods that are vulnerable to such attacks has raised many concerns, especially in safety-critical domains. Self-driving vehicles and medical imaging are examples of systems where perturbations in the data can have dramatic and irreparable consequences, as a malfunction may result in death or serious injury. Fake product reviews are another example of deliberate perturbations, fooling people into purchasing counterfeit and potentially dangerous products. Therefore, understanding these vulnerabilities and designing robust defense mechanisms against perturbations could benefit society at large. Recent studies have addressed these concerns by analyzing the vulnerability of machine learning algorithms and developing defense techniques that are more robust to attacks. However, the general properties of adversarial attacks have not been investigated. In this dissertation, we analyze the properties of adversarial attacks in different domains, including graphs, images, and recommender systems. Our goal is to identify a unifying theme and propose a general defense mechanism applicable to various domains.

Attackers pursue their malicious goals by injecting perturbations into the data while attempting to remain unnoticed. In the image domain, attackers add noise to the high-frequency spectrum of images, where it is not perceptible to the human eye. Unlike images, in domains such as graphs and recommender systems, changes cannot be easily detected by inspecting the data; instead, attackers preserve key features and measurable statistics of the original data to remain unnoticed, for example the node degree distribution in graphs or the rating distribution in recommender systems. These imperceptibility constraints prevent attackers from making significant changes to the data, so the footprint of the attack is subtle compared to that of the overall data structure. In other words, the adversary's impact is mainly noticeable in the high-frequency spectrum of the data.

In this dissertation, we identify a unifying theme in adversarial attacks across different application domains (graph classification, image classification, and recommender systems). More specifically, we observe that in all three domains the attack manifests in the high-rank components of the matrix or tensor representing the data. Motivated by this unifying theme, we propose low-rank solutions in different domains to alleviate the negative impact of adversaries and defend against the attacks. Finally, as a case study, we propose a low-rank tensor-based method to improve the quality of complementary basket recommendations, where the goal is to recommend products that are frequently purchased together with the items in the user's shopping cart.
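To make the low-rank defense idea concrete, the following is a minimal sketch, not the dissertation's exact method: it reconstructs a possibly attacked data matrix (e.g., a graph adjacency or user-item rating matrix) from its top-r singular components via truncated SVD, discarding the high-rank part where, per the thesis above, the attack footprint concentrates. The rank r and the random stand-in matrix are illustrative assumptions, not values taken from the dissertation.

```python
import numpy as np

def low_rank_reconstruction(X: np.ndarray, r: int) -> np.ndarray:
    """Project X onto its top-r singular directions.

    Keeping only the r largest singular values discards the
    high-rank components where adversarial perturbations are
    observed to concentrate.
    """
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

# Example: denoise a perturbed matrix before feeding it to the
# downstream model. r is a hyperparameter; 10 is illustrative only.
A_perturbed = np.random.rand(100, 100)  # stand-in for attacked data
A_denoised = low_rank_reconstruction(A_perturbed, r=10)
```

The same preprocessing step can in principle sit in front of any downstream learner, trading a small loss of benign signal for the removal of high-rank adversarial noise; the appropriate rank r is data-dependent.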
