eScholarship
Open Access Publications from the University of California

UC Davis Electronic Theses and Dissertations

Towards Robust and Fair Machine Learning

Abstract

Recent advances in Machine Learning (ML) and Deep Learning (DL) have led to the widespread adoption of these models across a variety of application pipelines. Despite these performance improvements, however, ML/DL models have been shown to be vulnerable to adversarial inputs that can degrade their functionality. Concerns over such issues have prompted researchers to study model robustness from multiple perspectives, including privacy, fairness, security, and interpretability. In this thesis, we build upon these ideas of robustness by investigating adversarial and social robustness across a number of learning models and problem settings. We first study the adversarial robustness of unsupervised clustering models by proposing novel poisoning and evasion attacks against both deep and classical models. We then study the social robustness of models in the context of fairness, proposing the antidote data problem for fair clustering as well as the fair video summarization problem. Finally, we investigate two problems at the intersection of adversarial and social robustness: a new robust fair clustering method that jointly ensures adversarial and social robustness, and data selection approaches that improve interpretability and optimize the utility, fairness, and robustness of classification models. Through the concepts and ideas proposed in this thesis, we aim to lay the groundwork for analyzing and ensuring the robustness of future ML/DL models.
