UC Berkeley Electronic Theses and Dissertations

Designing Explainable Autonomous Driving System for Trustworthy Interaction

Abstract

The past decade has witnessed significant breakthroughs in autonomous driving technologies. We are heading toward an intelligent and efficient transportation system in which human error is eliminated. While excited about the emergence of increasingly intelligent autonomous vehicles, the public has also raised concerns about their reliability. Modern autonomous driving systems usually adopt black-box deep-learning models for multiple functional modules (e.g., perception, behavior prediction, behavior generation). The opaque nature of neural networks and the complexity of the overall system architecture make it extremely difficult to understand the behavior of the system as a whole, which prevents humans from confidently sharing the road and interacting with autonomous vehicles. This motivates the design of a more transparent system as a foundation for trustworthy interaction between humans and autonomous vehicles.

This dissertation concerns the design of an explainable autonomous driving system, leveraging the strengths of explainable artificial intelligence, control, and causality. In particular, we focus on the behavior system of an autonomous vehicle, which plays a crucial role in its interaction with human road participants. The work consists of two parts. In Part I, we explore methods to improve model interpretability. The goal is to make the model more intelligible to humans at the design stage, which is achieved by introducing hard or soft constraints formulated from domain knowledge. We demonstrate how to encode domain knowledge about social interaction into structured reward functions (Chapter 2) and pseudo labels (Chapter 3), and how to utilize them to induce interpretable driving behavior models. We also introduce an interpretable and transferable hierarchical driving policy that combines deep learning with robust model-based control (Chapter 4). In Part II, we explore the use of post hoc explanation techniques for diagnosing model behavior. We present two case studies: one that utilizes sparse graph attention to diagnose interaction modeling in behavior prediction (Chapter 5), and one that develops a Shapley-value-based method to study the inherent causality issue in conditional behavior prediction (Chapter 6).
