eScholarship
Open Access Publications from the University of California

UC San Diego Electronic Theses and Dissertations

Intelligent Human-in-the-Loop Vehicular Automation with Real-Time Vision Models

Abstract

Modern vehicular automation is enabled by complex interactive systems composed of discrete blocks that accomplish specific tasks. These blocks are responsible for perceiving the environment surrounding the vehicle, predicting the future states and intents of other agents in its vicinity, planning the vehicle's path with an eventual goal in mind, and finally, providing the control signals and actuations that achieve this goal. While such systems are widely adopted, they fail to consider a key component of the driving process: the human occupants inside the vehicle. This dissertation proposes that to ensure a safe, smooth, and enjoyable travel experience, the human driver and the vehicular automation must be aware of each other's states and impending failures, that is, share a relationship built on earned trust rather than blind faith.
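As an illustrative sketch (not drawn from the dissertation itself), the block-wise pipeline described above can be read as a sequence of stages, each consuming the previous stage's output. The class and function names, data types, and stub logic below are hypothetical placeholders.

```python
# A minimal, hypothetical sketch of a perception -> prediction -> planning -> control loop.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Detection:
    track_id: int
    position: Tuple[float, float]   # (x, y) in the ego frame, meters
    velocity: Tuple[float, float]   # (vx, vy) in m/s

@dataclass
class PredictedTrajectory:
    track_id: int
    waypoints: List[Tuple[float, float]]  # forecast positions over the horizon

@dataclass
class ControlCommand:
    steering: float   # rad
    throttle: float   # normalized [0, 1]
    brake: float      # normalized [0, 1]

def perceive(sensor_frame) -> List[Detection]:
    """Looking-out: detect and track agents around the vehicle (stubbed)."""
    return [Detection(track_id=0, position=(12.0, 3.5), velocity=(-1.0, 0.0))]

def predict(detections: List[Detection], horizon: int = 5) -> List[PredictedTrajectory]:
    """Forecast future states of surrounding agents (constant-velocity placeholder)."""
    return [
        PredictedTrajectory(
            d.track_id,
            [(d.position[0] + d.velocity[0] * t, d.position[1] + d.velocity[1] * t)
             for t in range(1, horizon + 1)],
        )
        for d in detections
    ]

def plan(predictions: List[PredictedTrajectory],
         goal: Tuple[float, float]) -> List[Tuple[float, float]]:
    """Plan an ego path toward the goal while avoiding predicted conflicts (stubbed)."""
    return [(0.0, 0.0), goal]

def control(ego_path: List[Tuple[float, float]]) -> ControlCommand:
    """Turn the planned path into actuation signals (stubbed)."""
    return ControlCommand(steering=0.0, throttle=0.2, brake=0.0)

def automation_step(sensor_frame, goal: Tuple[float, float]) -> ControlCommand:
    """One tick of the conventional looking-out automation loop."""
    detections = perceive(sensor_frame)
    predictions = predict(detections)
    ego_path = plan(predictions, goal)
    return control(ego_path)

print(automation_step(sensor_frame=None, goal=(50.0, 0.0)))
```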

To codify this requirement, we employ the Looking-In & Looking-Out (LILO) approach, in which systems simultaneously model the inside and the outside of the vehicle. In addition to the set of "looking-out" tasks used in conventional automation, we propose a parallel set of "looking-in" tasks that model and analyze the vehicle's interior. The outputs of these parallel systems can then be integrated to provide useful controls based on a more complete understanding of the circumstances and the environment.

This dissertation presents the models and algorithms we have developed to accomplish the individual tasks that make up the system described above. This includes our research on real-time models for tasks such as object detection, multi-object tracking, and maneuver/trajectory prediction. We also introduce new approaches for the relatively under-explored "looking-in" tasks, which we hope will inspire many such studies in the future. Some of our contributions in this space include real-time models that analyze different aspects of the driver's state, data augmentation and automatic labeling schemes for tasks with limited data, and solutions to other common hindrances. Finally, we provide a concrete example of how the LILO framework can be used in practice, where the driver's state estimated by "looking-in" is used in conjunction with metrics computed by "looking-out" to initiate safer and smoother control transitions.
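To make that closing example concrete, a driver-readiness estimate from "looking-in" could be fused with a scene-risk metric from "looking-out" to gate a control transition. The sketch below is a minimal assumption-laden illustration; the function names, thresholds, and decision logic are not the dissertation's actual implementation.

```python
# Hypothetical fusion of looking-in and looking-out signals to decide a control transition.
from enum import Enum, auto

class TransitionDecision(Enum):
    HAND_OVER_NOW = auto()    # driver ready: transfer control
    ALERT_AND_WAIT = auto()   # driver not ready, scene manageable: alert and keep automating
    SAFE_STOP = auto()        # neither driver nor automation can handle the scene

def decide_transition(driver_readiness: float,
                      scene_risk: float,
                      readiness_threshold: float = 0.7,
                      risk_threshold: float = 0.8) -> TransitionDecision:
    """Fuse a looking-in readiness score in [0, 1] with a looking-out risk score in [0, 1].
    Thresholds are illustrative placeholders."""
    if driver_readiness >= readiness_threshold:
        return TransitionDecision.HAND_OVER_NOW
    if scene_risk < risk_threshold:
        # Automation can keep operating safely while the driver is re-engaged.
        return TransitionDecision.ALERT_AND_WAIT
    # Driver is not ready and the scene exceeds the automation's capability.
    return TransitionDecision.SAFE_STOP

# Example: a distracted driver in a moderately risky scene triggers an alert.
print(decide_transition(driver_readiness=0.35, scene_risk=0.55))
```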
