UC Berkeley Electronic Theses and Dissertations

Inferring Structural Models of Travel Behavior: An Inverse Reinforcement Learning Approach

Abstract

Large volumes of digital human trajectories at high spatiotemporal resolution have become increasingly available to researchers and public entities. Derived from anonymized cellular records and social network postings, fine-grained mobility traces present exciting opportunities for longitudinal studies of daily travel and activity planning decisions. Through the application of automated and efficient data-mining techniques, researchers in machine learning, transportation engineering, and related disciplines have been able to use these movement microdata to model and forecast daily traffic conditions at metropolitan scales with unprecedented accuracy (González and Hidalgo, 2008; Lin, Z. et al., 2017; Widhalm et al., 2015; Yin, M. et al., 2017).

However, state-of-the-art machine-learning and discrete-choice frameworks do not consider the dynamics of daily mobility decisions at the individual level. Existing methods also do not take into account strategic, interdependent interactions between representative agents, complicating the cost-benefit analysis of innovative decentralized policy instruments such as induced peer-to-peer influence. Interpretable structural models that can provide consistent and disaggregate estimates of replanning behavior are needed in order to evaluate the impacts of these novel regulatory measures.

Therefore, in order to take better advantage of future and emerging technologies as tools to forge cooperative and sustainable relationships between citizens, governments, and the built environment, this thesis develops a framework for data-driven city management that bridges established travel demand planning practices with innovations in big data, reinforcement learning, and strategic decision-making. The work described herein comprises three major components. First, we develop a two-stage game-theoretic model of peer pressure to investigate feedback between the social, geographic, and temporal dimensions of agent choices in a hyper-realistic microsimulation of urban travel behavior. Second, in order to learn representations of dynamic agent utility functions, we extend inverse reinforcement learning (IRL) algorithms to novel activity and travel planning environments and estimate the associated structural parameters. Finally, we investigate the capacity of modern high-dimensional imitation learning techniques to train flexible and accurate models of schedule composition and activity duration. Results from applications of the empirical methods developed herein suggest that our contributions could effectively complement the microsimulation and discrete choice modeling techniques used in disaggregate urban infrastructure planning frameworks such as activity-based transportation demand models.
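To give a sense of the general estimation idea behind the IRL component, the minimal sketch below implements textbook maximum-entropy IRL with a linear-in-features utility on a toy tabular choice environment. It is not the thesis's actual activity-travel environment or estimator: the state and action counts, random transition model, feature map, and demonstration counts are all hypothetical stand-ins, and the snippet only illustrates how structural utility weights can be recovered by matching expert and model feature expectations.

import numpy as np

# Toy maximum-entropy IRL sketch (illustrative only; all sizes and data are
# hypothetical stand-ins, not the thesis's activity-travel environment).
n_states, n_actions, n_features = 8, 3, 4
rng = np.random.default_rng(0)

# Hypothetical transition model P[s, a, s'] and state feature map phi[s].
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
phi = rng.normal(size=(n_states, n_features))

# Hypothetical expert demonstrations, summarized as state visitation counts.
expert_visits = rng.integers(1, 10, size=n_states).astype(float)
expert_fe = (expert_visits / expert_visits.sum()) @ phi  # expert feature expectation

theta = np.zeros(n_features)   # structural utility parameters to be estimated
gamma, lr = 0.95, 0.1

def soft_value_iteration(reward, n_iters=100):
    """Soft (max-ent) value iteration; returns a stochastic policy pi[s, a]."""
    V = np.zeros(n_states)
    Q = np.zeros((n_states, n_actions))
    for _ in range(n_iters):
        Q = reward[:, None] + gamma * (P @ V)                 # Q[s, a]
        Qmax = Q.max(axis=1, keepdims=True)                   # for numerical stability
        V = (Qmax + np.log(np.exp(Q - Qmax).sum(axis=1, keepdims=True))).ravel()
    return np.exp(Q - V[:, None])

def expected_visitation(policy, n_iters=50):
    """Average state visitation frequencies induced by the current policy."""
    d = np.ones(n_states) / n_states
    total = np.zeros(n_states)
    for _ in range(n_iters):
        total += d
        d = np.einsum("s,sa,sat->t", d, policy, P)
    return total / n_iters

for _ in range(200):
    reward = phi @ theta                         # linear-in-features utility
    pi = soft_value_iteration(reward)
    model_fe = expected_visitation(pi) @ phi     # model feature expectation
    # Gradient ascent on the max-ent log-likelihood of the demonstrations.
    theta += lr * (expert_fe - model_fe)

print("estimated utility weights:", theta)

In this toy setting the learned weights theta play the role of the structural parameters mentioned above: they define the reward (utility) whose induced behavior best reproduces the observed visitation pattern.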
