eScholarship
Open Access Publications from the University of California

UCLA Electronic Theses and Dissertations

Optimizing cancer screening with POMDPs

Abstract

Current clinical decision-making relies heavily on both the experience of the physician and the recommendations of evidence-based practice guidelines, the latter often informed by population-level policies. Yet with the heightened complexity of patient care arising from newer types of data and longitudinal observations (e.g., from the electronic health record, EHR), as well as the goal of more individually tailored healthcare, medical decision-making is increasingly complicated. This challenge is particularly acute in cancer, with emergent techniques for early detection and personalized treatment. This research establishes an informatics-based framework to inform optimal cancer screening through sequential decision-making methods. This dissertation develops tools to formulate a partially observable Markov decision process (POMDP) model, enabling each component to be learned from a dataset: dynamic Bayesian networks (DBNs) are embedded in the POMDP learning process to estimate transition and observation probabilities; inverse reinforcement learning is used to learn a reward function from experts' prior decisions; and risk prediction models are employed to compute individualized initial beliefs about disease state. The result is a comprehensive approach to implementing sequential decision-making agents. These methods are validated using large datasets from lung and breast cancer screening efforts, demonstrating the potential to help tailor and improve early cancer prediction while reducing false-positive tests.
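
To make the POMDP components concrete, the sketch below assembles a toy two-state screening model in Python and performs a single Bayesian belief update after a hypothetical positive test. The state names, transition and observation probabilities, and prior belief are placeholder assumptions for illustration only; in the dissertation these quantities are learned from data (DBNs for transition and observation probabilities, inverse reinforcement learning for the reward function, and risk prediction models for the individualized initial belief), and the reward and policy components are omitted here.

```python
import numpy as np

# Hypothetical two-state screening POMDP: states = (healthy, cancer).
# All numbers below are illustrative placeholders, not learned values.
STATES = ["healthy", "cancer"]
ACTIONS = ["wait", "screen"]
OBS = ["negative", "positive"]

# T[a][s, s']: probability of moving from state s to s' under action a.
T = {
    "wait":   np.array([[0.98, 0.02], [0.00, 1.00]]),
    "screen": np.array([[0.98, 0.02], [0.00, 1.00]]),
}

# O[a][s', o]: probability of observing o in next state s' after action a.
O = {
    "wait":   np.array([[1.00, 0.00], [1.00, 0.00]]),  # no test, no signal
    "screen": np.array([[0.90, 0.10], [0.15, 0.85]]),  # imperfect test
}


def belief_update(b, a, o_idx):
    """Bayes filter: b'(s') is proportional to O[a][s', o] * sum_s T[a][s, s'] * b(s)."""
    predicted = T[a].T @ b               # predict the next-state distribution
    unnorm = O[a][:, o_idx] * predicted  # weight by the observation likelihood
    return unnorm / unnorm.sum()         # renormalize to a valid belief


# Example: start from a risk-model-style prior and observe a positive screen.
b0 = np.array([0.97, 0.03])              # individualized initial belief
b1 = belief_update(b0, "screen", OBS.index("positive"))
print(dict(zip(STATES, b1.round(3))))
```

In a full pipeline of this kind, the updated belief would feed a policy (e.g., from a POMDP solver) that chooses the next screening action, and the learned reward function would determine which trade-off between missed cancers and false positives that policy optimizes.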
