UC Berkeley Electronic Theses and Dissertations

Robust Estimation Methods and Causal Inference for Time-Series

Abstract

Intensive longitudinal data, defined as time-varying data collected frequently over time, hold immense promise for advancing many healthcare and public health concerns. High-quality use of time-series data depends primarily on the efficient use of causal inference methodology, online machine learning, and sequential decision-making. Today, causal inference is central to the study of the most impactful scientific questions. For long data streams with elaborate dependence, online estimation has become a paramount technique for learning in real time, enabling estimation despite high computational cost. Finally, sequential decision-making is essential for practitioners and policy makers to learn when, in what context, and which exposures to assign to each person with the objective of optimizing a desired outcome. For example, one might need to decide, with some confidence, when to stop administering treatment if sufficient benefit is not observed, and what the best alternative is based on the patient’s current characteristics and adherence. Investing in the development of methodological approaches for intensive longitudinal data is paramount for the advancement of fields such as precision health, where we use data to learn which components of successful strategies are essential to their success, and how best to tailor personalized exposures to meet the specific needs and contexts of individuals, clinics, and communities. Careful consideration and new, modern statistical methods are necessary in order to establish causality, deal with dependence (across time and samples), and estimate relevant parts of the process without imposing unnecessary assumptions.

This dissertation focuses on the development of robust non/semi-parametric methods for complex parameters involving time-series data. Divided into five chapters, it presents new methodological approaches for (1) online ensemble machine learning and (2) (causal) sequential decision-making in time-dependent settings. Common themes throughout the chapters are (1) different dependence structures (time and/or network) in realistic statistical models and (2) leveraging fully personalized target parameters (single time-series, “N-of-1” approaches) versus relying on multiple samples. Ideas of studying asymptotics in time, in samples, or in both are also explored.

We start with the idea of developing an “N-of-1” online ensemble machine learning algorithm in Chapter 1, denoted the Personalized Online Super Learner. In particular, Chapter 1 studies an Online Super Learner which learns relevant parts of the likelihood while taking into account the amount of data collected (including dynamic enrollment), the stationarity of the time-series, and the mutual characteristics/network of a group of trajectories. Further exploring the “N-of-1” paradigm, in Chapters 2 and 3 we propose a causal approach which assigns treatment conditional on the current context of the patient, defined via conditional (or context-specific) causal effects. Let Y(t) denote the outcome, and Co(t) a fixed-dimensional context, at time t. An “N-of-1” statistical approach answers the following question: “Averaged over times t, given Co(t), what is the distribution of Y(t+s) had we intervened on the treatment nodes between t and t+s, s > 0, for sample i?” In Chapter 2, we propose a time-varying effect of interventions on multiple repeated nodes via the context-specific average treatment effect in observational settings, and study the theoretical properties of the proposed estimator. In Chapter 3, we propose a method that learns an optimal treatment allocation for a single individual, adapting the randomization mechanism for future time-point experiments. We demonstrate that one can learn the optimal context-defined rule based on a single sample, and thereby adjust the design at any point t with valid inference for the mean target parameter.
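
As a minimal sketch of the notation (the precise target parameter and identification assumptions are given in Chapter 2), one way to write the context-specific parameter implied by this question, assuming the treatment nodes are set to a fixed value a over N observed time points, is

\[
\Psi_a = \frac{1}{N}\sum_{t=1}^{N} E\big[\, Y_a(t+s) \mid Co(t) \,\big],
\]

where Y_a(t+s) denotes the counterfactual outcome at time t+s had the treatment nodes between t and t+s been set to a; the context-specific average treatment effect then contrasts \Psi_1 with \Psi_0.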

The high-intensity exposure adaptation available in time-series data shows tremendous potential for infectious disease surveillance and control. For instance, due to the highly dynamic nature of most epidemic diseases, surveillance methods must adapt quickly in order to target individuals at the highest risk of infection. Instead of only considering dependence through time, sequential decision-making for infectious disease must account for the overall status of the epidemic in the population, including multiple trajectories with possible network dependence. In Chapter 4, we describe an adaptive surveillance design which optimizes testing allocation among a class of testing schemes based on the current status of the epidemic. While Chapter 4 focuses on adaptive monitoring for a closed community, the statistical problem is addressed within a model for the data-generating distribution that is completely nonparametric. As such, it represents a first step towards the important goal of developing adaptive sequential designs for infectious disease surveillance in the general population under no assumptions on the dependence structure.
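
For concreteness, one schematic way to write the allocation step (the notation here is illustrative; the precise optimality criterion and class of designs are developed in Chapter 4) is as a constrained choice over testing schemes,

\[
g_t^{*} = \arg\max_{g \in \mathcal{G}} \; E_{\hat{P}_t}\Big[ \sum_{i=1}^{n} g_i(O(t))\, Y_i(t) \Big] \quad \text{subject to} \quad \sum_{i=1}^{n} g_i(O(t)) \le \kappa,
\]

where \mathcal{G} is the class of candidate testing schemes, g_i(O(t)) \in \{0,1\} indicates whether individual i is tested given the observed history O(t), Y_i(t) is that individual's latent infection status, \hat{P}_t is the current estimate of the epidemic state, and \kappa is the testing budget.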

Finally, in order to develop data-driven, effective sequential interventions, it is crucial to learn new policies (series of treatment decisions) using existing data and to understand their long-term efficacy. In Chapter 5, we propose and analyze a novel double robust estimator for the “off-policy” evaluation problem in reinforcement learning. We show empirically that our estimator uniformly outperforms existing off-policy evaluation methods, and we characterize the asymptotic distribution and rate of convergence of the proposed estimator.
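
For orientation, the standard step-wise doubly robust construction from the off-policy evaluation literature (given here as background; Chapter 5 develops its own estimator and theory) evaluates a target policy \pi_e from trajectories generated under a behavior policy \pi_b via the recursion

\[
\hat{V}_{DR}^{(t)} = \hat{V}(s_t) + \rho_t \big( r_t + \gamma\, \hat{V}_{DR}^{(t+1)} - \hat{Q}(s_t, a_t) \big), \qquad \rho_t = \frac{\pi_e(a_t \mid s_t)}{\pi_b(a_t \mid s_t)},
\]

where \hat{Q} and \hat{V} are estimated state-action and state value functions. The resulting value estimate is consistent if either the importance ratios \rho_t or the value model is correctly specified, which is the sense in which such estimators are doubly robust.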
