UC Berkeley Electronic Theses and Dissertations

New approaches to robustness and learning in data-driven portfolio optimization

Abstract

We develop two new approaches to robustness and learning in data-driven portfolio optimization, a problem well known for its sensitivity to model assumptions and data variability.

First, we consider the data-driven mean-CVaR problem. For this problem, we introduce and investigate performance-based regularization (PBR), a generalization of standard regularization techniques from statistics and machine learning, and an alternative to worst-case approaches for improving solution robustness. We assume the available log-return data are iid and detail the approach for two cases: nonparametric and parametric (the log-return distribution belongs to the elliptical family). We derive the asymptotic behavior of the nonparametric PBR solution, which yields insight into the effect of penalization and justifies the parametric PBR method. We also show via simulations that the PBR methods produce efficient frontiers that are, on average, closer to the population efficient frontier than those of the empirical approach to the mean-CVaR problem, and with less variability.
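As a point of reference for the data-driven problem, the sketch below sets up the standard sample-average (Rockafellar-Uryasev) mean-CVaR program in cvxpy and adds a variance-style penalty as a stand-in for a performance-based regularizer. The simulated data, the choice of penalty, and all parameter values (beta, R_target, lam) are illustrative assumptions, not the dissertation's exact PBR formulation.

```python
import numpy as np
import cvxpy as cp

# Illustrative sketch only: the empirical (sample-average) mean-CVaR problem in
# Rockafellar-Uryasev form, with a hypothetical variance-style penalty standing in
# for a performance-based regularizer. All names and values are assumptions.

rng = np.random.default_rng(0)
n_obs, n_assets = 250, 5
X = rng.normal(0.0005, 0.01, size=(n_obs, n_assets))  # simulated log-return sample

beta = 0.95        # CVaR confidence level (assumed)
R_target = 0.0003  # required sample mean return (assumed)
lam = 10.0         # regularization weight (assumed)

w = cp.Variable(n_assets)   # portfolio weights
alpha = cp.Variable()       # auxiliary variable (VaR level in the RU formulation)

losses = -X @ w
# Sample-average CVaR estimator (Rockafellar-Uryasev reformulation)
cvar_hat = alpha + cp.sum(cp.pos(losses - alpha)) / (n_obs * (1 - beta))
# Hypothetical penalty: sample variance of the portfolio return
port_ret = X @ w
penalty = cp.sum_squares(port_ret - cp.sum(port_ret) / n_obs) / n_obs

constraints = [cp.sum(w) == 1, X.mean(axis=0) @ w >= R_target]
prob = cp.Problem(cp.Minimize(cvar_hat + lam * penalty), constraints)
prob.solve()

print("weights:", np.round(w.value, 3), " CVaR estimate:", round(float(cvar_hat.value), 5))
```

Setting lam = 0 recovers the plain empirical mean-CVaR solution, which is the baseline the abstract's simulation comparison refers to.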

Next, we consider portfolio optimization under parameter uncertainty and propose optimizing a relative regret objective. Relative regret evaluates a portfolio by comparing its return to a family of benchmarks, where each benchmark is the wealth of a fictitious investor who invests optimally given knowledge of the model parameters; it is a natural objective when there is concern about parameter uncertainty or model ambiguity. We analyze this problem using convex duality and show that it is equivalent to a Bayesian problem in which the Lagrange multipliers play the role of the prior distribution and the learning model involves Bayesian updating of these multipliers. This Bayesian problem is unusual in that the prior distribution is chosen endogenously, by solving the dual optimization problem for the Lagrange multipliers, and in that the objective function involves the family of benchmarks from the relative regret problem. These results show that regret is a natural means by which robust decision making and learning can be combined.
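To fix ideas, the following schematic uses notation chosen here for illustration (W_T^pi, V_T^theta, u, lambda are not the dissertation's symbols) and assumes the convexity and compactness needed to exchange the order of optimization; it only indicates how the weights obtained by dualizing the worst case over parameters can be read as a prior distribution.

```latex
% Schematic only: notation and the exact max-min form below are illustrative assumptions.
% W_T^{\pi}: investor's terminal wealth under strategy \pi
% V_T^{\theta}: wealth of the fictitious investor who knows the parameter \theta
% u: utility function;  \Delta: probability measures (priors) over \theta

% Relative regret compares the investor to every benchmark simultaneously:
\[
  \sup_{\pi}\ \inf_{\theta}\
  \mathbb{E}_{\theta}\!\left[\, u\!\left(W_T^{\pi}\right) - u\!\left(V_T^{\theta}\right) \right].
\]

% Dualizing the worst case over \theta (assuming sup and inf can be exchanged)
% replaces it with a choice of weights \lambda on \theta:
\[
  \inf_{\lambda \in \Delta}\ \sup_{\pi}\
  \int \mathbb{E}_{\theta}\!\left[\, u\!\left(W_T^{\pi}\right) - u\!\left(V_T^{\theta}\right) \right]
  \lambda(\mathrm{d}\theta).
\]

% For fixed \lambda, the inner problem is a Bayesian expected-utility problem with
% prior \lambda over the unknown parameter; \lambda is determined endogenously by the
% outer (dual) minimization, and the benchmark terms u(V_T^{\theta}) are inherited
% from the relative regret objective.
```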
