eScholarship
Open Access Publications from the University of California


UC Berkeley Electronic Theses and Dissertations

The Interplay between Sampling and Optimization

Abstract

We study the connections between optimization and sampling. In one direction, we study sampling algorithms from an optimization perspective. We will see how the Langevin MCMC algorithm can be viewed as a deterministic gradient descent in probability space, which enables a convergence analysis in KL divergence. We will also see how adding a momentum term improves the convergence rate of Langevin MCMC, much as momentum-based acceleration improves gradient descent. Finally, we will study the problem of sampling from non-logconcave distributions, which is roughly analogous to non-convex optimization.
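As a minimal illustration of the first direction, the unadjusted Langevin algorithm discretizes the Langevin diffusion: each step follows the gradient of the log-density plus injected Gaussian noise. The sketch below (function names, step size, and target are our own choices for illustration, not code from the dissertation) targets a standard Gaussian, for which grad log p(x) = -x.

```python
import numpy as np

def langevin_step(x, grad_log_p, h, rng):
    """One unadjusted-Langevin update: x + h * grad log p(x) + sqrt(2h) * N(0, I)."""
    return x + h * grad_log_p(x) + np.sqrt(2.0 * h) * rng.standard_normal(x.shape)

def sample_langevin(grad_log_p, x0, h, n_iters, rng):
    """Run the chain and return the trajectory as an (n_iters, dim) array."""
    x = np.asarray(x0, dtype=float)
    traj = np.empty((n_iters,) + x.shape)
    for k in range(n_iters):
        x = langevin_step(x, grad_log_p, h, rng)
        traj[k] = x
    return traj

# Target: standard Gaussian in 2D, so grad log p(x) = -x.
rng = np.random.default_rng(0)
traj = sample_langevin(lambda x: -x, np.zeros(2), h=0.1, n_iters=20000, rng=rng)
burned = traj[10000:]  # discard burn-in
print(burned.mean(), burned.var())  # roughly 0 and 1; ULA has an O(h) discretization bias
```

The gradient-descent analogy is visible in the update itself: dropping the noise term leaves plain gradient ascent on log p.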

Conversely, we will also study optimization algorithms from a sampling perspective. We will approximate stochastic gradient descent by a Langevin-like stochastic differential equation, and use this to explain some of its remarkable generalization properties.
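To make the second direction concrete, consider SGD on the quadratic f(x) = x^2 / 2 with additive gradient noise. With learning rate lr, one SGD step coincides in distribution with an Euler-Maruyama step of size dt = lr for the SDE dX = -X dt + sqrt(lr) * sigma * dW, whose stationary variance is lr * sigma^2 / 2. The toy below (our own illustration, not code from the dissertation) checks this numerically.

```python
import numpy as np

def sgd_quadratic(lr, sigma, n_steps, rng, x0=0.0):
    """SGD on f(x) = x^2 / 2 with noisy gradients g = x + sigma * N(0, 1).

    The update x <- x - lr * g matches (in distribution) an Euler-Maruyama
    step of size dt = lr for the SDE dX = -X dt + sqrt(lr) * sigma * dW.
    """
    x = x0
    traj = np.empty(n_steps)
    for k in range(n_steps):
        g = x + sigma * rng.standard_normal()  # stochastic gradient
        x = x - lr * g
        traj[k] = x
    return traj

rng = np.random.default_rng(1)
lr, sigma = 0.1, 1.0
traj = sgd_quadratic(lr, sigma, n_steps=50000, rng=rng)
empirical_var = traj[10000:].var()       # discard transient
theory_var = lr * sigma**2 / 2           # stationary variance of the SDE
print(empirical_var, theory_var)         # the two should be close for small lr
```

The small residual gap between the two numbers is the O(lr) discretization error of the diffusion approximation; it is this SDE picture that lets one reason about where SGD's iterates concentrate.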
