UC Berkeley Electronic Theses and Dissertations

Learning to Learn with Gradients

Abstract

Humans have a remarkable ability to learn new concepts from only a few examples and to quickly adapt to unforeseen circumstances. To do so, they build upon their prior experience and prepare to adapt, combining previous observations with small amounts of new evidence for fast learning. In most machine learning systems, however, there are distinct train and test phases: training consists of updating the model using data, and at test time the model is deployed as a rigid decision-making engine. In this thesis, we discuss gradient-based algorithms for learning to learn, or meta-learning, which aim to endow machines with flexibility akin to that of humans. Instead of deploying a fixed, non-adaptable system, these meta-learning techniques explicitly train for the ability to adapt quickly, so that, at test time, they can learn rapidly when faced with new scenarios.

To study the problem of learning to learn, we first develop a clear and formal definition of the meta-learning problem, its terminology, and the desirable properties of meta-learning algorithms. Building upon these foundations, we present a class of model-agnostic meta-learning methods that embed gradient-based optimization into the learner. Unlike prior approaches to learning to learn, this class of methods focuses on acquiring a transferable representation rather than a good learning rule. As a result, these methods inherit a number of desirable properties from using a fixed optimization procedure as the learning rule, while still maintaining full expressivity, since the learned representations can control the update rule.
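
To make the mechanism concrete, here is a minimal sketch of this kind of gradient-based meta-learning (MAML-style) in JAX. The toy linear model, the mean-squared-error loss, the inner step size alpha, and the task layout (support set plus query set) are all illustrative assumptions for this sketch, not the thesis's experimental setup:

```python
# Minimal sketch of model-agnostic meta-learning with a single inner
# gradient step. Assumes a toy linear model and MSE loss (illustrative).
import jax
import jax.numpy as jnp

def loss(params, x, y):
    # Hypothetical model: one linear layer; loss is mean squared error.
    preds = x @ params["w"] + params["b"]
    return jnp.mean((preds - y) ** 2)

def inner_update(params, x, y, alpha=0.01):
    # One gradient step on a task's support set. The learning rule is
    # plain gradient descent; only the initialization is meta-learned.
    grads = jax.grad(loss)(params, x, y)
    return jax.tree_util.tree_map(lambda p, g: p - alpha * g, params, grads)

def meta_loss(params, task):
    # Adapt on the support set, then evaluate on the query set.
    x_s, y_s, x_q, y_q = task
    adapted = inner_update(params, x_s, y_s)
    return loss(adapted, x_q, y_q)

# Example meta-gradient computation for one task (shapes are illustrative):
key = jax.random.PRNGKey(0)
params = {"w": jax.random.normal(key, (3,)), "b": jnp.zeros(())}
task = (jnp.ones((5, 3)), jnp.ones((5,)), jnp.ones((4, 3)), jnp.ones((4,)))
meta_grads = jax.grad(meta_loss)(params, task)
```

The point to notice is that jax.grad(meta_loss) differentiates through the inner gradient step, so meta-training shapes the initial representation rather than replacing the fixed optimization used as the learning rule.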

We show how these methods can be extended for applications in motor control by combining elements of meta-learning with techniques for deep model-based reinforcement learning, imitation learning, and inverse reinforcement learning. By doing so, we build simulated agents that can adapt in dynamic environments, enable real robots to learn to manipulate new objects by watching a video of a human, and allow humans to convey goals to robots with only a few images. Finally, we conclude by discussing open questions and future directions in meta-learning, aiming to identify the key shortcomings and limiting assumptions of our existing approaches.
