UC Berkeley Electronic Theses and Dissertations

Real World Robot Learning: Learned Rewards, Offline Datasets and Skill Re-Use

Abstract

Robots that can operate in open, unstructured environments and perform a wide range of tasks have been a long-standing goal of artificial intelligence. For such robots to operate effectively, they need the ability to (a) perceive the world around them through general-purpose on-board sensors like cameras, (b) generalize to new situations, and (c) improve their performance as they collect more data. In this thesis, we posit that deep reinforcement learning (deep RL) methods are well-positioned to meet these challenges, but are difficult to apply to real-world domains like robotics. The central conjecture that we study in this work is the following: while dominant robot learning pipelines often rely on hand-engineering certain components (such as reward functions and physics simulators), we can overcome many of these pipelines' bottlenecks by adopting a more data-driven perspective. We argue that, instead of hand-engineering reward functions, we should learn reward functions from data. Instead of learning mostly in a hand-designed simulation and then transferring learned policies to the real world, we should learn from real data and re-use as much past experience as possible to maintain sample efficiency. We show how this change of perspective greatly simplifies robot learning, and demonstrate results on a variety of real-world object manipulation tasks.
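To make the first idea concrete, below is a minimal sketch of one common way to learn a reward function from data: train a small image-based success classifier and use its confidence as the reward signal. This is an illustrative assumption, not the specific method developed in the thesis; the architecture, the stand-in tensors, and the names SuccessClassifier, train_reward, and learned_reward are all hypothetical.

```python
# Minimal sketch (hypothetical, not the thesis's exact method): learn a reward
# from labeled success/failure images instead of hand-engineering one.
import torch
import torch.nn as nn


class SuccessClassifier(nn.Module):
    """Small CNN mapping an image observation to a logit for P(task success)."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 1),
        )

    def forward(self, obs):
        return self.net(obs).squeeze(-1)  # (batch,) logits


def train_reward(model, images, labels, epochs=10, lr=1e-3):
    """Fit the classifier on (image, success-label) pairs."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        opt.step()
    return model


def learned_reward(model, obs):
    """Use classifier confidence as the per-observation RL reward."""
    with torch.no_grad():
        return torch.sigmoid(model(obs))


# Usage with random stand-in data; real data would be robot camera frames
# labeled as successful or unsuccessful outcomes.
images = torch.randn(64, 3, 64, 64)
labels = (torch.rand(64) > 0.5).float()
model = train_reward(SuccessClassifier(), images, labels)
print(learned_reward(model, images[:4]))
```

In the spirit of the abstract's second point, such a learned reward could then be queried on real observations as the robot accumulates experience, with all past data retained for re-use; the stand-in tensors above merely show the data flow.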
