eScholarship
Open Access Publications from the University of California

UC Berkeley

UC Berkeley Electronic Theses and Dissertations

Online Robotic Skill Learning and Adaptation: Integrating Real-time Motion Planning, Model Learning, and Control

Abstract

Robots are designed to perform tasks with precision and efficiency, often surpassing human capabilities. They not only enhance productivity in manufacturing but also work in areas inaccessible or hazardous to humans. In homes, robotic vacuum cleaners and assistants make daily life more convenient. In healthcare, they assist in complex surgeries and patient care. Moreover, the integration of machine learning and large models further equips robots with intelligence, enabling them to learn from their environments and make informed decisions without additional human commands. However, because these models or policies are learned offline, it remains an open question whether robots can reliably interact with new scenarios online. To address this question, this dissertation develops efficient and robust methods that empower robots to learn and plan manipulation trajectories in real time, and to self-adjust their skills based on live sensor feedback.

The core of this dissertation is anchored in three principal challenges:

Efficient Trajectory Optimization: Model predictive control (MPC) plays an important role in online robotic planning. In MPC, a robot repeatedly optimizes its future actions over a finite horizon using a dynamics model, executes the first action, and re-plans from the updated state. The development of efficient formulations and algorithms for MPC and trajectory optimization is therefore crucial for real-time robot learning and skill adaptation. In Part I of the dissertation, we introduce algorithms and formulations tailored for real-time robotic trajectory optimization. Our focus is on enhancing the computational speed of the optimization process while ensuring the resulting trajectory maintains high quality. These advancements equip robots with the ability to swiftly plan and adjust their movements under novel environmental conditions.
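The receding-horizon idea behind MPC can be sketched in a few lines. The example below is a minimal illustration only — it assumes a toy double-integrator system and a random-shooting optimizer, not the formulations developed in the dissertation; all names and parameters are illustrative.

```python
import numpy as np

def rollout(x0, actions, dt=0.1):
    """Simulate toy double-integrator dynamics [position, velocity]."""
    x = np.array(x0, dtype=float)
    traj = []
    for u in actions:
        x = x + dt * np.array([x[1], u])
        traj.append(x.copy())
    return np.array(traj)

def mpc_step(x0, goal, horizon=15, samples=256):
    """Random-shooting trajectory optimization: sample candidate action
    sequences, score each rollout with the dynamics model, and return
    only the FIRST action of the best sequence (receding horizon)."""
    rng = np.random.default_rng(0)  # fixed seed keeps each step deterministic
    seqs = rng.uniform(-1.0, 1.0, size=(samples, horizon))
    costs = []
    for seq in seqs:
        traj = rollout(x0, seq)
        # Penalize distance to the goal along the horizon plus control effort.
        costs.append(np.sum((traj[:, 0] - goal) ** 2) + 0.01 * np.sum(seq ** 2))
    return seqs[int(np.argmin(costs))][0]

# Closed loop: re-plan at every step, execute only the first action.
x = np.array([0.0, 0.0])
for _ in range(60):
    u = mpc_step(x, goal=1.0)
    x = x + 0.1 * np.array([x[1], u])
print(x[0])  # position should end up close to the goal at 1.0
```

Because the optimizer runs inside every control cycle, its speed directly bounds the control rate — which is why the efficient formulations in Part I matter for real-time use.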

Adaptive Model Learning for Deformable Object Manipulation: Building on the efficient MPC formulation presented in Part I, the second part of the dissertation shifts its focus to the real-time enhancement of robotic behaviors. This enhancement involves dynamically adjusting the robot's dynamics model based on real-world sensor data. Part II specifically addresses online model learning for the manipulation of deformable objects, such as ropes and fabrics. These tasks, characterized by their complex and non-linear state changes, present substantial challenges for model-based methods, yet are critically important for everyday applications. In this part of the dissertation, we introduce algorithms capable of learning the dynamics of these objects and effectively manipulating them. Our research emphasizes integrating real-time sensory feedback with machine learning algorithms, a synergy crucial for continuously refining and updating the dynamics model. We illustrate how such real-time adjustments enable robotic systems to modify their manipulation strategies effectively.
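One generic way to refine a dynamics model online from streaming transitions is recursive least squares. The sketch below is an illustrative stand-in, assuming a simple linear model x_next ≈ A @ [x, u]; it is not the learned deformable-object model used in the dissertation, and the system in the demo is hypothetical.

```python
import numpy as np

class OnlineLinearDynamics:
    """Incrementally fit x_next ≈ A @ [x, u] with recursive least squares,
    so the model keeps adapting as new sensor transitions stream in."""

    def __init__(self, state_dim, action_dim, forget=0.99):
        n = state_dim + action_dim
        self.A = np.zeros((state_dim, n))
        self.P = 100.0 * np.eye(n)  # parameter covariance (large = uncertain)
        self.forget = forget        # < 1.0 discounts stale observations

    def update(self, x, u, x_next):
        z = np.concatenate([x, u])
        Pz = self.P @ z
        k = Pz / (self.forget + z @ Pz)   # RLS gain
        err = x_next - self.A @ z         # one-step prediction error
        self.A += np.outer(err, k)        # correct model toward observation
        self.P = (self.P - np.outer(k, Pz)) / self.forget

    def predict(self, x, u):
        return self.A @ np.concatenate([x, u])

# Stream transitions from a hypothetical true linear system and adapt.
true_A = np.array([[1.0, 0.1, 0.0],
                   [0.0, 1.0, 0.1]])
model = OnlineLinearDynamics(state_dim=2, action_dim=1)
rng = np.random.default_rng(1)
for _ in range(200):
    x, u = rng.normal(size=2), rng.normal(size=1)
    model.update(x, u, true_A @ np.concatenate([x, u]))
print(np.max(np.abs(model.A - true_A)))  # should be near zero
```

The forgetting factor is what makes the fit adaptive rather than batch: if the object's behavior drifts (a rope slackens, a fabric folds), old transitions are discounted and the model tracks the change.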

Real-time Control Policy Adaptation for Contact-Rich Manipulation Tasks: While MPC and adaptive model learning can provide robots with more reliable trajectories, another crucial aspect of real-time execution is low-level force control, which computes the motor torques for reliable environmental interaction. Part III of the dissertation concentrates on optimizing control policies for robotic systems engaged in contact-rich manipulation tasks such as assembly, pivoting, and screwing. The main emphasis is on the ability of robotic systems to dynamically adjust their control policies in response to force and torque feedback. This level of adaptation is critical for completing tasks that require delicate and precise physical interactions. In doing so, we demonstrate improved robot motion under rich contact conditions.
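As a toy illustration of adapting motion from force feedback — not the learned policies of Part III — a simple integral force regulator nudges the commanded position until the measured contact force matches a target. The spring-contact environment and all gains below are assumptions chosen for the sketch.

```python
def contact_force(x, x_surface=0.05, k_env=1000.0):
    """Hypothetical 1-D environment: spring-like contact past the surface."""
    return k_env * max(0.0, x - x_surface)

def regulate_force(f_des=5.0, steps=50, gain=1e-3):
    """Integral force control: move the commanded position along the
    force error until the measured force reaches the target."""
    x = 0.0
    for _ in range(steps):
        f = contact_force(x)       # force/torque sensor reading
        x += gain * (f_des - f)    # adapt the command from feedback
    return x, contact_force(x)

x_cmd, f = regulate_force()
print(f)  # measured force settles at the 5.0 target
```

With gain * k_env = 1 the loop converges in a single step once contact is made; a mismatched environment stiffness merely slows or damps convergence, which is exactly the kind of uncertainty that force-feedback adaptation absorbs.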
