UC Merced Electronic Theses and Dissertations

Motion Capture Based Animation for Virtual Human Demonstrators: Modeling, Parameterization and Planning

Abstract

A broad range of character animation techniques has been developed to date, and impressive results have been achieved in recent years. The main approaches pursued can be categorized as physics-based, algorithm-based, or data-based. High-quality animation today is still largely data-based and is achieved through motion capture technologies. While they achieve great realism, current solutions still suffer from limited character control, limited ability to address cluttered environments, and disconnection from higher-level constraints and task-oriented specifications. This dissertation addresses these limitations and achieves an autonomous character that is able to demonstrate, instruct, and deliver information to observers in a realistic, human-like way.

The first part of this thesis addresses motion synthesis with a simple example-based motion parameterization algorithm for satisfying generic spatial constraints at interactive frame rates. The approach directly optimizes blending weights for a consistent set of example motions until the specified constraints are best met. An in-depth analysis compares the proposed approach with three other popular blending techniques, uncovering the pros and cons of each method. The algorithm has also been integrated into an immersive motion modeling platform, which enables programming of generic actions by direct demonstration of example motions.
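
As a rough illustration of the parameterization step, the sketch below optimizes convex blend weights so that a blended end-effector position best meets a spatial target. The linear model of how blending affects the constraint, the sample data, and the use of SciPy's SLSQP solver are assumptions made for this example; they are not the dissertation's exact formulation.

    import numpy as np
    from scipy.optimize import minimize

    # Hypothetical data: the end-effector position each example motion
    # reaches at the constrained frame (one row per example).
    example_targets = np.array([
        [0.30, 1.10, 0.40],
        [0.55, 0.95, 0.20],
        [0.10, 1.30, 0.35],
        [0.45, 1.05, 0.60],
    ])

    def solve_blend_weights(target, examples):
        """Find convex blend weights whose weighted combination of the
        example constraint values best approximates the spatial target."""
        n = len(examples)

        def error(w):
            blended = w @ examples  # constraint value under linear blending
            return np.sum((blended - target) ** 2)

        w0 = np.full(n, 1.0 / n)  # start from a uniform blend
        result = minimize(
            error, w0, method="SLSQP",
            bounds=[(0.0, 1.0)] * n,  # keep weights non-negative
            constraints=[{"type": "eq",  # weights must sum to one
                          "fun": lambda w: np.sum(w) - 1.0}],
        )
        return result.x

    weights = solve_blend_weights(np.array([0.40, 1.05, 0.38]), example_targets)
    print("blend weights:", np.round(weights, 3))

Because the search operates only on a small weight vector rather than full poses, an optimization of this kind can run at interactive rates, with the resulting weights driving a standard motion blending routine.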

In order to address actions in cluttered environments while maintaining the realism of motion capture examples, the concept of exploring the blending space of example motions is then introduced. A bidirectional time-synchronized sampling-based planner with lazy collision evaluation is proposed to plan motion variations around obstacles while preserving the original quality of the example motions. Coupled with a locomotion planner, it generates realistic whole-body motion in cluttered environments.
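
The sketch below illustrates the core idea of searching the blending space for collision-free motion variations, with collision checks deferred until a candidate path reaches the goal (lazy evaluation). It is a simplified, single-tree, RRT-style version written under stated assumptions: the dissertation's planner is bidirectional and time-synchronized, and all names here are hypothetical.

    import random
    import numpy as np

    def random_weights(n):
        """Sample a random point in the blend space (convex weights)."""
        w = np.random.rand(n)
        return w / w.sum()

    def plan_in_blend_space(start_w, goal_w, collides, step=0.1,
                            iters=2000, tol=0.05):
        """RRT-style search over blend-weight vectors with lazy collision
        evaluation. `collides(w)` should test the pose produced by blending
        with weights w against the environment (supplied by the caller)."""
        start_w, goal_w = np.asarray(start_w), np.asarray(goal_w)
        nodes, parents = [start_w], {0: None}
        for _ in range(iters):
            sample = (goal_w if random.random() < 0.1
                      else random_weights(len(start_w)))
            near = min(range(len(nodes)),
                       key=lambda i: np.linalg.norm(nodes[i] - sample))
            direction = sample - nodes[near]
            new = nodes[near] + step * direction / (np.linalg.norm(direction) + 1e-9)
            new = np.clip(new, 0.0, 1.0)
            new /= new.sum()  # project back onto the blend space
            nodes.append(new)
            parents[len(nodes) - 1] = near
            if np.linalg.norm(new - goal_w) < tol:
                # Lazy evaluation: only now check the candidate path.
                path, i = [], len(nodes) - 1
                while i is not None:
                    path.append(nodes[i])
                    i = parents[i]
                path.reverse()
                if not any(collides(w) for w in path):
                    return path  # collision-free variation found
        return None  # no valid path within the sampling budget

Because every node stays inside the blend space, every intermediate pose remains an interpolation of the original motion capture examples, which is what preserves their quality while the planner deforms the motion around obstacles.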

Finally, high-level specifications for demonstrative actions are addressed with the proposed whole-body PLACE planner. The planner is based on coordination models extracted from behavioral studies in which participants performed demonstrations involving locomotion and pointing under varied conditions. It coordinates body positioning, locomotion, action execution, and gaze synthesis in order to engage observers in demonstrative scenarios.
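
The coordination performed by such a planner can be pictured with the toy sequence below: choose a body placement that keeps the demonstration visible, walk there, and interleave the action with gaze toward the observer and the target. The placement heuristic and all function names are purely illustrative assumptions; the actual PLACE planner derives its coordination models from the behavioral studies described above.

    import numpy as np

    def choose_placement(target, observer, standoff=0.8):
        """Pick a standing position near the target that avoids occluding
        it from the observer (a simple 2D geometric heuristic, assumed)."""
        away = target - observer
        away = away / (np.linalg.norm(away) + 1e-9)
        side = np.array([-away[1], away[0]])  # perpendicular offset
        return target + standoff * side

    def plan_demonstration(target, observer, locomotion, action, gaze):
        """Coordinate placement, locomotion, action execution, and gaze.
        `locomotion`, `action`, and `gaze` are caller-supplied planners."""
        placement = choose_placement(target, observer)
        steps = []
        steps += locomotion(goal=placement)   # walk to the chosen placement
        steps.append(gaze(at=observer))       # engage the observer
        steps += action(target=target)        # execute the demonstration
        steps.append(gaze(at=target))         # direct attention to the target
        return steps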
