UC Riverside Electronic Theses and Dissertations

Capturing and Animating Hand and Finger Motion for 3D Communicative Characters

Creative Commons 'BY' version 4.0 license
Abstract

The process of animating detailed motion for virtual characters is a difficult task, and researchers and animators work tirelessly to bring life to these characters. Though many methods have been developed over the years to facilitate aspects of 3D character animation, creating realistic virtual humans is still a challenge. This is partly because of the way virtual characters move. People are highly sensitive to human motion, and that sensitivity can influence how a person feels about a character they are viewing in a video or a movie. Though hand motion is on a smaller scale than that of the full body, it also contributes to how people perceive the "realness" of a character. This is especially true for communicative characters. Many people gesticulate when speaking, and virtual characters should as well to appear natural. There are also many people who communicate using sign languages: gesture-based languages that use specific hand shapes and full-body motions to convey complex thoughts and ideas. American Sign Language (ASL) is used in the United States. Characters that can naturally perform ASL would benefit the many deaf Americans whose first language is ASL. Deaf adults who communicate primarily in ASL tend to read English at a middle school level. Therefore, a virtual signing character can be useful in many computing applications, where much of the information is presented to the user as text or sound.

Optical marker motion capture is the industry standard for recording human motion to be applied to virtual characters. But this form of motion capture has notable drawbacks, particularly its limited ability to capture detailed full-body and hand motion simultaneously. A benefit of motion capture is its ability to record the rhythm and timing of a person's motions. Timing contributes to how natural a virtual character appears and is also an important aspect of conversational hand motions.

We propose methods to capture and animate hand motion for the purposes of gestural communication and sign language. We have developed techniques to construct high-dimensional hand animations from low-dimensional captures using tools such as nearest-neighbor selection from a clustered set, principal component analysis, and locally weighted regression. These methods allow for simultaneous capture of the hands and full body of a communicative person. We also present a model to automatically produce natural timing and rhythm for the synthesis of ASL fingerspelling. The data-driven model employs a naive Bayes classifier to predict the length of each letter hold and a simple linear regression to predict the length of each inter-letter transition. We analyze the results of this approach quantitatively, and qualitatively through a perceptual study. Our goal is to contribute to the ongoing research of creating compelling 3D characters for computer applications aimed at the sign language community.
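To make the fingerspelling timing model concrete, the sketch below pairs a naive Bayes classifier (for letter-hold durations) with a simple linear regression (for inter-letter transitions), as the abstract describes. The feature encoding (letter identity, previous letter, position in the word), the discretization of hold durations into bins, the synthetic toy data, and the use of scikit-learn are all illustrative assumptions, not the dissertation's actual implementation.

```python
# Minimal sketch (assumed, not the dissertation's code): learn fingerspelling
# timing from captured examples, then predict timing for new letter contexts.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Toy stand-in for captured fingerspelling data: each row encodes a letter
# context as numeric features (hypothetical: letter id, previous letter id,
# position in word), with an observed hold duration bin and an observed
# inter-letter transition length in seconds.
X = rng.random((200, 3))
hold_bins = rng.integers(0, 4, size=200)      # hold durations discretized into 4 bins (assumption)
transition_secs = rng.random(200) * 0.3       # inter-letter transition lengths

# Naive Bayes classifier predicts which duration bin a letter hold falls into.
hold_model = GaussianNB().fit(X, hold_bins)

# Simple linear regression predicts the transition length directly.
trans_model = LinearRegression().fit(X, transition_secs)

new_context = rng.random((1, 3))
print("predicted hold bin:", hold_model.predict(new_context)[0])
print("predicted transition (s):", trans_model.predict(new_context)[0])
```

In a synthesis pipeline of this kind, the predicted hold and transition durations would set the keyframe timing when stringing together hand shapes for a fingerspelled word.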
