eScholarship
Open Access Publications from the University of California

UC Berkeley

UC Berkeley Electronic Theses and Dissertations

Investigating the Design and Development of Multitouch Applications

Abstract

Multitouch is a ubiquitous input technique, used primarily in mobile devices such as phones and tablets. Larger multitouch displays have been mostly limited to tabletop research projects, but hardware manufacturers are also integrating multitouch into desktop workstations. Multitouch input has several key differences from mouse and keyboard input that make it a promising input technique. While the mouse is an indirect and primarily unimanual input device, multitouch often supports direct-touch input and encourages bimanual interaction. Multitouch also supports the use of all ten fingers as input, providing many more degrees of freedom of input than the 2D mouse cursor.

Building multitouch applications first requires understanding these differences. We present a pair of user studies that contribute to the understanding of the benefits of direct-touch and bimanual input afforded by multitouch input. We then discuss how we leverage these benefits to create multitouch gestures for a professional content-creation application run on a large multitouch display. The differences between multitouch and mouse input also greatly affect the needs of an application developer. We lastly present a declarative multitouch framework that helps developers build and manage gestures that require the coordination of multiple fingers.

In our first study, users select multiple targets with a mouse and with multitouch using one finger, two fingers (one from each hand), and any number of fingers. We find that the fastest multitouch interaction is about twice as fast as the mouse for selection. The direct-touch nature of multitouch accounts for 83% of the reduction in selection time; bimanual interaction, using at least one finger on each hand, accounts for the remainder. To further investigate bimanual interaction for making directional motions, we examine two-handed marking menus, bimanual techniques in which users make directional strokes to select menu items. We find that bimanually coordinating directional strokes is more difficult than making single strokes. However, with training, making strokes bimanually outperforms making them serially by 10-15%.

Our user studies demonstrate that users benefit from multitouch input. However, little work has been done to determine how to design multitouch applications that leverage these benefits for professional content-creation tasks. We investigate using multitouch input for a professional-level task at Pixar Animation Studios. We work with a professional set construction artist to design and develop Eden, a multitouch application for building virtual organic sets for computer-animated films. The experience of the artist suggests that Eden outperforms Maya, a mouse and keyboard system currently used by set construction artists. We present a set of design guidelines that enabled us to create a gesture set that is both easy for the artist to remember and easy for the artist to perform.

Eden demonstrates the viability of multitouch applications for improving real user workflows. However, multitouch applications are challenging to implement. Despite the differences between multitouch and mouse input, current multitouch frameworks follow the event-handling pattern of mouse-based frameworks. Tracking a single mouse cursor is relatively straightforward, as mouse events arrive in a fixed sequence: down, move, and up. For multitouch, however, developers must meticulously track the proper sequence of touch events from multiple temporally overlapping touch streams using disparate event-handling callbacks. In addition, managing gesture sets can be tedious, as multiple gestures often begin with the same touch event sequence, leading to gesture conflicts in which the user input is ambiguous. Thus, developers must perform extensive runtime testing to detect conflicts and then resolve them.
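To make the bookkeeping concrete, the following sketch (my own illustration, not code from the dissertation) shows the event-handling style described above applied to a two-finger pinch: the developer must track each touch stream by id and hand-maintain a state machine across separate down/move/up callbacks, and this state grows with every gesture added.

```python
# Illustrative sketch: recognizing a two-finger pinch with the
# callback-per-event-type pattern typical of mouse-style frameworks.
# The class and callback names here are hypothetical.

class PinchRecognizer:
    def __init__(self):
        self.touches = {}    # touch id -> latest (x, y) position
        self.state = "idle"  # idle -> possible -> pinching

    def on_touch_down(self, tid, x, y):
        self.touches[tid] = (x, y)
        # A pinch only becomes possible once exactly two touches are down.
        self.state = "possible" if len(self.touches) == 2 else "idle"

    def on_touch_move(self, tid, x, y):
        if tid not in self.touches:
            return
        self.touches[tid] = (x, y)
        # Movement while two touches are down commits to the pinch.
        if self.state == "possible" and len(self.touches) == 2:
            self.state = "pinching"

    def on_touch_up(self, tid):
        self.touches.pop(tid, None)
        self.state = "idle"  # any lifted finger ends the gesture
```

Even this one gesture requires manual state tracking across three callbacks; adding a second gesture that also begins with two touches down would force the developer to disambiguate the shared prefix by hand.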

We simplify multitouch gesture creation and management with Proton, a framework that allows developers to declaratively specify a gesture as a regular expression of customizable touch event symbols. Proton provides automatic gesture matching and the static analysis of gesture conflicts. We also introduce gesture tablature, a graphical gesture notation that concisely describes the sequencing of multiple interleaved touch events over time. We demonstrate the expressiveness of Proton with four proof-of-concept applications. Finally, we present a user study that indicates that users can read and interpret gesture tablature over four times faster than event-handling pseudocode.
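The declarative idea can be sketched as follows. This is my own simplified illustration of gestures-as-regular-expressions, not Proton's actual syntax: here a touch event token encodes an event type and touch id (D1 = first touch down, M1 = first touch move, U1 = first touch up), and each gesture is declared as one regular expression over a serialized token stream.

```python
# Illustrative sketch (token names and gestures are hypothetical):
# declaring gestures as regular expressions over touch event tokens.
import re

GESTURES = {
    # one-finger drag: down, one or more moves, up
    "drag": r"D1(M1)+U1",
    # two-finger pinch: second touch joins, interleaved moves, both lift
    "pinch": r"D1(M1)*D2(M1|M2)+U1(M2)*U2",
}

def recognize(event_stream):
    """Match a completed token stream against every declared gesture."""
    tokens = "".join(event_stream)
    return [name for name, pattern in GESTURES.items()
            if re.fullmatch(pattern, tokens)]
```

Because each gesture is a single regular expression rather than state scattered across callbacks, the framework can match incoming streams automatically, and conflicts between gestures can be detected statically, for instance by checking whether two expressions accept a common prefix, rather than discovered through runtime testing.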

Multitouch applications require new design principles and tools for development. This dissertation addresses the challenges of designing gestures and interfaces that benefit from multiple parallel touch input and presents tools to help developers build and recognize these new multitouch gestures. This work serves to facilitate a wider adoption of multitouch interfaces. We conclude with several research directions for continuing the investigation of multitouch input.
