
UC Berkeley Electronic Theses and Dissertations

Scalable High-Quality 3D Scanning

Abstract

Over the past decade, vendors in fields ranging from entertainment to retail to architecture have increasingly shifted everyday consumable media from the 2D plane to the 3D world. Applications in these fields, such as augmented/virtual reality, online shopping, and indoor scene reconstruction, all share one crucial problem: acquiring high-quality 3D models. Currently, designing novel 3D content involves a substantial human component; indeed, most 3D models are created in modeling software (e.g., 3DS Max, Maya) by humans with vast expertise. Although several research communities have explored automated 3D scanning over the course of several decades, the lack of an out-of-the-box solution has precluded this field from permeating industry. Prior approaches to 3D scanning can largely be divided into image-based reconstruction (IBR) and active reconstruction (AR). IBR methods reconstruct a scene from a collection of calibrated RGB images as input, whereas AR methods actively project patterns onto the observed scene during reconstruction. Each family of techniques has its own advantages and pitfalls, and state-of-the-art methods in 3D scanning typically fall purely within IBR or AR.
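As a concrete illustration of the geometry underlying IBR (a minimal sketch, not code from the thesis), the snippet below triangulates a single 3D point from two calibrated RGB views with the standard linear (DLT) method; the camera matrices and observations are hypothetical toy values. An AR system, by contrast, would obtain such depth directly by decoding a projected pattern.

```python
# Minimal IBR illustration: triangulate one 3D point from two calibrated views.
# All camera parameters below are hypothetical example values.
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two calibrated views.

    P1, P2 : 3x4 camera projection matrices.
    x1, x2 : (u, v) pixel observations of the same point in each view.
    Returns the 3D point in world coordinates.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Two toy cameras: identity pose and a camera translated 10 cm along x.
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])

X_true = np.array([0.05, -0.02, 1.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]

print(triangulate_dlt(P1, P2, x1, x2))  # ~ [0.05, -0.02, 1.0]
```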

In this thesis, we outline a novel scanning approach that merges IBR and AR methods. First, we discuss how to physically construct and calibrate a 3D scanner from commodity hardware. We use this scanner to collect and release the BigBIRD dataset, which serves as a key benchmark for the algorithms in this thesis and comprises 600 12 MP images, 600 registered RGB-D point clouds, and several other forms of processed data for each of 125 objects. Next, showing that the advantages and pitfalls of IBR and AR complement each other, we present a novel shape reconstruction algorithm that capitalizes on the strengths of each approach, yielding less than 2 mm of RMS reconstruction error. We then present a color reconstruction algorithm that produces high-quality 3D color meshes of scanned objects, and we demonstrate that it outperforms the state of the art. Finally, as any dataset grows, large-scale data visualization becomes increasingly important. Although 3D scanning datasets have not yet reached that scale, it is only a matter of time before they do, and an eclectic array of fields, including computer vision, particle physics, and botany, can already benefit from large-scale visualization techniques. We present a novel and highly flexible GPU-based nonlinear dimensionality reduction technique capable of visualizing datasets with tens of millions of instances and millions of features. Our algorithm's flexibility and speed yield an implementation that is an order of magnitude faster than state-of-the-art implementations of stochastic neighbor embedding algorithms.
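To make the visualization contribution concrete, the sketch below evaluates the exact t-SNE gradient that stochastic neighbor embedding methods repeatedly compute; it is an illustrative NumPy sketch of the underlying per-iteration computation that a GPU implementation would parallelize (and typically approximate), not the thesis implementation. The affinity matrix P and all other values are toy placeholders.

```python
# Exact t-SNE gradient step, assuming P is a precomputed, symmetrized
# high-dimensional affinity matrix. Illustrative sketch only.
import numpy as np

def tsne_gradient(Y, P):
    """Exact t-SNE gradient for a 2D embedding Y (n x 2) given affinities P (n x n)."""
    # Pairwise squared distances in the embedding.
    D = np.sum((Y[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
    W = 1.0 / (1.0 + D)                  # Student-t kernel
    np.fill_diagonal(W, 0.0)
    Q = np.maximum(W / W.sum(), 1e-12)   # low-dimensional affinities
    # dC/dY_i = 4 * sum_j (p_ij - q_ij) * w_ij * (y_i - y_j)
    PQW = (P - Q) * W
    return 4.0 * ((np.diag(PQW.sum(axis=1)) - PQW) @ Y)

# Toy usage with random affinities (a real pipeline derives P from the input data).
rng = np.random.default_rng(0)
n = 200
P = rng.random((n, n)); P = P + P.T; np.fill_diagonal(P, 0.0); P /= P.sum()
Y = 1e-4 * rng.standard_normal((n, 2))
for _ in range(100):
    Y -= 100.0 * tsne_gradient(Y, P)     # plain gradient descent, no momentum
```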
