eScholarship
Open Access Publications from the University of California

UC Santa Barbara Electronic Theses and Dissertations

Utilizing Machine Learning for Filtering General Monte Carlo Noise

Abstract

Producing photorealistic images from a scene model requires computing a complex multidimensional integral of the scene function at every pixel of the image. Monte Carlo (MC) rendering systems approximate this integral by tracing light rays (samples) in the multidimensional space to evaluate the scene function. Although an approximation to this integral can be quickly evaluated with just a few samples, the inaccuracy of this estimate relative to the true value appears as unacceptable noise in the resulting image.
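As a minimal, hypothetical illustration of this estimator (the scene function below is a toy stand-in, not an actual renderer), averaging more samples drives the estimate toward the true integral while a few samples leave visible noise:

```python
import random

def estimate_pixel(scene_fn, n_samples):
    """Monte Carlo estimate of a pixel: average the scene function over
    random sample locations in the pixel's integration domain."""
    total = 0.0
    for _ in range(n_samples):
        # One "sample": a random point in the (here 2-D) sample space.
        u, v = random.random(), random.random()
        total += scene_fn(u, v)
    return total / n_samples

# Toy stand-in for a scene function; its true integral over [0,1]^2 is 0.5.
scene = lambda u, v: u
noisy_estimate = estimate_pixel(scene, 4)       # few samples: high variance
refined_estimate = estimate_pixel(scene, 4096)  # many samples: near 0.5
```

The error of the estimate shrinks only as the square root of the sample count, which is why rendering to convergence is expensive and filtering a cheap, noisy image is attractive.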

One way to mitigate this problem is to quickly render a noisy image with a few samples and then filter it as a post-process to generate an acceptable, noise-free result. This approach has been the subject of extensive research in recent years, and many algorithms have been developed. However, the majority of these approaches rely on simple heuristic rules to design the filter and, as a result, have various limitations.

We begin by studying how standard image denoising techniques can be applied to the problem of Monte Carlo rendering. To do this, we propose a way to use any standard image denoising method (e.g., BM3D) to remove noise from MC rendered images. We do this by estimating the amount of noise at each pixel of the image and applying a multilevel algorithm that denoises the image in a spatially-varying manner. We then show that although this approach works better than the previous color-based schemes, i.e., the methods that only use color information, it cannot handle complex scenes with severe noise. This is because the algorithm does not utilize additional scene features such as world positions, shading normals, and texture values, which are available in MC rendering.
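A sketch of the multilevel, spatially-varying idea under simplifying assumptions: the `denoiser` below is a toy placeholder for a standard method such as BM3D, and the per-pixel noise map is assumed to be already estimated.

```python
import numpy as np

def multilevel_denoise(noisy, noise_map, denoiser, strengths):
    """Denoise the image once per candidate strength, then, per pixel,
    keep the result whose strength is closest to the estimated noise."""
    levels = [denoiser(noisy, s) for s in strengths]
    # Index of the closest candidate strength for each pixel.
    idx = np.abs(noise_map[..., None] - np.asarray(strengths)).argmin(axis=-1)
    out = np.empty_like(noisy)
    for k, level in enumerate(levels):
        out[idx == k] = level[idx == k]
    return out

# Toy placeholder denoiser: blend each pixel toward the image mean.
blend = lambda img, s: (1 - s) * img + s * img.mean()
img = np.array([[0.0, 1.0], [1.0, 0.0]])        # "noisy" input
noise_map = np.array([[0.0, 1.0], [0.0, 1.0]])  # estimated per-pixel noise
result = multilevel_denoise(img, noise_map, blend, [0.0, 1.0])
```

Per pixel, the level whose denoising strength best matches the locally estimated noise is selected, so low-noise regions are filtered lightly while noisy regions are filtered aggressively.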

To address the filtering problem systematically, we then present a new way of analyzing MC filtering approaches. We observe that the major challenge in all filtering techniques is filter parameter estimation. Our key contribution is to address this challenging problem with two machine learning approaches. Specifically, we first propose to estimate the optimal filter parameters at each pixel directly from the output of the MC renderer using a neural network. We train the network on a set of scenes by minimizing the error between the filtered and ground truth images. Second, we propose to find the optimal filter parameter sets in an error-minimization filtering approach to produce filtered results as close as possible to the ground truth. We optimize these candidate filter parameter sets on a set of training scenes using the same objective.
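The error-minimization idea can be sketched as a training-time selection over candidate parameter sets. The function name and the toy blend filter below are illustrative assumptions, not the thesis's actual filter:

```python
import numpy as np

def select_filter_params(noisy, ground_truth, filter_fn, candidates):
    """Pick, from a set of candidate filter parameters, the one whose
    filtered output is closest (in MSE) to the ground-truth image."""
    def mse(a, b):
        return float(np.mean((a - b) ** 2))
    return min(candidates, key=lambda p: mse(filter_fn(noisy, p), ground_truth))

# Toy setup: ground truth is flat gray; "filtering" blends toward the mean.
gt = np.full((4, 4), 0.5)
noisy = gt + np.array([[0.2, -0.2] * 2, [-0.2, 0.2] * 2] * 2)
blend = lambda img, s: (1 - s) * img + s * img.mean()
best = select_filter_params(noisy, gt, blend, [0.0, 0.5, 1.0])
```

Ground truth is only needed during training; at test time the learned parameters (or the trained network, in the first approach) are applied to new noisy renderings directly.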

We show that the proposed approaches outperform state-of-the-art methods in removing general MC noise. In this thesis, we present the first attempt to use machine learning for removing noise from MC rendered images. We believe this opens a new avenue for future work and we hope other researchers can build upon the ideas presented here to further advance the MC filtering field.
