Towards Intelligent Computational Tools for Virtual Cinematography
eScholarship: Open Access Publications from the University of California
UCLA Electronic Theses and Dissertations

Abstract

Virtual cinematography is a fundamental problem spanning a wide range of computer graphics applications. Software for animation, videogames, scientific visualization, and other applications frequently requires computational tools capable of automatically determining how to control a camera to capture an image with desired properties. In this thesis, we propose an expressive, controllable, and efficient methodology for automating virtual cinematography that is compatible with both online and offline applications.

By identifying the minima of an unconstrained, continuous objective function that encodes desired compositional behaviors, such as those common in live-action photography or cinematography, a suitable camera pose or path can be determined automatically using standard search algorithms. Given mild constraints on function form, multiple objective functions can be combined into a single optimizable function in several ways; this formulation can be further extended to model the smoothness of the discovered camera path with a deformable spline based on an active contour model.
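The core idea can be sketched in miniature. The following is an illustrative toy, not the thesis's actual formulation: a camera pose (here just pan and tilt angles, with a hypothetical subject position) is scored by a continuous, unconstrained objective measuring how far the subject lands from a rule-of-thirds screen target, and a standard search method (plain gradient descent on numerically estimated gradients) finds a minimum.

```python
import math

SUBJECT = (4.0, 0.0, 1.0)  # hypothetical subject position; camera sits at the origin

def project(pose):
    """Project SUBJECT into screen coordinates for a camera with the
    given (pan, tilt) angles, in radians."""
    pan, tilt = pose
    x, y, z = SUBJECT
    # yaw the world point into the camera frame (camera looks along +x)
    cx = x * math.cos(pan) + y * math.sin(pan)
    cy = -x * math.sin(pan) + y * math.cos(pan)
    # then pitch
    cx2 = cx * math.cos(tilt) + z * math.sin(tilt)
    cz2 = -cx * math.sin(tilt) + z * math.cos(tilt)
    return cy / cx2, cz2 / cx2  # pinhole projection onto the image plane

def objective(pose):
    """Squared screen-space distance of the subject from a rule-of-thirds target."""
    u, v = project(pose)
    return (u - 1 / 3) ** 2 + (v - 1 / 3) ** 2

def descend(f, x, lr=0.3, steps=2000, h=1e-6):
    """Plain gradient descent using central-difference gradient estimates."""
    for _ in range(steps):
        g = []
        for i in range(len(x)):
            xp, xm = list(x), list(x)
            xp[i] += h
            xm[i] -= h
            g.append((f(xp) - f(xm)) / (2 * h))
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x

pose = descend(objective, [0.0, 0.0])
```

Because the objective is continuous and unconstrained, any off-the-shelf minimizer could replace the hand-rolled descent loop above.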

These abstract mathematical techniques are supported by a novel domain-specific programming language, complete with a suite of program analysis and transformation tools capable of automatic differentiation, value range analysis, and program optimization for run-time-specified objective functions, all modularly integrated with Unreal Engine 4 for rendering and user input.
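To give a flavor of one of these analyses, the following is a toy forward-mode automatic differentiation via dual numbers; the names and API here are illustrative and are not the thesis's DSL toolchain.

```python
import math

class Dual:
    """Number carrying its derivative; arithmetic applies the chain rule."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __sub__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val - o.val, self.dot - o.dot)
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # product rule: (fg)' = f'g + fg'
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

def sin(x):
    # chain rule for a unary primitive: sin(f)' = cos(f) * f'
    return Dual(math.sin(x.val), math.cos(x.val) * x.dot)

def derivative(f, x):
    """Evaluate f at a dual number seeded with derivative 1 to read off f'(x)."""
    return f(Dual(x, 1.0)).dot

# d/dx [x*x + sin(x)] = 2x + cos(x), evaluated at x = 0.5
g = derivative(lambda x: x * x + sin(x), 0.5)
```

Differentiating the objective program itself, rather than approximating gradients numerically, is what lets an optimizer consume user-written objective functions efficiently at run time.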

To make this system usable by mathematical non-experts, we explore two approaches. First, we provide a library of predefined objective functions corresponding to standard photographic and cinematographic compositional rules, complete with recipes for combining them to achieve common compositions. Second, we apply NLP-derived machine learning techniques to a novel dataset containing annotations on ~1M frames from 60 feature films, in an attempt to automatically learn objective functions corresponding to real-world compositions.
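A hypothetical sketch of the first approach: two predefined compositional penalties are blended into a single optimizable function by weighted sum. The function names, semantics, and weights below are illustrative stand-ins, not the thesis's actual library.

```python
def rule_of_thirds(u, v):
    """Penalty for a subject's screen position (u, v) off the lower-left third point."""
    return (u - 1 / 3) ** 2 + (v - 1 / 3) ** 2

def headroom(v, target=0.8):
    """Penalty for the subject's head sitting away from a target screen height."""
    return (v - target) ** 2

def combined(u, v, weights=(1.0, 0.25)):
    # A weighted sum keeps the result a single continuous, unconstrained
    # objective, so the same search algorithms still apply unchanged.
    w1, w2 = weights
    return w1 * rule_of_thirds(u, v) + w2 * headroom(v)

score = combined(1 / 3, 1 / 3)
```

A "recipe" in this sense is just a choice of component objectives and weights that reproduces a familiar composition.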

Finally, these virtual cinematographic techniques are shown to be capable of computing camera paths in both live and scripted scenes at practicable computational cost.
