eScholarship
Open Access Publications from the University of California


UCLA Electronic Theses and Dissertations

3D Scene and Event Understanding by Joint Spatio-temporal Inference and Reasoning

Abstract

Comprehensively understanding human activities and events in 3D scenes is a challenging yet crucial task. It involves many mid-level vision tasks (e.g., detection, tracking, pose estimation, action/interaction recognition) and requires high-level understanding of, and reasoning about, their relations. In this dissertation, we propose a novel and general framework for both mid-level and high-level tasks along this line, working towards a better solution for complex 3D scene and event understanding. Specifically, we formulate problems with interpretable representations, enforce high-level constraints with domain-knowledge-guided grammars, learn models that solve multiple tasks jointly, and infer based on spatial, temporal, and causal information. We make three major contributions in this dissertation:

First, we introduce interpretable representations that incorporate high-level constraints defined by domain-knowledge-guided grammars. Specifically, we propose: i) a Spatial and Temporal Attributed Parse Graph model (ST-APG) encoding compositionality and attributes for multi-view people tracking, enhancing trajectory association across space and time; ii) a Scene-centric Parse Graph representing a coherent understanding of information obtained from cross-view scenes for multi-view knowledge fusion; iii) a Fashion Grammar constraining configurations of human appearance and clothing in human parsing; iv) a Pose Grammar describing physical and physiological relations among human body parts in human pose estimation; and v) a Causal And-Or Graph (C-AOG) representing the cause-effect relations between an object's fluent changes and the activities involved, for tracking interacting objects.
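To make the representation concrete, here is a minimal data-structure sketch of an attributed parse graph; the class and field names (ParseGraphNode, attributes, children) are illustrative assumptions, not the dissertation's actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class ParseGraphNode:
    """One node of an attributed parse graph: an entity (e.g., a person)
    decomposed into parts, each carrying semantic attributes."""
    label: str                                      # e.g., "person", "torso"
    attributes: dict = field(default_factory=dict)  # e.g., {"hat": True} (hypothetical attributes)
    children: list = field(default_factory=list)    # compositional parts (ParseGraphNode)

@dataclass
class STParseGraph:
    """Spatio-temporal extension: one parse graph root per frame, linked over
    time so attribute consistency can guide trajectory association."""
    frames: list = field(default_factory=list)      # ParseGraphNode roots, one per frame
```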

Second, instead of solving each task independently, we formulate multiple related tasks into a joint learning, inference, and reasoning framework, so that the tasks mutually benefit and reach better joint configurations. Specifically, we propose: i) a joint parsing framework that iteratively tracks people's locations and estimates their attributes; ii) a joint inference framework, modeled by deep neural networks, that passes messages along direct, top-down, and bottom-up directions in the task of human parsing; and iii) a joint reasoning framework that reasons about an object's fluent changes and tracks the object in videos, iteratively searching for a feasible causal graph structure.
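The iterative joint inference described above can be viewed as coordinate ascent over coupled sub-problems. The sketch below is a hypothetical outline of such a loop; solve_tracks, solve_attrs, and joint_score are assumed callables standing in for the actual sub-solvers and joint objective, which the dissertation defines in its own terms.

```python
def alternating_inference(solve_tracks, solve_attrs, joint_score,
                          tracks_init, max_iters=20, tol=1e-4):
    """Coordinate ascent over two coupled variable blocks: hold tracks fixed
    to estimate attributes, hold attributes fixed to re-estimate tracks,
    and stop when the joint score no longer improves."""
    tracks, attrs = tracks_init, None
    prev_score = float("-inf")
    for _ in range(max_iters):
        attrs = solve_attrs(tracks)         # best attributes given current tracks
        tracks = solve_tracks(attrs)        # best tracks given current attributes
        score = joint_score(tracks, attrs)  # joint objective to be maximized
        if score - prev_score < tol:        # converged: score stopped improving
            break
        prev_score = score
    return tracks, attrs
```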

Third, we mitigate the problems of data scarcity and data-hungry model learning with a learning-by-synthesis framework. Given limited training samples, we either propagate supervision to unpaired samples or synthesize virtual samples that minimize the discrepancy with real data. Specifically, we develop a pose sample simulator that augments training samples with virtual camera views for the task of 3D pose estimation, which improves our model's cross-view generalization ability.
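A simulator of this kind can be sketched as rotating a 3D skeleton about its center and re-projecting it into a virtual camera. The function below is an illustrative assumption (yaw rotation only, unit-focal-length pinhole projection), not the dissertation's actual simulator.

```python
import numpy as np

def virtual_view(pose3d, yaw_deg):
    """Rotate a 3D pose (J x 3 array of joints, camera coordinates) about the
    vertical axis through its center, simulating a new camera viewpoint,
    then project the result with a unit-focal-length pinhole camera."""
    theta = np.deg2rad(yaw_deg)
    rot_y = np.array([[ np.cos(theta), 0.0, np.sin(theta)],
                      [ 0.0,           1.0, 0.0          ],
                      [-np.sin(theta), 0.0, np.cos(theta)]])
    center = pose3d.mean(axis=0)                    # rotate about the body center
    rotated = (pose3d - center) @ rot_y.T + center  # 3D joints in the virtual view
    pose2d = rotated[:, :2] / rotated[:, 2:3]       # pinhole projection (assumed intrinsics)
    return rotated, pose2d

# Example: augment one training pose with virtual views every 30 degrees.
pose = np.random.randn(17, 3) * 0.3 + np.array([0.0, 0.0, 4.0])  # 17 joints ~4m from camera
augmented = [virtual_view(pose, angle) for angle in range(0, 360, 30)]
```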

The proposed frameworks have two notable properties: i) they offer a novel formulation based on joint inference and reasoning over space, time, and causality, and ii) they overcome the lack of interpretability and the data hunger of end-to-end deep learning methods. Experiments show that our joint inference and reasoning framework outperforms existing approaches on many tasks and yields more interpretable results.
