UC Riverside Electronic Theses and Dissertations

Towards Semi-Dense Indirect Visual-Inertial Odometry

Abstract

In this work, we focus on motion estimation in unknown environments using measurements provided by an inertial measurement unit (IMU) and a monocular camera. We are interested in estimating the trajectory of a moving platform, a problem typically termed visual-inertial odometry (VIO). Most existing methods for vision-aided inertial localization rely on the detection and tracking of point features in the images. These approaches greatly reduce the amount of data to process in each image, and are thus well suited to resource-constrained systems. However, because not all parts of the images are used, they inevitably discard information that is beneficial for motion estimation. This has led to growing interest in direct methods, which use image intensities directly for motion estimation. Although this approach makes it possible to use more pixel locations, it also suffers from a number of shortcomings (e.g., non-Lambertian surface properties and dependence on the camera's photometric parameters). By contrast, we are interested in approaches that rely on the geometry of straight lines or image contours rather than raw image intensities (the proposed approaches are therefore indirect methods). This enables our algorithms to operate in environments where point features are sparse, while circumventing the shortcomings of direct methods.

This thesis is divided into two main parts. We first propose a visual-inertial localization algorithm that employs lines as measurements, in addition to traditional point features. Specifically, we propose a novel parameterization and measurement model for line features, and show how line features can be used for self-calibration of the IMU and camera. Our results demonstrate that the proposed approach not only leads to improved localization accuracy in point-feature-poor environments, but also reduces calibration errors compared to the point-only approach.
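As a rough illustration of how a detected line can serve as a measurement, a common formulation in line-based VIO is to score the perpendicular distances from the detected segment endpoints to the projected image line. The sketch below shows only that generic idea; it is not the thesis's actual parameterization or measurement model, and the names (line_measurement_residual, endpoints_px) are placeholders.

import numpy as np

def line_measurement_residual(l_img, endpoints_px):
    # l_img = (a, b, c) defines the projected image line a*u + b*v + c = 0.
    # The residuals are the signed perpendicular distances (in pixels)
    # from the detected segment endpoints to that line.
    a, b, c = l_img
    n = np.hypot(a, b)  # normalize so the distances are in pixel units
    return np.array([(a * u + b * v + c) / n for (u, v) in endpoints_px])

# Example: endpoints lying exactly on the line u = 2 give zero residuals.
l = np.array([1.0, 0.0, -2.0])
print(line_measurement_residual(l, [(2.0, 0.0), (2.0, 5.0)]))  # -> [0. 0.]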

We then propose a method for monocular visual-inertial odometry that uses image edges as measurements. Here we relax the requirement for straight lines and make no assumption about the geometry of the scene. This enables us to use measurements from all image areas with significant gradient. In addition, we propose a novel edge parameterization and measurement model that explicitly accounts for the fact that edge points provide useful information only in the direction of the image gradient. Through both Monte Carlo simulations and real-world experiments, we demonstrate that the proposed edge-based approach to visual-inertial odometry is consistent and outperforms the point-based one.
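To make the gradient-direction point concrete, the minimal sketch below projects the 2-D reprojection error of an edge point onto the local unit image-gradient direction, so any error component along the edge itself contributes nothing to the residual. This is our own hedged illustration of the aperture-problem argument, not the thesis's implementation, and all function and variable names are illustrative.

import numpy as np

def edge_residual(measured_px, predicted_px, unit_gradient):
    # 1-D residual of an edge point: only the error component along the
    # image gradient is observable; the component along the edge is not.
    g = unit_gradient / np.linalg.norm(unit_gradient)  # ensure unit length
    return float(g @ (measured_px - predicted_px))      # scalar residual

# Example: an error purely along the edge (perpendicular to the gradient)
# yields a zero residual, i.e. no information for the estimator.
g = np.array([1.0, 0.0])           # gradient points along u
z = np.array([10.0, 5.0])          # measured edge point
z_hat = np.array([10.0, 8.0])      # prediction offset along the edge (v)
print(edge_residual(z, z_hat, g))  # -> 0.0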
