
UC Irvine Electronic Theses and Dissertations

Deep Learning in Medical Image Analysis

Creative Commons Attribution 4.0 (CC BY 4.0) license
Abstract

Developing algorithms that better interpret images is a fundamental problem in medical image analysis. Recent advances in machine learning, especially deep convolutional neural networks (DCNNs), have brought substantial improvements in the speed and accuracy of many medical image analysis tasks, such as image registration, segmentation of anatomical structures and tissues, and computer-aided diagnosis (CAD). Despite this progress, these problems remain challenging due to the limited amount of labeled data, large anatomical variation among patients, and other factors. In this dissertation, we propose approaches that address these challenges to achieve better accuracy, higher efficiency, and lower demand for labeled data.

First, to address the difficulty of accurately detecting pulmonary nodules at an early stage, we propose a novel CAD framework, built entirely from 3D DCNNs, for detecting pulmonary nodules and reducing false positives in chest CT images.

Second, to avoid training separate deep learning models for nodule detection, false-positive reduction, and segmentation, which can be suboptimal and resource-intensive, we propose NoduleNet to solve the three tasks jointly in a multi-task fashion. To avoid friction between tasks and encourage feature diversification, we incorporate two major design choices: 1) decoupled feature maps for nodule detection and false-positive reduction, and 2) a segmentation refinement subnet that increases the precision of nodule segmentation.

Third, to address the limited scope and scale of previous work on organs-at-risk (OAR) delineation, in which only a few OARs were delineated and only a small number of samples were tested, we propose a new deep learning model that delineates a comprehensive set of 28 OARs in the head and neck area. The model is trained on 215 CT samples collected and carefully annotated by radiation oncologists with over ten years of experience. We compared the accuracy of our model against both previous state-of-the-art methods and a radiotherapy practitioner, deployed the model in actual radiotherapy (RT) planning for new patient cases, and evaluated its clinical utility.

Fourth, to reduce the information lost when 3D images are cropped or downsampled to fit limited GPU memory, we propose a new framework that combines 3D and 2D models: segmentation is carried out by high-resolution 2D convolutions, guided by spatial contextual information extracted from a low-resolution 3D model. A self-attention mechanism controls which 3D features are used to guide the 2D segmentation.

Last, since DCNNs often require large amounts of manually annotated training data and generalize poorly to unseen classes, we propose RP-Net, a new few-shot segmentation framework. RP-Net has two key modules: 1) a context relation encoder (CRE) that uses correlation to capture local relation features between foreground and background regions, and 2) a recurrent mask refinement module that repeatedly applies the CRE and a prototypical network to re-capture the changing context relationship and iteratively refine the segmentation mask.
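To make the decoupled-head design of the second contribution concrete, the following is a minimal PyTorch sketch: a shared 3D backbone feeds two separate convolutional branches, one for detection and one for false-positive reduction, alongside a segmentation refinement subnet. All module and layer names here are hypothetical illustrations of the design pattern, not the dissertation's actual NoduleNet implementation.

```python
import torch
import torch.nn as nn

class DecoupledNoduleHeads(nn.Module):
    """Toy sketch of decoupled feature maps for multi-task nodule analysis."""
    def __init__(self, in_ch=1, feat_ch=32):
        super().__init__()
        # Shared 3D backbone producing a common feature volume.
        self.backbone = nn.Sequential(
            nn.Conv3d(in_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Decoupled branches: detection and false-positive reduction each get
        # their own features, so the two tasks do not compete for one map.
        self.det_branch = nn.Conv3d(feat_ch, feat_ch, 3, padding=1)
        self.fpr_branch = nn.Conv3d(feat_ch, feat_ch, 3, padding=1)
        self.det_head = nn.Conv3d(feat_ch, 1, 1)   # per-voxel nodule objectness
        self.fpr_head = nn.Conv3d(feat_ch, 2, 1)   # nodule vs. false positive
        # Segmentation refinement subnet on top of the shared features.
        self.seg_refine = nn.Sequential(
            nn.Conv3d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(feat_ch, 1, 1),
        )

    def forward(self, volume):
        feats = self.backbone(volume)
        det = self.det_head(torch.relu(self.det_branch(feats)))
        fpr = self.fpr_head(torch.relu(self.fpr_branch(feats)))
        seg = self.seg_refine(feats)
        return det, fpr, seg
```

The design choice being illustrated is that only the backbone is shared; each task refines the shared representation through its own branch, which is one way to reduce inter-task friction while still training jointly.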
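The fourth contribution's 3D-guided 2D segmentation can likewise be sketched. Below, a low-resolution 3D encoder provides contextual features that are projected to the 2D feature width and fused into a high-resolution 2D path through a learned sigmoid gate, a simple stand-in for the self-attention mechanism described above. This is a minimal sketch under assumed shapes and layer choices, not the dissertation's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContextGated2DSeg(nn.Module):
    """Toy sketch: low-resolution 3D context gates high-resolution 2D features."""
    def __init__(self, ch2d=32, ch3d=16):
        super().__init__()
        self.enc2d = nn.Conv2d(1, ch2d, 3, padding=1)   # high-res slice encoder
        self.enc3d = nn.Conv3d(1, ch3d, 3, padding=1)   # low-res volume encoder
        self.to2d = nn.Conv2d(ch3d, ch2d, 1)            # project 3D context to 2D width
        self.attn = nn.Conv2d(ch2d * 2, ch2d, 1)        # gate over context features
        self.head = nn.Conv2d(ch2d, 1, 1)

    def forward(self, slice_hr, volume_lr, z_index):
        f2d = F.relu(self.enc2d(slice_hr))              # (B, ch2d, H, W)
        f3d = F.relu(self.enc3d(volume_lr))             # (B, ch3d, D, h, w)
        ctx = f3d[:, :, z_index]                        # context slice (B, ch3d, h, w)
        ctx = F.interpolate(ctx, size=f2d.shape[-2:],
                            mode="bilinear", align_corners=False)
        ctx = self.to2d(ctx)
        # The gate decides, per location and channel, how much 3D context
        # should guide the high-resolution 2D segmentation path.
        gate = torch.sigmoid(self.attn(torch.cat([f2d, ctx], dim=1)))
        fused = f2d + gate * ctx
        return self.head(fused)
```

The key point is memory economy: only the 2D path runs at full resolution, while the 3D path supplies spatial context from a cheap, downsampled volume.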
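Finally, the recurrent prototype-based refinement loop of RP-Net can be sketched in simplified form. The snippet below recomputes foreground and background prototypes by masked average pooling at each step, measures a correlation-like cosine similarity between query features and those prototypes, and uses the result to refine the predicted mask. The exact formulation of the context relation encoder in the dissertation may differ; every function here is a hypothetical illustration of iterative prototypical refinement.

```python
import torch
import torch.nn.functional as F

def masked_avg_pool(feats, mask):
    """Prototype = average of features under a (soft) mask; feats (B,C,H,W), mask (B,1,H,W)."""
    num = (feats * mask).sum(dim=(2, 3))
    den = mask.sum(dim=(2, 3)).clamp(min=1e-6)
    return num / den                                   # (B, C)

def recurrent_refinement(q_feats, s_feats, s_mask, steps=3):
    """Toy sketch of recurrent prototype-based mask refinement for few-shot segmentation."""
    fg_proto = masked_avg_pool(s_feats, s_mask)        # support foreground prototype
    bg_proto = masked_avg_pool(s_feats, 1.0 - s_mask)  # support background prototype
    q_mask = torch.full_like(q_feats[:, :1], 0.5)      # uninformative initial mask
    for _ in range(steps):
        # Relation features: as the query mask changes, re-pool query-side
        # prototypes so the foreground/background context is re-captured.
        qf = masked_avg_pool(q_feats, q_mask)
        qb = masked_avg_pool(q_feats, 1.0 - q_mask)
        sim_fg = F.cosine_similarity(
            q_feats, ((fg_proto + qf) / 2)[..., None, None], dim=1)
        sim_bg = F.cosine_similarity(
            q_feats, ((bg_proto + qb) / 2)[..., None, None], dim=1)
        # Refine: sharpen the foreground-vs-background similarity margin.
        q_mask = torch.sigmoid(10 * (sim_fg - sim_bg)).unsqueeze(1)
    return q_mask
```

Because each iteration conditions on the previous mask estimate, the loop can correct early mistakes, which is the motivation for recurrent rather than single-pass refinement.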
