PhD Defense
11/25/2014 03:00 pm
CoRE A (Room 301)

Automatic and Interactive Segmentations Using Deformable and Graphical Models

Mustafa Gokhan Uzunbas, Rutgers University

Defense Committee: Dimitri Metaxas (advisor), Ahmed Elgammal, Kostas Bekris, Dinggang Shen (UNC-Chapel Hill)

Abstract

Image segmentation, i.e., dividing an image into regions and categories, is a classic yet still challenging problem. The key to success is to choose or develop the right method for the right application. In this dissertation, we aim to develop automatic and interactive segmentation methods for different types of tissues acquired at different scales and resolutions from different medical imaging modalities, such as Magnetic Resonance (MR), Computed Tomography (CT), and Electron Microscopy (EM) imaging.

First, we developed an automated method for segmenting multiple objects (organs) simultaneously from MR and CT images. We propose a hybrid method that combines two well-known energy-minimization-based approaches in a unified framework. We validate the proposed method on two challenging multi-organ segmentation problems: cardiac four-chamber segmentation from CT images and knee-joint bone segmentation from MR images. We compare our method with existing techniques and show where it improves on them.

Second, we developed a graph-partitioning algorithm for characterizing neuronal tissue structurally and contextually from EM images. We propose a multistage decision mechanism that utilizes differential-geometric properties of objects in a cellular processing context. Our results on 2D EM slices indicate that this approach can successfully partition images into structured segments with minimal expert supervision and can potentially form a basis for larger-scale volumetric data interpretation. We compare our method with other entries in a workshop challenge and show promising results.

Third, we developed an efficient learning-based method for segmenting neuron structures from 2D and 3D EM images. We propose a graphical-model-based framework that performs inference on a hierarchical merge tree of image regions. In particular, we extract the hierarchy of regions at the low level, design 2D and 3D discriminative features to capture higher-level information, and apply Conditional Random Field based parameter learning on top. The effectiveness of the proposed method in 2D is demonstrated in a workshop challenge, where our method outperforms all participating methods except one. In 3D, we compare our method to existing methods and show that its accuracy is comparable to the state of the art while being much more efficient.
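The core idea of merge-tree inference — deciding, for each node in a hierarchy of regions, whether to keep the merged region or its constituent parts — can be sketched with a simple tree recursion. This is an illustrative toy, not the dissertation's CRF-based method: the `Node` class, the per-region scores, and the greedy rule are all assumptions made for the example.

```python
# Illustrative sketch (NOT the dissertation's implementation): pick a
# consistent segmentation from a merge tree by comparing each merged
# region's score against the best total score of its children.
# Node structure and scores are hypothetical.

class Node:
    def __init__(self, score, children=None):
        self.score = score          # e.g., classifier confidence that this region is correct
        self.children = children or []

def best_segmentation(node):
    """Return (total_score, chosen_regions) for the subtree at `node`:
    either keep this merged region, or recurse and keep the best
    choices for each child subtree."""
    if not node.children:
        return node.score, [node]
    child_total, child_regions = 0.0, []
    for child in node.children:
        s, regions = best_segmentation(child)
        child_total += s
        child_regions += regions
    if node.score >= child_total:
        return node.score, [node]   # the merge wins: keep one region
    return child_total, child_regions  # the split wins: keep the parts
```

Because each leaf of the tree ends up covered by exactly one chosen node, the result is always a valid (non-overlapping, complete) segmentation; a learned model like a CRF replaces this greedy score comparison with joint inference over the whole tree.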

Finally, we extended our learning-based inference algorithm into a proofreading framework for manual correction of automatic segmentation results. We propose an efficient, easy-to-use user interface for very large, high-resolution 3D EM images. In particular, we utilize the probabilistic confidence levels of the graphical model to guide the user during interaction. We validate the effectiveness of this framework through robot (simulated-user) experiments and demonstrate clear advantages over baseline methods.
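The guidance idea — showing the user the model's least confident decisions first — can be sketched as an entropy-based ranking. This is a hypothetical illustration of the principle, not the actual interface: the region identifiers, probabilities, and function names are invented for the example.

```python
import math

# Hypothetical sketch of confidence-guided proofreading: regions whose
# predicted probability is closest to 0.5 (highest binary entropy) are
# the least certain, so they are queued for review first.

def binary_entropy(p):
    """Entropy (in bits) of a Bernoulli decision with probability p."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def review_order(region_probs):
    """Sort region ids so the least confident decisions come first.

    region_probs: dict mapping region id -> model probability that the
    automatic decision for that region is correct."""
    return sorted(region_probs, key=lambda r: -binary_entropy(region_probs[r]))
```

For example, a region scored at 0.5 would be shown before one scored at 0.95, letting the annotator spend effort where the automatic result is most likely to be wrong.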