Image Analysis

SCI's imaging work addresses fundamental questions in 2D and 3D image processing, including filtering, segmentation, surface reconstruction, and shape analysis. In low-level image processing, this effort has produced new nonparametric methods for modeling image statistics, which have resulted in better algorithms for denoising and reconstruction. Work with particle systems has led to new methods for visualizing and analyzing 3D surfaces. Our work in image processing also includes applications of advanced computing to 3D images, which has resulted in new parallel algorithms and real-time implementations on graphics processing units (GPUs). Application areas include medical image analysis, biological image processing, defense, environmental monitoring, and oil and gas.


Faculty:

Ross Whitaker: Segmentation
Sarang Joshi: Shape Statistics, Segmentation, Brain Atlasing
Tolga Tasdizen: Image Processing, Machine Learning
Chris Johnson: Diffusion Tensor Analysis
Shireen Elhabian: Image Analysis, Computer Vision


Publications in Image Analysis:


Dendritic spine shape analysis using disjunctive normal shape models
M.U. Ghani, F. Mesadi, S.D. Kanik, A.O. Argunsah, I. Israely, D. Unay, T. Tasdizen, M. Cetin. In 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI), IEEE, April, 2016.
DOI: 10.1109/isbi.2016.7493280

Analysis of dendritic spines is an essential task to understand the functional behavior of neurons. Their shape variations are known to be closely linked with neuronal activities. Spine shape analysis, in particular, can assist neuroscientists to identify this relationship. A novel shape representation has been proposed recently, called Disjunctive Normal Shape Models (DNSM). DNSM is a parametric shape representation and has proven to be successful in several segmentation problems. In this paper, we apply this parametric shape representation as a feature extraction algorithm. Further, we propose a kernel density estimation (KDE) based classification approach for dendritic spine classification. We evaluate our proposed approach on a data set of 242 spines, and observe that it outperforms the classical morphological feature based approach for spine classification. Our probabilistic framework also provides a way to examine the separability of spine shape classes in the likelihood ratio space, which leads to further insights about the nature of the shape analysis problem in this context.
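
As a rough illustration of the classification step described above (not the paper's implementation), the Python sketch below fits one kernel density estimate per spine class on extracted feature vectors, such as DNSM parameters, and assigns each test spine to the class with the highest likelihood; X_train, y_train, and X_test are hypothetical arrays, and equal class priors are assumed.

    # Hedged sketch of KDE-based classification; feature extraction (DNSM) is not shown.
    import numpy as np
    from sklearn.neighbors import KernelDensity

    def fit_class_kdes(X_train, y_train, bandwidth=0.5):
        """Fit one kernel density estimate per class label."""
        return {c: KernelDensity(bandwidth=bandwidth).fit(X_train[y_train == c])
                for c in np.unique(y_train)}

    def kde_classify(kdes, X_test):
        """Assign each sample to the class with the highest log-likelihood (equal priors)."""
        classes = sorted(kdes)
        log_liks = np.column_stack([kdes[c].score_samples(X_test) for c in classes])
        return np.array(classes)[np.argmax(log_liks, axis=1)]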



On comparison of manifold learning techniques for dendritic spine classification
M.U. Ghani, A.O. Argunsah, I. Israely, D. Unay, T. Tasdizen, M. Cetin. In 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI), IEEE, April, 2016.
DOI: 10.1109/isbi.2016.7493278

Dendritic spines are one of the key functional components of neurons. Their morphological changes are correlated with neuronal activity. Neuroscientists study spine shape variations to understand their relation with neuronal activity. Currently this analysis is performed manually; the availability of reliable automated tools would assist neuroscientists and accelerate this research. Previously, morphological feature based spine analysis has been performed and reported in the literature. In this paper, we explore the idea of using and comparing manifold learning techniques for classifying spine shapes. We start with automatically segmented data and construct our feature vector by stacking and concatenating the columns of images. Further, we apply unsupervised manifold learning algorithms and compare their performance in the context of dendritic spine classification. We achieved 85.95% accuracy on a dataset of 242 automatically segmented mushroom and stubby spines. We also observed that ISOMAP implicitly computes prominent features suitable for classification purposes.
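
The sketch below is a hypothetical version of such a pipeline, not the authors' code: segmented spine images are flattened into vectors, embedded with ISOMAP from scikit-learn, and classified with a simple nearest-neighbor classifier; images and labels are placeholder arrays.

    # Hedged sketch: unsupervised ISOMAP embedding followed by a simple classifier.
    from sklearn.manifold import Isomap
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import cross_val_score

    def spine_manifold_classification(images, labels, n_components=10):
        X = images.reshape(len(images), -1)              # stack image columns into vectors
        Z = Isomap(n_components=n_components).fit_transform(X)
        clf = KNeighborsClassifier(n_neighbors=5)
        return cross_val_score(clf, Z, labels, cv=5).mean()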



Nonparametric joint shape and feature priors for segmentation of dendritic spines
E. Erdil, L. Rada, A.O. Argunsah, D. Unay, T. Tasdizen, M. Cetin. In 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI), IEEE, April, 2016.
DOI: 10.1109/isbi.2016.7493279

Multimodal shape density estimation is a challenging task in many biomedical image segmentation problems. Existing techniques in the literature estimate the underlying shape distribution by extending the Parzen density estimator to the space of shapes. Such density estimates are only expressed in terms of distances between shapes, which may not be sufficient for ensuring accurate segmentation when the observed intensities provide very little information about the object boundaries. In such scenarios, employing additional shape-dependent discriminative features as priors and exploiting both shape and feature priors can aid the segmentation process. In this paper, we propose a segmentation algorithm that uses nonparametric joint shape and feature priors using a Parzen density estimator. The joint prior density estimate is expressed in terms of distances between shapes and distances between features. We incorporate the learned joint shape and feature prior distribution into a maximum a posteriori estimation framework for segmentation. The resulting optimization problem is solved using active contours. We present experimental results on dendritic spine segmentation in 2-photon microscopy images which involve a multimodal shape density.
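
To make the form of such a joint prior concrete, here is a minimal sketch of a Parzen-style joint density written purely in terms of user-supplied shape and feature distances; the distance functions, kernel widths, and training examples are assumptions for illustration, not the paper's exact construction.

    # Hedged sketch of a joint Parzen (kernel) density over shapes and features.
    import numpy as np

    def joint_parzen_density(shape, feat, train_shapes, train_feats,
                             d_shape, d_feat, sigma_s=1.0, sigma_f=1.0):
        """p(shape, feat) ~ average of Gaussian kernels on shape and feature distances."""
        vals = [np.exp(-d_shape(shape, s) ** 2 / (2 * sigma_s ** 2)) *
                np.exp(-d_feat(feat, f) ** 2 / (2 * sigma_f ** 2))
                for s, f in zip(train_shapes, train_feats)]
        return float(np.mean(vals))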



MCMC Shape Sampling for Image Segmentation with Nonparametric Shape Priors
E. Erdil, M. Cetin, T. Tasdizen. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, June, 2016.
DOI: 10.1109/cvpr.2016.51

Segmenting images of low quality or with missing data is a challenging problem. Integrating statistical prior information about the shapes to be segmented can improve the segmentation results significantly. Most shape-based segmentation algorithms optimize an energy functional and find a point estimate for the object to be segmented. This does not provide a measure of the degree of confidence in that result, nor does it provide a picture of other probable solutions based on the data and the priors. With a statistical view, addressing these issues would involve the problem of characterizing the posterior densities of the shapes of the objects to be segmented. For such characterization, we propose a Markov chain Monte Carlo (MCMC) sampling-based image segmentation algorithm that uses statistical shape priors. In addition to better characterization of the statistical structure of the problem, such an approach would also have the potential to address issues with getting stuck at local optima, suffered by existing shape-based segmentation methods. Our approach is able to characterize the posterior probability density in the space of shapes through its samples, and to return multiple solutions, potentially from different modes of a multimodal probability density, which would be encountered, e.g., in segmenting objects from multiple shape classes. We present promising results on a variety of data sets. We also provide an extension for segmenting shapes of objects with parts that can go through independent shape variations. This extension involves the use of local shape priors on object parts and provides robustness to limitations in shape training data size.
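
For intuition, the following is a generic Metropolis-Hastings skeleton for drawing shape samples under a prior; the data term, shape prior, proposal, and shape representation are all placeholders (a symmetric proposal is assumed), and this is not the paper's specific sampler.

    # Hedged Metropolis-Hastings sketch for sampling shapes from a posterior.
    import numpy as np

    def mcmc_shape_sampling(init_shape, log_data_term, log_shape_prior, propose,
                            n_iters=1000, seed=0):
        rng = np.random.default_rng(seed)
        shape = init_shape
        log_post = log_data_term(shape) + log_shape_prior(shape)
        samples = []
        for _ in range(n_iters):
            cand = propose(shape, rng)                       # symmetric proposal assumed
            cand_log_post = log_data_term(cand) + log_shape_prior(cand)
            if np.log(rng.random()) < cand_log_post - log_post:
                shape, log_post = cand, cand_log_post        # accept
            samples.append(shape)
        return samples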



Disjunctive Normal Unsupervised LDA for P300-based Brain-Computer Interfaces
M. Elwardy, T. Tasdizen, M. Cetin. In 2016 24th Signal Processing and Communication Application Conference (SIU), IEEE, May, 2016.
DOI: 10.1109/siu.2016.7496226

Can people use text-entry based brain-computer interface (BCI) systems and start a free spelling mode without any calibration session? Brain activities differ largely across people and across sessions for the same user. Thus, how can the text-entry system classify the desired character among the other characters in the P300-based BCI speller matrix? In this paper, we introduce a new unsupervised classifier for a P300-based BCI speller, which uses a disjunctive normal form representation to define an energy function involving a logistic sigmoid function for classification. Our proposed classifier updates the randomly initialized weights while performing classification of the P300 signals from the recorded data, exploiting knowledge of the sequence of row/column highlights. To verify the effectiveness of the proposed method, we performed an experimental analysis on data from 7 healthy subjects, collected in our laboratory. We compare the proposed unsupervised method to a baseline supervised linear discriminant analysis (LDA) classifier and demonstrate its effectiveness.



Disjunctive normal level set: An efficient parametric implicit method
F. Mesadi, M. Cetin, T. Tasdizen. In 2016 IEEE International Conference on Image Processing (ICIP), IEEE, September, 2016.
DOI: 10.1109/icip.2016.7533171

Level set methods are widely used for image segmentation because of their capability to handle topological changes. In this paper, we propose a novel parametric level set method called Disjunctive Normal Level Set (DNLS), and apply it to both two-phase (single object) and multiphase (multi-object) image segmentation. The DNLS is formed by a union of polytopes, which themselves are formed by intersections of half-spaces. The proposed level set framework has the following major advantages compared to other level set methods available in the literature. First, segmentation using DNLS converges much faster. Second, the DNLS level set function remains regular throughout its evolution. Third, the proposed multiphase version of the DNLS is less sensitive to initialization, and its computational cost and memory requirement remain almost constant as the number of objects to be simultaneously segmented grows. The experimental results show the potential of the proposed method.



Mutual exclusivity loss for semi-supervised deep learning
M. Sajjadi, M. Javanmardi, T. Tasdizen. In 2016 IEEE International Conference on Image Processing (ICIP), IEEE, September, 2016.

In this paper we consider the problem of semi-supervised learning with deep Convolutional Neural Networks (ConvNets). Semi-supervised learning is motivated by the observation that unlabeled data is cheap and can be used to improve the accuracy of classifiers. In this paper we propose an unsupervised regularization term that explicitly forces the classifier's predictions for multiple classes to be mutually exclusive and effectively guides the decision boundary to lie in the low-density space between the manifolds corresponding to different classes of data. Our proposed approach is general and can be used with any backpropagation-based learning method. We show through different experiments that our method can improve the object recognition performance of ConvNets using unlabeled data.
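
As a rough illustration of the mutual-exclusivity idea (not necessarily the exact loss used in the paper), the sketch below rewards prediction vectors in which exactly one class score is high and the others are low; probs is a hypothetical array of per-sample class scores in [0, 1].

    # Hedged NumPy sketch of a mutual-exclusivity style penalty on unlabeled predictions.
    import numpy as np

    def mutual_exclusivity_penalty(probs):
        """probs: (n_samples, n_classes) class scores in [0, 1]."""
        n, k = probs.shape
        penalty = 0.0
        for i in range(n):
            p = probs[i]
            # soft score for the event that exactly one class is "on"
            exactly_one = sum(p[c] * np.prod(np.delete(1.0 - p, c)) for c in range(k))
            penalty += 1.0 - exactly_one
        return penalty / n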



SSHMT: Semi-supervised Hierarchical Merge Tree for Electron Microscopy Image Segmentation
T. Liu, M. Zhang, M. Javanmardi, N. Ramesh, T. Tasdizen. In Lecture Notes in Computer Science, Vol. 9905, Springer International Publishing, pp. 144--159. 2016.
DOI: 10.1007/978-3-319-46448-0_9

Region-based methods have proven necessary for improving segmentation accuracy of neuronal structures in electron microscopy (EM) images. Most region-based segmentation methods use a scoring function to determine region merging. Such functions are usually learned with supervised algorithms that demand considerable ground truth data, which are costly to collect. We propose a semi-supervised approach that reduces this demand. Based on a merge tree structure, we develop a differentiable unsupervised loss term that enforces consistent predictions from the learned function. We then propose a Bayesian model that combines the supervised and the unsupervised information for probabilistic learning. The experimental results on three EM data sets demonstrate that by using a subset of only 3% to 7% of the entire ground truth data, our approach consistently performs close to the state-of-the-art supervised method with the full labeled data set, and significantly outperforms the supervised method with the same labeled subset.



Dendritic Spine Shape Analysis: A Clustering Perspective
M.U. Ghani, E. Erdil, S.D. Kanik, A.O. Argunsah, A. Hobbiss, I. Israely, D. Unay, T. Tasdizen, M. Cetin. In Lecture Notes in Computer Science, Springer International Publishing, pp. 256--273. 2016.
DOI: 10.1007/978-3-319-46604-0_19

Functional properties of neurons are strongly coupled with their morphology. Changes in neuronal activity alter morphological characteristics of dendritic spines. The first step towards understanding the structure-function relationship is to group spines into the main spine classes reported in the literature. Shape analysis of dendritic spines can help neuroscientists understand the underlying relationships. Due to the unavailability of reliable automated tools, this analysis is currently performed manually, which is a time-intensive and subjective task. Several studies on spine shape classification have been reported in the literature; however, there is an ongoing debate on whether distinct spine shape classes exist or whether spines should be modeled through a continuum of shape variations. Another challenge is the subjectivity and bias that is introduced due to the supervised nature of classification approaches. In this paper, we aim to address these issues by presenting a clustering perspective. In this context, clustering may serve both confirmation of known patterns and discovery of new ones. We perform cluster analysis on two-photon microscopic images of spines using morphological, shape, and appearance based features and gain insights into the spine shape analysis problem. We use histogram of oriented gradients (HOG), disjunctive normal shape models (DNSM), morphological features, and intensity profile based features for cluster analysis. We use x-means to perform cluster analysis, which selects the number of clusters automatically using the Bayesian information criterion (BIC). For all features, this analysis produces 4 clusters, and we observe the formation of at least one cluster consisting of spines that are difficult to assign to a known class. This observation supports the argument of intermediate shape types.
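
As a simplified stand-in for the x-means/BIC step described above (not the authors' pipeline), the sketch below fits Gaussian mixtures with different numbers of clusters to spine feature vectors and keeps the model with the lowest BIC; features is a hypothetical (n_spines, n_features) array of, e.g., HOG or DNSM descriptors.

    # Hedged sketch: BIC-driven selection of the number of clusters.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def cluster_with_bic(features, k_range=range(2, 9), seed=0):
        best_model, best_bic = None, np.inf
        for k in k_range:
            gmm = GaussianMixture(n_components=k, random_state=seed).fit(features)
            bic = gmm.bic(features)
            if bic < best_bic:
                best_model, best_bic = gmm, bic
        return best_model.predict(features), best_model.n_components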



Regularization With Stochastic Transformations and Perturbations for Deep Semi-Supervised Learning
M. Sajjadi, M. Javanmardi, T. Tasdizen. In CoRR, Vol. abs/1606.04586, 2016.

Effective convolutional neural networks are trained on large sets of labeled data. However, creating large labeled datasets is a very costly and time-consuming task. Semi-supervised learning uses unlabeled data to train a model with higher accuracy when there is a limited set of labeled data available. In this paper, we consider the problem of semi-supervised learning with convolutional neural networks. Techniques such as randomized data augmentation, dropout and random max-pooling provide better generalization and stability for classifiers that are trained using gradient descent. Multiple passes of an individual sample through the network might lead to different predictions due to the non-deterministic behavior of these techniques. We propose an unsupervised loss function that takes advantage of the stochastic nature of these methods and minimizes the difference between the predictions of multiple passes of a training sample through the network. We evaluate the proposed method on several benchmark datasets.
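
A minimal sketch of that idea, under the assumption of a generic model function and a random augmentation function (both placeholders, not the paper's network or training code): pass several perturbed copies of an unlabeled sample through the model and penalize disagreement among the resulting prediction vectors.

    # Hedged NumPy sketch of a transformation/stability consistency penalty.
    import numpy as np

    def stability_loss(model, x, augment, n_passes=4, seed=0):
        rng = np.random.default_rng(seed)
        preds = np.stack([model(augment(x, rng)) for _ in range(n_passes)])
        loss = 0.0
        for i in range(n_passes):                  # sum of squared pairwise differences
            for j in range(i + 1, n_passes):
                loss += np.sum((preds[i] - preds[j]) ** 2)
        return loss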



Disjunctive Normal Networks
M. Sajjadi, S.M. Seyedhosseini, T. Tasdizen. In Neurocomputing, Vol. 218, Elsevier BV, pp. 276--285. Dec, 2016.
DOI: 10.1016/j.neucom.2016.08.047

Artificial neural networks are powerful pattern classifiers. They form the basis of the highly successful and popular convolutional networks, which offer state-of-the-art performance on several computer vision tasks. However, in many general and non-vision tasks, neural networks are surpassed by methods such as support vector machines and random forests that are also easier to use and faster to train. One reason is that the backpropagation algorithm, which is used to train artificial neural networks, usually starts from a random weight initialization, which complicates the optimization process, leading to long training times, and increases the risk of stopping in a poor local minimum. Several initialization schemes and pre-training methods have been proposed to improve the efficiency and performance of training a neural network. However, this problem arises from the architecture of neural networks. We use the disjunctive normal form and approximate the boolean conjunction operations with products to construct a novel network architecture. The proposed model can be trained by minimizing an error function and it allows an effective and intuitive initialization which avoids poor local minima. We show that the proposed structure provides efficient coverage of the decision space which leads to state-of-the-art classification accuracy and fast training times.
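
To make the construction concrete, here is a hedged sketch of a disjunctive-normal-form style decision function: each polytope is a soft conjunction (product) of sigmoid half-space indicators, and the union of polytopes is formed with the complement trick 1 - prod(1 - conjunction); the weight shapes are illustrative only, not the paper's parameterization.

    # Hedged sketch of a DNF-style decision function built from sigmoid half-spaces.
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def dnf_forward(x, W, b):
        """x: (d,), W: (n_polytopes, n_halfspaces, d), b: (n_polytopes, n_halfspaces)."""
        halfspaces = sigmoid(np.einsum('pqd,d->pq', W, x) + b)   # soft half-space indicators
        conjunctions = np.prod(halfspaces, axis=1)               # soft intersections (polytopes)
        return 1.0 - np.prod(1.0 - conjunctions)                 # soft union (disjunction)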



Image Segmentation Using Hierarchical Merge Tree
T. Liu, S.M. Seyedhosseini, T. Tasdizen. In IEEE Transactions on Image Processing, Vol. 25, No. 10, IEEE, pp. 4596--4607. Oct, 2016.
DOI: 10.1109/tip.2016.2592704

This paper investigates one of the most fundamental computer vision problems: image segmentation. We propose a supervised hierarchical approach to object-independent image segmentation. Starting with oversegmenting superpixels, we use a tree structure to represent the hierarchy of region merging, by which we reduce the problem of segmenting image regions to finding a set of label assignments to tree nodes. We formulate the tree structure as a constrained conditional model to associate region merging with likelihoods predicted using an ensemble boundary classifier. Final segmentations can then be inferred by finding globally optimal solutions to the model efficiently. We also present an iterative training and testing algorithm that generates various tree structures and combines them to emphasize accurate boundaries by segmentation accumulation. Experimental results and comparisons with other recent methods on six public datasets demonstrate that our approach achieves state-of-the-art region accuracy and is competitive in image segmentation without semantic priors.



Compressed sensing for rapid late gadolinium enhanced imaging of the left atrium: A preliminary study
S.K. Iyer, T. Tasdizen, N. Burgon, E. Kholmovski, N. Marrouche, G. Adluru, E.V.R. DiBella. In Magnetic Resonance Imaging, Vol. 34, No. 7, Elsevier BV, pp. 846--854. September, 2016.
DOI: 10.1016/j.mri.2016.03.002

Current late gadolinium enhancement (LGE) imaging of left atrial (LA) scar or fibrosis is relatively slow and requires 5–15 min to acquire an undersampled (R = 1.7) 3D navigated dataset. The GeneRalized Autocalibrating Partially Parallel Acquisitions (GRAPPA) based parallel imaging method is the current clinical standard for accelerating 3D LGE imaging of the LA and permits an acceleration factor ~ R = 1.7. Two compressed sensing (CS) methods have been developed to achieve higher acceleration factors: a patch based collaborative filtering technique tested with acceleration factor R ~ 3, and a technique that uses a 3D radial stack-of-stars acquisition pattern (R ~ 1.8) with a 3D total variation constraint. The long reconstruction time of these CS methods makes them unwieldy to use, especially the patch based collaborative filtering technique. In addition, the effect of CS techniques on the quantification of percentage of scar/fibrosis is not known.

We sought to develop a practical compressed sensing method for imaging the LA at high acceleration factors. In order to develop a clinically viable method with short reconstruction time, a Split Bregman (SB) reconstruction method with 3D total variation (TV) constraints was developed and implemented. The method was tested on 8 atrial fibrillation patients (4 pre-ablation and 4 post-ablation datasets). Blur metric, normalized mean squared error and peak signal to noise ratio were used as metrics to analyze the quality of the reconstructed images. Quantification of the extent of LGE was performed on the undersampled images and compared with the fully sampled images. Quantification of scar from post-ablation datasets and quantification of fibrosis from pre-ablation datasets showed that acceleration factors up to R ~ 3.5 gave good 3D LGE images of the LA wall, using a 3D TV constraint and constrained SB methods. This corresponds to reducing the scan time by half, compared to currently used GRAPPA methods. Reconstruction of 3D LGE images using the SB method was over 20 times faster than standard gradient descent methods.
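
For readers unfamiliar with TV-constrained reconstruction, the toy sketch below shows the structure of the problem (a data-fidelity term on undersampled k-space plus a total-variation penalty), solved here by plain gradient descent on a smoothed 2D TV term; it is not the Split Bregman solver developed in the paper, and kspace and mask are hypothetical inputs.

    # Toy TV-regularized reconstruction of a 2D image from undersampled k-space.
    import numpy as np

    def tv_grad(x, eps=1e-6):
        """Gradient of a smoothed isotropic total-variation penalty."""
        dx = np.diff(x, axis=0, append=x[-1:, :])
        dy = np.diff(x, axis=1, append=x[:, -1:])
        mag = np.sqrt(dx ** 2 + dy ** 2 + eps)
        div_x = dx / mag - np.roll(dx / mag, 1, axis=0)
        div_y = dy / mag - np.roll(dy / mag, 1, axis=1)
        return -(div_x + div_y)

    def tv_recon(kspace, mask, lam=0.01, step=0.5, n_iters=100):
        x = np.abs(np.fft.ifft2(kspace * mask, norm='ortho'))     # zero-filled start
        for _ in range(n_iters):
            resid = mask * (np.fft.fft2(x, norm='ortho') - kspace)
            grad = np.real(np.fft.ifft2(resid, norm='ortho')) + lam * tv_grad(x)
            x = x - step * grad
        return x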



Split Bregman multicoil accelerated reconstruction technique: A new framework for rapid reconstruction of cardiac perfusion MRI
S.K. Iyer, T. Tasdizen, D. Likhite, E.V.R. DiBella. In Medical Physics, Vol. 43, No. 4, Wiley-Blackwell, pp. 1969--1981. March, 2016.
DOI: 10.1118/1.4943643

Purpose:
Rapid reconstruction of undersampled multicoil MRI data with iterative constrained reconstruction methods is a challenge. The authors sought to develop a new substitution based variable splitting algorithm for faster reconstruction of multicoil cardiac perfusion MRI data.

Methods:
The new method, split Bregman multicoil accelerated reconstruction technique (SMART), uses a combination of split Bregman based variable splitting and iterative reweighting techniques to achieve fast convergence. Total variation constraints are used along the spatial and temporal dimensions. The method is tested on nine ECG-gated dog perfusion datasets, acquired with a 30-ray golden ratio radial sampling pattern and ten ungated human perfusion datasets, acquired with a 24-ray golden ratio radial sampling pattern. Image quality and reconstruction speed are evaluated and compared to a gradient descent (GD) implementation and to multicoil k-t SLR, a reconstruction technique that uses a combination of sparsity and low rank constraints.

Results:
Comparisons based on blur metric and visual inspection showed that SMART images had lower blur and better texture as compared to the GD implementation. On average, the GD based images had an ∼18% higher blur metric as compared to SMART images. Reconstruction of dynamic contrast enhanced (DCE) cardiac perfusion images using the SMART method was ∼6 times faster than standard gradient descent methods. k-t SLR and SMART produced images with comparable image quality, though SMART was ∼6.8 times faster than k-t SLR.

Conclusions:
The SMART method is a promising approach to reconstruct good quality multicoil images from undersampled DCE cardiac perfusion data rapidly.



Semantic Image Segmentation with Contextual Hierarchical Models
S.M. Seyedhosseini, T. Tasdizen. In IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 38, No. 5, IEEE, pp. 951--964. May, 2016.
DOI: 10.1109/tpami.2015.2473846

Semantic segmentation is the problem of assigning an object label to each pixel. It unifies the image segmentation and object recognition problems. The importance of using contextual information in semantic segmentation frameworks has been widely realized in the field. We propose a contextual framework, called the contextual hierarchical model (CHM), which learns contextual information in a hierarchical framework for semantic segmentation. At each level of the hierarchy, a classifier is trained based on downsampled input images and outputs of previous levels. Our model then incorporates the resulting multi-resolution contextual information into a classifier to segment the input image at the original resolution. This training strategy allows for optimization of a joint posterior probability at multiple resolutions through the hierarchy. The contextual hierarchical model is purely based on the input image patches and does not make use of any fragments or shape examples. Hence, it is applicable to a variety of problems such as object segmentation and edge detection. We demonstrate that CHM performs on par with the state of the art on the Stanford background and Weizmann horse datasets. It also outperforms state-of-the-art edge detection methods on the NYU depth dataset and achieves state-of-the-art results on the Berkeley segmentation dataset (BSDS 500).
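
The sketch below conveys the hierarchical training idea in a very reduced form and is not the paper's CHM implementation: a per-pixel classifier is trained at each resolution, using raw intensity plus the upsampled probability map from the previous (coarser) level as context; image and labels are hypothetical 2D arrays with binary labels, and the feature set is deliberately minimal.

    # Hedged sketch of hierarchical, context-passing per-pixel classification.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def train_hierarchy(image, labels, n_levels=3):
        classifiers = []
        context = np.zeros(image.shape)                     # full-resolution context map
        for level in range(n_levels):
            step = 2 ** (n_levels - 1 - level)              # coarsest level first
            img_s, lab_s, ctx_s = (a[::step, ::step] for a in (image, labels, context))
            X = np.column_stack([img_s.ravel(), ctx_s.ravel()])
            clf = RandomForestClassifier(n_estimators=50).fit(X, lab_s.ravel())
            prob = clf.predict_proba(X)[:, 1].reshape(img_s.shape)
            up = np.repeat(np.repeat(prob, step, axis=0), step, axis=1)
            context = up[:image.shape[0], :image.shape[1]]  # pass context to finer levels
            classifiers.append(clf)
        return classifiers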



Network modeling of Arctic melt ponds
M. Barjatia, T. Tasdizen, B. Song, K.M. Golden. In Cold Regions Science and Technology, Vol. 124, Elsevier BV, pp. 40--53. April, 2016.
DOI: 10.1016/j.coldregions.2015.11.019

The recent precipitous losses of summer Arctic sea ice have outpaced the projections of most climate models. A number of efforts to improve these models have focused in part on a more accurate accounting of sea ice albedo or reflectance. In late spring and summer, the albedo of the ice pack is determined primarily by melt ponds that form on the sea ice surface. The transition of pond configurations from isolated structures to interconnected networks is critical in allowing the lateral flow of melt water toward drainage features such as large brine channels, fractures, and seal holes, which can alter the albedo by removing the melt water. Moreover, highly connected ponds can influence the formation of fractures and leads during ice break-up. Here we develop algorithmic techniques for mapping photographic images of melt ponds onto discrete conductance networks which represent the geometry and connectedness of pond configurations. The effective conductivity of the networks is computed to approximate the ease of lateral flow. We implement an image processing algorithm with mathematical morphology operations to produce a conductance matrix representation of the melt ponds. Basic clustering and edge elimination, using undirected graphs, are then used to map the melt pond connections and reduce the conductance matrix to include only direct connections. The results for images taken during different times of the year are visually inspected and the number of mislabels is used to evaluate performance.
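
As a small illustration of the final step (computing effective conductivity of a pond network), the sketch below builds the weighted graph Laplacian from a given conductance matrix and solves for node potentials with a unit current injected at a source node and extracted at a sink node; the extraction of the conductance matrix from imagery is not shown, and this is an assumed simplification rather than the paper's code.

    # Hedged sketch: effective conductance between two nodes of a conductance network.
    import numpy as np

    def effective_conductance(C, source, sink):
        """C: symmetric (n, n) conductance matrix with zero diagonal."""
        L = np.diag(C.sum(axis=1)) - C                 # weighted graph Laplacian
        b = np.zeros(len(C))
        b[source], b[sink] = 1.0, -1.0                 # inject / extract unit current
        v = np.linalg.lstsq(L, b, rcond=None)[0]       # potentials (L is singular; use lstsq)
        return 1.0 / (v[source] - v[sink])             # conductance = I / deltaV with I = 1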



Evaluating Shape Alignment via Ensemble Visualization
M. Raj, M. Mirzargar, R. Kirby, R. Whitaker, J. Preston. In IEEE Computer Graphics and Applications, Vol. 36, No. 3, IEEE, pp. 60--71. May, 2016.
DOI: 10.1109/mcg.2015.70

The visualization of variability in surfaces embedded in 3D, which is a type of ensemble uncertainty visualization, provides a means of understanding the underlying distribution of a collection or ensemble of surfaces. This work extends the contour boxplot technique to 3D and evaluates it against an enumeration-style visualization of the ensemble members and other conventional visualizations used by atlas builders. The authors demonstrate the efficacy of using the 3D contour boxplot ensemble visualization technique to analyze shape alignment and variability in atlas construction and analysis as a real-world application.



Development of Cortical Shape in the Human Brain from 6 to 24 Months of Age via a Novel Measure of Shape Complexity
S. Kim, I. Lyu, V. Fonov, C. Vachet, H. Hazlett, R. Smith, J. Piven, S. Dager, R. Mckinstry, J. Pruett, A. Evans, D. Collins, K. Botteron, R. Schultz, G. Gerig, M. Styner. In NeuroImage, Vol. 135, Elsevier, pp. 163--176. July, 2016.
DOI: 10.1016/j.neuroimage.2016.04.053

The quantification of local surface morphology in the human cortex is important for examining population differences as well as developmental changes in neurodegenerative or neurodevelopmental disorders. We propose a novel cortical shape measure, referred to as the 'shape complexity index' (SCI), that represents localized shape complexity as the difference between the observed distribution of local surface topology, as quantified by the shape index (SI) measure, and its best-fitting simple topological model within a given neighborhood. We apply a relatively small, adaptive geodesic kernel to calculate the SCI. Due to the small size of the kernel, the proposed SCI measure captures fine differences of cortical shape. With this novel cortical feature, we aim to capture comparatively small local surface changes that reflect a) the widening versus deepening of sulcal and gyral regions, as well as b) the emergence and development of secondary and tertiary sulci. Current cortical shape measures, such as the gyrification index (GI) or intrinsic curvature measures, investigate the cortical surface at a different scale and are less well suited to capture these particular cortical surface changes. In our experiments, the proposed SCI demonstrates higher complexity in the gyral/sulcal wall regions, lower complexity in wider gyral ridges and lowest complexity in wider sulcal fundus regions. In early postnatal brain development, our experiments show that SCI reveals a pattern of increased cortical shape complexity with age, as well as sexual dimorphisms in the insula, middle cingulate, parieto-occipital sulcal and Broca's regions. Overall, sex differences were greatest at 6 months of age and were reduced at 24 months, with the difference pattern switching from higher complexity in males at 6 months to higher complexity in females at 24 months. This is the first study of longitudinal cortical complexity maturation and sex differences in the early postnatal period from 6 to 24 months of age with fine-scale cortical shape measures. These results provide information that complements previous studies of gyrification index in early brain development.



Image registration and segmentation in longitudinal MRI using temporal appearance modeling
Y. Gao, M. Zhang, K. Grewen, P. T. Fletcher, G. Gerig. In 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI), IEEE, pp. 629--632. April, 2016.
DOI: 10.1109/isbi.2016.7493346



Optimal parameter map estimation for shape representation: A generative approach
S. Elhabian, P. Agrawal, R. Whitaker. In 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI), IEEE, pp. 660--663. April, 2016.
DOI: 10.1109/isbi.2016.7493353

Probabilistic label maps are a useful tool for important medical image analysis tasks such as segmentation, shape analysis, and atlas building. Existing methods typically rely on blurred signed distance maps or smoothed label maps to model uncertainties and shape variabilities, which do not conform to any generative model or estimation process, and are therefore suboptimal. In this paper, we propose to learn probabilistic label maps using a generative model on a given set of binary label maps. The proposed approach generalizes well on unseen data while simultaneously capturing the variability in the training samples. The efficiency of the proposed approach is demonstrated for consensus generation and shape-based clustering using synthetic datasets as well as left atrial segmentations from late-gadolinium enhancement MRI.