
SCI Publications

2014


S. Elhabian, Y. Gur, J. Piven, M. Styner, I. Leppert, G.B. Pike, G. Gerig. “Subject-Motion Correction in HARDI Acquisitions: Choices and Consequences,” In Proceedings of the 2014 Joint Annual Meeting ISMRM-ESMRMB, pp. (accepted). 2014.

ABSTRACT

Unlike anatomical MRI where subject motion can most often be assessed by quick visual quality control, the detection, characterization and evaluation of the impact of motion in diffusion imaging are challenging issues due to the sensitivity of diffusion weighted imaging (DWI) to motion originating from vibration, cardiac pulsation, breathing and head movement. Post-acquisition motion correction is widely performed, e.g., using the open-source DTIprep software [1,2] or TORTOISE [3], but in particular in high angular resolution diffusion imaging (HARDI), users often do not fully understand the consequences of different types of correction schemes on the final analysis, and whether those choices may introduce confounding factors when comparing populations. Although there is excellent theoretical work on the number of directional DWI and its effect on the quality and crossing fiber resolution of orientation distribution functions (ODF), standard users lack clear guidelines and recommendations in practical settings. This research investigates motion correction using transformation and interpolation of affected DWI directions versus the exclusion of subsets of DWIs, and its effects on diffusion measurements on the reconstructed fiber orientation distribution functions and on the estimated fiber orientations. The various effects are systematically studied via a newly developed synthetic phantom and also on real HARDI data.



S. Elhabian, Y. Gur, J. Piven, M. Styner, I. Leppert, G. Bruce Pike, G. Gerig. “Motion is inevitable: The impact of motion correction schemes on HARDI reconstructions,” In Proceedings of the MICCAI 2014 Workshop on Computational Diffusion MRI, September, 2014.

ABSTRACT

Diffusion weighted imaging (DWI) is known to be prone to artifacts related to motion originating from subject movement, cardiac pulsation and breathing, but also to mechanical issues such as table vibrations. Given the necessity for rigorous quality control and motion correction, users are often left to use simple heuristics to select correction schemes, but do not fully understand the consequences of such choices on the final analysis, moreover being at risk to introduce confounding factors in population studies. This paper reports work in progress towards a comprehensive evaluation framework of HARDI motion correction to support selection of optimal methods to correct for even subtle motion. We make use of human brain HARDI data from a well controlled motion experiment to simulate various degrees of motion corruption. Choices for correction include exclusion or registration of motion corrupted directions, with different choices of interpolation. The comparative evaluation is based on studying effects of motion correction on three different metrics commonly used when using DWI data, including similarity of fiber orientation distribution functions (fODFs), global brain connectivity via Graph Diffusion Distance (GDD), and reproducibility of prominent and anatomically defined fiber tracts. Effects of various settings are systematically explored and illustrated, leading to the somewhat surprising conclusion that a best choice is the alignment and interpolation of all DWI directions, not only directions considered as corrupted.



S.Y. Elhabian, Y. Gur, C. Vachet, J. Piven, M.A. Styner, I.R. Leppert, B. Pike, G. Gerig. “Subject-Motion Correction in HARDI Acquisitions: Choices and Consequences,” In Frontiers in Neurology - Brain Imaging Methods, 2014.
DOI: 10.3389/fneur.2014.00240

ABSTRACT

Diffusion-weighted imaging (DWI) is known to be prone to artifacts related to motion originating from subject movement, cardiac pulsation and breathing, but also to mechanical issues such as table vibrations. Given the necessity for rigorous quality control and motion correction, users are often left to use simple heuristics to select correction schemes, which involves simple qualitative viewing of the set of DWI data, or the selection of transformation parameter thresholds for detection of motion outliers. The scientific community offers strong theoretical and experimental work on noise reduction and orientation distribution function (ODF) reconstruction techniques for HARDI data, where post-acquisition motion correction is widely performed, e.g., using the open-source DTIprep software (Oguz et al., 2014), FSL (the FMRIB Software Library) (Jenkinson et al., 2012) or TORTOISE (Pierpaoli et al., 2010). Nonetheless, effects and consequences of the selection of motion correction schemes on the final analysis, and the eventual risk of introducing confounding factors when comparing populations, are much less known and far beyond simple intuitive guessing. Hence, standard users lack clear guidelines and recommendations in practical settings. This paper reports a comprehensive evaluation framework to systematically assess the outcome of different motion correction choices commonly used by the scientific community on different DWI-derived measures. We make use of human brain HARDI data from a well-controlled motion experiment to simulate various degrees of motion corruption and noise contamination. Choices for correction include exclusion/scrubbing or registration of motion corrupted directions with different choices of interpolation, as well as the option of interpolation of all directions.
The comparative evaluation is based on a study of the impact of motion correction using four metrics that quantify (1) similarity of fiber orientation distribution functions (fODFs), (2) deviation of local fiber orientations, (3) global brain connectivity via Graph Diffusion Distance (GDD) and (4) the reproducibility of prominent and anatomically defined fiber tracts. Effects of various motion correction choices are systematically explored and illustrated, leading to the general conclusion that users should be discouraged from setting ad hoc thresholds on the estimated motion parameters beyond which volumes are deemed corrupted.



T. Etiene, D. Jonsson, T. Ropinski, C. Scheidegger, J.L.D. Comba, L. G. Nonato, R. M. Kirby, A. Ynnerman, C. T. Silva. “Verifying Volume Rendering Using Discretization Error Analysis,” In IEEE Transactions on Visualization and Computer Graphics, Vol. 20, No. 1, IEEE, pp. 140-154. January, 2014.

ABSTRACT

We propose an approach for verification of volume rendering correctness based on an analysis of the volume rendering integral, the basis of most DVR algorithms. With respect to the most common discretization of this continuous model (Riemann summation), we make assumptions about the impact of parameter changes on the rendered results and derive convergence curves describing the expected behavior. Specifically, we progressively refine the number of samples along the ray, the grid size, and the pixel size, and evaluate how the errors observed during refinement compare against the expected approximation errors. We derive the theoretical foundations of our verification approach, explain how to realize it in practice, and discuss its limitations. We also report the errors identified by our approach when applied to two publicly available volume rendering packages.
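The refinement idea at the heart of this verification approach can be sketched in a few lines: under Riemann summation, halving the sample spacing along a ray should roughly halve the error, and a deviation from that expected first-order convergence flags an implementation problem. The integrand below is an arbitrary smooth stand-in for the volume rendering integrand, not taken from the paper.

```python
import math

def integrand(t):
    # Arbitrary smooth stand-in for the volume rendering integrand along one ray.
    return math.exp(-t) * math.sin(t) ** 2

def riemann_sum(f, a, b, n):
    # Left Riemann sum with n samples along the ray segment [a, b].
    h = (b - a) / n
    return sum(f(a + i * h) for i in range(n)) * h

# Reference value from a very fine sampling.
exact = riemann_sum(integrand, 0.0, 4.0, 200_000)

# Errors under progressive refinement of the sample count.
errors = [abs(riemann_sum(integrand, 0.0, 4.0, n) - exact) for n in (64, 128, 256)]

# Observed convergence order between successive refinements; for a left
# Riemann sum this should be close to 1 (first order).
orders = [math.log2(errors[i] / errors[i + 1]) for i in range(len(errors) - 1)]
print(orders)
```

Plotting such observed orders against the expected order, for sampling, grid, and pixel refinement, is the essence of the verification procedure the paper develops.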



A. Faucett, T. Harman, T. Ameel. “Computational Determination of the Modified Vortex Shedding Frequency for a Rigid, Truncated, Wall-Mounted Cylinder in Cross Flow,” In Proceedings of the ASME International Mechanical Engineering Congress and Exposition (IMECE), Volume 10: Micro- and Nano-Systems Engineering and Packaging, Montreal, November, 2014.
DOI: 10.1115/imece2014-39064



J. Fishbaugh, M. Prastawa, G. Gerig, S. Durrleman. “Geodesic Regression of Image and Shape Data for Improved Modeling of 4D Trajectories,” In Proceedings of the 2014 IEEE International Symposium on Biomedical Imaging (ISBI), pp. (accepted). 2014.

ABSTRACT

A variety of regression schemes have been proposed on images or shapes, although available methods do not handle them jointly. In this paper, we present a framework for joint image and shape regression which incorporates images as well as anatomical shape information in a consistent manner. Evolution is described by a generative model that is the analog of linear regression, which is fully characterized by baseline images and shapes (intercept) and initial momenta vectors (slope). Further, our framework adopts a control point parameterization of deformations, where the dimensionality of the deformation is determined by the complexity of anatomical changes in time rather than the sampling of the image and/or the geometric data. We derive a gradient descent algorithm which simultaneously estimates baseline images and shapes, location of control points, and momenta. Experiments on real medical data demonstrate that our framework effectively combines image and shape information, resulting in improved modeling of 4D (3D space + time) trajectories.



T. Fogal, F. Proch, A. Schiewe, O. Hasemann, A. Kempf, J. Krüger. “Freeprocessing: Transparent in situ visualization via data interception,” In Proceedings of the 14th Eurographics Conference on Parallel Graphics and Visualization, EGPGV, Eurographics Association, 2014.

ABSTRACT

In situ visualization has become a popular method for avoiding the slowest component of many visualization pipelines: reading data from disk. Most previous in situ work has focused on achieving visualization scalability on par with simulation codes, or on the data movement concerns that become prevalent at extreme scales. In this work, we consider in situ analysis with respect to ease of use and programmability. We describe an abstraction that opens up new applications for in situ visualization, and demonstrate that this abstraction and an expanded set of use cases can be realized without a performance cost.



Z. Fu, H.K. Dasari, M. Berzins, B. Thompson. “Parallel Breadth First Search on GPU Clusters,” SCI Technical Report, No. UUSCI-2014-002, SCI Institute, University of Utah, 2014.

ABSTRACT

Fast, scalable, low-cost, and low-power execution of parallel graph algorithms is important for a wide variety of commercial and public sector applications. Breadth First Search (BFS) imposes an extreme burden on memory bandwidth and network communications and has been proposed as a benchmark that may be used to evaluate current and future parallel computers. Hardware trends and manufacturing limits strongly imply that many core devices, such as NVIDIA® GPUs and the Intel® Xeon Phi®, will become central components of such future systems. GPUs are well known to deliver the highest FLOPS/watt and enjoy a very significant memory bandwidth advantage over CPU architectures. Recent work has demonstrated that GPUs can deliver high performance for parallel graph algorithms and, further, that it is possible to encapsulate that capability in a manner that hides the low level details of the GPU architecture and the CUDA language but preserves the high throughput of the GPU. We extend previous research on GPUs and on scalable graph processing on super-computers and demonstrate that a high-performance parallel graph machine can be created using commodity GPUs and networking hardware.

Keywords: GPU cluster, MPI, BFS, graph, parallel graph algorithm
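The BFS kernel mapped to GPU clusters here is, at its core, a level-synchronous frontier expansion. The sequential sketch below shows that structure; in the GPU setting the loop over the frontier runs in parallel across threads and the visited check becomes an atomic operation. This is illustrative only, not the paper's implementation.

```python
from collections import defaultdict

def bfs_levels(edges, source):
    # Level-synchronous BFS on an undirected graph: expand the entire
    # frontier each iteration, recording the BFS level of every vertex.
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    level = {source: 0}
    frontier = [source]
    depth = 0
    while frontier:
        depth += 1
        next_frontier = []
        for u in frontier:            # parallel over frontier vertices on a GPU
            for v in adj[u]:
                if v not in level:    # visited check; an atomic update on a GPU
                    level[v] = depth
                    next_frontier.append(v)
        frontier = next_frontier
    return level

# Small example graph: 0-1, 0-2, 1-3, 2-3, 3-4.
print(bfs_levels([(0, 1), (0, 2), (1, 3), (2, 3), (3, 4)], 0))
# {0: 0, 1: 1, 2: 1, 3: 2, 4: 3}
```

The memory-bandwidth burden the abstract mentions comes from the irregular, data-dependent accesses into `adj` and `level`, which is exactly what makes BFS a stressful benchmark.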



Z. Fu, H.K. Dasari, M. Berzins, B. Thompson. “Parallel Breadth First Search on GPU Clusters,” In Proceedings of the IEEE BigData 2014 Conference, Washington DC, October, 2014.

ABSTRACT

Fast, scalable, low-cost, and low-power execution of parallel graph algorithms is important for a wide variety of commercial and public sector applications. Breadth First Search (BFS) imposes an extreme burden on memory bandwidth and network communications and has been proposed as a benchmark that may be used to evaluate current and future parallel computers. Hardware trends and manufacturing limits strongly imply that many core devices, such as NVIDIA® GPUs and the Intel® Xeon Phi®, will become central components of such future systems. GPUs are well known to deliver the highest FLOPS/watt and enjoy a very significant memory bandwidth advantage over CPU architectures. Recent work has demonstrated that GPUs can deliver high performance for parallel graph algorithms and, further, that it is possible to encapsulate that capability in a manner that hides the low level details of the GPU architecture and the CUDA language but preserves the high throughput of the GPU. We extend previous research on GPUs and on scalable graph processing on super-computers and demonstrate that a high-performance parallel graph machine can be created using commodity GPUs and networking hardware.

Keywords: GPU cluster, MPI, BFS, graph, parallel graph algorithm



Y. Gao, M. Prastawa, M. Styner, J. Piven, G. Gerig. “A Joint Framework for 4D Segmentation and Estimation of Smooth Temporal Appearance Changes,” In Proceedings of the 2014 IEEE International Symposium on Biomedical Imaging (ISBI), pp. (accepted). 2014.

ABSTRACT

Medical imaging studies increasingly use longitudinal images of individual subjects in order to follow up on changes due to development, degeneration, disease progression or efficacy of therapeutic intervention. Repeated image data of individuals are highly correlated, and the strong causality of information over time leads to the development of procedures for joint segmentation of the series of scans, called 4D segmentation. A main aim was improved consistency of quantitative analysis, most often solved via patient-specific atlases. Challenging open problems are contrast changes and occurrence of subclasses within tissue as observed in multimodal MRI of infant development, neurodegeneration and disease. This paper proposes a new 4D segmentation framework that enforces continuous dynamic changes of tissue contrast patterns over time as observed in such data. Moreover, our model includes the capability to segment different contrast patterns within a specific tissue class, for example as seen in myelinated and unmyelinated white matter regions in early brain development. Proof of concept is shown with validation on synthetic image data and with 4D segmentation of longitudinal, multimodal pediatric MRI taken at 6, 12 and 24 months of age, but the methodology is generic w.r.t. different application domains using serial imaging.



M.G. Genton, C.R. Johnson, K. Potter, G. Stenchikov, Y. Sun. “Surface boxplots,” In Stat, Vol. 3, No. 1, pp. 1--11. 2014.

ABSTRACT

In this paper, we introduce a surface boxplot as a tool for visualization and exploratory analysis of samples of images. First, we use the notion of volume depth to order the images viewed as surfaces. In particular, we define the median image. We use an exact and fast algorithm for the ranking of the images. This allows us to detect potential outlying images that often contain interesting features not present in most of the images. Second, we build a graphical tool to visualize the surface boxplot and its various characteristics. A graph and histogram of the volume depth values allow us to identify images of interest. The code is available in the supporting information of this paper. We apply our surface boxplot to a sample of brain images and to a sample of climate model outputs.
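The ordering step can be illustrated with the one-dimensional analogue of volume depth, the band depth of sampled curves: a curve is deep if it lies inside the band spanned by many pairs of the other curves, and the deepest curve plays the role of the median. The sketch below is this simpler curve-based analogue (with band parameter J = 2), not the exact surface algorithm of the paper.

```python
from itertools import combinations

def band_depth(curves):
    # Sample band depth (J = 2) for sampled 1D functions: for each curve,
    # count the pairs of other curves whose pointwise [min, max] band
    # contains it at every sample point.
    n = len(curves)
    depths = []
    for k, f in enumerate(curves):
        count = 0
        for i, j in combinations([m for m in range(n) if m != k], 2):
            lo = [min(a, b) for a, b in zip(curves[i], curves[j])]
            hi = [max(a, b) for a, b in zip(curves[i], curves[j])]
            if all(l <= x <= h for l, x, h in zip(lo, f, hi)):
                count += 1
        depths.append(count)
    return depths

# Five sampled "curves"; the middle one should come out deepest (the median),
# and the outlier gets depth 0.
curves = [[0, 0, 0], [1, 1, 1], [2, 2, 2], [3, 3, 3], [10, 10, 10]]
d = band_depth(curves)
print(d)              # [0, 3, 4, 3, 0]
print(d.index(max(d)))  # 2 -> the median curve
```

For images viewed as surfaces, the same counting idea applies with bands replaced by volumes between pairs of surfaces, which is what the paper's fast exact algorithm ranks.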



T. Geymayer, M. Steinberger, A. Lex, M. Streit, D. Schmalstieg. “Show me the Invisible: Visualizing Hidden Content,” In Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems (CHI '14), CHI '14, ACM, pp. 3705--3714. 2014.
ISBN: 978-1-4503-2473-1
DOI: 10.1145/2556288.2557032

ABSTRACT

Content on computer screens is often inaccessible to users because it is hidden, e.g., occluded by other windows, outside the viewport, or overlooked. In search tasks, the efficient retrieval of sought content is important. Current software, however, only provides limited support to visualize hidden occurrences and rarely supports search synchronization crossing application boundaries. To remedy this situation, we introduce two novel visualization methods to guide users to hidden content. Our first method generates awareness for occluded or out-of-viewport content using see-through visualization. For content that is either outside the screen's viewport or for data sources not opened at all, our second method shows off-screen indicators and an on-demand smart preview. To reduce the chances of overlooking content, we use visual links, i.e., visible edges, to connect the visible content or the visible representations of the hidden content. We show the validity of our methods in a user study, which demonstrates that our technique enables a faster localization of hidden content compared to traditional search functionality and thereby assists users in information retrieval tasks.



S. Gratzl, N. Gehlenborg, A. Lex, H. Pfister, M. Streit. “Domino: Extracting, Comparing, and Manipulating Subsets across Multiple Tabular Datasets,” In IEEE Transactions on Visualization and Computer Graphics (InfoVis '14), Vol. 20, No. 12, pp. 2023--2032. 2014.
ISSN: 1077-2626
DOI: 10.1109/TVCG.2014.2346260

ABSTRACT

Answering questions about complex issues often requires analysts to take into account information contained in multiple interconnected datasets. A common strategy in analyzing and visualizing large and heterogeneous data is dividing it into meaningful subsets. Interesting subsets can then be selected and the associated data and the relationships between the subsets visualized. However, neither the extraction and manipulation nor the comparison of subsets is well supported by state-of-the-art techniques. In this paper we present Domino, a novel multiform visualization technique for effectively representing subsets and the relationships between them. By providing comprehensive tools to arrange, combine, and extract subsets, Domino allows users to create both common visualization techniques and advanced visualizations tailored to specific use cases. In addition to the novel technique, we present an implementation that enables analysts to manage the wide range of options that our approach offers. Innovative interactive features such as placeholders and live previews support rapid creation of complex analysis setups. We introduce the technique and the implementation using a simple example and demonstrate scalability and effectiveness in a use case from the field of cancer genomics.



K. Grewen, M. Burchinal, C. Vachet, S. Gouttard, J.H. Gilmore, W. Lin, J. Johns, M. Elam, G. Gerig. “Prenatal cocaine effects on brain structure in early infancy,” In NeuroImage, Vol. 101, pp. 114--123. November, 2014.
DOI: 10.1016/j.neuroimage.2014.06.070

ABSTRACT

Prenatal cocaine exposure (PCE) is related to subtle deficits in cognitive and behavioral function in infancy, childhood and adolescence. Very little is known about the effects of in utero PCE on early brain development that may contribute to these impairments. The purpose of this study was to examine brain structural differences in infants with and without PCE. We conducted MRI scans of newborns (mean age = 5 weeks) to determine cocaine's impact on early brain structural development. Subjects were three groups of infants: 33 with PCE co-morbid with other drugs, 46 drug-free controls and 40 with prenatal exposure to other drugs (nicotine, alcohol, marijuana, opiates, SSRIs) but without cocaine. Infants with PCE exhibited lesser total gray matter (GM) volume and greater total cerebral spinal fluid (CSF) volume compared with controls and infants with non-cocaine drug exposure. Analysis of regional volumes revealed that whole brain GM differences were driven primarily by lesser GM in prefrontal and frontal brain regions in infants with PCE, while more posterior regions (parietal, occipital) did not differ across groups. Greater CSF volumes in PCE infants were present in prefrontal, frontal and parietal but not occipital regions. Greatest differences (GM reduction, CSF enlargement) in PCE infants were observed in dorsal prefrontal cortex. Results suggest that PCE is associated with structural deficits in neonatal cortical gray matter, specifically in prefrontal and frontal regions involved in executive function and inhibitory control. Longitudinal study is required to determine whether these early differences persist and contribute to deficits in cognitive functions and enhanced risk for drug abuse seen at school age and in later life.



C.E. Gritton. “Ringing Instabilities in Particle Methods,” Note: M.S. in Computational Engineering and Science, advisor Martin Berzins, School of Computing, University of Utah, August, 2014.

ABSTRACT

Particle methods have been used in fields ranging from fluid dynamics to plasma physics. The Particle-In-Cell method and the family of methods that extend it combine both Lagrangian and Eulerian approaches. In this thesis, we present a brief survey of some of the methods and their key components. We show the different methods by which spatial derivatives are computed. We propose a method of showing how the so-called "ringing instabilities" associated with particle methods arise and a means to remove them. We also propose that the underlying nodal scheme plays a key role in the stability of the method. Lastly, different particle methods are explored through numerical simulations and compared against an analytic solution.
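The particle-to-grid transfer at the heart of Particle-In-Cell methods can be sketched in one dimension: each particle deposits its mass onto the surrounding grid nodes through linear (hat) shape functions, and particle configurations that this mapping cannot distinguish are one way to see how ringing modes can arise. The sketch below is illustrative only, not code from the thesis.

```python
def deposit_to_grid(particles, nodes):
    # Particle-to-grid transfer with linear (hat) shape functions on a
    # uniform 1D grid: each particle splits its mass between the two nodes
    # of the cell containing it. Particles are assumed strictly inside the grid.
    h = nodes[1] - nodes[0]
    mass = [0.0] * len(nodes)
    for x, m in particles:
        i = int((x - nodes[0]) / h)   # index of the cell containing the particle
        w = (x - nodes[i]) / h        # fractional position within that cell
        mass[i] += m * (1.0 - w)
        mass[i + 1] += m * w
    return mass

nodes = [0.0, 1.0, 2.0]
# A particle at a cell midpoint splits its mass evenly between the two nodes.
print(deposit_to_grid([(0.5, 2.0)], nodes))  # [1.0, 1.0, 0.0]
```

Because many distinct particle arrangements produce identical nodal masses, the transfer has a nontrivial null space; grid quantities then feed back to the particles each step, and unresolved particle-scale patterns can ring rather than decay.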



Y. Gur, C.R. Johnson. “Generalized HARDI Invariants by Method of Tensor Contraction,” In Proceedings of the 2014 IEEE International Symposium on Biomedical Imaging (ISBI), pp. 718--721. April, 2014.

ABSTRACT

We propose a 3D object recognition technique to construct rotation invariant feature vectors for high angular resolution diffusion imaging (HARDI). This method uses the spherical harmonics (SH) expansion and is based on generating rank-1 contravariant tensors using the SH coefficients, and contracting them with covariant tensors to obtain invariants. The proposed technique enables the systematic construction of invariants for SH expansions of any order using simple mathematical operations. In addition, it allows construction of a large set of invariants, even for low order expansions, thus providing rich feature vectors for image analysis tasks such as classification and segmentation. In this paper, we use this technique to construct feature vectors for eighth-order fiber orientation distributions (FODs) reconstructed using constrained spherical deconvolution (CSD). Using simulated and in vivo brain data, we show that these invariants are robust to noise, enable voxel-wise classification, and capture meaningful information on the underlying white matter structure.

Keywords: Diffusion MRI, HARDI, invariants
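A simpler, classic example of an SH rotation invariant (not the tensor-contraction construction of this paper) is the per-order power spectrum: rotations mix spherical harmonics coefficients only within an order l, so the sum of squared coefficients over m = -l, ..., l is unchanged by rotation. The sketch below assumes real SH coefficients stored order by order.

```python
def sh_power_spectrum(coeffs):
    # Per-order power of a real spherical harmonics expansion: for each
    # order l, sum the squares of its 2l+1 coefficients (m = -l..l).
    # Each per-order sum is invariant under rotation of the underlying function.
    powers = []
    l = 0
    start = 0
    while start < len(coeffs):
        block = coeffs[start:start + 2 * l + 1]
        powers.append(sum(c * c for c in block))
        start += 2 * l + 1
        l += 1
    return powers

# Coefficients for orders 0 and 1 (1 + 3 values).
print(sh_power_spectrum([2.0, 1.0, 0.0, -1.0]))  # [4.0, 2.0]
```

The power spectrum yields only one invariant per order; the tensor-contraction method of the paper is attractive precisely because it systematically produces a much larger set of invariants, even for low-order expansions.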



C. Hamani, B.O. Amorim, A.L. Wheeler, M. Diwan, K. Driesslein, L. Covolan, C.R. Butson, J.N. Nobrega. “Deep brain stimulation in rats: Different targets induce similar antidepressant-like effects but influence different circuits,” In Neurobiology of Disease, Vol. 71, Elsevier Inc., pp. 205--214. August, 2014.
ISSN: 1095-953X
DOI: 10.1016/j.nbd.2014.08.007
PubMed ID: 25131446

ABSTRACT

Recent studies in patients with treatment-resistant depression have shown similar results with the use of deep brain stimulation (DBS) in the subcallosal cingulate gyrus (SCG), ventral capsule/ventral striatum (VC/VS) and nucleus accumbens (Acb). As these brain regions are interconnected, one hypothesis is that by stimulating these targets one would just be influencing different relays in the same circuitry. We investigate behavioural, immediate early gene expression, and functional connectivity changes in rats given DBS in homologous regions, namely the ventromedial prefrontal cortex (vmPFC), white matter fibers of the frontal region (WMF) and nucleus accumbens. We found that DBS delivered to the vmPFC and Acb, but not the WMF, induced significant antidepressant-like effects in the forced swim test (FST; 31%, 44%, and 17% reduction in immobility compared to controls). Despite these findings, stimulation applied to these three targets induced distinct patterns of regional activity and functional connectivity. While animals given vmPFC DBS had increased cortical zif268 expression, changes after Acb stimulation were primarily observed in subcortical structures. In animals receiving WMF DBS, both cortical and subcortical structures at a distance from the target were influenced by stimulation. With regard to functional connectivity, DBS in all targets decreased intercorrelations among cortical areas. This is in contrast to the clear differences observed in subcortical connectivity, which was reduced after vmPFC DBS but increased in rats receiving Acb or WMF stimulation. In conclusion, results from our study suggest that, despite similar antidepressant-like effects, stimulation of the vmPFC, WMF and Acb induce distinct changes in regional brain activity and functional connectivity.

Keywords: Anterior capsule, Connectivity, Deep brain stimulation, Depression, Nucleus accumbens, Prefrontal cortex



C.D. Hansen, M. Chen, C.R. Johnson, A.E. Kaufman, H. Hagen (Eds.). “Scientific Visualization: Uncertainty, Multifield, Biomedical, and Scalable Visualization,” Mathematics and Visualization, Springer, 2014.
ISBN: 978-1-4471-6496-8



X. Hao, K. Zygmunt, R.T. Whitaker, P.T. Fletcher. “Improved Segmentation of White Matter Tracts with Adaptive Riemannian Metrics,” In Medical Image Analysis, Vol. 18, No. 1, pp. 161--175. January, 2014.
DOI: 10.1016/j.media.2013.10.007
PubMed ID: 24211814

ABSTRACT

We present a novel geodesic approach to segmentation of white matter tracts from diffusion tensor imaging (DTI). Compared to deterministic and stochastic tractography, geodesic approaches treat the geometry of the brain white matter as a manifold, often using the inverse tensor field as a Riemannian metric. The white matter pathways are then inferred from the resulting geodesics, which have the desirable property that they tend to follow the main eigenvectors of the tensors, yet still have the flexibility to deviate from these directions when it results in lower costs. While this makes such methods more robust to noise, the choice of Riemannian metric in these methods is ad hoc. A serious drawback of current geodesic methods is that geodesics tend to deviate from the major eigenvectors in high-curvature areas in order to achieve the shortest path. In this paper we propose a method for learning an adaptive Riemannian metric from the DTI data, where the resulting geodesics more closely follow the principal eigenvector of the diffusion tensors even in high-curvature regions. We also develop a way to automatically segment the white matter tracts based on the computed geodesics. We show the robustness of our method on simulated data with different noise levels. We also compare our method with tractography methods and geodesic approaches using other Riemannian metrics and demonstrate that the proposed method results in improved geodesics and segmentations using both synthetic and real DTI data.
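The role of the Riemannian metric can be illustrated with a discrete path length: under G = D⁻¹ (the inverse diffusion tensor), steps along the major eigenvector of D are cheap, so shortest paths prefer the fiber direction. The 2D sketch below uses a made-up homogeneous tensor purely for illustration; the paper's contribution is learning a spatially adaptive metric rather than using D⁻¹ directly.

```python
import math

def path_length(points, metric):
    # Discrete length of a polyline under a Riemannian metric G(x):
    # sum over segments of sqrt(v^T G v), with G evaluated at the midpoint.
    total = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        vx, vy = x1 - x0, y1 - y0
        g = metric(((x0 + x1) / 2, (y0 + y1) / 2))
        total += math.sqrt(g[0][0] * vx * vx + 2 * g[0][1] * vx * vy
                           + g[1][1] * vy * vy)
    return total

# Homogeneous diagonal tensor D = diag(4, 1), so the metric is G = diag(1/4, 1);
# the x-axis is the major eigenvector (the "fiber direction").
metric = lambda p: [[0.25, 0.0], [0.0, 1.0]]

along = path_length([(0, 0), (1, 0)], metric)   # step along the fiber direction
across = path_length([(0, 0), (0, 1)], metric)  # step across it
print(along, across)  # 0.5 1.0
```

A unit step along the major eigenvector costs half as much as the same step across it, which is why geodesics under such metrics track the principal diffusion direction; the failure mode the paper addresses is that in high-curvature regions the raw inverse-tensor metric lets geodesics cut corners.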

Keywords: Conformal factor, Diffusion tensor imaging, Front-propagation, Geodesic, Riemannian manifold



J. Hinkle, P.T. Fletcher, S. Joshi. “Intrinsic Polynomials for Regression on Riemannian Manifolds,” In Journal of Mathematical Imaging and Vision, pp. 1-21. 2014.

ABSTRACT

We develop a framework for polynomial regression on Riemannian manifolds. Unlike recently developed spline models on Riemannian manifolds, Riemannian polynomials offer the ability to model parametric polynomials of all integer orders, odd and even. An intrinsic adjoint method is employed to compute variations of the matching functional, and polynomial regression is accomplished using a gradient-based optimization scheme. We apply our polynomial regression framework in the context of shape analysis in Kendall shape space as well as in diffeomorphic landmark space. Our algorithm is shown to be particularly convenient in Riemannian manifolds with additional symmetry, such as Lie groups and homogeneous spaces with right or left invariant metrics. As a particularly important example, we also apply polynomial regression to time-series imaging data using a right invariant Sobolev metric on the diffeomorphism group. The results show that Riemannian polynomials provide a practical model for parametric curve regression, while offering increased flexibility over geodesics.
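Geodesics are the order-1 case of the Riemannian polynomials developed here, and on the unit sphere the geodesic flow has a closed form through the exponential map. The sketch below computes that map as a concrete instance (illustrative only; the paper's framework covers general manifolds, Lie groups, and diffeomorphism groups).

```python
import math

def sphere_exp(p, v):
    # Exponential map on the unit sphere: follow the geodesic (great circle)
    # from point p with initial velocity v (tangent to the sphere at p) for
    # unit time. The geodesic is the degree-1 "Riemannian polynomial":
    # p is the intercept and v the slope.
    norm_v = math.sqrt(sum(c * c for c in v))
    if norm_v == 0.0:
        return p
    return [math.cos(norm_v) * pc + math.sin(norm_v) * vc / norm_v
            for pc, vc in zip(p, v)]

north = [0.0, 0.0, 1.0]
# A tangent vector of length pi/2 carries the north pole to the equator.
q = sphere_exp(north, [math.pi / 2, 0.0, 0.0])
print([round(c, 6) for c in q])  # [1.0, 0.0, 0.0]
```

Higher-order Riemannian polynomials generalize this by also parallel-transporting acceleration (and higher) vectors along the curve, which is what gives the regression model its extra flexibility over geodesics.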