SCIENTIFIC COMPUTING AND IMAGING INSTITUTE
at the University of Utah

An internationally recognized leader in visualization, scientific computing, and image analysis

SCI Publications

2015


Scientific Computing and Imaging Institute (SCI). VisTrails: A scientific workflow management system, 2015. Download from: http://www.vistrails.org.



I. Wald, A. Knoll, G. P. Johnson, W. Usher, V. Pascucci, M. E. Papka. “CPU Ray Tracing Large Particle Data with Balanced P-k-d Trees,” In 2015 IEEE Scientific Visualization Conference, IEEE, October, 2015.
DOI: 10.1109/scivis.2015.7429492

ABSTRACT

We present a novel approach to rendering large particle data sets from molecular dynamics, astrophysics and other sources. We employ a new data structure adapted from the original balanced k-d tree, which allows for representation of data with trivial or no overhead. In the OSPRay visualization framework, we have developed an efficient CPU algorithm for traversing, classifying and ray tracing these data. Our approach is able to render up to billions of particles on a typical workstation, purely on the CPU, without any approximations or level-of-detail techniques, and optionally with attribute-based color mapping, dynamic range query, and advanced lighting models such as ambient occlusion and path tracing.
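
The central idea is representational: in a balanced particle k-d tree (P-k-d tree), every node of the hierarchy is itself a particle, so a suitably reordered particle array encodes the entire tree implicitly and the hierarchy adds essentially no memory. The Python sketch below illustrates only that reordering step, assuming a NumPy array of 3D positions; it is not the authors' OSPRay traversal or shading code, and a production build would choose split positions so both subtrees form complete trees addressable by index arithmetic.

    import numpy as np

    def build_pkd(particles, lo, hi, depth=0):
        """Reorder particles[lo:hi] in place into implicit balanced k-d order.

        The median particle along the split axis becomes the subtree root at
        index `mid`; the two halves of the array hold its subtrees.
        """
        if hi - lo <= 1:
            return
        axis = depth % 3                      # cycle through x, y, z
        mid = (lo + hi) // 2
        # Partial sort: the median key lands at `mid`, smaller keys before it.
        order = np.argpartition(particles[lo:hi, axis], mid - lo)
        particles[lo:hi] = particles[lo:hi][order]
        build_pkd(particles, lo, mid, depth + 1)
        build_pkd(particles, mid + 1, hi, depth + 1)

    pts = np.random.rand(1 << 15, 3)          # toy particle positions
    build_pkd(pts, 0, len(pts))

A ray tracer can then descend this implicit tree with pure index arithmetic, testing the particle stored at each visited node, which is what makes the low-overhead representation attractive at the billion-particle scale described above.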



R. Whitaker, W. Thompson, J. Berger, B. Fischhoff, M. Goodchild, M. Hegarty, C. Jermaine, K. S. McKinley, A. Pang, J. Wendelberger. “Workshop on Quantification, Communication, and Interpretation of Uncertainty in Simulation and Data Science,” Note: Computing Community Consortium, 2015.

ABSTRACT

Modern science, technology, and politics are all permeated by data that comes from people, measurements, or computational processes. While this data is often incomplete, corrupt, or lacking in sufficient accuracy and precision, explicit consideration of uncertainty is rarely part of the computational and decision making pipeline. The CCC Workshop on Quantification, Communication, and Interpretation of Uncertainty in Simulation and Data Science explored this problem, identifying significant shortcomings in the ways we currently process, present, and interpret uncertain data. Specific recommendations on a research agenda for the future were made in four areas: uncertainty quantification in large-scale computational simulations, uncertainty quantification in data science, software support for uncertainty computation, and better integration of uncertainty quantification and communication to stakeholders.



J. J. Wolff, G. Gerig, J. D. Lewis, T. Soda, M. A. Styner, C. Vachet, K. N. Botteron, J. T. Elison, S. R. Dager, A. M. Estes, H. C. Hazlett, R. T. Schultz, L. Zwaigenbaum, J. Piven. “Altered corpus callosum morphology associated with autism over the first 2 years of life,” In Brain, 2015.
DOI: 10.1093/brain/awv118

ABSTRACT

Numerous brain imaging studies indicate that the corpus callosum is smaller in older children and adults with autism spectrum disorder (ASD). However, there are no published studies examining the morphological development of this connective pathway in infants at risk for the disorder. Magnetic resonance imaging data were collected from 270 infants at high familial risk for autism spectrum disorder and 108 low-risk controls at 6, 12 and 24 months of age, with 83% of infants contributing two or more data points. Fifty-seven children met criteria for ASD based on clinical best-estimate diagnosis at age 2 years. Corpora callosa were measured for area, length and thickness by automated segmentation. We found significantly increased corpus callosum area and thickness in children with autism spectrum disorder starting at 6 months of age. These differences were particularly robust in the anterior corpus callosum at the 6 and 12 month time points. Regression analysis indicated that radial diffusivity in this region, measured by diffusion tensor imaging, inversely predicted thickness. Measures of area and thickness in the first year of life were correlated with repetitive behaviours at age 2 years. In contrast to work from older children and adults, our findings suggest that the corpus callosum may be larger in infants who go on to develop autism spectrum disorder. This result was apparent with or without adjustment for total brain volume. Although we did not see a significant interaction between group and age, cross-sectional data indicated that area and thickness differences diminish by age 2 years. Regression data incorporating diffusion tensor imaging suggest that microstructural properties of callosal white matter, which includes myelination and axon composition, may explain group differences in morphology.



M. Zhang, P. T. Fletcher. “Finite-Dimensional Lie Algebras for Fast Diffeomorphic Image Registration,” In Information Processing in Medical Imaging (IPMI), 2015.

ABSTRACT

This paper presents a fast geodesic shooting algorithm for diffeomorphic image registration. We first introduce a novel finite-dimensional Lie algebra structure on the space of bandlimited velocity fields. We then show that this space can effectively represent initial velocities for diffeomorphic image registration at much lower dimensions than typically used, with little to no loss in registration accuracy. We then leverage the fact that the geodesic evolution equations, as well as the adjoint Jacobi field equations needed for gradient descent methods, can be computed entirely in this finite-dimensional Lie algebra. The result is a geodesic shooting method for large deformation diffeomorphic metric mapping (LDDMM) that is dramatically faster and less memory intensive than state-of-the-art methods. We demonstrate the effectiveness of our model to register 3D brain images and compare its registration accuracy, runtime, and memory consumption with leading LDDMM methods. We also show how our algorithm breaks through the prohibitive time and memory requirements of diffeomorphic atlas building.
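
The geodesic evolution referred to is the EPDiff equation of LDDMM. As a hedged recollection of the standard continuous formulation (our notation; K denotes the smoothing operator inverse to the metric operator L):

    \[
      \partial_t v_t \;=\; -K\bigl[(D v_t)^{\top} m_t \;+\; (D m_t)\, v_t \;+\; m_t\,(\nabla \cdot v_t)\bigr],
      \qquad m_t = L v_t ,
    \]
    \[
      E(v_0) \;=\; (L v_0,\, v_0) \;+\; \frac{1}{\sigma^2}\,\bigl\| I_0 \circ \phi_1^{-1} - I_1 \bigr\|_{L^2}^2 .
    \]

The paper's speedup comes from closing these equations, together with the adjoint Jacobi field equations used for the gradient, in a finite-dimensional Lie algebra of bandlimited velocity fields, so that every shooting step is a low-dimensional computation.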



M. Zhang, P. T. Fletcher. “Bayesian Principal Geodesic Analysis for Estimating Intrinsic Diffeomorphic Image Variability,” In Medical Image Analysis (accepted), 2015.

ABSTRACT

In this paper, we present a generative Bayesian approach for estimating the low-dimensional latent space of diffeomorphic shape variability in a population of images. We develop a latent variable model for principal geodesic analysis (PGA) that provides a probabilistic framework for factor analysis in the space of diffeomorphisms. A sparsity prior in the model results in automatic selection of the number of relevant dimensions by driving unnecessary principal geodesics to zero. To infer model parameters, including the image atlas, principal geodesic deformations, and the effective dimensionality, we introduce an expectation maximization (EM) algorithm. We evaluate our proposed model on 2D synthetic data and the 3D OASIS brain database of magnetic resonance images, and show that the automatically selected latent dimensions from our model are able to reconstruct unobserved testing images with lower error than both linear principal component analysis (LPCA) in the image space and tangent space principal component analysis (TPCA) in the diffeomorphism space.
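
A schematic of this kind of latent variable model, in our own (assumed) notation rather than the paper's: a low-dimensional loading vector generates an initial velocity through principal-geodesic directions W, and an automatic-relevance-determination style sparsity prior prunes unused columns:

    \[
      x \sim \mathcal{N}(0, I_d), \qquad v = W x, \qquad
      p(I \mid x, W, I_0) \;\propto\;
      \exp\!\Bigl(-\tfrac{1}{2\sigma^2}\,\bigl\| I_0 \circ \phi_v^{-1} - I \bigr\|_{L^2}^2\Bigr),
    \]
    \[
      p(w_k \mid \gamma_k) = \mathcal{N}(0, \gamma_k^{-1} I),
      \qquad \gamma_k \to \infty \;\Rightarrow\; \text{geodesic } k \text{ switched off}.
    \]

EM then alternates between inferring the latent loadings and updating the atlas, the principal-geodesic directions, and the relevance parameters, which is how the effective dimensionality is selected automatically.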



M. Zhang, H. Shao, P. T. Fletcher. “A Mixture Model for Automatic Diffeomorphic Multi-Atlas Building,” In MICCAI Workshop, Springer, 2015.

ABSTRACT

Computing image atlases that are representative of a dataset is an important first step for statistical analysis of images. Most current approaches estimate a single atlas to represent the average of a large population of images; however, a single atlas is not sufficiently expressive to capture distributions of images with multiple modes. In this paper, we present a mixture model for building diffeomorphic multi-atlases that can represent sub-populations without knowing the category of each observed data point. In our probabilistic model, we treat diffeomorphic image transformations as latent variables, and integrate them out using a Monte Carlo Expectation Maximization (MCEM) algorithm via Hamiltonian Monte Carlo (HMC) sampling. A key benefit of our model is that the mixture modeling inference procedure results in an automatic clustering of the dataset. Using 2D synthetic data generated from known parameters, we demonstrate the ability of our model to successfully recover the multi-atlas and automatically cluster the dataset. We also show the effectiveness of the proposed method in a multi-atlas estimation problem for 3D brain images.
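
Schematically, the marginal likelihood such a mixture model targets has the form (our notation): each image I_n is explained by one of K atlases A_k, warped by a latent diffeomorphism that MCEM integrates out via HMC sampling,

    \[
      p(I_n \mid \{A_k\}, \theta) \;=\; \sum_{k=1}^{K} \pi_k
      \int p\bigl(I_n \mid \phi, A_k, \theta\bigr)\, p(\phi)\, d\phi .
    \]

The E-step draws Monte Carlo samples of the component label and deformation for each image; the M-step re-estimates the atlases and mixing weights; and the posterior responsibilities over components yield the automatic clustering described above.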


2014


G. Adluru, Y. Gur, J. Anderson, L. Richards, N. Adluru, E. DiBella. “Assessment of white matter microstructure in stroke patients using NODDI,” In Proceedings of the 2014 IEEE Engineering in Medicine and Biology Society Conference (EMBC), 2014.

ABSTRACT

Diffusion weighted imaging (DWI) is widely used to study changes in white matter following stroke. In various studies employing diffusion tensor imaging (DTI) and high angular resolution diffusion imaging (HARDI) modalities, it has been shown that fractional anisotropy (FA), mean diffusivity (MD), and generalized FA (GFA) can be used as measures of white matter tract integrity in stroke patients. However, these measures may be non-specific, as they do not directly delineate changes in tissue microstructure. Multi-compartment models overcome this limitation by modeling DWI data using a set of indices that are directly related to white matter microstructure. One of these models, which is gaining popularity, is neurite orientation dispersion and density imaging (NODDI). This model uses conventional single or multi-shell HARDI data to describe fiber orientation dispersion as well as densities of different tissue types in the imaging voxel. In this paper, we apply the NODDI model, for the first time, to 4-shell HARDI stroke data. By computing NODDI indices over the entire brain in two stroke patients, and comparing tissue regions in ipsilesional and contralesional hemispheres, we demonstrate that NODDI modeling provides specific information on tissue microstructural changes. We also introduce an information theoretic analysis framework to investigate the non-local effects of stroke in the white matter. Our initial results suggest that the NODDI indices might be more specific markers of white matter reorganization following stroke than other measures previously used in studies of stroke recovery.
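
For reference, the NODDI model mentioned here writes the normalized diffusion signal as a three-compartment mixture of intra-cellular, extra-cellular, and isotropic (free water) contributions (notation follows the original NODDI formulation, hedged):

    \[
      A \;=\; (1 - \nu_{iso})\,\bigl(\nu_{ic}\, A_{ic} + (1 - \nu_{ic})\, A_{ec}\bigr)
      \;+\; \nu_{iso}\, A_{iso},
      \qquad \mathrm{ODI} \;=\; \tfrac{2}{\pi}\arctan(1/\kappa),
    \]

where the neurite density \nu_{ic}, the isotropic fraction \nu_{iso}, and the orientation dispersion index ODI (derived from the Watson concentration parameter \kappa) are the per-voxel indices compared across the ipsilesional and contralesional hemispheres.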



S.P. Awate, R.T. Whitaker. “Multiatlas Segmentation as Nonparametric Regression,” In IEEE Trans Med Imaging, April, 2014.
PubMed ID: 24802528

ABSTRACT

This paper proposes a novel theoretical framework to model and analyze the statistical characteristics of a wide range of segmentation methods that incorporate a database of label maps or atlases; such methods are termed label fusion or multiatlas segmentation. We model these multiatlas segmentation problems as nonparametric regression problems in the high-dimensional space of image patches. We analyze the nonparametric estimator's convergence behavior that characterizes expected segmentation error as a function of the size of the multiatlas database. We show that this error has an analytic form involving several parameters that are fundamental to the specific segmentation problem (determined by the chosen anatomical structure, imaging modality, registration algorithm, and label-fusion algorithm). We describe how to estimate these parameters and show that several human anatomical structures exhibit the trends modeled analytically. We use these parameter estimates to optimize the regression estimator. We show that the expected error for large database sizes is well predicted by models learned on small databases. Thus, a few expert segmentations can help predict the database sizes required to keep the expected error below a specified tolerance level. Such cost-benefit analysis is crucial for deploying clinical multiatlas segmentation systems.
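
Viewed as nonparametric regression, label fusion at a voxel is essentially a kernel (Nadaraya-Watson style) estimate over the atlas database; a hedged schematic in our notation:

    \[
      \hat{L}(x) \;=\; \arg\max_{l}\; \sum_{i=1}^{N}
      K_h\bigl(\mathrm{d}\bigl(P(x),\, P_i\bigr)\bigr)\; \mathbf{1}\bigl[\,L_i = l\,\bigr],
    \]

where P(x) is the image patch at voxel x, (P_i, L_i) are registered atlas patches with their labels, d is a patch distance, and K_h a kernel with bandwidth h. The paper's convergence analysis characterizes how the expected error of such an estimator decays as the database size N grows, which is what enables the cost-benefit predictions from small databases.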



S.P. Awate, Y.-Y. Yu, R.T. Whitaker. “Kernel Principal Geodesic Analysis,” In Proceedings of the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD), Springer LNAI, 2014.

ABSTRACT

Kernel principal component analysis (kPCA) has been proposed as a dimensionality-reduction technique that achieves nonlinear, low-dimensional representations of data via the mapping to kernel feature space. Conventionally, kPCA relies on Euclidean statistics in kernel feature space. However, Euclidean analysis can make kPCA inefficient or incorrect for many popular kernels that map input points to a hypersphere in kernel feature space. To address this problem, this paper proposes a novel adaptation of kPCA, namely kernel principal geodesic analysis (kPGA), for hyperspherical statistical analysis in kernel feature space. This paper proposes tools for statistical analyses on the Riemannian manifold of the Hilbert sphere in the reproducing kernel Hilbert space, including algorithms for computing the sample weighted Karcher mean and eigen analysis of the sample weighted Karcher covariance. It then applies these tools to propose novel methods for (i) dimensionality reduction and (ii) clustering using mixture-model fitting. The results, on simulated and real-world data, show that kPGA-based methods perform favorably relative to their kPCA-based analogs.
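
In kPGA all points lie on the unit Hilbert sphere of the RKHS and every operation is expressed through kernel evaluations. As a finite-dimensional illustration only (an ordinary unit sphere, unweighted mean; not the paper's RKHS formulation), the Karcher-mean fixed-point iteration looks like this in Python:

    import numpy as np

    def sphere_log(p, q):
        """Log map on the unit sphere: tangent vector at p pointing toward q."""
        w = q - np.dot(p, q) * p
        nw = np.linalg.norm(w)
        if nw < 1e-12:
            return np.zeros_like(p)
        theta = np.arccos(np.clip(np.dot(p, q), -1.0, 1.0))
        return theta * w / nw

    def sphere_exp(p, v):
        """Exp map on the unit sphere: walk from p along tangent vector v."""
        nv = np.linalg.norm(v)
        if nv < 1e-12:
            return p
        return np.cos(nv) * p + np.sin(nv) * v / nv

    def karcher_mean(points, iters=50):
        """Fixed-point iteration: step along the mean of the log maps."""
        mu = points[0] / np.linalg.norm(points[0])
        for _ in range(iters):
            g = np.mean([sphere_log(mu, q) for q in points], axis=0)
            if np.linalg.norm(g) < 1e-10:
                break
            mu = sphere_exp(mu, g)
        return mu

    pts = np.random.randn(100, 8)
    pts /= np.linalg.norm(pts, axis=1, keepdims=True)
    mu = karcher_mean(pts)                    # the mean stays on the sphere

The kPGA algorithms replace these points and tangent vectors with kernel-space quantities represented by coefficient vectors over the data, which is what makes the hyperspherical statistics computable without an explicit feature map.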



H. Bhatia, V. Pascucci, R.M. Kirby, P.-T. Bremer. “Extracting Features from Time-Dependent Vector Fields Using Internal Reference Frames,” In Computer Graphics Forum, Vol. 33, No. 3, pp. 21--30. June, 2014.
DOI: 10.1111/cgf.12358

ABSTRACT

Extracting features from complex, time-dependent flow fields remains a significant challenge despite substantial research efforts, especially because most flow features of interest are defined with respect to a given reference frame. Pathline-based techniques, such as the FTLE field, are complex to implement and resource intensive, whereas scalar transforms, such as λ2, often produce artifacts and require somewhat arbitrary thresholds. Both approaches aim to analyze the flow in a more suitable frame, yet neither technique explicitly constructs one.

This paper introduces a new data-driven technique to compute internal reference frames for large-scale complex flows. More general than uniformly moving frames, these frames can transform unsteady fields, which otherwise require substantial processing resources, into a sequence of individual snapshots that can be analyzed using the large body of steady-flow analysis techniques. Our approach is simple, theoretically well-founded, and uses an embarrassingly parallel algorithm for structured as well as unstructured data. Using several case studies from fluid flow and turbulent combustion, we demonstrate that internal frames are distinguished, result in temporally coherent structures, and can extract well-known as well as notoriously elusive features one snapshot at a time.
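
For orientation, the simplest instance of a reference-frame transform is the uniformly moving (Galilean) frame: observing v in a frame translating with velocity u(t) yields (standard identity, our notation)

    \[
      \tilde{v}(x, t) \;=\; v\Bigl(x + \int_0^{t} u(s)\, ds,\; t\Bigr) \;-\; u(t).
    \]

The internal frames introduced in the paper generalize this idea: rather than one global u(t), a frame is derived from the data itself so that each transformed snapshot can be analyzed with steady-flow techniques.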



H. Bhatia, A. Gyulassy, H. Wang, P.-T. Bremer, V. Pascucci. “Robust Detection of Singularities in Vector Fields,” In Topological Methods in Data Analysis and Visualization III, Mathematics and Visualization, Springer International Publishing, pp. 3--18. March, 2014.
DOI: 10.1007/978-3-319-04099-8_1

ABSTRACT

Recent advances in computational science enable the creation of massive datasets of ever increasing resolution and complexity. Dealing effectively with such data requires new analysis techniques that are provably robust and that generate reproducible results on any machine. In this context, combinatorial methods become particularly attractive, as they are not sensitive to numerical instabilities or the details of a particular implementation. We introduce a robust method for detecting singularities in vector fields. We establish, in combinatorial terms, necessary and sufficient conditions for the existence of a critical point in a cell of a simplicial mesh for a large class of interpolation functions. These conditions are entirely local and lead to a provably consistent and practical algorithm to identify cells containing singularities.
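
For the special case of linear interpolation, the local test has a familiar concrete form: a simplex contains a zero of the interpolated field iff that zero's barycentric coordinates are all non-negative. The Python sketch below shows this floating-point version for a 2D triangle; it is only an illustration, since the point of the paper is a combinatorial test that covers a larger class of interpolants and avoids exactly this kind of numerical fragility.

    import numpy as np

    def contains_singularity(v0, v1, v2, eps=1e-12):
        """v0, v1, v2: 2-D vectors sampled at the triangle's three vertices.

        Solve b0*v0 + b1*v1 + b2*v2 = 0 subject to b0 + b1 + b2 = 1; the
        linearly interpolated field has a zero inside the triangle iff all
        barycentric coordinates b_i are non-negative.
        """
        A = np.array([[v0[0], v1[0], v2[0]],
                      [v0[1], v1[1], v2[1]],
                      [1.0,   1.0,   1.0]])
        rhs = np.array([0.0, 0.0, 1.0])
        try:
            b = np.linalg.solve(A, rhs)
        except np.linalg.LinAlgError:
            return False              # degenerate cell: needs the robust test
        return bool(np.all(b >= -eps))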



H. Bhatia, V. Pascucci, P.-T. Bremer. “The Natural Helmholtz-Hodge Decomposition For Open-Boundary Flow Analysis,” In IEEE Transactions on Visualization and Computer Graphics (TVCG), Vol. 20, pp. 1566--1578. 2014.
DOI: 10.1109/TVCG.2014.2312012

ABSTRACT

The Helmholtz-Hodge decomposition (HHD) describes a flow as the sum of an incompressible, an irrotational, and a harmonic flow, and is a fundamental tool for simulation and analysis. Unfortunately, for bounded domains, the HHD is not uniquely defined, and traditionally, boundary conditions are imposed to obtain a unique solution. However, in general, the boundary conditions used during the simulation may not be known and many simulations use open boundary conditions. In these cases, the flow imposed by traditional boundary conditions may not be compatible with the given data, which leads to sometimes drastic artifacts and distortions in all three components, hence producing unphysical results. Instead, this paper proposes the natural HHD, which is defined by separating the flow into internal and external components. Using a completely data-driven approach, the proposed technique obtains uniqueness without assuming boundary conditions a priori. As a result, it enables a reliable and artifact-free analysis for flows with open boundaries or unknown boundary conditions. Furthermore, our approach computes the HHD on a point-wise basis in contrast to the existing global techniques, and thus supports computing inexpensive local approximations for any subset of the domain. Finally, the technique is easy to implement for a variety of spatial discretizations and interpolated fields in both two and three dimensions.
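
For reference, the decomposition in question has the standard form (our notation, 3D case):

    \[
      \mathbf{v} \;=\; \nabla D \;+\; \nabla \times \mathbf{R} \;+\; \mathbf{h},
      \qquad \nabla \cdot \mathbf{h} = 0, \quad \nabla \times \mathbf{h} = 0,
    \]

with the scalar potential D capturing the irrotational part, the vector potential R the incompressible part, and h the harmonic remainder. On bounded domains the split is not unique precisely because h (and the associated boundary terms) can be chosen freely; the natural HHD resolves this by separating internal from external flow contributions instead of prescribing boundary conditions.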



A. Bigelow, S. Drucker, D. Fisher, M.D. Meyer. “Reflections on How Designers Design With Data,” In Proceedings of the ACM International Conference on Advanced Visual Interfaces (AVI), Note: Awarded Best Paper!, 2014.

ABSTRACT

In recent years many popular data visualizations have emerged that are created largely by designers whose main area of expertise is not computer science. Designers generate these visualizations using a handful of design tools and environments. To better inform the development of tools intended for designers working with data, we set out to understand designers' challenges and perspectives. We interviewed professional designers, conducted observations of designers working with data in the lab, and observed designers working with data in team settings in the wild. A set of patterns emerged from these observations from which we extract a number of themes that provide a new perspective on design considerations for visualization tool creators, as well as on known engineering problems.

Keywords: Visualization, infographics, design practice



J.J.E. Blauer, D. Swenson, K. Higuchi, G. Plank, R. Ranjan, N. Marrouche, R.S. MacLeod. “Sensitivity and Specificity of Substrate Mapping: An In Silico Framework for the Evaluation of Electroanatomical Substrate Mapping Strategies,” In Journal of Cardiovascular Electrophysiology, Vol. 25, No. 7, Note: Featured on journal cover., pp. 774--780. May, 2014.

ABSTRACT

Background - Voltage mapping is an important tool for characterizing proarrhythmic electrophysiological substrate, yet it is subject to geometric factors that influence bipolar amplitudes and thus compromise performance. The aim of this study was to characterize the impact of catheter orientation on the ability of bipolar amplitudes to accurately discriminate between healthy and diseased tissues.

Methods and Results - We constructed a three-dimensional, in-silico, bidomain model of cardiac tissue containing transmural lesions of varying diameter. A planar excitation wave was stimulated and electrograms were sampled with a realistic catheter model at multiple positions and orientations. We carried out validation studies in animal experiments of acute ablation lesions mapped with a clinical mapping system. Bipolar electrograms sampled at higher inclination angles of the catheter with respect to the tissue demonstrated improvements in both sensitivity and specificity of lesion detection. Removing low voltage electrograms with concurrent activation of both electrodes, suggesting false attenuation of the bipolar electrogram due to alignment with the excitation wavefront, had little effect on the accuracy of voltage mapping.

Conclusions - Our results demonstrate possible mechanisms for the impact of catheter orientation on voltage mapping accuracy. Moreover, results from our simulations suggest that mapping accuracy may be improved by selectively controlling the inclination of the catheter to record at higher angles with respect to the tissue.

Keywords: arrhythmia, computer-based model, electroanatomical mapping, voltage mapping, bipolar electrogram



G.P. Bonneau, H.C. Hege, C.R. Johnson, M.M. Oliveira, K. Potter, P. Rheingans, T. Schultz. “Overview and State-of-the-Art of Uncertainty Visualization,” In Scientific Visualization: Uncertainty, Multifield, Biomedical, and Scalable Visualization, Edited by M. Chen and H. Hagen and C.D. Hansen and C.R. Johnson and A. Kaufman, Springer-Verlag, pp. 3--27. 2014.
ISBN: 978-1-4471-6496-8
ISSN: 1612-3786
DOI: 10.1007/978-1-4471-6497-5_1

ABSTRACT

The goal of visualization is to effectively and accurately communicate data. Visualization research has often overlooked the errors and uncertainty which accompany the scientific process and describe key characteristics used to fully understand the data. The lack of these representations can be attributed, in part, to the inherent difficulty in defining, characterizing, and controlling this uncertainty, and in part, to the difficulty in including additional visual metaphors in a well-designed, potent display. However, the exclusion of this information cripples the use of visualization as a decision-making tool because the display is no longer a true representation of the data. This systematic omission of uncertainty demands fundamental research within the visualization community to address, integrate, and expect uncertainty information. In this chapter, we outline sources and models of uncertainty, give an overview of the state-of-the-art, provide general guidelines, outline small exemplary applications, and finally, discuss open problems in uncertainty visualization.



“Topological Methods in Data Analysis and Visualization III,” Edited by Peer-Timo Bremer and Ingrid Hotz and Valerio Pascucci and Ronald Peikert, Springer International Publishing, 2014.
ISBN: 978-3-319-04099-8



J. Bronson, J.A. Levine, R.T. Whitaker. “Lattice cleaving: a multimaterial tetrahedral meshing algorithm with guarantees,” In IEEE Transactions on Visualization and Computer Graphics (TVCG), pp. 223--237. 2014.
DOI: 10.1109/TVCG.2013.115
PubMed ID: 24356365

ABSTRACT

We introduce a new algorithm for generating tetrahedral meshes that conform to physical boundaries in volumetric domains consisting of multiple materials. The proposed method allows for an arbitrary number of materials, produces high-quality tetrahedral meshes with upper and lower bounds on dihedral angles, and guarantees geometric fidelity. Moreover, the method is combinatorial, so its implementation enables rapid mesh construction. These meshes are structured in a way that also allows grading, to reduce element counts in regions of homogeneity. Additionally, we provide proofs showing that both element quality and geometric fidelity are bounded using this approach.
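
The quality guarantee is phrased as provable upper and lower bounds on dihedral angles. As an illustrative check one could run on any output mesh (not part of the lattice-cleaving algorithm itself), the six dihedral angles of a tetrahedron can be computed as follows:

    import numpy as np
    from itertools import combinations

    def dihedral_angles(tet):
        """Return the six dihedral angles (degrees) of a tetrahedron.

        tet: (4, 3) array of vertex positions. Each angle lives on an edge
        (i, j) and is measured between the two faces sharing that edge.
        """
        angles = []
        for i, j in combinations(range(4), 2):        # the six edges
            k, l = (v for v in range(4) if v not in (i, j))
            e = tet[j] - tet[i]
            n1 = np.cross(e, tet[k] - tet[i])         # normal of face (i, j, k)
            n2 = np.cross(e, tet[l] - tet[i])         # normal of face (i, j, l)
            c = np.dot(n1, n2) / (np.linalg.norm(n1) * np.linalg.norm(n2))
            angles.append(np.degrees(np.arccos(np.clip(c, -1.0, 1.0))))
        return angles

    # A regular tetrahedron: every dihedral angle is arccos(1/3) = 70.53 degrees.
    reg = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]], float)
    print(dihedral_angles(reg))

Guaranteed-quality meshing algorithms like this one ensure that, over all output tetrahedra, these angles stay bounded away from 0 and 180 degrees, which is what keeps downstream simulations well-conditioned.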



M.S. Okun, S.S. Wu, S. Fayad, H. Ward, D. Bowers, C. Rosado, L. Bowen, C. Jacobson, C.R. Butson, K.D. Foote. “Acute and Chronic Mood and Apathy Outcomes from a Randomized Study of Unilateral STN and GPi DBS,” In PLoS ONE, Vol. 9, No. 12, pp. e114140. December, 2014.

ABSTRACT

Objective: To study mood and behavioral effects of unilateral and staged bilateral subthalamic nucleus (STN) and globus pallidus internus (GPi) deep brain stimulation (DBS) for Parkinson's disease (PD).

Background: There are numerous reports of mood changes following DBS; however, most have focused on bilateral simultaneous STN implants with rapid and aggressive post-operative medication reduction.

Methods: A standardized evaluation was applied to a subset of patients undergoing STN and GPi DBS and who were also enrolled in the NIH COMPARE study. The Unified Parkinson Disease Rating Scale (UPDRS III), the Hamilton depression (HAM-D) and anxiety rating scales (HAM-A), the Yale-Brown obsessive-compulsive rating scale (YBOCS), the Apathy Scale (AS), and the Young mania rating scale (YMRS) were used. The scales were repeated at acute and chronic intervals. A post-operative strategy of non-aggressive medication reduction was employed.

Results: Thirty patients were randomized and underwent unilateral DBS (16 STN, 14 GPi). There were no baseline differences. The GPi group had a higher mean dopaminergic dosage at 1 year; however, the between-group difference in changes from baseline to 1 year was not significant. There were no differences between groups in mood and motor outcomes. When combining STN and GPi groups, the HAM-A scores worsened at 2 months, 4 months, 6 months, and 1 year when compared with baseline; the HAM-D and YMRS scores worsened at 4 months, 6 months, and 1 year; and the UPDRS Motor scores improved at 4 months and 1 year. Psychiatric diagnoses (DSM-IV) did not change. No between-group differences were observed in the cohort of bilateral cases.

Conclusions: There were few changes in mood and behavior with STN or GPi DBS. The approach of staging STN or GPi DBS without aggressive medication reduction could be a viable option for managing PD surgical candidates. A study of bilateral DBS and of medication reduction will be required to better understand risks and benefits of a bilateral approach.



B. Chapman, H. Calandra, S. Crivelli, J. Dongarra, J. Hittinger, C.R. Johnson, S.A. Lathrop, V. Sarkar, E. Stahlberg, J.S. Vetter, D. Williams. “ASCAC Workforce Subcommittee Letter,” Note: Office of Scientific and Technical Information, DOE ASCAC Committee Report, July, 2014.
DOI: 10.2172/1222711

ABSTRACT

Simulation and computing are essential to much of the research conducted at the DOE national laboratories. Experts in the ASCR-relevant Computing Sciences, which encompass a range of disciplines including Computer Science, Applied Mathematics, Statistics and domain sciences, are an essential element of the workforce in nearly all of the DOE national laboratories. This report seeks to identify the gaps and challenges facing DOE with respect to this workforce.

The DOE laboratories provided the committee with information on disciplines in which they experienced workforce gaps. For the larger laboratories, the majority of the cited workforce gaps were in the Computing Sciences. Since this category spans multiple disciplines, it was difficult to obtain comprehensive information on workforce gaps in the available timeframe. Nevertheless, five multi-purpose laboratories provided additional relevant data on recent hiring and retention.

Data on academic coursework was reviewed. Studies on multidisciplinary education in Computational Science and Engineering (CS&E) revealed that, while the number of CS&E courses offered is growing, the overall availability is low and the coursework fails to provide skills for applying CS&E to real-world applications. The number of graduates in different fields within Computer Science (CS) and Computer Engineering (CE) was also reviewed, which confirmed that specialization in DOE areas of interest is less common than in many other areas.

Projections of industry needs and employment figures (mostly for CS and CE) were examined. They indicate a high and increasing demand for graduates in all areas of computing, with little unemployment. This situation will be exacerbated by large numbers of retirees in the coming decade. Further, relatively few US students study toward higher degrees in the Computing Sciences, and those who do are predominantly white and male. As a result of this demographic imbalance, foreign nationals are an increasing fraction of the graduate population and we fail to benefit from including women and underrepresented minorities.

There is already a program that supports graduate education that is tailored to the needs of the DOE laboratories. The Computational Science Graduate Fellowship (CSGF) enables graduates to pursue a multidisciplinary program of education that is coupled with practical experience at the laboratories. It has been demonstrated to be highly effective in both its educational goals and in its ability to supply talent to the laboratories. However, its current size and scope are too limited to solve the workforce problems identified. The committee felt strongly that this proven program should be extended to increase its ability to support the DOE mission.

Since no single program can eliminate the workforce gap, existing recruitment efforts by the laboratories were examined. It was found that the laboratories already make considerable effort to recruit in this area. Although some challenges, such as the inability to match industry compensation, cannot be directly addressed, DOE could develop a roadmap to increase the impact of individual laboratory efforts, to enhance the suitability of existing educational opportunities, to increase the attractiveness of the laboratories, and to attract and sustain a full spectrum of human talent, which includes women and underrepresented minorities.