
SCI Publications

2019


P. R. Atkins, Y. Shin, P. Agrawal, S. Y. Elhabian, R. T. Whitaker, J. A. Weiss, S. K. Aoki, C. L. Peters, A. E. Anderson. “Which Two-dimensional Radiographic Measurements of Cam Femoroacetabular Impingement Best Describe the Three-dimensional Shape of the Proximal Femur?,” In Clinical Orthopaedics and Related Research, Vol. 477, No. 1, 2019.

ABSTRACT

BACKGROUND:

Many two-dimensional (2-D) radiographic views are used to help diagnose cam femoroacetabular impingement (FAI), but there is little consensus as to which view or combination of views is most effective at visualizing the magnitude and extent of the cam lesion (ie, severity). Previous studies have used a single image from a sequence of CT or MR images to serve as a reference standard with which to evaluate the ability of 2-D radiographic views and associated measurements to describe the severity of the cam lesion. However, single images from CT or MRI data may fail to capture the apex of the cam lesion. Thus, it may be more appropriate to use measurements of three-dimensional (3-D) surface reconstructions from CT or MRI data to serve as an anatomic reference standard when evaluating radiographic views and associated measurements used in the diagnosis of cam FAI.

QUESTIONS/PURPOSES:

The purpose of this study was to use digitally reconstructed radiographs and 3-D statistical shape modeling to (1) determine the correlation between 2-D radiographic measurements of cam FAI and 3-D metrics of proximal femoral shape; and (2) identify the combination of radiographic measurements from plain film projections that were most effective at predicting the 3-D shape of the proximal femur.

METHODS:

This study leveraged previously acquired CT images of the femur from a convenience sample of 37 patients (34 males; mean age, 27 years, range, 16-47 years; mean body mass index [BMI], 24.6 kg/m², range, 19.0-30.2 kg/m²) diagnosed with cam FAI imaged between February 2005 and January 2016. Patients were diagnosed with cam FAI based on a culmination of clinical examinations, history of hip pain, and imaging findings. The control group consisted of 59 morphologically normal control participants (36 males; mean age, 29 years, range, 15-55 years; mean BMI, 24.4 kg/m², range, 16.3-38.6 kg/m²) imaged between April 2008 and September 2014. Of these controls, 30 were cadaveric femurs and 29 were living participants. All controls were screened for evidence of femoral deformities using radiographs. In addition, living control participants had no history of hip pain or previous surgery to the hip or lower limbs. CT images were acquired for each participant and the surface of the proximal femur was segmented and reconstructed. Surfaces were input to our statistical shape modeling pipeline, which objectively calculated 3-D shape scores that described the overall shape of the entire proximal femur and of the region of the femur where the cam lesion is typically located. Digital reconstructions for eight plain film views (AP, Meyer lateral, 45° Dunn, modified 45° Dunn, frog-leg lateral, Espié frog-leg, 90° Dunn, and cross-table lateral) were generated from CT data. For each view, measurements of the α angle and head-neck offset were obtained by two researchers (intraobserver correlation coefficients of 0.80-0.94 for the α angle and 0.42-0.80 for the head-neck offset measurements). The relationships between radiographic measurements from each view and the 3-D shape scores (for the entire proximal femur and for the region specific to the cam lesion) were assessed with linear correlation. Additionally, partial least squares regression was used to determine which combination of views and measurements was the most effective at predicting 3-D shape scores.
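
As a rough illustration of the statistical analysis described above (not the authors' pipeline), the Python sketch below correlates per-view radiographic measurements with a 3-D shape score and fits a partial least squares regression model; all data, sizes, and variable names are synthetic placeholders.

# Minimal sketch: relating per-view radiographic measurements (e.g., alpha angle,
# head-neck offset) to a 3-D shape score via linear correlation and partial least
# squares regression. Data are synthetic placeholders; in the study these come from
# digitally reconstructed radiographs and a statistical shape model.
import numpy as np
from scipy.stats import pearsonr
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n_subjects = 96                     # e.g., 37 cam FAI patients + 59 controls
n_measurements = 16                 # e.g., 8 views x (alpha angle, head-neck offset)

X = rng.normal(size=(n_subjects, n_measurements))   # radiographic measurements
shape_score = X @ rng.normal(size=n_measurements) + rng.normal(scale=2.0, size=n_subjects)

# Per-measurement linear correlation with the 3-D shape score
for j in range(n_measurements):
    r, p = pearsonr(X[:, j], shape_score)
    print(f"measurement {j}: r = {r:+.3f}, p = {p:.3g}")

# Partial least squares regression combining measurements across views
pls = PLSRegression(n_components=2)
pls.fit(X, shape_score)
pred = pls.predict(X).ravel()
rmsep = np.sqrt(np.mean((pred - shape_score) ** 2))
print(f"PLS R^2 = {pls.score(X, shape_score):.3f}, RMSEP = {rmsep:.3f}")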

RESULTS:

Three-dimensional shape scores were most strongly correlated with α angle on the cross-table view when considering the entire proximal femur (r = -0.568; p < 0.001) and on the Meyer lateral view when considering the region of the cam lesion (r = -0.669; p < 0.001). Partial least squares regression demonstrated that measurements from the Meyer lateral and 90° Dunn radiographs produced the optimized regression model for predicting shape scores for the proximal femur (R² = 0.405, root mean squared error of prediction [RMSEP] = 1.549) and the region of the cam lesion (R² = 0.525, RMSEP = 1.150). Interestingly, views with larger differences in the α angle and head-neck offset between control and cam FAI groups did not have the strongest correlations with 3-D shape.

CONCLUSIONS:

Considered together, radiographic measurements from the Meyer lateral and 90° Dunn views provided the most effective predictions of 3-D shape of the proximal femur and the region of the cam lesion as determined using shape modeling metrics.

CLINICAL RELEVANCE:

Our results suggest that clinicians should consider using the Meyer lateral and 90° Dunn views to evaluate patients in whom cam FAI is suspected. However, the α angle and head-neck offset measurements from these and other plain film views could describe no more than half of the overall variation in the shape of the proximal femur and cam lesion. Thus, caution should be exercised when evaluating femoral head anatomy using the α angle and head-neck offset measurements from plain film radiographs. Given these findings, we believe there is merit in pursuing research that aims to develop the framework necessary to integrate statistical shape modeling into clinical evaluation, because this could aid in the diagnosis of cam FAI.



M. Berzins. “Time Integration Errors and Energy Conservation Properties of the Stormer Verlet Method Applied to MPM,” In Proceedings of VI International Conference on Particle-based Methods – Fundamentals and Applications, Barcelona, Edited by E. Oñate, M. Bischoff, D.R.J. Owen, P. Wriggers & T. Zohdi, PARTICLES 2019, pp. 555-566. October, 2019.
ISBN: 978-84-121101-1-1

ABSTRACT

The success of the Material Point Method (MPM) in solving many challenging problems nevertheless raises some open questions regarding the fundamental properties of the method such as the energy conservation since being addressed by Bardenhagen and by Love and Sulsky. Similarly while low order symplectic time integration techniques are used with MPM, higher order methods have not been used. For this reason the Stormer Verlet method, a popular and widely-used symplectic method is applied to MPM. Both the time integration error and the energy conservation properties of this method applied to MPM are considered. The method is shown to have locally third order accuracy of energy conservation in time. This is in contrast to the locally second order accuracy in energy conservation of the methods that are used in many MPM calculations. This third accuracy accuracy is demonstrated both locally and globally on a standard MPM test example.
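
For reference, the scheme discussed above is the standard Störmer-Verlet (velocity Verlet) update, written here in its generic form for \dot{x} = v, \dot{v} = a(x); how the scheme is embedded in the MPM grid and particle updates is detailed in the paper itself.

\begin{aligned}
v^{\,n+1/2} &= v^{\,n} + \tfrac{\Delta t}{2}\, a\!\left(x^{\,n}\right), \\
x^{\,n+1}   &= x^{\,n} + \Delta t\, v^{\,n+1/2}, \\
v^{\,n+1}   &= v^{\,n+1/2} + \tfrac{\Delta t}{2}\, a\!\left(x^{\,n+1}\right).
\end{aligned}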



R. Bhalodia, S. Y. Elhabian, L. Kavan, R. T. Whitaker. “A Cooperative Autoencoder for Population-Based Regularization of CNN Image Registration,” In Medical Image Computing and Computer Assisted Intervention – MICCAI 2019, Springer International Publishing, pp. 391--400. 2019.

ABSTRACT

Spatial transformations are enablers in a variety of medical image analysis applications that entail aligning images to a common coordinate system. Population analysis of such transformations is expected to capture the underlying image and shape variations, and hence these transformations are required to produce anatomically feasible correspondences. This is usually enforced through some smoothness-based generic metric or regularization of the deformation field. Alternatively, population-based regularization has been shown to produce anatomically accurate correspondences in cases where anatomically unaware (i.e., data independent) regularization fails. Recently, deep networks have been used to generate spatial transformations in an unsupervised manner, and, once trained, these networks are computationally faster and as accurate as conventional, optimization-based registration methods. However, the deformation fields produced by these networks require smoothness penalties, just as conventional registration methods do, and ignore population-level statistics of the transformations. Here, we propose a novel neural network architecture that simultaneously learns and uses the population-level statistics of the spatial transformations to regularize the neural networks for unsupervised image registration. This regularization is in the form of a bottleneck autoencoder, which learns and adapts to the population of transformations required to align input images by encoding the transformations to a low dimensional manifold. The proposed architecture produces deformation fields that describe the population-level features and associated correspondences in an anatomically relevant manner and are statistically compact relative to the state-of-the-art approaches while maintaining computational efficiency. We demonstrate the efficacy of the proposed architecture on synthetic data sets, as well as 2D and 3D medical data.
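
One way to read the described architecture as an objective (our notation, a schematic only; the exact formulation is in the paper) is an unsupervised registration loss augmented with a bottleneck-autoencoder reconstruction penalty on the predicted displacement field u_\theta:

\mathcal{L}(\theta,\phi) \;=\;
\mathrm{Sim}\!\left(I_{\mathrm{fixed}},\, I_{\mathrm{moving}} \circ (\mathrm{Id} + u_\theta)\right)
\;+\; \lambda\, \big\lVert u_\theta - \mathrm{AE}_\phi(u_\theta) \big\rVert_2^2 ,

where Sim is an image-similarity metric and AE_\phi is the cooperative autoencoder trained jointly on the population of transformations.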



D. Clark, K. Johnson, C. Butson, C. Lebel, D. Gobbi, R. Ramasubbu, Z. Kiss. “White matter tracts activated by successful subgenual cingulate deep brain stimulation,” In European Neuropsychopharmacology, Vol. 29, Elsevier, pp. S532--S533. 2019.
DOI: 10.1016/j.euroneuro.2018.11.789



G. Duffley, D. N. Anderson, J. Vorwerk, A. C. Dorval, C. R. Butson. “Evaluation of methodologies for computing the deep brain stimulation volume of tissue activated,” In Journal of Neural Engineering, Aug, 2019.
DOI: 10.1088/1741-2552/ab3c95

ABSTRACT

Computational models are a popular tool for predicting the effects of deep brain stimulation (DBS) on neural tissue. One commonly used model, the volume of tissue activated (VTA), is computed using multiple methodologies. We quantified differences in the VTAs generated by five methodologies: the traditional axon model method, the electric field norm, and three activating function-based approaches - the activating function at each grid point in the tangential direction (AF-Tan) or in the maximally activating direction (AF-3D), and the maximum activating function along the entire length of a tangential fiber (AF-Max).

Approach: We computed the VTA using each method across multiple stimulation settings. The resulting volumes were compared for similarity, and the methodologies were analyzed for their differences in behavior.

Main Results: Activation threshold values for both the electric field norm and the activating function vary with regard to electrode configuration, pulse width, and frequency. All methods produced highly similar volumes for monopolar stimulation. For bipolar electrode configurations, only the maximum activating function along the tangential axon method, AF-Max, produced similar volumes to those produced by the axon model method. Further analysis revealed that both of these methods are biased by their exclusive use of tangential fiber orientations. In contrast, the activating function in the maximally activating direction method, AF-3D, produces a VTA that is free of axon orientation and projection bias.

Significance: Simulating tangentially oriented axons, the standard approach of computing the VTA, is too computationally expensive for widespread implementation and yields results biased by the assumption of tangential fiber orientation. In this work, we show that a computationally efficient method based on the activating function, AF-Max, reliably reproduces the VTAs generated by direct axon modeling. Further, we propose another method, AF-3D, as a potentially superior model for representing generic neural tissue activation.
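
The following toy numpy sketch (not the paper's implementation) illustrates two of the activating function variants described above for potentials sampled along a single tangentially oriented fiber; the potential, spacing, and threshold are illustrative placeholders.

# Toy sketch of the tangential activating function and its AF-Max reduction.
import numpy as np

dx = 0.5e-3                                   # grid spacing along the fiber [m], illustrative
x = np.arange(-20e-3, 20e-3, dx)
V_e = 1.0 / np.sqrt(x**2 + (2e-3) ** 2)       # placeholder point-source extracellular potential

# Discrete activating function: second difference of V_e along the fiber direction.
af = (V_e[:-2] - 2.0 * V_e[1:-1] + V_e[2:]) / dx**2

threshold = 0.8 * af.max()                    # illustrative activation threshold

# AF-Tan: compare the activating function at each grid point to the threshold.
activated_tan = af >= threshold

# AF-Max: the whole fiber counts as activated if the maximum of the activating
# function anywhere along its length exceeds the threshold.
activated_max = af.max() >= threshold

print(f"AF-Tan: {activated_tan.sum()} of {af.size} points above threshold")
print(f"AF-Max: fiber activated = {activated_max}")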



Anupama Goparaju. “Evaluation and validation of off-the-shelf statistical shape modeling tools in clinical applications,” School of Computing, University of Utah, 2019.

ABSTRACT

Statistical shape modeling (SSM) enables quantitative analysis of anatomical shapes. SSM is widely used in biology and medicine to model anatomies and their shape variability within populations. Technological advancements in in vivo imaging have led to the development of various open-source tools that can automate statistical analysis of shapes. These tools are based on different modeling approaches and assumptions to accomplish the same objective. However, little work has been done in the systematic evaluation and validation of SSM tools in clinical applications that rely on morphometric quantifications.

In this thesis, we emphasize and demonstrate the importance of evaluation and validation of SSM tools in different clinical applications. To this end, SSM can help in assessing patient-specific anatomical information by relating it to population-level morphometrics. The clinical needs that are driven by patient-specific information, such as implant design and selection, motion tracking, surgical planning, and lesion screening, are considered for analysis. The shape models for these clinical needs are analyzed from three widely used, state-of-the-art SSM tools, namely, ShapeWorks, Deformetrica, and SPHARM-PDM. The shape models are evaluated and validated using intrinsic and extrinsic assessments. The evaluation and validation experiments show that SSM tools display different levels of consistency in performance. ShapeWorks and Deformetrica models are more consistent than models from SPHARM-PDM due to their group-wise approach of estimating shape correspondences. Furthermore, ShapeWorks and Deformetrica shape models better capture clinically relevant population-level variability than SPHARM-PDM models.

In this thesis, a literature survey is performed to identify the existing studies that have performed evaluation and validation of SSM tools in clinical applications. The need for such an assessment is showcased using a proof-of-concept experiment. A clinical application-driven validation framework is proposed to compare the performance of SSM tools. The framework is tested on clinical needs to provide insights about the deployment of SSM in real clinical scenarios.



A. Gyulassy, P.-T. Bremer, V. Pascucci. “Shared-Memory Parallel Computation of Morse-Smale Complexes with Improved Accuracy,” In IEEE Transactions on Visualization and Computer Graphics, Vol. 25, No. 1, IEEE, pp. 1183--1192. Jan, 2019.
DOI: 10.1109/tvcg.2018.2864848

ABSTRACT

Topological techniques have proven to be a powerful tool in the analysis and visualization of large-scale scientific data. In particular, the Morse-Smale complex and its various components provide a rich framework for robust feature definition and computation. Consequently, there now exist a number of approaches to compute Morse-Smale complexes for large-scale data in parallel. However, existing techniques are based on discrete concepts which produce the correct topological structure but are known to introduce grid artifacts in the resulting geometry. Here, we present a new approach that combines parallel streamline computation with combinatorial methods to construct a high-quality discrete Morse-Smale complex. In addition to being invariant to the orientation of the underlying grid, this algorithm allows users to selectively build a subset of features using high-quality geometry. In particular, a user may specifically select which ascending/descending manifolds are reconstructed with improved accuracy, focusing computational effort where it matters for subsequent analysis. This approach computes Morse-Smale complexes for larger data than previously feasible with significant speedups. We demonstrate and validate our approach using several examples from a variety of different scientific domains, and evaluate the performance of our method.



M. Han, I. Wald, W. Usher, Q. Wu, F. Wang, V. Pascucci, C. D. Hansen, C. R. Johnson. “Ray Tracing Generalized Tube Primitives: Method and Applications,” In Computer Graphics Forum, Vol. 38, No. 3, John Wiley & Sons Ltd., 2019.

ABSTRACT

We present a general high-performance technique for ray tracing generalized tube primitives. Our technique efficiently supports tube primitives with fixed and varying radii, general acyclic graph structures with bifurcations, and correct transparency with interior surface removal. Such tube primitives are widely used in scientific visualization to represent diffusion tensor imaging tractographies, neuron morphologies, and scalar or vector fields of 3D flow. We implement our approach within the OSPRay ray tracing framework, and evaluate it on a range of interactive visualization use cases of fixed- and varying-radius streamlines, pathlines, complex neuron morphologies, and brain tractographies. Our proposed approach provides interactive, high-quality rendering, with low memory overhead.



A. B. Hanson, R. N. Lee, C. Vachet, I. J. Schwerdt, T. Tasdizen, L. W. McDonald IV. “Quantifying Impurity Effects on the Surface Morphology of α-U3O8,” In Analytical Chemistry, 2019.
DOI: 10.1021/acs.analchem.9b02013

ABSTRACT

The morphological effect of impurities on α-U3O8 has been investigated. This study provides the first evidence that the presence of impurities can alter nuclear material morphology, and these changes can be quantified to aid in revealing processing history. Four elements: Ca, Mg, V, and Zr were implemented in the uranyl peroxide synthesis route and studied individually within the α-U3O8. Six total replicates were synthesized, and replicates 1–3 were filtered and washed with Millipore water (18.2 MΩ) to remove any residual nitrates. Replicates 4–6 were filtered but not washed to determine the amount of impurities removed during washing. Inductively coupled plasma mass spectrometry (ICP-MS) was employed at key points during the synthesis to quantify incorporation of the impurity. Each sample was characterized using powder X-ray diffraction (p-XRD), high-resolution scanning electron microscopy (HRSEM), and SEM with energy dispersive X-ray spectroscopy (SEM-EDS). p-XRD was utilized to evaluate any crystallographic changes due to the impurities; HRSEM imagery was analyzed with Morphological Analysis for MAterials (MAMA) software and machine learning classification for quantification of the morphology; and SEM-EDS was utilized to locate the impurity within the α-U3O8. All samples were found to be quantifiably distinguishable, further demonstrating the utility of quantitative morphology as a signature for the processing history of nuclear material.



S. T. Heffernan, N. Ly, B. J. Mower, C. Vachet, I. J. Schwerdt, T. Tasdizen, L. W. McDonald IV. “Identifying surface morphological characteristics to differentiate between mixtures of U3O8 synthesized from ammonium diuranate and uranyl peroxide,” In Radiochimica Acta, 2019.

ABSTRACT

In the present study, surface morphological differences of mixtures of triuranium octoxide (U3O8), synthesized from uranyl peroxide (UO4) and ammonium diuranate (ADU), were investigated. The purity of each sample was verified using powder X-ray diffractometry (p-XRD), and scanning electron microscopy (SEM) images were collected to identify unique morphological features. The U3O8 from ADU and from UO4 was found to be morphologically distinct. Qualitatively, particles from both routes have similar features, being primarily circular in shape. Using the morphological analysis of materials (MAMA) software, particle shape and size were quantified. UO4 was found to produce U3O8 particles three times the area of those produced from ADU. With the starting morphologies quantified, U3O8 samples from ADU and UO4 were physically mixed in known quantities. SEM images were collected of the mixed samples, and the MAMA software was used to quantify particle attributes. As U3O8 particles from ADU were distinct from those from UO4, the composition of the mixtures could be quantified using SEM imaging coupled with particle analysis. This provides a novel means of quantifying processing histories of mixtures of uranium oxides. Machine learning was also used to help further quantify characteristics in the image database through direct classification and particle segmentation using deep learning techniques based on convolutional neural networks (CNNs). It demonstrates that these techniques can distinguish the mixtures with high accuracy as well as showing significant differences in morphology between the mixtures. Results from this study demonstrate the power of quantitative morphological analysis for determining the processing history of nuclear materials.
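
A minimal sketch (using scikit-image, not the MAMA software) of the kind of per-particle shape and size quantification described above; the binarized image here is a random placeholder standing in for a segmented SEM micrograph.

# Label particles in a binarized image and report size and circularity per particle.
import numpy as np
from scipy.ndimage import binary_dilation
from skimage.measure import label, regionprops

rng = np.random.default_rng(3)
seeds = rng.random((256, 256)) > 0.995             # placeholder; use a binarized SEM image here
particles = binary_dilation(seeds, iterations=3)   # grow seeds into measurable blobs

# Report size and circularity (4*pi*A/P^2) for the first few labeled particles.
for region in regionprops(label(particles))[:10]:
    area = region.area                             # in pixels; convert with the SEM pixel size
    circularity = 4 * np.pi * area / max(region.perimeter, 1.0) ** 2
    print(f"particle {region.label}: area = {area} px, circularity = {circularity:.2f}")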



D. Hoang, P. Klacansky, H. Bhatia, P.-T. Bremer, P. Lindstrom, V. Pascucci. “A Study of the Trade-off Between Reducing Precision and Reducing Resolution for Data Analysis and Visualization,” In IEEE Transactions on Visualization and Computer Graphics, Vol. 25, No. 1, IEEE, pp. 1193--1203. Jan, 2019.
DOI: 10.1109/tvcg.2018.2864853

ABSTRACT

There currently exist two dominant strategies to reduce data sizes in analysis and visualization: reducing the precision of the data, e.g., through quantization, or reducing its resolution, e.g., by subsampling. Both have advantages and disadvantages and both face fundamental limits at which the reduced information ceases to be useful. The paper explores the additional gains that could be achieved by combining both strategies. In particular, we present a common framework that allows us to study the trade-off in reducing precision and/or resolution in a principled manner. We represent data reduction schemes as progressive streams of bits and study how various bit orderings such as by resolution, by precision, etc., impact the resulting approximation error across a variety of data sets as well as analysis tasks. Furthermore, we compute streams that are optimized for different tasks to serve as lower bounds on the achievable error. Scientific data management systems can use the results presented in this paper as guidance on how to store and stream data to make efficient use of the limited storage and bandwidth in practice.
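
In the spirit of the trade-off studied above, the small numpy experiment below reduces a field to roughly the same bit budget either by dropping precision (uniform quantization) or by dropping resolution (subsampling with linear reconstruction) and compares the resulting error; this is only an illustration of the two reduction axes, not the paper's bit-stream framework.

# Compare reducing precision vs reducing resolution at a matched bit budget.
import numpy as np

rng = np.random.default_rng(1)
n = 4096
x = np.linspace(0, 8 * np.pi, n)
data = np.sin(x) + 0.3 * np.sin(7 * x) + 0.05 * rng.normal(size=n)   # placeholder field

def quantize(values, bits):
    """Uniform scalar quantization to the given number of bits per sample."""
    lo, hi = values.min(), values.max()
    levels = 2 ** bits - 1
    q = np.round((values - lo) / (hi - lo) * levels)
    return q / levels * (hi - lo) + lo

def subsample(values, factor):
    """Keep every `factor`-th sample, reconstruct by linear interpolation."""
    idx = np.arange(0, len(values), factor)
    return np.interp(np.arange(len(values)), idx, values[idx])

full_bits = 32
# Same budget, about 1/4 of the full-precision size, spent two different ways:
reduced_precision = quantize(data, bits=full_bits // 4)    # 8 bits/sample, all samples
reduced_resolution = subsample(data, factor=4)             # 32 bits/sample, 1/4 of samples

rmse = lambda a, b: np.sqrt(np.mean((a - b) ** 2))
print(f"reduce precision : RMSE = {rmse(data, reduced_precision):.4e}")
print(f"reduce resolution: RMSE = {rmse(data, reduced_resolution):.4e}")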



J. K. Holmen, B. Peterson, A. Humphrey, D. Sunderland, O. H. Diaz-Ibarra, J. N. Thornock, M. Berzins. “Portably Improving Uintah's Readiness for Exascale Systems Through the Use of Kokkos,” SCI Institute, 2019.

ABSTRACT

Uncertainty and diversity in future HPC systems, including those for exascale, make portable codebases desirable. To ease future ports, the Uintah Computational Framework has adopted the Kokkos C++ Performance Portability Library. This paper describes infrastructure advancements and performance improvements using partitioning functionality recently added to Kokkos within Uintah's MPI+Kokkos hybrid parallelism approach. Results are presented for two challenging calculations that have been refactored to support Kokkos::OpenMP and Kokkos::Cuda back-ends. These results demonstrate performance improvements up to (i) 2.66x when refactoring for portability, (ii) 81.59x when adding loop-level parallelism via Kokkos back-ends, and (iii) 2.63x when more efficiently using a node. Good strong-scaling characteristics to 442,368 threads across 1728 Knights Landing processors are also shown. These improvements have been achieved with little added overhead (sub-millisecond, consuming up to 0.18% of per-timestep time). Kokkos adoption and refactoring lessons are also discussed.



J. K. Holmen, B. Peterson, M. Berzins. “An Approach for Indirectly Adopting a Performance Portability Layer in Large Legacy Codes,” In 2nd International Workshop on Performance, Portability, and Productivity in HPC (P3HPC), In conjunction with SC19, 2019.

ABSTRACT

Diversity among supported architectures in current and emerging high performance computing systems, including those for exascale, makes portable codebases desirable. Portability of a codebase can be improved using a performance portability layer to provide access to multiple underlying programming models through a single interface. Direct adoption of a performance portability layer, however, poses challenges for large pre-existing software frameworks that may need to preserve legacy code and/or adopt other programming models in the future. This paper describes an approach for indirect adoption that introduces a framework-specific portability layer between the application developer and the adopted performance portability layer to help improve legacy code support and long-term portability for future architectures and programming models. This intermediate layer uses loop-level, application-level, and build-level components to ease adoption of a performance portability layer in large legacy codebases. Results are shown for two challenging case studies using this approach to make portable use of OpenMP and CUDA via Kokkos in an asynchronous many-task runtime system, Uintah. These results show performance improvements up to 2.7x when refactoring for portability and 2.6x when more efficiently using a node. Good strong-scaling to 442,368 threads across 1,728 Knights Landing processors is also shown using MPI+Kokkos at scale.



Alan Humphrey. “Scalable Asynchronous Many-Task Runtime Solutions to Globally Coupled Problems,” School of Computing, University of Utah, 2019.

ABSTRACT

Thermal radiation is an important physical process and a key mechanism in a class of challenging engineering and research problems. The principal exascale-candidate application motivating this research is a large eddy simulation (LES) aimed at predicting the performance of a commercial, 1200 MWe ultra-super critical (USC) coal boiler, with radiation as the dominant mode of heat transfer. Scalable modeling of radiation is currently one of the most challenging problems in large-scale simulations, due to the global, all-to-all physical and resulting computational connectivity. Fundamentally, radiation models impose global data dependencies, requiring each compute node in a distributed memory system to send data to, and receive data from, potentially every other node. This process can be prohibitively expensive on large distributed memory systems due to pervasive all-to-all message passing interface (MPI) communication. Correctness is also difficult to achieve when coordinating global communication of this kind. Asynchronous many-task (AMT) runtime systems are a possible leading alternative to mitigate programming challenges at the runtime system-level, sheltering the application developer from the complexities introduced by future architectures. However, large-scale parallel applications with complex global data dependencies, such as in radiation modeling, pose significant scalability challenges themselves, even for a highly tuned AMT runtime. The principal aims of this research are to demonstrate how the Uintah AMT runtime can be adapted, making it possible for complex multiphysics applications with radiation to scale on current petascale and emerging exascale architectures. For Uintah, which uses a directed acyclic graph to represent the computation and associated data dependencies, these aims are achieved through: 1) the use of an AMT runtime; 2) adapting and leveraging Uintah’s adaptive mesh refinement support to dramatically reduce computation, communication volume, and nodal memory footprint for radiation calculations; and 3) automating the all-to-all communication at the runtime level through a task graph dependency analysis phase designed to efficiently manage data dependencies inherent in globally coupled problems.



A. Humphrey, M. Berzins. “An Evaluation of An Asynchronous Task Based Dataflow Approach For Uintah,” In 2019 IEEE 43rd Annual Computer Software and Applications Conference (COMPSAC), Vol. 2, pp. 652-657. July, 2019.
ISSN: 0730-3157
DOI: 10.1109/COMPSAC.2019.10282

ABSTRACT

The challenge of running complex physics code on the largest computers available has led to dataflow paradigms being explored. While such approaches are often applied at smaller scales, the challenge of extreme-scale dataflow computing remains. The Uintah dataflow framework has consistently used dataflow computing at the largest scales on complex physics applications. At present, Uintah contains two main dataflow models, both based upon asynchronous communication. One uses a static graph-based approach with asynchronous communication and the other uses a more dynamic approach that was introduced almost a decade ago. Subsequent changes within the Uintah runtime system, combined with many more large-scale experiments, have necessitated a reevaluation of these two approaches, comparing them in the context of large-scale problems. While the static approach has worked well for some large-scale simulations, the dynamic approach is seen to offer performance improvements over the static case for a challenging fluid-structure interaction problem at large scale that involves fluid flow and a moving solid represented using a particle method on an adaptive mesh.



K. A. Johnson, P. T. Fletcher, D. Servello, A. Bona, M. Porta, J. L. Ostrem, E. Bardinet, M. Welter, A. M. Lozano, J. C. Baldermann, J. Kuhn, D. Huys, T. Foltynie, M. Hariz, E. M. Joyce, L. Zrinzo, Z. Kefalopoulou, J. Zhang, F. Meng, C. Zhang, Z. Ling, X. Xu, X. Yu, A. YJM Smeets, L. Ackermans, V. Visser-Vandewalle, A. Y. Mogilner, M. H. Pourfar, L. Almeida, A. Gunduz, W. Hu, K. D. Foote, M. S. Okun, C. R. Butson. “Image-based analysis and long-term clinical outcomes of deep brain stimulation for Tourette syndrome: a multisite study,” In Journal of Neurology, Neurosurgery & Psychiatry, BMJ Publishing Group, 2019.
DOI: 10.1136/jnnp-2019-320379

ABSTRACT

BACKGROUND:
Deep brain stimulation (DBS) can be an effective therapy for tics and comorbidities in select cases of severe, treatment-refractory Tourette syndrome (TS). Clinical responses remain variable across patients, which may be attributed to differences in the location of the neuroanatomical regions being stimulated. We evaluated active contact locations and regions of stimulation across a large cohort of patients with TS in an effort to guide future targeting.

METHODS:
We collected retrospective clinical data and imaging from 13 international sites on 123 patients. We assessed the effects of DBS over time in 110 patients who were implanted in the centromedial (CM) thalamus (n=51), globus pallidus internus (GPi) (n=47), nucleus accumbens/anterior limb of the internal capsule (n=4) or a combination of targets (n=8). Contact locations (n=70 patients) and volumes of tissue activated (n=63 patients) were coregistered to create probabilistic stimulation atlases.
RESULTS:
Tics and obsessive-compulsive behaviour (OCB) significantly improved over time (p<0.01), and there were no significant differences across brain targets (p>0.05). The median time was 13 months to reach a 40% improvement in tics, and there were no significant differences across targets (p=0.84), presence of OCB (p=0.09) or age at implantation (p=0.08). Active contacts were generally clustered near the target nuclei, with some variability that may reflect differences in targeting protocols, lead models and contact configurations. There were regions within and surrounding GPi and CM thalamus that improved tics for some patients but were ineffective for others. Regions within, superior or medial to GPi were associated with a greater improvement in OCB than regions inferior to GPi.
CONCLUSION:
The results collectively indicate that DBS may improve tics and OCB, the effects may develop over several months, and stimulation locations relative to structural anatomy alone may not predict response. This study was the first to visualise and evaluate the regions of stimulation across a large cohort of patients with TS to generate new hypotheses about potential targets for improving tics and comorbidities.
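
As a minimal sketch (not the study's pipeline) of the probabilistic stimulation atlases mentioned in the methods above, the code below stacks binary VTA masks already coregistered to a common space and summarizes each voxel by how often it was stimulated and by the mean clinical improvement of the patients whose VTAs cover it; all sizes and values are illustrative placeholders.

# Build a toy probabilistic stimulation atlas from coregistered binary VTA masks.
import numpy as np

rng = np.random.default_rng(2)
n_patients, shape = 20, (32, 32, 32)                   # illustrative sizes

vtas = rng.random((n_patients, *shape)) > 0.9          # placeholder binary VTA masks
improvement = rng.uniform(0, 100, size=n_patients)     # e.g., % tic improvement per patient

coverage = vtas.sum(axis=0)                            # number of patients stimulating each voxel

# Mean improvement over the patients whose VTA covers each voxel (zero where uncovered).
weighted = (vtas * improvement[:, None, None, None]).sum(axis=0)
mean_improvement = np.divide(weighted, coverage,
                             out=np.zeros(shape), where=coverage > 0)

print("max coverage:", coverage.max(),
      "| best voxel mean improvement: %.1f%%" % mean_improvement.max())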



R.B. Lanfredi, J.D. Schroeder, C. Vachet, T. Tasdizen. “Adversarial regression training for visualizing the progression of chronic obstructive pulmonary disease with chest x-rays,” In International Conference on Medical Image Computing and Computer-Assisted Intervention, 2019.

ABSTRACT

Knowledge of what spatial elements of medical images deep learning methods use as evidence is important for model interpretability, trustworthiness, and validation. There is a lack of such techniques for models in regression tasks. We propose a method, called visualization for regression with a generative adversarial network (VR-GAN), for formulating adversarial training specifically for datasets containing regression target values characterizing disease severity. We use a conditional generative adversarial network where the generator attempts to learn to shift the output of a regressor through creating disease effect maps that are added to the original images. Meanwhile, the regressor is trained to predict the original regression value for the modified images. A model trained with this technique learns to provide visualization for how the image would appear at different stages of the disease. We analyze our method in a dataset of chest x-rays associated with pulmonary function tests, used for diagnosing chronic obstructive pulmonary disease (COPD). For validation, we compute the difference of two registered x-rays of the same patient at different time points and correlate it to the generated disease effect map. The proposed method outperforms a technique based on classification and provides realistic-looking images, making modifications to images following what radiologists usually observe for this disease. Implementation code is available at https://github.com/ricbl/vrgan.
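
Schematically, and in our own notation rather than the paper's, the adversarial regression setup described above can be written as follows: x is an input x-ray with regression target y, G produces a disease effect map for a desired shift \delta, and R is the regressor trained on the modified image; the regularization term on the effect map is our illustrative addition, not taken from the paper.

\begin{aligned}
\tilde{x} &= x + G(x, \delta), \\
\mathcal{L}_{R} &= \big\lvert R(\tilde{x}) - y \big\rvert, \\
\mathcal{L}_{G} &= \big\lvert R(\tilde{x}) - (y + \delta) \big\rvert \;+\; \lambda \,\lVert G(x,\delta) \rVert_{1}.
\end{aligned}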



B. Peterson. “Portable and Performant GPU/Heterogeneous Asynchronous Many-task Runtime System,” Subtitled “Ph.D. Dissertation,” University of Utah, School of Computing, Dec, 2019.

ABSTRACT

Asynchronous many-task (AMT) runtimes are maturing as a model for computing simulations on a diverse range of architectures at large-scale. The Uintah AMT framework is driven by a philosophy of maintaining an application layer distinct from the underlying runtime while operating on an adaptive mesh grid. This model has enabled task developers to focus on writing task code while minimizing their interaction with MPI transfers, halo processing, data stores, coherency of simulation variables, and proper ordering of task execution. Further, Uintah is implementing an architecture portable solution by utilizing the Kokkos programming portability layer so that application tasks can be written in one codebase and performantly executed on CPUs, GPUs, Intel Xeon Phis, and other future architectures.

Of these architectures, it is perhaps Nvidia GPUs that introduce the greatest usability and portability challenges for AMT runtimes. Specifically, Nvidia GPUs require code to adhere to a proprietary programming model, use separate high capacity memory, utilize asynchrony of data movement and execution, and partition execution units among many streaming multiprocessors. Numerous novel solutions to both Uintah and Kokkos are required to abstract these GPU features into an AMT runtime while preserving an application layer and enabling portability.

The focus of this AMT research is largely split into two main parts, performance and portability. Runtime performance comes from 1) minimizing runtime overhead when preparing simulation variables for tasks prior to execution, and 2) executing a heterogeneous mixture of tasks to keep compute node processing units busy. Preparation of simulation variables, especially halo processing, receives significant emphasis as Uintah’s target problems heavily rely on local and global halos. In addition, this work covers automated data movement of simulation variables between host and GPU memory as well as distributing tasks throughout a GPU for execution.

Portability is a productivity necessity as application developers struggle to maintain three sets of code per task, namely code for single CPU core execution, CUDA code for GPU tasks, and a third set of code for Xeon Phi parallel execution. Programming portability layers, such as Kokkos, provide a framework for this portability, however, Kokkos itself requires modifications to support GPU execution of finer grained tasks typical of AMT runtimes like Uintah. Currently, Kokkos GPU parallel loop execution is bulk-synchronous. This research demonstrates a model for portable loops that is asynchronous, nonblocking, and performant. Additionally, integrating GPU portability into Uintah required additional modifications to aid the application developer in avoiding Kokkos specific details.

This research concludes by demonstrating a GPU-enabled AMT runtime that is both performant and portable. Further, application developers are not burdened with additional architecture specific requirements. Results are demonstrated using production task codebases written for CPUs, GPUs, and Kokkos portability and executed in GPU homogeneous and CPU/GPU heterogeneous environments.



A. Prakosa, H.J. Arevalo, D. Deng, P.M. Boyle, P.P. Nikolov, H. Ashikaga, J.E. Blauer, E. Ghafoori, C.J. Park, R.C. Blake III, F.T. Han, R.S. MacLeod, H.R. Halperin, D.J. Callans, R. Ranjan, J. Chrispin, S. Nazarian, N.A. Trayanova. “Personalized Virtual-heart Technology for Guiding the Ablation of Infarct-related Ventricular Tachycardia,” In Nature Biomedical Engineering, Vol. 2, pp. 732–740. 2019.
DOI: 10.1038/s41551-018-0282-2

ABSTRACT

Ventricular tachycardia (VT), which can lead to sudden cardiac death, occurs frequently in patients with myocardial infarction. Catheter-based radio-frequency ablation of cardiac tissue has achieved only modest efficacy, owing to the inaccurate identification of ablation targets by current electrical mapping techniques, which can lead to extensive lesions and to a prolonged, poorly tolerated procedure. Here, we show that personalized virtual-heart technology based on cardiac imaging and computational modelling can identify optimal infarct-related VT ablation targets in retrospective animal (five swine) and human studies (21 patients), as well as in a prospective feasibility study (five patients). We first assessed, using retrospective studies (one of which included a proportion of clinical images with artefacts), the capability of the technology to determine the minimum-size ablation targets for eradicating all VTs. In the prospective study, VT sites predicted by the technology were targeted directly, without relying on prior electrical mapping. The approach could improve infarct-related VT ablation guidance, where accurate identification of patient-specific optimal targets could be achieved on a personalized virtual heart before the clinical procedure.



D. Sahasrabudhe, M. Berzins, J. Schmidt. “Node failure resiliency for Uintah without checkpointing,” In Concurrency and Computation: Practice and Experience, pp. e5340. 2019.
DOI: 10.1002/cpe.5340

ABSTRACT

The frequency of failures in upcoming exascale supercomputers may well be greater than at present due to many-core architectures if component failure rates remain unchanged. This potential increase in failure frequency coupled with I/O challenges at exascale may prove problematic for current resiliency approaches such as checkpoint restarting, although the use of fast intermediate memory may help. Algorithm-Based Fault Tolerance (ABFT) using Adaptive Mesh Refinement (AMR) is one resiliency approach used to address these challenges. For adaptive mesh codes, a coarse mesh version of the solution may be used to restore the fine mesh solution. This paper addresses the implementation of the ABFT approach within the Uintah software framework: both at a software level within Uintah and in the data reconstruction method used for the recovery of lost data. This method has two problems: inaccuracies introduced during the reconstruction propagate forward in time, and the physical consistency of variables such as positivity or boundedness may be violated during interpolation. These challenges can be addressed by the combination of two techniques: 1. a fault-tolerant MPI implementation to recover from runtime node failures, and 2. high-order interpolation schemes to preserve the physical solution and reconstruct lost data. The approach considered here uses a "Limited Essentially Non-Oscillatory" (LENO) scheme along with AMR to rebuild the lost data without checkpointing using Uintah. Experiments were carried out using a fault-tolerant MPI implementation, ULFM, to recover from runtime failure, and LENO to recover data on patches belonging to failed ranks, while the simulation was continued to the end. Results show that this ABFT approach is up to 10x faster than the traditional checkpointing method. The new interpolation approach is more accurate than linear interpolation and not subject to the overshoots found in other interpolation methods.
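
The 1-D numpy sketch below illustrates the underlying recovery idea only (it is not Uintah's LENO scheme): when a fine-mesh patch is lost, it is rebuilt from the surviving coarse-level copy, and a higher-order reconstruction recovers a smooth solution more accurately than linear interpolation; the refinement ratio, test function, and interpolation orders are illustrative assumptions.

# Rebuild a lost fine-mesh patch from coarse-level data with different interpolation orders.
import numpy as np
from scipy.interpolate import interp1d

fine_x = np.linspace(0.0, 1.0, 257)
coarse_x = fine_x[::4]                          # coarse AMR level: 4x coarser, illustrative ratio
solution = lambda x: np.sin(2 * np.pi * x) + 0.25 * np.cos(6 * np.pi * x)

fine_true = solution(fine_x)                    # fine-level data lost with the failed rank
coarse = solution(coarse_x)                     # surviving coarse-level copy

linear_rebuild = interp1d(coarse_x, coarse, kind="linear")(fine_x)
cubic_rebuild = interp1d(coarse_x, coarse, kind="cubic")(fine_x)

rmse = lambda a, b: np.sqrt(np.mean((a - b) ** 2))
print(f"linear reconstruction RMSE      : {rmse(fine_true, linear_rebuild):.3e}")
print(f"higher-order reconstruction RMSE: {rmse(fine_true, cubic_rebuild):.3e}")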