
SCI Publications

2019


A. Humphrey. “Scalable Asynchronous Many-Task Runtime Solutions to Globally Coupled Problems,” School of Computing, University of Utah, 2019.

ABSTRACT

Thermal radiation is an important physical process and a key mechanism in a class of challenging engineering and research problems. The principal exascale-candidate application motivating this research is a large eddy simulation (LES) aimed at predicting the performance of a commercial, 1200 MWe ultra-super critical (USC) coal boiler, with radiation as the dominant mode of heat transfer. Scalable modeling of radiation is currently one of the most challenging problems in large-scale simulations, due to the global, all-to-all physical and resulting computational connectivity. Fundamentally, radiation models impose global data dependencies, requiring each compute node in a distributed memory system to send data to, and receive data from, potentially every other node. This process can be prohibitively expensive on large distributed memory systems due to pervasive all-to-all message passing interface (MPI) communication. Correctness is also difficult to achieve when coordinating global communication of this kind. Asynchronous many-task (AMT) runtime systems are a possible leading alternative to mitigate programming challenges at the runtime system-level, sheltering the application developer from the complexities introduced by future architectures. However, large-scale parallel applications with complex global data dependencies, such as in radiation modeling, pose significant scalability challenges themselves, even for a highly tuned AMT runtime. The principal aims of this research are to demonstrate how the Uintah AMT runtime can be adapted, making it possible for complex multiphysics applications with radiation to scale on current petascale and emerging exascale architectures. For Uintah, which uses a directed acyclic graph to represent the computation and associated data dependencies, these aims are achieved through: 1) the use of an AMT runtime; 2) adapting and leveraging Uintah’s adaptive mesh refinement support to dramatically reduce computation, communication volume, and nodal memory footprint for radiation calculations; and 3) automating the all-to-all communication at the runtime level through a task graph dependency analysis phase designed to efficiently manage data dependencies inherent in globally coupled problems.



A. Humphrey, M. Berzins. “An Evaluation of An Asynchronous Task Based Dataflow Approach For Uintah,” In 2019 IEEE 43rd Annual Computer Software and Applications Conference (COMPSAC), Vol. 2, pp. 652-657. July, 2019.
ISSN: 0730-3157
DOI: 10.1109/COMPSAC.2019.10282

ABSTRACT

The challenge of running complex physics codes on the largest computers available has led to dataflow paradigms being explored. While such approaches are often applied at smaller scales, the challenge of extreme-scale dataflow computing remains. The Uintah dataflow framework has consistently used dataflow computing at the largest scales on complex physics applications. At present, Uintah contains two main dataflow models, both based upon asynchronous communication: one uses a static graph-based approach, and the other uses a more dynamic approach that was introduced almost a decade ago. Subsequent changes within the Uintah runtime system, combined with many more large-scale experiments, have necessitated a reevaluation of these two approaches, comparing them in the context of large-scale problems. While the static approach has worked well for some large-scale simulations, the dynamic approach is seen to offer performance improvements over the static case for a challenging fluid-structure interaction problem at large scale that involves fluid flow and a moving solid represented using a particle method on an adaptive mesh.
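The distinction between the two execution models can be illustrated with a toy scheduler: a static schedule fixes the task order ahead of time, while a dynamic scheduler executes any task whose dependencies have completed. The sketch below is a plain-Python illustration using invented task names and a made-up dependency graph, not Uintah's runtime.

```python
# Toy illustration of dynamic (ready-task) execution over a task dependency
# graph. Task names and dependencies are invented for the example; Uintah's
# actual runtime overlaps this with asynchronous MPI communication.
from collections import deque

tasks = {"interpolate": [], "halo_exchange": ["interpolate"],
         "solve": ["halo_exchange"], "output": ["solve"]}

def dynamic_execute(tasks):
    """Run any task whose dependencies have already completed."""
    done, pending, order = set(), deque(tasks), []
    while pending:
        t = pending.popleft()
        if all(dep in done for dep in tasks[t]):
            order.append(t)      # here a real runtime would launch the task
            done.add(t)
        else:
            pending.append(t)    # inputs not ready yet; revisit later
    return order

print(dynamic_execute(tasks))    # ['interpolate', 'halo_exchange', 'solve', 'output']
```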



K. A. Johnson, P. T. Fletcher, D. Servello, A. Bona, M. Porta, J. L. Ostrem, E. Bardinet, M. Welter, A. M. Lozano, J. C. Baldermann, J. Kuhn, D. Huys, T. Foltynie, M. Hariz, E. M. Joyce, L. Zrinzo, Z. Kefalopoulou, J. Zhang, F. Meng, C. Zhang, Z. Ling, X. Xu, X. Yu, A. YJM Smeets, L. Ackermans, V. Visser-Vandewalle, A. Y. Mogilner, M. H. Pourfar, L. Almeida, A. Gunduz, W. Hu, K. D. Foote, M. S. Okun, C. R. Butson. “Image-based analysis and long-term clinical outcomes of deep brain stimulation for Tourette syndrome: a multisite study,” In Journal of Neurology, Neurosurgery & Psychiatry, BMJ Publishing Group, 2019.
DOI: 10.1136/jnnp-2019-320379

ABSTRACT

BACKGROUND:
Deep brain stimulation (DBS) can be an effective therapy for tics and comorbidities in select cases of severe, treatment-refractory Tourette syndrome (TS). Clinical responses remain variable across patients, which may be attributed to differences in the location of the neuroanatomical regions being stimulated. We evaluated active contact locations and regions of stimulation across a large cohort of patients with TS in an effort to guide future targeting.

METHODS:
We collected retrospective clinical data and imaging from 13 international sites on 123 patients. We assessed the effects of DBS over time in 110 patients who were implanted in the centromedial (CM) thalamus (n=51), globus pallidus internus (GPi) (n=47), nucleus accumbens/anterior limb of the internal capsule (n=4) or a combination of targets (n=8). Contact locations (n=70 patients) and volumes of tissue activated (n=63 patients) were coregistered to create probabilistic stimulation atlases.
RESULTS:
Tics and obsessive-compulsive behaviour (OCB) significantly improved over time (p<0.01), and there were no significant differences across brain targets (p>0.05). The median time was 13 months to reach a 40% improvement in tics, and there were no significant differences across targets (p=0.84), presence of OCB (p=0.09) or age at implantation (p=0.08). Active contacts were generally clustered near the target nuclei, with some variability that may reflect differences in targeting protocols, lead models and contact configurations. There were regions within and surrounding GPi and CM thalamus that improved tics for some patients but were ineffective for others. Regions within, superior or medial to GPi were associated with a greater improvement in OCB than regions inferior to GPi.
CONCLUSION:
The results collectively indicate that DBS may improve tics and OCB, the effects may develop over several months, and stimulation locations relative to structural anatomy alone may not predict response. This study was the first to visualise and evaluate the regions of stimulation across a large cohort of patients with TS to generate new hypotheses about potential targets for improving tics and comorbidities.



R.B. Lanfredi, J.D. Schroeder, C. Vachet, T. Tasdizen. “Adversarial regression training for visualizing the progression of chronic obstructive pulmonary disease with chest x-rays,” In International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), 2019. Preprint available on arXiv.

ABSTRACT

Knowledge of what spatial elements of medical images deep learning methods use as evidence is important for model interpretability, trustworthiness, and validation. There is a lack of such techniques for models in regression tasks. We propose a method, called visualization for regression with a generative adversarial network (VR-GAN), for formulating adversarial training specifically for datasets containing regression target values characterizing disease severity. We use a conditional generative adversarial network where the generator attempts to learn to shift the output of a regressor through creating disease effect maps that are added to the original images. Meanwhile, the regressor is trained to predict the original regression value for the modified images. A model trained with this technique learns to provide visualization for how the image would appear at different stages of the disease. We analyze our method in a dataset of chest x-rays associated with pulmonary function tests, used for diagnosing chronic obstructive pulmonary disease (COPD). For validation, we compute the difference of two registered x-rays of the same patient at different time points and correlate it to the generated disease effect map. The proposed method outperforms a technique based on classification and provides realistic-looking images, modifying images in ways consistent with what radiologists usually observe for this disease. Implementation code is available at https://github.com/ricbl/vrgan.
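A minimal sketch of the adversarial regression idea described in the abstract, written in PyTorch: a generator produces a disease effect map intended to shift the regressor's prediction by a requested amount, while the regressor is trained to recover the original value for the modified image. The network architectures, loss weighting, and variable names below are illustrative assumptions rather than the authors' implementation (see the linked repository for that).

```python
# Sketch of a VR-GAN-style adversarial regression step (illustrative only).
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Produces a 'disease effect map' conditioned on the image and a desired
    shift in the regression target (illustrative architecture)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1))

    def forward(self, image, delta):
        # Broadcast the desired target shift to an extra image channel.
        delta_map = delta.view(-1, 1, 1, 1).expand_as(image)
        return self.net(torch.cat([image, delta_map], dim=1))

class Regressor(nn.Module):
    """Predicts the scalar severity value from an image (illustrative)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1))

    def forward(self, image):
        return self.net(image).squeeze(1)

gen, reg = Generator(), Regressor()
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-4)
opt_r = torch.optim.Adam(reg.parameters(), lr=1e-4)
mse = nn.MSELoss()

def training_step(image, y, delta):
    """One adversarial step: the generator tries to shift the regressor's
    prediction by `delta`; the regressor tries to recover the original y."""
    effect_map = gen(image, delta)
    modified = image + effect_map

    # Generator update: modified image should be scored as y + delta,
    # with a penalty keeping the effect map small.
    loss_g = mse(reg(modified), y + delta) + 0.1 * effect_map.abs().mean()
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

    # Regressor update: predict the true value for real and modified images.
    loss_r = mse(reg(image), y) + mse(reg(modified.detach()), y)
    opt_r.zero_grad(); loss_r.backward(); opt_r.step()
    return loss_g.item(), loss_r.item()

# Example usage with random data standing in for chest x-rays and PFT values.
x = torch.randn(4, 1, 64, 64)
y = torch.rand(4)
print(training_step(x, y, delta=torch.full((4,), -0.2)))
```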



B. Peterson. “Portable and Performant GPU/Heterogeneous Asynchronous Many-task Runtime System,” Ph.D. Dissertation, School of Computing, University of Utah, Dec. 2019.

ABSTRACT

Asynchronous many-task (AMT) runtimes are maturing as a model for computing simulations on a diverse range of architectures at large-scale. The Uintah AMT framework is driven by a philosophy of maintaining an application layer distinct from the underlying runtime while operating on an adaptive mesh grid. This model has enabled task developers to focus on writing task code while minimizing their interaction with MPI transfers, halo processing, data stores, coherency of simulation variables, and proper ordering of task execution. Further, Uintah is implementing an architecture portable solution by utilizing the Kokkos programming portability layer so that application tasks can be written in one codebase and performantly executed on CPUs, GPUs, Intel Xeon Phis, and other future architectures.

Of these architectures, it is perhaps Nvidia GPUs that introduce the greatest usability and portability challenges for AMT runtimes. Specifically, Nvidia GPUs require code to adhere to a proprietary programming model, use separate high capacity memory, utilize asynchrony of data movement and execution, and partition execution units among many streaming multiprocessors. Numerous novel solutions to both Uintah and Kokkos are required to abstract these GPU features into an AMT runtime while preserving an application layer and enabling portability.

The focus of this AMT research is largely split into two main parts, performance and portability. Runtime performance comes from 1) minimizing runtime overhead when preparing simulation variables for tasks prior to execution, and 2) executing a heterogeneous mixture of tasks to keep compute node processing units busy. Preparation of simulation variables, especially halo processing, receives significant emphasis as Uintah’s target problems heavily rely on local and global halos. In addition, this work covers automated data movement of simulation variables between host and GPU memory as well as distributing tasks throughout a GPU for execution.

Portability is a productivity necessity as application developers struggle to maintain three sets of code per task, namely code for single CPU core execution, CUDA code for GPU tasks, and a third set of code for Xeon Phi parallel execution. Programming portability layers, such as Kokkos, provide a framework for this portability; however, Kokkos itself requires modifications to support GPU execution of the finer-grained tasks typical of AMT runtimes like Uintah. Currently, Kokkos GPU parallel loop execution is bulk-synchronous. This research demonstrates a model for portable loops that is asynchronous, nonblocking, and performant. Additionally, integrating GPU portability into Uintah required further modifications to aid the application developer in avoiding Kokkos-specific details.

This research concludes by demonstrating a GPU-enabled AMT runtime that is both performant and portable. Further, application developers are not burdened with additional architecture specific requirements. Results are demonstrated using production task codebases written for CPUs, GPUs, and Kokkos portability and executed in GPU homogeneous and CPU/GPU heterogeneous environments.



A. Prakosa, H.J. Arevalo, D. Deng, P.M. Boyle, P.P. Nikolov, H. Ashikaga, J.E. Blauer, E. Ghafoori, C.J. Park, R.C. Blake III, F.T. Han, R.S. MacLeod, H.R. Halperin, D.J. Callans, R. Ranjan, J. Chrispin, S. Nazarian, N.A. Trayanova. “Personalized Virtual-heart Technology for Guiding the Ablation of Infarct-related Ventricular Tachycardia,” In Nature Biomedical Engineering, Vol. 2, pp. 732–740. 2019.
DOI: 10.1038/s41551-018-0282-2

ABSTRACT

Ventricular tachycardia (VT), which can lead to sudden cardiac death, occurs frequently in patients with myocardial infarction. Catheter-based radio-frequency ablation of cardiac tissue has achieved only modest efficacy, owing to the inaccurate identification of ablation targets by current electrical mapping techniques, which can lead to extensive lesions and to a prolonged, poorly tolerated procedure. Here, we show that personalized virtual-heart technology based on cardiac imaging and computational modelling can identify optimal infarct-related VT ablation targets in retrospective animal (five swine) and human studies (21 patients), as well as in a prospective feasibility study (five patients). We first assessed, using retrospective studies (one of which included a proportion of clinical images with artefacts), the capability of the technology to determine the minimum-size ablation targets for eradicating all VTs. In the prospective study, VT sites predicted by the technology were targeted directly, without relying on prior electrical mapping. The approach could improve infarct-related VT ablation guidance, where accurate identification of patient-specific optimal targets could be achieved on a personalized virtual heart before the clinical procedure.



D. Sahasrabudhe, M. Berzins, J. Schmidt. “Node failure resiliency for Uintah without checkpointing,” In Concurrency and Computation: Practice and Experience, pp. e5340. 2019.
DOI: 10.1002/cpe.5340

ABSTRACT

The frequency of failures in upcoming exascale supercomputers may well be greater than at present due to many-core architectures if component failure rates remain unchanged. This potential increase in failure frequency, coupled with I/O challenges at exascale, may prove problematic for current resiliency approaches such as checkpoint restarting, although the use of fast intermediate memory may help. Algorithm-Based Fault Tolerance (ABFT) using Adaptive Mesh Refinement (AMR) is one resiliency approach used to address these challenges. For adaptive mesh codes, a coarse mesh version of the solution may be used to restore the fine mesh solution. This paper addresses the implementation of the ABFT approach within the Uintah software framework: both at a software level within Uintah and in the data reconstruction method used for the recovery of lost data. This method has two problems: inaccuracies introduced during the reconstruction propagate forward in time, and the physical consistency of variables such as positivity or boundedness may be violated during interpolation. These challenges can be addressed by the combination of two techniques: (1) a fault-tolerant MPI implementation to recover from runtime node failures, and (2) high-order interpolation schemes to preserve the physical solution and reconstruct lost data. The approach considered here uses a "Limited Essentially Non-Oscillatory" (LENO) scheme along with AMR to rebuild the lost data without checkpointing using Uintah. Experiments were carried out using a fault-tolerant MPI implementation, ULFM, to recover from runtime failure, and LENO to recover data on patches belonging to failed ranks, while the simulation was continued to the end. Results show that this ABFT approach is up to 10x faster than the traditional checkpointing method. The new interpolation approach is more accurate than linear interpolation and not subject to the overshoots found in other interpolation methods.
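The core recovery idea, rebuilding a lost fine-mesh patch from a retained coarse copy rather than restarting from a checkpoint, can be sketched as follows. The sketch uses cubic interpolation from scipy as a simple stand-in for the LENO scheme used in the paper; the 2x refinement ratio and function names are illustrative assumptions.

```python
# Minimal sketch: keep a coarsened backup of each patch; after a node failure,
# rebuild the fine patch from the coarse backup by higher-order interpolation
# instead of restarting from a checkpoint. Cubic splines (scipy.ndimage.zoom)
# stand in for the paper's LENO scheme.
import numpy as np
from scipy.ndimage import zoom

def coarsen(fine_patch, ratio=2):
    """Block-average a fine patch down to the coarse level (kept as a backup)."""
    nx, ny = fine_patch.shape
    return fine_patch.reshape(nx // ratio, ratio, ny // ratio, ratio).mean(axis=(1, 3))

def reconstruct(coarse_patch, ratio=2, order=3):
    """Rebuild a lost fine patch from its coarse backup (cubic interpolation)."""
    return zoom(coarse_patch, ratio, order=order)

# Toy example: a smooth field on a 32x32 patch.
x, y = np.meshgrid(np.linspace(0, 1, 32), np.linspace(0, 1, 32), indexing="ij")
fine = np.sin(2 * np.pi * x) * np.cos(2 * np.pi * y)

backup = coarsen(fine)              # kept on a surviving rank
recovered = reconstruct(backup)     # used after a node failure
print("max reconstruction error:", np.abs(recovered - fine).max())
```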



D. Sahasrabudhe, E. T. Phipps, S. Rajamanickam, M. Berzins. “A Portable SIMD Primitive using Kokkos for Heterogeneous Architectures,” In Sixth Workshop on Accelerator Programming Using Directives (WACCPD), 2019.

ABSTRACT

As computer architectures are rapidly evolving (e.g., those designed for exascale), multiple portability frameworks have been developed to avoid new architecture-specific development and tuning. However, portability frameworks depend on compilers for auto-vectorization and may lack support for explicit vectorization on heterogeneous platforms. Alternatively, programmers can use intrinsics-based primitives to achieve more efficient vectorization, but the lack of a GPU back-end for these primitives makes such code non-portable. A unified, portable, Single Instruction Multiple Data (SIMD) primitive proposed in this work allows intrinsics-based vectorization on CPUs and many-core architectures such as Intel Knights Landing (KNL), and also facilitates Single Instruction Multiple Threads (SIMT) based execution on GPUs. This unified primitive, coupled with the Kokkos portability ecosystem, makes it possible to develop explicitly vectorized code that is portable across heterogeneous platforms. The new SIMD primitive is used on different architectures to test the performance boost against a hard-to-auto-vectorize baseline, to measure the overhead against an efficiently vectorized baseline, and to evaluate the new feature called the "logical vector length" (LVL). The SIMD primitive provides portability across CPUs and GPUs without any performance degradation being observed experimentally.



A. Sanderson, A. Humphrey, J. Schmidt, R. Sisneros, M. Papka. “In situ visualization of performance metrics in multiple domains,” In 2019 IEEE/ACM International Workshop on Programming and Performance Visualization Tools (ProTools), IEEE, Nov, 2019.
DOI: 10.1109/protools49597.2019.00014

ABSTRACT

As application scientists develop and deploy simulation codes onto leadership-class computing resources, there is a need to instrument these codes to better understand their performance and to utilize these resources efficiently. This instrumentation may come from independent third-party tools that generate and store performance metrics or from custom instrumentation tools built directly into the application. The metrics collected are then available for visual analysis, typically in the domain in which they were collected. In this paper, we introduce an approach to visualize and analyze the performance metrics in situ in the context of the machine, application, and communication domains (MAC model) using a single visualization tool. This visualization model provides a holistic view of the application performance in the context of the resources where it is executing.



G. S. Smith, K. A. Mills, G. M. Pontone, W. S. Anderson, K. M. Perepezko, J. Brasic, Y. Zhou, J. Brandt, C. R. Butson, D. P. Holt, W. B. Mathews, R. F. Dannals, D. F. Wong, Z. Mari. “Effect of STN DBS on vesicular monoamine transporter 2 and glucose metabolism in Parkinson's disease,” In Parkinsonism and Related Disorders, Elsevier, 2019.

ABSTRACT

Introduction

Deep brain stimulation (DBS) is an established treatment for Parkinson's disease (PD). Despite the improvement of motor symptoms in most patients by subthalamic nucleus (STN) DBS and its widespread use, the neurobiological mechanisms are not completely understood. The objective of the present study was to elucidate the effects of STN DBS in PD on the dopamine system and neural circuitry employing high-resolution positron emission tomography (PET) imaging. The hypotheses tested were that STN DBS would decrease striatal VMAT2, secondary to an increase in dopamine concentrations, and would decrease striatal cerebral metabolism and increase cortical metabolism.

Methods

PET imaging of the vesicular monoamine transporter 2 (VMAT2) and cerebral glucose metabolism was performed prior to DBS surgery and after 4–6 months of STN stimulation in seven PD patients (mean age 67 ± 7).
Results

The patients demonstrated significant improvement in motor and neuropsychiatric symptoms after STN DBS. Decreased VMAT2 was observed in the caudate, putamen and associative striatum and in extra-striatal, cortical and limbic regions. Cerebral glucose metabolism was decreased in striatal sub-regions and increased in temporal and parietal cortices and the cerebellum. Decreased striatal VMAT2 was correlated with decreased striatal and increased cortical and limbic metabolism. Improvement of depressive symptoms was correlated with decreased VMAT2 in striatal and extra-striatal regions and with striatal decreases and cortical increases in metabolism.
Conclusions

The present results support further investigation of the role of VMAT2 and associated changes in neural circuitry in the improvement of motor and non-motor symptoms with STN DBS in PD.



Q. Tran, M. Berzins, W. Solowski. “An improved moving least squares method for the Material Point Method,” In Proceedings of the 2nd International Conference on the Material Point Method for Modelling Soil-Water-Structure Interaction (MPM 2019), 2019.

ABSTRACT

The paper presents an improved moving least squares reconstruction technique for the Material Point Method. Moving least squares (MLS) reconstruction can improve spatial accuracy in simulations involving large deformations. However, the MLS algorithm relies on computing the inverse of the moment matrix. This is both expensive and potentially unstable when there are not enough material points to reconstruct the high-order least squares function, which leads to a singular or an ill-conditioned matrix. The formulation shown here can overcome this limitation while retaining the same order of accuracy as the conventional moving least squares reconstruction. Numerical experiments demonstrate the improvements in accuracy and provide a comparison with the original Material Point Method and the Convected Particle Domain Interpolation method.
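A minimal sketch of a moving least squares reconstruction at a grid node, showing how the moment matrix becomes singular when too few material points contribute. The fallback to a weighted average is a simple stand-in for the improved formulation in the paper; the weight function, basis, and condition-number threshold are assumptions.

```python
# Moving least squares (MLS) reconstruction of a grid-node value from nearby
# material points, with a crude guard against an ill-conditioned moment matrix.
import numpy as np

def mls_reconstruct(x_node, x_points, values, h=1.0, cond_limit=1e8):
    """Reconstruct the field value at x_node from particle positions/values."""
    d = (x_points - x_node) / h
    w = np.exp(-d**2)                                # simple Gaussian weights
    P = np.column_stack([np.ones_like(d), d, d**2])  # quadratic basis
    M = P.T @ (w[:, None] * P)                       # moment matrix
    if np.linalg.cond(M) > cond_limit:               # too few/clustered particles
        return np.average(values, weights=w)         # fall back to weighted mean
    b = P.T @ (w * values)
    coeffs = np.linalg.solve(M, b)
    return coeffs[0]                                 # basis at x_node is (1, 0, 0)

# Toy example: reconstruct f(x) = x^2 at a node from scattered particles.
xp = np.array([0.1, 0.35, 0.6, 0.9])
print(mls_reconstruct(0.5, xp, xp**2))               # close to 0.25

# With a single particle the moment matrix is singular; the fallback handles it.
print(mls_reconstruct(0.5, np.array([0.4]), np.array([0.16])))
```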



Q. A. Tran, W. Sołowski, M. Berzins, J. Guilkey. “A convected particle least square interpolation material point method,” In International Journal for Numerical Methods in Engineering, Wiley, October, 2019.

ABSTRACT

Applying the convected particle domain interpolation (CPDI) to the material point method has many advantages over the original material point method, including significantly improved accuracy. However, in the large deformation regime, the CPDI still may not retain the expected convergence rate. The paper proposes an enhanced CPDI formulation based on a least squares reconstruction technique. The convected particle least square interpolation (CPLS) material point method assumes a nonconstant velocity field inside the material point domain. This velocity field in the material point domain is mapped to the background grid nodes with a moving least squares reconstruction. In this paper, we apply the improved moving least squares method to avoid the instability of the conventional moving least squares method due to a singular matrix. The proposed algorithm can improve the convergence rate, as illustrated by numerical examples using the method of manufactured solutions.



W. Usher, I. Wald, J. Amstutz, J. Gunther, C. Brownlee, V. Pascucci. “Scalable Ray Tracing Using the Distributed FrameBuffer,” In Eurographics Conference on Visualization (EuroVis) 2019, Vol. 38, No. 3, 2019.

ABSTRACT

Image- and data-parallel rendering across multiple nodes on high-performance computing systems is widely used in visualization to provide higher frame rates, support large data sets, and render data in situ. Specifically for in situ visualization, reducing bottlenecks incurred by the visualization and compositing is of key concern to reduce the overall simulation runtime. Moreover, prior algorithms have been designed to support either image- or data-parallel rendering and impose restrictions on the data distribution, requiring different implementations for each configuration. In this paper, we introduce the Distributed FrameBuffer, an asynchronous image-processing framework for multi-node rendering. We demonstrate that our approach achieves performance superior to the state of the art for common use cases, while providing the flexibility to support a wide range of parallel rendering algorithms and data distributions. By building on this framework, we extend the open-source ray tracing library OSPRay with a data-distributed API, enabling its use in data-distributed and in situ visualization applications.



J. Vorwerk, A. Brock, D.N. Anderson, J.D. Rolston, C.R. Butson. “A Retrospective Evaluation of Automated Optimization of Deep Brain Stimulation Settings,” In Brain Stimulation, Vol. 12, No. 2, Elsevier, pp. e54--e55. March, 2019.
DOI: 10.1016/j.brs.2018.12.167



J. Vorwerk, Ü. Aydin, C.H. Wolters, C.R. Butson. “Influence of head tissue conductivity uncertainties on EEG dipole reconstruction,” In Frontiers in Neuroscience, 2019.
DOI: 10.3389/fnins.2019.00531

ABSTRACT

Reliable EEG source analysis depends on sufficiently detailed and accurate head models. In this study, we investigate how uncertainties inherent to the experimentally determined conductivity values of the different conductive compartments influence the results of EEG source analysis. In a single source scenario, the superficial and focal somatosensory P20/N20 component, we analyze the influence of varying conductivities on dipole reconstructions using a generalized polynomial chaos (gPC) approach. We find that in particular the conductivity uncertainties for skin and skull have a significant influence on the EEG inverse solution, leading to variations in source localization by several centimeters. The conductivity uncertainties for gray and white matter were found to have little influence on the source localization, but a strong influence on the strength and orientation of the reconstructed source, respectively. As the CSF conductivity is the most accurately determined of all conductivities in a realistic head model, CSF conductivity uncertainties had a negligible influence on the source reconstruction. This small uncertainty is a further benefit of distinguishing the CSF in realistic volume conductor models.



J. Vorwerk, A. Brock, D.N. Anderson, J.D. Rolston, C.R. Butson. “A retrospective evaluation of automated optimization of deep brain stimulation parameters,” In Journal of Neural Engineering, 2019.
DOI: 10.1088/1741-2552/ab35b1

ABSTRACT

Objective: We performed a retrospective analysis of an optimization algorithm for the computation of patient-specific multipolar stimulation configurations employing multiple independent current/voltage sources. We evaluated whether the obtained stimulation configurations align with clinical data and whether the optimized stimulation configurations have the potential to lead to an equal or better stimulation of the target region as manual programming, while reducing the time required for programming sessions. Methods: For three patients (five electrodes) diagnosed with essential tremor, we derived optimized multipolar stimulation configurations using an approach that is suitable for the application in clinical practice. To evaluate the automatically derived stimulation settings, we compared them to the results of the monopolar review. Results: We observe a good agreement between the findings of the monopolar review and the optimized stimulation configurations, with the algorithm assigning the maximal voltage in the optimized multipolar pattern to the contact that was found to lead to the best therapeutic effect in the clinical monopolar review in all cases. Additionally, our simulation results predict that the optimized stimulation settings lead to the activation of an equal or larger volume fraction of the target compared to the manually determined settings in all cases. Conclusions: Our results demonstrate the feasibility of an automatic determination of optimal DBS configurations and motivate a further evaluation of the applied optimization algorithm.
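As a highly simplified illustration of this kind of automated programming, the sketch below chooses per-contact amplitudes that maximize a linear model of target activation under a current budget. The linear activation model, the numbers, and the use of a linear program are illustrative assumptions, not the optimization algorithm evaluated in the paper.

```python
# Simplified sketch: pick per-contact stimulation amplitudes that maximize
# modeled activation of a target region under a total current budget.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n_contacts, n_target_voxels = 4, 200

# A[i, j]: modeled activation of target voxel i per unit amplitude on contact j
# (random numbers standing in for a patient-specific model).
A = rng.random((n_target_voxels, n_contacts))

# Maximize total modeled activation of the target region (linprog minimizes,
# so negate), with amplitudes in [0, 3] mA and a total budget of 4 mA.
c = -A.sum(axis=0)
res = linprog(c,
              A_ub=np.ones((1, n_contacts)), b_ub=[4.0],
              bounds=[(0.0, 3.0)] * n_contacts,
              method="highs")
print("optimized per-contact amplitudes (mA):", np.round(res.x, 2))
```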



J. Vorwerk, D. McCann, J. Krüger, C.R. Butson. “Interactive computation and visualization of deep brain stimulation effects using Duality,” In Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, Taylor & Francis, 2019.

ABSTRACT

Deep brain stimulation (DBS) is an established treatment for movement disorders such as Parkinson’s disease or essential tremor. Currently, the selection of optimal stimulation settings is performed by iteratively adjusting the stimulation parameters, a time-consuming procedure that requires multiple clinic visits of several hours. Recently, computational models to predict and visualize the effect of DBS have been developed with the goal of simplifying and accelerating this procedure by providing visual guidance, and such models have also been made available on mobile devices. However, currently available visualization software still lacks either mobility, i.e., it runs on desktop computers and is not easily available in clinical practice, or flexibility, as the simulations that are visualized on mobile devices have to be precomputed. The goal of the pipeline presented in this paper is to close this gap: using Duality, a newly developed software for the interactive visualization of simulation results, we implemented a pipeline that allows DBS simulations to be computed in near-real time and the results to be instantaneously visualized on a tablet computer. We carry out a performance analysis and present the results of a case study in which the pipeline was applied.



F. Wang, I. Wald, Q. Wu, W. Usher, C. R. Johnson. “CPU Isosurface Ray Tracing of Adaptive Mesh Refinement Data,” In IEEE Transactions on Visualization and Computer Graphics, Vol. 25, No. 1, IEEE, pp. 1142-1151. Jan, 2019.
DOI: 10.1109/TVCG.2018.2864850

ABSTRACT

Adaptive mesh refinement (AMR) is a key technology for large-scale simulations that allows for adaptively changing the simulation mesh resolution, resulting in significant computational and storage savings. However, visualizing such AMR data poses a significant challenge due to the difficulties introduced by the hierarchical representation when reconstructing continuous field values. In this paper, we detail a comprehensive solution for interactive isosurface rendering of block-structured AMR data. We contribute a novel reconstruction strategy—the octant method—which is continuous, adaptive and simple to implement. Furthermore, we present a generally applicable hybrid implicit isosurface ray-tracing method, which provides better rendering quality and performance than the built-in sampling-based approach in OSPRay. Finally, we integrate our octant method and hybrid isosurface geometry into OSPRay as a module, providing the ability to create high-quality interactive visualizations combining volume and isosurface representations of BS-AMR data. We evaluate the rendering performance, memory consumption and quality of our method on two gigascale block-structured AMR datasets.



F. Wang, I. Wald, C.R. Johnson. “Interactive Rendering of Large-Scale Volumes on Multi-Core CPUs,” In 2019 IEEE 9th Symposium on Large Data Analysis and Visualization (LDAV), pp. 27--36. 2019.
DOI: 10.1109/LDAV48142.2019.8944267

ABSTRACT

Recent advances in large-scale simulations have resulted in volume data of increasing size that stress the capabilities of off-the-shelf visualization tools. Users suffer from long data loading times, because large data must be read from disk into memory prior to rendering the first frame. In this work, we present a volume renderer that enables high-fidelity interactive visualization of large volumes on multi-core CPU architectures. Compared to existing CPU-based visualization frameworks, which take minutes or hours for data loading, our renderer allows users to get a data overview in seconds. Using a hierarchical representation of raw volumes and ray-guided streaming, we reduce the data loading time dramatically and improve the user's interactive experience. We also examine system design choices with respect to performance and scalability. Specifically, we evaluate the hierarchy generation time, which has been ignored in most prior work, but which can become a significant bottleneck as data scales. Finally, we create a module on top of the OSPRay ray tracing framework that is ready to be integrated into general-purpose visualization frameworks such as ParaView.



A. Warner, J. Tate, B. Burton, C.R. Johnson. “A High-Resolution Head and Brain Computer Model for Forward and Inverse EEG Simulation,” In bioRxiv, Cold Spring Harbor Laboratory, Feb, 2019.
DOI: 10.1101/552190

ABSTRACT

To conduct computational forward and inverse EEG studies of brain electrical activity, researchers must construct realistic head and brain computer models, which is both challenging and time consuming. The availability of realistic head models and corresponding imaging data is limited in terms of imaging modalities and patient diversity. In this paper, we describe a detailed head modeling pipeline and provide a high-resolution, multimodal, open-source, female head and brain model. The modeling pipeline specifically outlines image acquisition, preprocessing, registration, and segmentation; three-dimensional tetrahedral mesh generation; finite element EEG simulations; and visualization of the model and simulation results. The dataset includes both functional and structural images and EEG recordings from two high-resolution electrode configurations. The intermediate results and software components are also included in the dataset to facilitate modifications to the pipeline. This project will contribute to neuroscience research by providing a high-quality dataset that can be used for a variety of applications and a computational pipeline that may help researchers construct new head models more efficiently.