
SCI Publications

2021


D. Lange, E. Polanco, R. Judson-Torres, T. Zangle, A. Lex. “Loon: Using Exemplars to Visualize Large Scale Microscopy Data,” In OSF Preprints, 2021.

ABSTRACT

Which drug is most promising for a cancer patient? This is a question a new microscopy-based approach for measuring the mass of individual cancer cells treated with different drugs promises to answer in only a few hours. However, the analysis pipeline for extracting data from these images is still far from complete automation: human intervention is necessary for quality control of preprocessing steps such as segmentation, for adjusting filters and removing noise, and for the analysis of the results. To address this workflow, we developed Loon, a visualization tool for analyzing drug screening data based on quantitative phase microscopy imaging. Loon visualizes both derived data, such as growth rates, and imaging data. Since the images are collected automatically at a large scale, manual inspection of images and segmentations is infeasible. However, reviewing representative samples of cells is essential, both for quality control and for data analysis. We introduce a new approach for choosing and visualizing representative exemplar cells that retain a close connection to the low-level data. By tightly integrating the derived data visualization capabilities with the novel exemplar visualization and providing selection and filtering capabilities, Loon is well suited for making decisions about which drugs are suitable for a specific patient.



R. B. Lanfredi, M. Zhang, W. F. Auffermann, J. Chan, P. T. Duong, V. Srikumar, T. Drew, J. D. Schroeder, T. Tasdizen. “REFLACX, a dataset of reports and eye-tracking data for localization of abnormalities in chest x-rays,” Subtitled “arXiv:2109.14187,” 2021.

ABSTRACT

Deep learning has shown recent success in classifying anomalies in chest x-rays, but datasets are still small compared to natural image datasets. Supervision of abnormality localization has been shown to improve trained models, partially compensating for dataset sizes. However, explicitly labeling these anomalies requires an expert and is very time-consuming. We propose a method for collecting implicit localization data using an eye tracker to capture gaze locations and a microphone to capture a dictation of a report, imitating the setup of a reading room, and potentially scalable for large datasets. The resulting REFLACX (Reports and Eye-Tracking Data for Localization of Abnormalities in Chest X-rays) dataset was labeled by five radiologists and contains 3,032 synchronized sets of eye-tracking data and timestamped report transcriptions. We also provide bounding boxes around lungs and heart and validation labels consisting of ellipses localizing abnormalities and image-level labels. Furthermore, a small subset of the data contains readings from all radiologists, allowing for the calculation of inter-rater scores.



E. Laughton, V. Zala, A. Narayan, R. M. Kirby, D. Moxey. “Fast Barycentric-Based Evaluation Over Spectral/hp Elements,” Subtitled “arXiv preprint arXiv:2103.03594,” 2021.

ABSTRACT

As the use of spectral/hp element methods, and high-order finite element methods in general, continues to spread, community efforts to create efficient, optimized algorithms associated with fundamental high-order operations have grown. Core tasks such as solution expansion evaluation at quadrature points, stiffness and mass matrix generation, and matrix assembly have received tremendous attention. With the expansion of the types of problems to which high-order methods are applied, and correspondingly the growth in types of numerical tasks accomplished through high-order methods, the number and types of these core operations broaden. This work focuses on solution expansion evaluation at arbitrary points within an element. This operation is core to many postprocessing applications such as evaluation of streamlines and pathlines, as well as to field projection techniques such as mortaring. We expand barycentric interpolation techniques developed on an interval to 2D (triangles and quadrilaterals) and 3D (tetrahedra, prisms, pyramids, and hexahedra) spectral/hp element methods. We provide efficient algorithms for their implementations, and demonstrate their effectiveness using the spectral/hp element library Nektar++.
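
For readers less familiar with the underlying technique, the classical one-dimensional barycentric (second-form) interpolation that the paper extends to 2D and 3D elements can be sketched as follows. This is a generic NumPy illustration, not code from Nektar++, and the node set is an arbitrary choice.

import numpy as np

def barycentric_weights(nodes):
    # w_j = 1 / prod_{k != j} (x_j - x_k)
    w = np.ones(len(nodes))
    for j in range(len(nodes)):
        w[j] = 1.0 / np.prod(nodes[j] - np.delete(nodes, j))
    return w

def barycentric_eval(x, nodes, values, w):
    # Evaluate the interpolant of (nodes, values) at an arbitrary point x.
    diff = x - nodes
    exact = np.isclose(diff, 0.0)
    if exact.any():                      # x coincides with a node
        return values[np.argmax(exact)]
    tmp = w / diff
    return np.dot(tmp, values) / np.sum(tmp)

# Example: interpolate sin(x) on Chebyshev-type nodes and evaluate off-node.
nodes = np.cos(np.linspace(0.0, np.pi, 8))
vals = np.sin(nodes)
w = barycentric_weights(nodes)
print(barycentric_eval(0.3, nodes, vals, w))   # close to sin(0.3)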



Z. Li, H. Menon, K. Mohror, P. T. Bremer, Y. Livnat, V. Pascucci. “Understanding a program's resiliency through error propagation,” In Proceedings of the 26th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, ACM, pp. 362-373. 2021.

ABSTRACT

Aggressive technology scaling trends have worsened the transient fault problem in high-performance computing (HPC) systems. Some faults are benign, but others can lead to silent data corruption (SDC), which represents a serious problem: a fault introduces an error into an HPC simulation that is not readily detected. Due to the insidious nature of SDCs, researchers have worked to understand their impact on applications. Previous studies have relied on expensive fault injection campaigns with uniform sampling to provide overall SDC rates, but this solution does not provide any feedback on the code regions without samples.



Z. Liu, A. Narayan. “On the computation of recurrence coefficients for univariate orthogonal polynomials,” Subtitled “arXiv preprint arXiv:2101.11963,” 2021.

ABSTRACT

Associated to a finite measure on the real line with finite moments are recurrence coefficients in a three-term formula for orthogonal polynomials with respect to this measure. These recurrence coefficients are frequently inputs to modern computational tools that facilitate evaluation and manipulation of polynomials with respect to the measure, and such tasks are foundational in numerical approximation and quadrature. Although the recurrence coefficients for classical measures are known explicitly, those for nonclassical measures must typically be numerically computed. We survey and review existing approaches for computing these recurrence coefficients for univariate orthogonal polynomial families and propose a novel "predictor-corrector" algorithm for a general class of continuous measures. We combine the predictor-corrector scheme with a stabilized Lanczos procedure for a new hybrid algorithm that computes recurrence coefficients for a fairly wide class of measures that can have both continuous and discrete parts. We evaluate the new algorithms against existing methods in terms of accuracy and efficiency.
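
As a concrete reminder of how these recurrence coefficients are used, the sketch below evaluates orthonormal polynomials from coefficients (alpha_n, beta_n) via the three-term recurrence, taking the Legendre measure on [-1, 1], whose coefficients are known in closed form. It is a generic illustration, not the paper's predictor-corrector or stabilized Lanczos implementation.

import numpy as np

def legendre_recurrence(N):
    # Recurrence coefficients for dmu = dx on [-1, 1]: alpha_n = 0,
    # beta_0 = 2 (total mass), beta_n = n^2 / (4 n^2 - 1) for n >= 1.
    alpha = np.zeros(N)
    beta = np.zeros(N)
    beta[0] = 2.0
    n = np.arange(1, N)
    beta[1:] = n**2 / (4.0 * n**2 - 1.0)
    return alpha, beta

def eval_orthonormal(x, alpha, beta):
    # Three-term recurrence: sqrt(beta_{n+1}) p_{n+1} = (x - alpha_n) p_n - sqrt(beta_n) p_{n-1}
    x = np.atleast_1d(np.asarray(x, dtype=float))
    N = len(alpha)
    P = np.zeros((N, x.size))
    P[0] = 1.0 / np.sqrt(beta[0])
    if N > 1:
        P[1] = (x - alpha[0]) * P[0] / np.sqrt(beta[1])
    for n in range(1, N - 1):
        P[n + 1] = ((x - alpha[n]) * P[n] - np.sqrt(beta[n]) * P[n - 1]) / np.sqrt(beta[n + 1])
    return P

alpha, beta = legendre_recurrence(5)
print(eval_orthonormal([0.0, 0.5], alpha, beta))   # rows are p_0 .. p_4 at the two points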



C. Ly, C. Nizinski, C. Vachet, L. McDonald, T. Tasdizen. “Learning to Estimate the Composition of a Mixture with Synthetic Data,” In Microscopy and Microanalysis, 2021.

ABSTRACT

Identifying the precise composition of a mixed material is important in various applications. For instance, in nuclear forensics analysis, knowing the process history of unknown or illicitly trafficked nuclear materials when they are discovered is desirable to prevent future losses or theft of material from the processing facilities. Motivated by this open problem, we describe a novel machine learning approach to determine the composition of a mixture from SEM images. In machine learning, the training data distribution should reflect the distribution of the data the model is expected to make predictions for, which can pose a hurdle. However, a key advantage of our proposed framework is that it requires reference images of pure material samples only. Removing the need for reference samples of various mixed material compositions reduces the time and monetary cost associated with reference sample preparation and imaging. Moreover, our proposed framework can determine the composition of a mixture composed of chemically similar materials, whereas other elemental analysis tools such as powder X-ray diffraction (p-XRD) have trouble doing so. For example, p-XRD is unable to discern mixtures composed of triuranium octoxide (U3O8) synthesized from different synthetic routes such as uranyl peroxide (UO4) and ammonium diuranate (ADU) [1]. In contrast, our proposed framework can easily determine the composition of uranium oxide mixtures synthesized via different routes, as we illustrate in the experiments.



N. Marshak, P. Grosset, A. Knoll, J. P. Ahrens, C. R. Johnson. “Evaluation of GPU Volume Rendering in PyTorch Using Data-Parallel Primitives,” In Eurographics Symposium on Parallel Graphics and Visualization (EGPGV), 2021.

ABSTRACT

Data-parallel programming (DPP) has attracted considerable interest from the visualization community, fostering major software initiatives such as VTK-m. However, there has been relatively little recent investigation of data-parallel APIs in higher-level languages such as Python, which could help developers sidestep the need for low-level application programming in C++ and CUDA. Moreover, machine learning frameworks exposing data-parallel primitives, such as PyTorch and TensorFlow, have exploded in popularity, making them attractive platforms for parallel visualization and data analysis. In this work, we benchmark data-parallel primitives in PyTorch, and investigate its application to GPU volume rendering using two distinct DPP formulations: a parallel scan and reduce over the entire volume, and repeated application of data-parallel operators to an array of rays. We find that most relevant DPP primitives exhibit performance similar to a native CUDA library. However, our volume rendering implementation reveals that PyTorch is limited in expressiveness when compared to other DPP APIs. Furthermore, while render times are sufficient for an early "proof of concept", memory usage acutely limits scalability.
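
To give a flavor of the "array of rays" formulation, the PyTorch sketch below composites a batch of rays front to back using only data-parallel primitives (cumulative products, elementwise operations, and reductions). It is schematic and not the benchmark code from the paper; sample colors and opacities are random placeholders.

import torch

def composite_rays(rgb, alpha):
    # rgb: (R, S, 3) per-sample colors, alpha: (R, S) per-sample opacities in [0, 1].
    # Exclusive cumulative transmittance T_i = prod_{j<i} (1 - alpha_j).
    trans = torch.cumprod(1.0 - alpha + 1e-10, dim=1)
    trans = torch.roll(trans, shifts=1, dims=1)
    trans[:, 0] = 1.0
    weights = alpha * trans                          # per-sample compositing weights
    return (weights.unsqueeze(-1) * rgb).sum(dim=1)  # (R, 3) composited colors

device = "cuda" if torch.cuda.is_available() else "cpu"
rgb = torch.rand(4, 16, 3, device=device)            # 4 rays, 16 samples each
alpha = torch.rand(4, 16, device=device) * 0.2
print(composite_rays(rgb, alpha).shape)               # torch.Size([4, 3])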



T. McDonald, R. Shrestha, X. Yi, H. Bhatia, D. Chen, D. Goswami, V. Pascucci, T. Turbyville, P. T. Bremer. “Leveraging Topological Events in Tracking Graphs for Understanding Particle Diffusion,” In Computer Graphics Forum, Vol. 40, No. 3, pp. 251-262. 2021.

ABSTRACT

Single particle tracking (SPT) of fluorescent molecules provides significant insights into the diffusion and relative motion of tagged proteins and other structures of interest in biology. However, despite the latest advances in high-resolution microscopy, individual particles are typically not distinguished from clusters of particles. This lack of resolution obscures potential evidence for how merging and splitting of particles affect their diffusion and any implications on the biological environment. The particle tracks are typically decomposed into individual segments at observed merge and split events, and analysis is performed without knowing the true count of particles in the resulting segments. Here, we address the challenges in analyzing particle tracks in the context of cancer biology. In particular, we study the tracks of KRAS protein, which is implicated in nearly 20% of all human cancers, and whose clustering and aggregation have been linked to the signaling pathway leading to uncontrolled cell growth. We present a new analysis approach for particle tracks by representing them as tracking graphs and using topological events (merging and splitting) to disambiguate the tracks. Using this analysis, we infer a lower bound on the count of particles as they cluster and create conditional distributions of diffusion speeds before and after merge and split events. Using thousands of time-steps of simulated and in-vitro SPT data, we demonstrate the efficacy of our method, as it offers biologists a new, detailed look into the relationship between KRAS clustering and diffusion speeds.



R. A. Moore, A. Narayan. “Adaptive Density Tracking by Quadrature for Stochastic Differential Equations,” Subtitled “arXiv preprint arXiv:2105.08148,” 2021.

ABSTRACT

Density tracking by quadrature (DTQ) is a numerical procedure for computing solutions to Fokker-Planck equations that describe probability densities for stochastic differential equations (SDEs). In this paper, we extend existing tensorized DTQ procedures by utilizing a flexible quadrature rule that allows for unstructured, adaptive meshes. We propose and describe the procedure for d dimensions, and demonstrate that the resulting adaptive procedure is significantly more efficient than a tensorized approach. Although we consider two-dimensional examples, all our computational procedures are extendable to higher dimensional problems.
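
The basic DTQ update applies a quadrature rule to the Chapman-Kolmogorov equation with the Euler-Maruyama Gaussian transition density. For orientation, the sketch below shows the one-dimensional, fixed-grid version (the paper's contribution is the unstructured, adaptive, multi-dimensional setting); the Ornstein-Uhlenbeck drift and diffusion are illustrative choices.

import numpy as np

def dtq_step(p, grid, drift, diffusion, h):
    # One DTQ step for dX = f(X) dt + g(X) dW on a uniform grid with spacing dx:
    # p_{n+1}(x_i) = sum_j G(x_i | y_j) p_n(y_j) dx, G = Euler-Maruyama Gaussian density.
    dx = grid[1] - grid[0]
    mean = grid + drift(grid) * h
    var = diffusion(grid) ** 2 * h
    G = np.exp(-(grid[:, None] - mean[None, :]) ** 2 / (2.0 * var[None, :]))
    G /= np.sqrt(2.0 * np.pi * var[None, :])
    return G @ p * dx

grid = np.linspace(-5.0, 5.0, 401)
dx = grid[1] - grid[0]
p = np.exp(-grid**2 / 0.02)
p /= p.sum() * dx                                    # near-delta initial density
for _ in range(100):
    p = dtq_step(p, grid, lambda x: -x, lambda x: 0.5 * np.ones_like(x), h=0.01)
print(p.sum() * dx)                                  # total probability stays ~1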



N. Morrical, J. Tremblay, Y. Lin, S. Tyree, S. Birchfield, V. Pascucci, I. Wald. “NViSII: A Scriptable Tool for Photorealistic Image Generation,” Subtitled “arXiv preprint arXiv:2105.13962,” 2021.

ABSTRACT

We present a Python-based renderer built on NVIDIA's OptiX ray tracing engine and the OptiX AI denoiser, designed to generate high-quality synthetic images for research in computer vision and deep learning. Our tool enables the description and manipulation of complex dynamic 3D scenes containing object meshes, materials, textures, lighting, volumetric data (e.g., smoke), and backgrounds. Metadata, such as 2D/3D bounding boxes, segmentation masks, depth maps, normal maps, material properties, and optical flow vectors, can also be generated. In this work, we discuss design goals, architecture, and performance. We demonstrate the use of data generated by path tracing for training an object detector and pose estimator, showing improved performance in sim-to-real transfer in situations that are difficult for traditional raster-based renderers. We offer this tool as an easy-to-use, performant, high-quality renderer for advancing research in synthetic data generation and deep learning.



A. Narayan, L. Yan, T. Zhou. “Optimal design for kernel interpolation: Applications to uncertainty quantification,” In Journal of Computational Physics, Vol. 430, Academic Press, pp. 110094. 2021.

ABSTRACT

The paper is concerned with classic kernel interpolation methods, in addition to approximation methods that are augmented by gradient measurements. To apply kernel interpolation using radial basis functions (RBFs) in a stable way, we propose a type of quasi-optimal interpolation points, selected from a large set of candidate points using a procedure similar to the design of Fekete points or power-function-maximizing points that uses pivots from a Cholesky decomposition. The proposed quasi-optimal points result in a smaller condition number, and thus mitigate the instability of the interpolation procedure when the number of points becomes large. Applications to parametric uncertainty quantification are presented, and it is shown that the proposed interpolation method can outperform sparse grid methods in many interesting cases. We also demonstrate that the new procedure can be applied to constructing gradient-enhanced Gaussian process emulators.
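
One standard way to realize power-function-maximizing points via Cholesky pivoting is a greedy pivoted Cholesky factorization of the kernel matrix over a candidate set, sketched below for a Gaussian RBF. The kernel, length scale, and candidate set are assumptions for illustration; this is not the paper's exact design procedure.

import numpy as np

def gaussian_kernel(X, Y, ell=0.5):
    # Gaussian RBF kernel matrix between point sets X (m, d) and Y (n, d).
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * ell**2))

def select_points(candidates, n_pts, ell=0.5):
    # Greedily pick the candidate with the largest squared power function
    # (the residual kernel diagonal), updating a partial Cholesky factor.
    m = candidates.shape[0]
    K = gaussian_kernel(candidates, candidates, ell)
    diag = K.diagonal().copy()
    L = np.zeros((m, n_pts))
    chosen = []
    for k in range(n_pts):
        i = int(np.argmax(diag))                     # pivot = power-function maximizer
        chosen.append(i)
        col = (K[:, i] - L[:, :k] @ L[i, :k]) / np.sqrt(diag[i])
        L[:, k] = col
        diag = np.maximum(diag - col**2, 0.0)
    return candidates[chosen], chosen

rng = np.random.default_rng(0)
cands = rng.random((2000, 2))                        # candidate points in [0, 1]^2
pts, idx = select_points(cands, 10)
print(pts)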



Q. C. Nguyen, J. M. Keralis, P. Dwivedi, A. E. Ng, M. Javanmardi, S. Khanna, Y. Huang, K. D. Brunisholz, A. Kumar, T. Tasdizen. “Leveraging 31 Million Google Street View Images to Characterize Built Environments and Examine County Health Outcomes,” In Public Health Reports, Vol. 136, No. 2, SAGE Publications, pp. 201-211. 2021.
DOI: doi.org/10.1177/0033354920968799

ABSTRACT

Objectives
Built environments can affect health, but data in many geographic areas are limited. We used a big data source to create national indicators of neighborhood quality and assess their associations with health.

Methods
We leveraged computer vision and Google Street View images accessed from December 15, 2017, through July 17, 2018, to detect features of the built environment (presence of a crosswalk, non–single-family home, single-lane roads, and visible utility wires) for 2916 US counties. We used multivariate linear regression models to determine associations between features of the built environment and county-level health outcomes (prevalence of adult obesity, prevalence of diabetes, physical inactivity, frequent physical and mental distress, poor or fair self-rated health, and premature death [in years of potential life lost]).
Results
Compared with counties with the fewest crosswalks, counties with the most crosswalks were associated with decreases of 1.3%, 2.7%, and 1.3% in adult obesity, physical inactivity, and fair or poor self-rated health, respectively, and 477 fewer years of potential life lost before age 75 (per 100 000 population). The presence of non–single-family homes was associated with lower levels of all health outcomes except for premature death. The presence of single-lane roads was associated with an increase in physical inactivity, frequent physical distress, and fair or poor self-rated health. Visible utility wires were associated with increases in adult obesity, diabetes, physical and mental distress, and fair or poor self-rated health.
Conclusions
The use of computer vision and big data image sources makes possible national studies of the built environment.



C. Nobre, D. Wootton, Z. T. Cutler, L. Harrison, H. Pfister, A. Lex. “reVISit: Looking Under the Hood of Interactive Visualization Studies,” In SIGCHI Conference on Human Factors in Computing Systems (CHI), ACM, pp. 1-12. 2021.
DOI: 10.31219/osf.io/cbw36

ABSTRACT

Quantifying user performance with metrics such as time and accuracy does not show the whole picture when researchers evaluate complex, interactive visualization tools. In such systems, performance is often influenced by different analysis strategies that statistical analysis methods cannot account for. To remedy this lack of nuance, we propose a novel analysis methodology for evaluating complex interactive visualizations at scale. We implement our analysis methods in reVISit, which enables analysts to explore participant interaction performance metrics and responses in the context of users' analysis strategies. Replays of participant sessions can aid in identifying usability problems during pilot studies and make individual analysis processes salient. To demonstrate the applicability of reVISit to visualization studies, we analyze participant data from two published crowdsourced studies. Our findings show that reVISit can be used to reveal and describe novel interaction patterns, to analyze performance differences between different analysis strategies, and to validate or challenge design decisions.



A. Nouri, P. E. Davis, P. Subedi, M. Parashar. “Scalable Graph Embedding Learning On A Single GPU,” Subtitled “arXiv preprint arXiv:2110.06991,” 2021.

ABSTRACT

Graph embedding techniques have attracted growing interest since they convert graph data into a continuous, low-dimensional space. Effective graph analytics provides users with a deeper understanding of what is behind the data and thus can benefit a variety of machine learning tasks. With the current scale of real-world applications, most graph analytic methods suffer high computation and space costs. These methods and systems can process a network with thousands to a few million nodes. However, scaling to large-scale networks remains a challenge. The complexity of training graph embedding systems requires the use of accelerators such as GPUs. In this paper, we introduce a hybrid CPU-GPU framework that addresses the challenges of learning embeddings of large-scale graphs. The performance of our method is compared qualitatively and quantitatively with the existing embedding systems on common benchmarks. We also show that our system can scale training to datasets an order of magnitude larger than a single machine's total memory capacity. The effectiveness of the learned embeddings is evaluated within multiple downstream applications. The experimental results indicate the effectiveness of the learned embeddings in terms of performance and accuracy.
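
For orientation, learning a graph embedding of this kind typically means optimizing node vectors so that linked node pairs score higher than negatively sampled pairs. The PyTorch sketch below shows that objective on a toy random edge list; it is a generic single-GPU-sized illustration, not the paper's hybrid CPU-GPU framework.

import torch
import torch.nn as nn

class EdgeEmbedding(nn.Module):
    # Minimal embedding model with a skip-gram-style objective: connected pairs
    # get high dot products, negative samples get low ones.
    def __init__(self, num_nodes, dim=64):
        super().__init__()
        self.emb = nn.Embedding(num_nodes, dim)
        nn.init.normal_(self.emb.weight, std=0.1)

    def loss(self, src, dst, neg):
        s, d, n = self.emb(src), self.emb(dst), self.emb(neg)
        pos = torch.sigmoid((s * d).sum(-1)).clamp_min(1e-7).log()
        negs = torch.sigmoid(-(s * n).sum(-1)).clamp_min(1e-7).log()
        return -(pos + negs).mean()

num_nodes, edges = 1000, torch.randint(0, 1000, (2, 5000))
model = EdgeEmbedding(num_nodes)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for step in range(100):
    neg = torch.randint(0, num_nodes, (edges.shape[1],))   # negative samples
    l = model.loss(edges[0], edges[1], neg)
    opt.zero_grad(); l.backward(); opt.step()
print(model.emb.weight.shape)                              # learned node embeddings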



M. Penwarden, S. Zhe, A. Narayan, R. M. Kirby. “Multifidelity Modeling for Physics-Informed Neural Networks (PINNs),” Subtitled “arXiv preprint arXiv:2106.13361,” 2021.

ABSTRACT

Multifidelity simulation methodologies are often used in an attempt to judiciously combine low-fidelity and high-fidelity simulation results in an accuracy-increasing, cost-saving way. Candidates for this approach are simulation methodologies for which there are fidelity differences connected with significant computational cost differences. Physics-informed Neural Networks (PINNs) are candidates for these types of approaches due to the significant difference in training times required when different fidelities (expressed in terms of architecture width and depth as well as optimization criteria) are employed. In this paper, we propose a particular multifidelity approach applied to PINNs that exploits low-rank structure. We demonstrate that width, depth, and optimization criteria can be used as parameters related to model fidelity, and show numerical justification of cost differences in training due to fidelity parameter choices. We test our multifidelity scheme on various canonical forward PDE models that have been presented in the emerging PINNs literature.
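
One common way to compose fidelities, shown here purely as a generic illustration rather than as the paper's scheme, is to train a cheap low-fidelity network first and then learn a correction network that maps (x, u_low(x)) to the high-fidelity solution.

import torch
import torch.nn as nn

def mlp(sizes):
    # Small fully connected network with tanh activations.
    layers = []
    for i in range(len(sizes) - 1):
        layers.append(nn.Linear(sizes[i], sizes[i + 1]))
        if i < len(sizes) - 2:
            layers.append(nn.Tanh())
    return nn.Sequential(*layers)

u_low = mlp([1, 20, 20, 1])      # cheap low-fidelity surrogate u_low(x)
u_corr = mlp([2, 40, 40, 1])     # correction network taking (x, u_low(x))

def u_high(x):
    # Multifidelity prediction: correction applied on top of the (frozen) low-fidelity output.
    with torch.no_grad():
        low = u_low(x)
    return u_corr(torch.cat([x, low], dim=-1))

x = torch.linspace(0.0, 1.0, 8).unsqueeze(-1)
print(u_high(x).shape)           # torch.Size([8, 1])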



R. Pulch, A. Narayan, T. Stykel. “Sensitivity analysis of random linear differential–algebraic equations using system norms,” In Journal of Computational and Applied Mathematics, North-Holland, pp. 113666. 2021.

ABSTRACT

We consider linear dynamical systems composed of differential–algebraic equations (DAEs), where a quantity of interest (QoI) is assigned as output. Physical parameters of a system are modelled as random variables to quantify uncertainty, and we investigate a variance-based sensitivity analysis of the random QoI. Based on expansions via generalised polynomial chaos, the stochastic Galerkin method yields a new deterministic system of DAEs of high dimension. We define sensitivity measures by system norms, i.e., the H∞-norm of the transfer function associated with the Galerkin system for different combinations of outputs. To ameliorate the enormous computational effort required to compute norms of high-dimensional systems, we apply balanced truncation, a particular method of model order reduction (MOR), to obtain a low-dimensional linear dynamical system that produces approximations of system norms …
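
For reference, the H∞-norm used here as a sensitivity measure is the standard peak frequency-domain gain of a transfer function H(s),

\[ \|H\|_{\mathcal{H}_\infty} = \sup_{\omega \in \mathbb{R}} \sigma_{\max}\bigl(H(\mathrm{i}\omega)\bigr), \]

where \sigma_{\max} denotes the largest singular value; in this context, a larger norm for a given output combination indicates a stronger influence of the associated random parameters on the QoI.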



Y. Qin, A. Narayan, K. Cheng, P. Wang. “An efficient method of calculating composition-dependent inter-diffusion coefficients based on compressed sensing method,” In Computational Materials Science, Vol. 188, Elsevier, pp. 110145. 2021.

ABSTRACT

Composition-dependent inter-diffusion coefficients are key parameters in many physical processes. Due to the under-determinedness of the governing diffusion equations, numerical methods either impose strict physical conditions on the samples or require a computationally onerous amount of data. To address such problems, we propose a novel inverse framework to recover the diffusion coefficients using a compressed sensing method, which in principle can be extended to alloy systems with an arbitrary number of species. Compared to conventional methods, the new approach does not impose any a priori assumptions on the functional relationship between diffusion coefficients and concentrations, nor any preference on the locations of the samples, as long as they lie in the diffusion zone. It also requires much less data compared to least-squares approaches. Through a few numerical examples of ternary and quaternary systems, we demonstrate the accuracy and robustness of the new method.
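
The compressed-sensing step amounts to recovering a sparse coefficient vector c from an underdetermined linear system A c ≈ b via l1-regularized least squares. The sketch below uses a plain iterative soft-thresholding (ISTA) loop on synthetic data as a generic stand-in for whichever sparse solver the authors employ; the matrix, regularization weight, and dimensions are arbitrary.

import numpy as np

def ista(A, b, lam=0.05, iters=500):
    # Solve min_c 0.5 ||A c - b||^2 + lam ||c||_1 by iterative soft-thresholding.
    step = 1.0 / np.linalg.norm(A, 2) ** 2            # 1 / Lipschitz constant of the gradient
    c = np.zeros(A.shape[1])
    for _ in range(iters):
        g = c - step * A.T @ (A @ c - b)              # gradient step on the quadratic term
        c = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)   # soft threshold
    return c

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 200)) / np.sqrt(40)      # 40 measurements, 200 unknowns
c_true = np.zeros(200)
c_true[rng.choice(200, 5, replace=False)] = rng.standard_normal(5)
b = A @ c_true
print(np.linalg.norm(ista(A, b) - c_true))            # small recovery error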



Y. Qin, I. Rodero, A. Simonet, C. Meertens, D. Reiner, J. Riley, M. Parashar. “Leveraging user access patterns and advanced cyberinfrastructure to accelerate data delivery from shared-use scientific observatories,” In Future Generation Computer Systems, North-Holland, pp. 14-27. 2021.
DOI: https://doi.org/10.1016/j.future.2021.03.004

ABSTRACT

With the growing number and increasing availability of shared-use instruments and observatories, observational data is becoming an essential part of application workflows and a contributor to scientific discoveries in a range of disciplines. However, the corresponding growth in the number of users accessing these facilities, coupled with the expansion in the scale and variety of the data, is making it challenging for these facilities to ensure their data can be accessed, integrated, and analyzed in a timely manner, and is resulting in significant demands on their cyberinfrastructure (CI). In this paper, we present the design of a push-based data delivery framework that leverages emerging in-network capabilities, along with data pre-fetching techniques based on a hybrid data management model. Specifically, we analyze data access traces for two large-scale observatories, Ocean Observatories Initiative (OOI) and Geodetic Facility for the Advancement of Geoscience (GAGE), to identify typical user access patterns and to develop a model that can be used for data pre-fetching. Furthermore, we evaluate our data pre-fetching model and the proposed framework using a simulation of the Virtual Data Collaboratory (VDC) platform that provides in-network data staging and processing capabilities. The results demonstrate the ability of the framework to significantly improve data delivery performance and reduce network traffic at the observatories’ facilities.



Y. Qin, I. Rodero, M. Parashar. “Facilitating Data Discovery for Large-scale Science Facilities using Knowledge Networks,” In 2021 IEEE International Parallel and Distributed Processing Symposium (IPDPS), IEEE, pp. 651-660. 2021.
DOI: 10.1109/IPDPS49936.2021.00073

ABSTRACT

Large-scale multiuser scientific facilities, such as geographically distributed observatories, remote instruments, and experimental platforms, represent some of the largest national investments and can enable dramatic advances across many areas of science. Recent examples of such advances include the detection of gravitational waves and the imaging of a black hole’s event horizon. However, as the number of such facilities and their users grow, along with the complexity, diversity, and volumes of their data products, finding and accessing relevant data is becoming increasingly challenging, limiting the potential impact of facilities. These challenges are further amplified as scientists and application workflows increasingly try to integrate facilities’ data from diverse domains. In this paper, we leverage concepts underlying recommender systems, which are extremely effective in e-commerce, to address these data-discovery and data-access challenges for large-scale distributed scientific facilities. We first analyze data from facilities and identify and model user-query patterns in terms of facility location and spatial localities, domain-specific data models, and user associations. We then use this analysis to generate a knowledge graph and develop the collaborative knowledge-aware graph attention network (CKAT) recommendation model, which leverages graph neural networks (GNNs) to explicitly encode the collaborative signals through propagation and combine them with knowledge associations. Moreover, we integrate a knowledge-aware neural attention mechanism to enable the CKAT to pay more attention to key information while reducing irrelevant noise, thereby increasing the accuracy of the recommendations. We apply the proposed model on two real-world facility datasets and empirically demonstrate that the CKAT can effectively facilitate data discovery, significantly outperforming several compelling state-of-the-art baseline models.
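
The knowledge-aware attention component can be illustrated with a minimal layer that scores each (node, neighbor) pair and aggregates neighbor embeddings using the resulting softmax weights. The PyTorch sketch below is a generic attention-aggregation example, not the CKAT model itself.

import torch
import torch.nn as nn

class AttentiveAggregation(nn.Module):
    # Score each (node, neighbor) pair, then combine neighbor embeddings
    # with softmax attention weights.
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(2 * dim, 1)

    def forward(self, node_emb, neigh_emb):
        # node_emb: (d,) embedding of the node; neigh_emb: (k, d) embeddings of k neighbors.
        pairs = torch.cat([node_emb.expand_as(neigh_emb), neigh_emb], dim=-1)
        attn = torch.softmax(self.score(pairs).squeeze(-1), dim=0)   # (k,)
        return (attn.unsqueeze(-1) * neigh_emb).sum(dim=0)           # aggregated (d,)

layer = AttentiveAggregation(dim=32)
node = torch.randn(32)
neighbors = torch.randn(5, 32)
print(layer(node, neighbors).shape)   # torch.Size([32])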



A.S. Rababah, L.R. Bear, Y.S. Dogrusoz, W. Good, J. Bergquist, J. Stoks, R. MacLeod, K. Rjoob, M. Jennings, J. Mclaughlin, D. D. Finlay. “Reducing Line-of-block Artifacts in Cardiac Activation Maps Estimated Using ECG Imaging: A Comparison of Source Models and Estimation Methods,” In Computers in Biology and Medicine, Vol. 136, pp. 104666. 2021.

ABSTRACT

Electrocardiographic imaging is an imaging modality that has been introduced recently to help in visualizing the electrical activity of the heart and consequently guide the ablation therapy for ventricular arrhythmias. One of the main challenges of this modality is that the electrocardiographic signals recorded at the torso surface are contaminated with noise from different sources. Low amplitude leads are more affected by noise due to their low peak-to-peak amplitude. In this paper, we have studied 6 datasets from two torso tank experiments (Bordeaux and Utah experiments) to investigate the impact of removing or interpolating these low amplitude leads on the inverse reconstruction of cardiac electrical activity. Body surface potential maps used were calculated by using the full set of recorded leads, removing 1, 6, 11, 16, or 21 low amplitude leads, or interpolating 1, 6, 11, 16, or 21 low amplitude leads using one of three interpolation methods: Laplacian interpolation, hybrid interpolation, or inverse-forward interpolation. The epicardial potential maps and activation time maps were computed from these body surface potential maps and compared with those recorded directly from the heart surface in the torso tank experiments. There was no significant change in the potential maps and activation time maps after the removal of up to 11 low amplitude leads. Laplacian interpolation and hybrid interpolation improved the inverse reconstruction in some datasets and worsened it in the rest. Inverse-forward interpolation of low amplitude leads improved the reconstruction in two of the six datasets and left it essentially unchanged in the others. After inverse-forward interpolation, the selected lambda value was also closer to the optimal lambda value, i.e., the one giving the inverse solution best correlated with the recorded signals.