
SCI Publications

2023


S. Liu, H. Miao, Z. Li, M. Olson, V. Pascucci, P.T. Bremer. “AVA: Towards Autonomous Visualization Agents through Visual Perception-Driven Decision-Making,” Subtitled “arXiv preprint arXiv:2312.04494,” 2023.

ABSTRACT

With recent advances in multi-modal foundation models, previously text-only large language models (LLMs) have evolved to incorporate visual input, opening up unprecedented opportunities for applications in visualization. Our work explores the visual perception ability of multi-modal LLMs to develop Autonomous Visualization Agents (AVAs) that can interpret and accomplish user-defined visualization objectives through natural language. We propose the first framework for the design of AVAs and present several usage scenarios intended to demonstrate the general applicability of the proposed paradigm. The addition of visual perception allows AVAs to act as virtual visualization assistants for domain experts who may lack knowledge or expertise in fine-tuning visualization outputs. Our preliminary exploration and proof-of-concept agents suggest that this approach can be widely applicable whenever the choice of appropriate visualization parameters requires interpretation of previous visual output. Feedback from unstructured interviews with experts in AI research, medical visualization, and radiology has been incorporated, highlighting the practicality and potential of AVAs. Our study indicates that AVAs represent a general paradigm for designing intelligent visualization systems that can achieve high-level visualization goals, paving the way for expert-level visualization agents in the future.
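
A minimal sketch of the perception-driven agent loop the abstract describes, with hypothetical stand-ins for the renderer and the multi-modal LLM service (render_scene, query_vision_llm, and the isovalue parameter are illustrative names, not an API from the paper):

    # Hypothetical stubs: a real agent would render an actual image and send it,
    # with the user's objective, to a multi-modal LLM for critique.
    def render_scene(params):
        return {"isovalue": params["isovalue"]}   # stub "image" carries metadata

    def query_vision_llm(image, objective):
        # Stub critique: keep raising the isovalue until it reaches 0.5.
        if image["isovalue"] >= 0.5:
            return {"satisfied": True}
        return {"satisfied": False, "adjust": {"isovalue": image["isovalue"] + 0.1}}

    def run_agent(objective, params, max_steps=10):
        for _ in range(max_steps):
            image = render_scene(params)                 # render current state
            reply = query_vision_llm(image, objective)   # LLM inspects the output
            if reply.get("satisfied"):                   # objective judged met
                break
            params.update(reply["adjust"])               # apply suggested change
        return params

    print(run_agent("show the vessel boundary clearly", {"isovalue": 0.1}))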



D. Long, W.W. Xing, A.S. Krishnapriyan, R.M. Kirby, S. Zhe, M.W. Mahoney. “Equation Discovery with Bayesian Spike-and-Slab Priors and Efficient Kernels,” Subtitled “arXiv:2310.05387v1,” 2023.

ABSTRACT

Discovering governing equations from data is important to many scientific and engineering applications. Despite promising successes, existing methods are still challenged by data sparsity and noise, both of which are ubiquitous in practice. Moreover, state-of-the-art methods lack uncertainty quantification and/or are costly to train. To overcome these limitations, we propose a novel equation discovery method based on Kernel learning and BAyesian Spike-and-Slab priors (KBASS). We use kernel regression to estimate the target function, which is flexible, expressive, and more robust to data sparsity and noise. We combine it with a Bayesian spike-and-slab prior — an ideal Bayesian sparse distribution — for effective operator selection and uncertainty quantification. We develop an expectation propagation expectation-maximization (EP-EM) algorithm for efficient posterior inference and function estimation. To overcome the computational challenge of kernel regression, we place the function values on a mesh and induce a Kronecker product construction, and we use tensor algebra methods to enable efficient computation and optimization. We show the significant advantages of KBASS on a range of benchmark ODE and PDE discovery tasks.
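
The Kronecker-product construction mentioned above can be illustrated in a few lines: on a tensor-product mesh the kernel Gram matrix factors as Kx ⊗ Kt, so matrix-vector products never require forming the full matrix. A sketch under illustrative choices (squared-exponential kernel, small grid); this is not the KBASS code itself:

    import numpy as np

    # On a tensor-product mesh, K = Kx ⊗ Kt, and (Kx ⊗ Kt) vec(V) can be
    # computed as Kx @ V @ Kt.T without the full (nx*nt)^2 matrix.
    def se_kernel(a, b, ell=0.2):
        d = a[:, None] - b[None, :]
        return np.exp(-0.5 * (d / ell) ** 2)   # squared-exponential kernel

    nx, nt = 50, 40
    x, t = np.linspace(0, 1, nx), np.linspace(0, 1, nt)
    Kx, Kt = se_kernel(x, x), se_kernel(t, t)

    rng = np.random.default_rng(0)
    v = rng.random(nx * nt)
    V = v.reshape(nx, nt)                  # row-major reshape matches np.kron

    fast = (Kx @ V @ Kt.T).ravel()         # O(nx*nt*(nx+nt)) work
    slow = np.kron(Kx, Kt) @ v             # O((nx*nt)^2) reference
    print(np.allclose(fast, slow))         # True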



J. Luettgau, G. Scorzelli, V. Pascucci, M. Taufer. “Development of Large-Scale Scientific Cyberinfrastructure and the Growing Opportunity to Democratize Access to Platforms and Data,” In Distributed, Ambient and Pervasive Interactions, Springer Nature Switzerland, pp. 378--389. 2023.
ISBN: 978-3-031-34668-2
DOI: 10.1007/978-3-031-34668-2_25

ABSTRACT

As researchers across scientific domains rapidly adopt advanced scientific computing methodologies, access to advanced cyberinfrastructure (CI) becomes a critical requirement in scientific discovery. Lowering the entry barriers to CI is a crucial challenge in interdisciplinary sciences requiring frictionless software integration, data sharing from many distributed sites, and access to heterogeneous computing platforms. In this paper, we explore how the challenge is not merely a matter of the availability and affordability of computing, network, and storage technologies but rather the result of insufficient interfaces with an increasingly heterogeneous mix of computing technologies and data sources. With more distributed computation and data, scientists, educators, and students must invest their time and effort in coordinating data access and movement, often at the expense of their scientific research. Investments in the interfaces’ software stack are necessary to help scientists, educators, and students across domains take advantage of advanced computational methods. To this end, we propose developing a science data fabric as the standard scientific discovery interface that seamlessly manages data dependencies within scientific workflows and CI.



J. Luettgau, H. Martinez, G. Tarcea, G. Scorzelli, V. Pascucci, M. Taufer. “Studying Latency and Throughput Constraints for Geo-Distributed Data in the National Science Data Fabric,” In Proceedings of the 32nd International Symposium on High-Performance Parallel and Distributed Computing, ACM, pp. 325--326. 2023.
DOI: 10.1145/3588195.3595948

ABSTRACT

The National Science Data Fabric (NSDF) is our solution to the data-sharing needs of the growing data science community. NSDF is designed to make sharing data across geographically distributed sites easier for users who lack technical expertise and infrastructure. By developing an easy-to-install software stack, we promote the FAIR data-sharing principles in NSDF while leveraging existing high-speed data transfer infrastructures such as Globus and XRootD. This work shows how we leverage latency and throughput information between geo-distributed NSDF sites and NSDF entry points to optimize the automatic coordination of data placement and transfer across the data fabric, further improving the efficiency of data sharing.
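
As a toy illustration (not NSDF code) of how such measurements could drive coordination, a scheduler might rank candidate replica sites by estimated transfer time computed from measured latency and throughput; the site names and numbers below are made up:

    # Rank candidate replica sites by estimated transfer time for a request.
    sites = {  # hypothetical measurements per candidate site
        "site-a": {"latency_s": 0.080, "throughput_MBps": 90.0},
        "site-b": {"latency_s": 0.015, "throughput_MBps": 40.0},
    }

    def estimated_time(meas, size_mb):
        return meas["latency_s"] + size_mb / meas["throughput_MBps"]

    def best_source(size_mb):
        return min(sites, key=lambda s: estimated_time(sites[s], size_mb))

    print(best_source(1.0))    # small request: latency dominates   -> site-b
    print(best_source(500.0))  # bulk transfer: throughput dominates -> site-a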



C. Ly, C. Nizinski, A. Hagen, L. McDonald IV, T. Tasdizen. “Improving Robustness for Model Discerning Synthesis Process of Uranium Oxide with Unsupervised Domain Adaptation,” In Frontiers in Nuclear Engineering, 2023.

ABSTRACT

The quantitative characterization of surface structures captured in scanning electron microscopy (SEM) images has proven to be effective for discerning the provenance of an unknown nuclear material. Recently, many works have taken advantage of the powerful performance of convolutional neural networks (CNNs) to provide faster and more consistent characterization of surface structures. However, one inherent limitation of CNNs is their degradation in performance when encountering discrepancy between training and test datasets, which limits their broader use. The common discrepancy in an SEM image dataset occurs in low-level image information, due to user bias in selecting acquisition parameters and to microscopes from different manufacturers. Therefore, in this study, we present a domain adaptation framework to improve the robustness of CNNs against discrepancies in low-level image information. Furthermore, our proposed approach uses only unlabeled test samples to adapt a pretrained model, which is better suited to nuclear forensics applications, for which obtaining both training and test datasets simultaneously is challenging due to data sensitivity. Through extensive experiments, we demonstrate that our proposed approach effectively improves the performance of a model by at least 18% when encountering domain discrepancy, and can be deployed in many CNN architectures.
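
The paper's framework is its own contribution, but one generic test-time technique in the same spirit (adapting a pretrained model from unlabeled test samples only) is to re-estimate batch-normalization statistics on test batches. A PyTorch sketch, assuming the loader yields image tensors:

    import torch

    # Generic test-time adaptation sketch (not the authors' method): refresh
    # batch-norm running statistics from unlabeled test data, which often
    # compensates for low-level shifts such as contrast or detector differences.
    @torch.no_grad()
    def adapt_bn_stats(model, test_loader, device="cpu"):
        model.to(device).train()          # train mode: BN layers update stats
        for m in model.modules():
            if isinstance(m, torch.nn.modules.batchnorm._BatchNorm):
                m.reset_running_stats()   # discard source-domain statistics
        for images in test_loader:        # forward only: no labels, no gradients
            model(images.to(device))
        return model.eval()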



L.W. McDonald IV, K. Sentz, A. Hagen, B.W. Chung, T. Tasdizen, et al. “Review of Multi-Faceted Morphologic Signatures of Actinide Process Materials for Nuclear Forensic Science,” In Journal of Nuclear Materials, Elsevier, 2023.

ABSTRACT

Particle morphology is an emerging signature that has the potential to identify the processing history of unknown nuclear materials. Using readily available scanning electron microscopes (SEM), the morphology of nearly any solid material can be measured within hours. Coupled with robust image analysis and classification methods, the morphological features can be quantified and support identification of the processing history of unknown nuclear materials. The viability of this signature depends on developing databases of morphological features, coupled with a rapid data analysis and accurate classification process. With developed reference methods, datasets, and throughputs, morphological analysis can be applied within days to (i) interdicted bulk nuclear materials (gram to kilogram quantities), and (ii) trace amounts of nuclear materials detected on swipes or environmental samples. This review aims to develop validated and verified analytical strategies for morphological analysis relevant to nuclear forensics.



N. Morrical, S. Zellmann, A. Sahistan, P. Shriwise, V. Pascucci. “Attribute-Aware RBFs: Interactive Visualization of Time Series Particle Volumes Using RT Core Range Queries,” In IEEE Trans Vis Comput Graph, IEEE, 2023.
DOI: 10.1109/TVCG.2023.3327366

ABSTRACT


Smoothed-particle hydrodynamics (SPH) is a mesh-free method used to simulate volumetric media in fluids, astrophysics, and solid mechanics. Visualizing these simulations is problematic because these datasets often contain millions, if not billions, of particles carrying physical attributes and moving over time. Radial basis functions (RBFs) are used to model particles, and overlapping particles are interpolated to reconstruct a high-quality volumetric field; however, this interpolation process is expensive and makes interactive visualization difficult. Existing RBF interpolation schemes do not account for color-mapped attributes and are instead constrained to visualizing just the density field. To address these challenges, we exploit ray tracing cores in modern GPU architectures to accelerate scalar field reconstruction. We use a novel RBF interpolation scheme to integrate per-particle colors and densities, and leverage GPU-parallel tree construction and refitting to quickly update the tree as the simulation animates over time or when the user manipulates particle radii. We also propose a Hilbert reordering scheme to cluster particles together at the leaves of the tree to reduce tree memory consumption. Finally, we reduce the noise of volumetric shadows by adopting a spatiotemporal blue-noise sampling scheme. Our method can provide a more detailed and interactive view of these large, volumetric, time-series particle datasets than traditional methods, leading to new insights into these physics simulations.
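
The reconstruction underlying this work can be sketched directly: each particle contributes a Gaussian RBF, and a per-particle attribute is blended by density-weighted interpolation. The brute-force numpy sketch below is illustrative only; the paper's contribution is accelerating such queries with RT-core range queries, GPU tree refitting, and Hilbert reordering:

    import numpy as np

    # Brute-force RBF field reconstruction from particles (illustrative data).
    rng = np.random.default_rng(0)
    pos = rng.random((1000, 3))       # particle positions
    attr = rng.random(1000)           # per-particle scalar attribute
    radius = 0.05

    def sample_field(q):
        r2 = np.sum((pos - q) ** 2, axis=1)
        w = np.exp(-0.5 * r2 / radius**2)                  # Gaussian RBF weights
        density = w.sum()
        attribute = (w * attr).sum() / max(density, 1e-12) # weighted attribute
        return density, attribute

    print(sample_field(np.array([0.5, 0.5, 0.5])))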



H. Oh, R. Amici, G. Bomarito, S. Zhe, R. Kirby, J. Hochhalter. “Genetic Programming Based Symbolic Regression for Analytical Solutions to Differential Equations,” Subtitled “arXiv:2302.03175v1,” 2023.

ABSTRACT

In this paper, we present a machine learning method for the discovery of analytic solutions to differential equations. The method utilizes an inherently interpretable algorithm, genetic programming-based symbolic regression. Unlike methods judged by conventional machine learning accuracy measures, we demonstrate the ability to recover true analytic solutions, as opposed to numerical approximations. The method is verified by assessing its ability to recover known analytic solutions for two separate differential equations. The developed method is compared to a conventional, purely data-driven genetic programming-based symbolic regression algorithm. The reliability of successful evolution of the true solution, or an algebraic equivalent, is demonstrated.
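
Because the method outputs symbolic expressions, recovery of a true analytic solution can be verified algebraically rather than numerically. A minimal example of that verification step, with an illustrative ODE and candidate not drawn from the paper:

    import sympy as sp

    # Algebraic (not numerical) verification: does y = exp(-x) solve y' + y = 0?
    x = sp.symbols("x")
    candidate = sp.exp(-x)                  # expression a GP run might evolve
    residual = sp.diff(candidate, x) + candidate
    print(sp.simplify(residual) == 0)       # True: exact solution recovered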



H. Oh, R. Amici, G. Bomarito, S. Zhe, R.M. Kirby, J. Hochhalter. “Inherently interpretable machine learning solutions to differential equations,” In Engineering with Computers, 2023.

ABSTRACT

A machine learning method for the discovery of analytic solutions to differential equations is assessed. The method utilizes an inherently interpretable machine learning algorithm, genetic programming-based symbolic regression. An advantage of its interpretability is the output of symbolic expressions that can be used to assess error in algebraic terms, as opposed to purely numerical quantities. Therefore, models output by the developed method are verified by assessing its ability to recover known analytic solutions for two differential equations, as opposed to assessing numerical error. To demonstrate its improvement, the developed method is compared to a conventional, purely data-driven genetic programming-based symbolic regression algorithm. The reliability of successful evolution of the true solution, or an algebraic equivalent, is demonstrated.



B.A. Orkild, J.A. Bergquist, E.N. Paccione, M. Lange, E. Kwan, B. Hunt, R. MacLeod, A. Narayan, R. Ranjan. “A Grid Search of Fibrosis Thresholds for Uncertainty Quantification in Atrial Flutter Simulations,” In Computing in Cardiology, 2023.

ABSTRACT

Atypical Atrial Flutter (AAF) is the most common cardiac arrhythmia to develop following catheter ablation for atrial fibrillation. Patient-specific computational simulations of propagation have shown promise in prospectively predicting AAF reentrant circuits and providing useful insight to guide successful ablation procedures. These patient-specific models require a large number of inputs, each with an unknown amount of uncertainty. Uncertainty quantification (UQ) is a technique to assess how variability in a set of input parameters can affect the output of a model. However, modern UQ techniques, such as polynomial chaos expansion, require a well-defined output to map to the inputs. In this study, we aimed to explore the sensitivity of simulated reentry to the selection of fibrosis threshold in patient-specific AAF models. We utilized the image intensity ratio (IIR) method to set the fibrosis threshold in the LGE-MRI from a single patient with prior ablation. We found that the majority of changes to the duration of reentry occurred within an IIR range of 1.01 to 1.39, and that there was a large amount of variability in the resulting arrhythmia. This study serves as a starting point for future UQ studies to investigate the nonlinear relationship between fibrosis threshold and the resulting arrhythmia in AAF models.
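
A sketch of the IIR threshold sweep described above, using synthetic stand-in intensities; in the study, each thresholded fibrosis map would seed a propagation simulation:

    import numpy as np

    # Image intensity ratio (IIR): wall intensity normalized by the mean
    # blood-pool intensity; voxels above a candidate threshold are labeled
    # fibrotic. All values below are synthetic stand-ins.
    rng = np.random.default_rng(1)
    wall = rng.normal(120, 30, size=10_000)   # stand-in atrial-wall intensities
    blood_pool_mean = 100.0                   # stand-in blood-pool reference

    iir = wall / blood_pool_mean              # IIR per voxel
    for thr in np.arange(1.01, 1.40, 0.02):   # grid over the probed IIR range
        frac = np.mean(iir > thr)             # fibrosis burden at this threshold
        print(f"IIR > {thr:.2f}: {frac:.1%} of wall labeled fibrotic")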



T.A.J. Ouermi, R.M. Kirby, M. Berzins. “HiPPIS: A High-Order Positivity-Preserving Mapping Software for Structured Meshes,” In ACM Trans. Math. Softw., ACM, Nov. 2023.
ISSN: 0098-3500
DOI: 10.1145/3632291

ABSTRACT

Polynomial interpolation is an important component of many computational problems. In several of these computational problems, failure to preserve positivity when using polynomials to approximate or map data values between meshes can lead to negative unphysical quantities. Currently, most polynomial-based methods for enforcing positivity are based on splines and polynomial rescaling. The spline-based approaches build interpolants that are positive over the intervals in which they are defined and may require solving a minimization problem and/or system of equations. The linear polynomial rescaling methods allow for high-degree polynomials but enforce positivity only at limited locations (e.g., quadrature nodes). This work introduces open-source software (HiPPIS) for high-order data-bounded interpolation (DBI) and positivity-preserving interpolation (PPI) that addresses the limitations of both the spline and polynomial rescaling methods. HiPPIS is suitable for approximating and mapping physical quantities such as mass, density, and concentration between meshes while preserving positivity. This work provides Fortran and Matlab implementations of the DBI and PPI methods, presents an analysis of the mapping error in the context of PDEs, and uses several 1D and 2D numerical examples to demonstrate the benefits and limitations of HiPPIS.
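
The failure mode HiPPIS addresses is easy to reproduce: a standard high-order interpolant through nonnegative data can undershoot to unphysical negative values near a steep gradient. The clip below is shown only as a naive contrast and is not the DBI/PPI algorithm:

    import numpy as np
    from scipy.interpolate import CubicSpline

    # Nonnegative data with a near-discontinuity; the cubic spline rings
    # below zero, which would be unphysical for mass or concentration.
    x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    y = np.array([1.0, 1.0, 1.0, 0.0, 0.0])
    s = CubicSpline(x, y)

    fine = np.linspace(0.0, 4.0, 401)
    print("interpolant minimum:", s(fine).min())            # < 0: unphysical
    print("after naive clipping:", np.maximum(s(fine), 0.0).min())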



M. Parashar, T. Kurc, H. Klie, M.F. Wheeler, J.H. Saltz, M. Jammoul, R. Dong. “Dynamic Data-Driven Application Systems for Reservoir Simulation-Based Optimization: Lessons Learned and Future Trends,” In Handbook of Dynamic Data Driven Applications Systems: Volume 2, Springer International Publishing, pp. 287--330. 2023.
DOI: 10.1007/978-3-031-27986-7_11

ABSTRACT

Since its introduction in the early 2000s, the Dynamic Data-Driven Applications Systems (DDDAS) paradigm has served as a powerful concept for continuously improving the quality of both models and data embedded in complex dynamical systems. The DDDAS unifying concept enables capabilities to integrate multiple sources and scales of data, mathematical and statistical algorithms, advanced software infrastructures, and diverse applications into a dynamic feedback loop. DDDAS has not only motivated notable scientific and engineering advances on multiple fronts, but it has also been invigorated by the latest technological achievements in artificial intelligence, cloud computing, augmented reality, robotics, edge computing, the Internet of Things (IoT), and Big Data. Capabilities to handle more data in a much faster and smarter fashion are paving the road for expanding automation capabilities. The purpose of this chapter is to review the fundamental components that have shaped reservoir-simulation-based optimization in the context of DDDAS. The foundations of each component are systematically reviewed, followed by a discussion of current and future trends highlighting the outstanding challenges and opportunities of reservoir management problems under the DDDAS paradigm. Moreover, this chapter should be viewed as providing pathways for establishing a synergy between the renewable energy and oil and gas industries with the advent of the DDDAS method.



M. Parashar, I. Altintas. “Toward Democratizing Access to Science Data: Introducing the National Data Platform,” In IEEE 19th International Conference on e-Science, IEEE, 2023.
DOI: 10.1109/e-Science58273.2023.10254930

ABSTRACT

Open and equitable access to scientific data is essential to addressing important scientific and societal grand challenges, and to the research enterprise more broadly. This paper discusses the importance and urgency of open and equitable data access, and explores the barriers and challenges to such access. It then introduces the vision and architecture of the National Data Platform, a recently launched project aimed at catalyzing an open, equitable, and extensible data ecosystem.



M. Parashar, T. deBlanc-Knowles, E. Gianchandani, L.E. Parker. “Strengthening and Democratizing Artificial Intelligence Research and Development,” In Computer, Vol. 56, No. 11, IEEE, pp. 85--90. 2023.
DOI: 10.1109/MC.2023.3284568

ABSTRACT

This article summarizes the vision, roadmap, and implementation plan for a National Artificial Intelligence Research Resource that aims to provide a widely accessible cyberinfrastructure for artificial intelligence R&D, with the overarching goal of bridging the resource–access divide.



M. Penwarden, S. Zhe, A. Narayan, R.M. Kirby. “A Metalearning Approach for Physics-Informed Neural Networks (PINNs): Application to Parameterized PDEs,” In Journal of Computational Physics, Elsevier, 2023.
DOI: 10.1016/j.jcp.2023.111912

ABSTRACT

Physics-informed neural networks (PINNs) as a means of discretizing partial differential equations (PDEs) are garnering much attention in the Computational Science and Engineering (CS&E) world. At least two challenges exist for PINNs at present: an understanding of accuracy and convergence characteristics with respect to tunable parameters and identification of optimization strategies that make PINNs as efficient as other computational science tools. The cost of PINNs training remains a major challenge of Physics-informed Machine Learning (PiML) – and, in fact, of machine learning (ML) in general. This paper is meant to move towards addressing the latter through the study of PINNs on new tasks, for which parameterized PDEs provide a good testbed application, as tasks can be easily defined in this context. Following the ML world, we introduce metalearning of PINNs with application to parameterized PDEs. By introducing metalearning and transfer learning concepts, we can greatly accelerate the PINNs optimization process. We present a survey of model-agnostic metalearning, and then discuss our model-aware metalearning applied to PINNs as well as implementation considerations and algorithmic complexity. We then test our approach on various canonical forward parameterized PDEs that have been presented in the emerging PINNs literature.
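
A compressed sketch of the transfer-learning ingredient: weights trained at one PDE parameter warm-start optimization at a nearby parameter. The tiny MLP and the toy advection residual u_t + nu*u_x = 0 (boundary and initial terms omitted) are placeholders, not the paper's model-aware metalearning scheme:

    import torch
    import torch.nn as nn

    def make_pinn():
        return nn.Sequential(nn.Linear(2, 64), nn.Tanh(),
                             nn.Linear(64, 64), nn.Tanh(), nn.Linear(64, 1))

    def train(model, nu, steps=200):
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        xt = torch.rand(256, 2, requires_grad=True)     # collocation points (x, t)
        for _ in range(steps):
            u = model(xt)
            g = torch.autograd.grad(u.sum(), xt, create_graph=True)[0]
            loss = (g[:, 1:] + nu * g[:, :1]).square().mean()  # toy PDE residual
            opt.zero_grad(); loss.backward(); opt.step()
        return model

    source = train(make_pinn(), nu=0.10)            # solve one task fully
    target = make_pinn()
    target.load_state_dict(source.state_dict())     # transfer the initialization
    train(target, nu=0.12, steps=20)                # nearby task: far fewer steps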



M. Penwarden, A.D. Jagtap, S. Zhe, G.E. Karniadakis, R.M. Kirby. “A unified scalable framework for causal sweeping strategies for Physics-Informed Neural Networks (PINNs) and their temporal decompositions,” Subtitled “arXiv:2302.14227v1,” 2023.

ABSTRACT

Physics-informed neural networks (PINNs) as a means of solving partial differential equations (PDEs) have garnered much attention in the Computational Science and Engineering (CS&E) world. However, a recent topic of interest is exploring various training (i.e., optimization) challenges – in particular, arriving at poor local minima in the optimization landscape results in a PINN approximation giving an inferior, and sometimes trivial, solution when solving forward time-dependent PDEs with no data. This problem also arises with, and is in some sense more difficult for, domain decomposition strategies such as temporal decomposition using XPINNs. To address this problem, we first enable a general categorization of previous causality methods, from which we identify a gap (i.e., an opportunity) in the previous approaches. We then furnish examples and explanations for different training challenges, their causes, and how they relate to information propagation and temporal decomposition. We propose a solution to fill this gap by reframing these causality concepts into a generalized information propagation framework in which any prior method or combination of methods can be described. This framework is easily modifiable via user parameters in the open-source code accompanying this paper. Our unified framework moves toward reducing the number of PINN methods to consider and the reimplementation and retuning cost for thorough comparisons rather than increasing it. Using the idea of information propagation, we propose a new stacked-decomposition method that bridges the gap between time-marching PINNs and XPINNs. We also introduce significant computational speed-ups by using transfer learning concepts to initialize subnetworks in the domain and loss tolerance-based propagation for the subdomains. Finally, we formulate a new time-sweeping collocation point algorithm inspired by the previous PINNs causality literature, which our framework can still describe, and which provides a significant computational speed-up via reduced-cost collocation point segmentation. The proposed methods overcome training challenges in PINNs and XPINNs for time-dependent PDEs by respecting the causality in multiple forms and improve scalability by limiting the computation required per optimization iteration. We conclude with numerical results for these methods on baseline PDE problems for which unmodified PINNs and XPINNs struggle to train.
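
One concrete causality instance such a framework can describe (the exponential causal weighting from the earlier PINNs literature, not this paper's stacked decomposition) weights each time slab's loss by the accumulated loss of its predecessors, so later slabs only contribute once earlier slabs are well trained:

    import numpy as np

    # Causal weighting: slab i gets weight exp(-eps * sum of earlier losses).
    def causal_weights(slab_losses, eps=1.0):
        prior = np.cumsum(slab_losses) - slab_losses   # exclusive prefix sums
        return np.exp(-eps * prior)

    losses = np.array([0.01, 0.02, 0.5, 0.8])          # per-time-slab residuals
    w = causal_weights(losses)
    print(w)                        # early slabs ~1, later slabs suppressed
    print(np.sum(w * losses))       # weighted training loss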



C. Peters, T. Patel, W. Usher, C R. Johnson. “Ray Tracing Spherical Harmonics Glyphs,” In Vision, Modeling, and Visualization, The Eurographics Association, 2023.
DOI: 10.2312/vmv.20231223

ABSTRACT

Spherical harmonics glyphs are an established way to visualize high angular resolution diffusion imaging data. Starting from a unit sphere, each point on the surface is scaled according to the value of a linear combination of spherical harmonics basis functions. The resulting glyph visualizes an orientation distribution function. We present an efficient method to render these glyphs using ray tracing. Our method constructs a polynomial whose roots correspond to ray-glyph intersections. This polynomial has degree 2k + 2 for spherical harmonics bands 0, 2, …, k. We then find all intersections in an efficient and numerically stable fashion through polynomial root finding. Our formulation also gives rise to a simple formula for normal vectors of the glyph. Additionally, we compute a nearly exact axis-aligned bounding box to make ray tracing of these glyphs even more efficient. Since our method finds all intersections for arbitrary rays, it lets us perform sophisticated shading and uncertainty visualization. Compared to prior work, it is faster, more flexible and more accurate.
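
For context, constructing the glyph surface itself is straightforward: scale each unit-sphere direction by a linear combination of SH basis functions. A sample-based sketch restricted to m = 0 terms of bands 0 and 2 for brevity; the paper instead intersects rays with this surface exactly by root-finding the degree 2k + 2 polynomial, which this sketch does not attempt:

    import numpy as np
    from scipy.special import sph_harm

    coeff = {0: 1.0, 2: 0.5}                   # band -> m = 0 coefficient (made up)
    theta = np.linspace(0.0, 2.0 * np.pi, 64)  # azimuth
    phi = np.linspace(0.0, np.pi, 32)          # polar angle
    T, P = np.meshgrid(theta, phi)

    # Radius = |sum of SH terms|, the orientation-distribution magnitude.
    r = np.abs(sum(c * sph_harm(0, l, T, P).real for l, c in coeff.items()))
    x = r * np.sin(P) * np.cos(T)              # (x, y, z): glyph surface mesh
    y = r * np.sin(P) * np.sin(T)
    z = r * np.cos(P)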



S. Petruzza, B. Summa, A. Gooch, C.M. Laney, T. Goulden, J. Schreiner, S. Callahan, V. Pascucci. “Interactive Visualization and Portable Image Blending of Massive Aerial Image Mosaics,” In IEEE International Conference on Big Data, IEEE, pp. 3365--3370. 2023.

ABSTRACT

Processing, managing and publishing the substantial volume of data collected through modern remote sensing technologies in a format that is easy for researchers - across broad skill levels and scientific domains - to view and use presents a formidable challenge. As a prime example, the massive scale of image mosaics produced by NEON’s Airborne Observation Platform (AOP), often several to hundreds of gigabytes in volume, demands efficient data management strategies. Additionally, these aerial mosaics frequently exhibit seams due to variations in lighting conditions during the data acquisition process. These seams undermine the integrity of subsequent scientific analyses, introducing distortions that hinder accurate interpretation of ecological patterns. Finally, one of NEON’s core objectives is to make these data broadly accessible to users, including those who are not yet versed in working with remote sensing data or who wish to view the datasets without needing to download and process them. In response to these challenges, we have developed a comprehensive data management pipeline that enables interactive access for analysis and visualization of NEON’s aerial mosaic collection. This pipeline automates data ingestion, conversion, and publication in a streamable format, facilitating seamless user interaction through web viewers and programming APIs. Moreover, we have implemented a portable blending algorithm aimed at eliminating these problematic seams from large aerial mosaics. This algorithm, grounded in the Conjugate Gradient (CG) method, has been implemented both in CUDA and using the modern SYCL programming model for enhanced portability across diverse computing platforms. Experimental results demonstrate scalable performance across both CPU and GPU architectures. This work not only addresses the challenges of large aerial data management and seam removal but also opens avenues for more accurate and comprehensive scientific investigations within the NEON ecosystem.
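
The blending algorithm's numerical core is a Conjugate Gradient solve of a Poisson-type system for a smooth correction field. A small SciPy sketch of that solve (mosaic-specific gradient terms and boundary handling omitted; the pipeline's versions run in CUDA and SYCL):

    import numpy as np
    from scipy.sparse import diags, identity, kron
    from scipy.sparse.linalg import cg

    # Screened Poisson system A x = b on an n x n grid; CG applies because
    # A is symmetric positive definite.
    n = 64
    L1 = diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))       # 1D Laplacian
    A = kron(identity(n), L1) + kron(L1, identity(n)) \
        + 1e-3 * identity(n * n)                            # screening term

    b = np.zeros(n * n)
    b[(n * n) // 2 + n // 2] = 1.0      # stand-in seam-mismatch source term

    x, info = cg(A, b)                  # info == 0 signals convergence
    print(info, x.reshape(n, n).max())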



S. Pirola, A. Arzani, C. Chiastra, F. Sturla. “Editorial: Image-based computational approaches for personalized cardiovascular medicine: improving clinical applicability and reliability through medical imaging and experimental data,” In Frontiers in Medical Technology, Vol. 5, 2023.
DOI: 10.3389/fmedt.2023.1222837



S. Saha, W. Gazi, R. Mohammed, T. Rapstine, H. Powers, R. Whitaker. “Multi-task Training as Regularization Strategy for Seismic Image Segmentation,” In IEEE Geoscience and Remote Sensing Letters, Vol. 20, IEEE, pp. 1--5. 2023.
DOI: 10.1109/LGRS.2023.3328837

ABSTRACT

This letter proposes multitask learning as a regularization method for segmentation tasks in seismic images. We examine application-specific auxiliary tasks, such as the estimation/detection of horizons, dip angle, and amplitude, which geophysicists consider relevant for identifying channels (a geological feature), a task currently done through painstaking outlining by qualified experts. We show that multitask training helps in better generalization on test datasets with both similar and different structure/statistics. In such settings, we also show that multitask learning performs better on unseen datasets relative to the baseline.
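
A minimal PyTorch sketch of this kind of setup: a shared encoder feeds the main segmentation head plus an auxiliary head whose loss acts as a regularizer. Shapes, layer sizes, and the auxiliary target are illustrative, not the paper's architecture:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MultiTaskNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
            self.seg_head = nn.Conv2d(16, 2, 1)   # channel vs. background
            self.aux_head = nn.Conv2d(16, 1, 1)   # e.g., a dip-angle map

        def forward(self, x):
            f = self.encoder(x)                   # shared features
            return self.seg_head(f), self.aux_head(f)

    net = MultiTaskNet()
    img = torch.randn(4, 1, 64, 64)
    seg_gt = torch.randint(0, 2, (4, 64, 64))     # segmentation labels
    aux_gt = torch.randn(4, 1, 64, 64)            # auxiliary attribute target

    seg, aux = net(img)
    loss = F.cross_entropy(seg, seg_gt) + 0.5 * F.mse_loss(aux, aux_gt)
    loss.backward()                               # auxiliary term regularizes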