
SCI Publications

2021


W. W. Xing, A. A. Shah, P. Wang, S. Zhe, Q. Fu, R. M. Kirby. “Residual Gaussian process: A tractable nonparametric Bayesian emulator for multi-fidelity simulations,” In Applied Mathematical Modelling, Vol. 97, Elsevier, pp. 36-56. 2021.

ABSTRACT

Challenges in multi-fidelity modelling relate to accuracy, uncertainty estimation and high-dimensionality. A novel additive structure is introduced in which the highest fidelity solution is written as a sum of the lowest fidelity solution and residuals between the solutions at successive fidelity levels, with Gaussian process priors placed over the low fidelity solution and each of the residuals. The resulting model is equipped with a closed-form solution for the predictive posterior, making it applicable to advanced, high-dimensional tasks that require uncertainty estimation. Its advantages are demonstrated on univariate benchmarks and on three challenging multivariate problems. It is shown how active learning can be used to enhance the model, especially with a limited computational budget. Furthermore, error bounds are derived for the mean prediction in the univariate case.
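
To make the additive construction concrete, here is a minimal NumPy sketch of a two-fidelity version: one GP posterior mean for the low-fidelity solution plus one for the residual, summed to predict the high-fidelity solution. The kernel, length scale, test functions, and nested design are illustrative assumptions, not the paper's configuration.

import numpy as np

def rbf_kernel(X1, X2, ell=0.3, var=1.0):
    # squared-exponential covariance between two 1D point sets
    d = X1[:, None] - X2[None, :]
    return var * np.exp(-0.5 * (d / ell) ** 2)

# hypothetical low/high-fidelity models (assumptions for illustration)
f_lo = lambda x: np.sin(8 * x)
f_hi = lambda x: np.sin(8 * x) + 0.3 * x   # high = low + smooth residual

X = np.random.rand(20)          # nested design: both fidelities observed here
y_lo, y_hi = f_lo(X), f_hi(X)
r = y_hi - y_lo                 # residual observations at the nested sites

Xs = np.linspace(0.0, 1.0, 200)

def gp_mean(X, y, Xs, noise=1e-8):
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    return rbf_kernel(Xs, X) @ np.linalg.solve(K, y)

# additive structure: independent GPs on the low-fidelity solution and on the
# residual, so the high-fidelity posterior mean is the sum of the two means
mu_hi = gp_mean(X, y_lo, Xs) + gp_mean(X, r, Xs)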



W. W. Xing, R. M. Kirby, S. Zhe. “Deep coregionalization for the emulation of simulation-based spatial-temporal fields,” In Journal of Computational Physics, Academic Press, pp. 109984. 2021.

ABSTRACT

Data-driven surrogate models are widely used for applications such as design optimization and uncertainty quantification, where repeated evaluations of an expensive simulator are required. For most partial differential equation (PDE) simulations, the outputs of interest are often spatial or spatial-temporal fields, leading to very high-dimensional outputs. Despite the success of existing data-driven surrogates for high-dimensional outputs, most methods require a significant number of samples to cover the response surface in order to achieve a reasonable degree of accuracy. This demand makes the idea of surrogate models less attractive considering the high computational cost of generating the data. To address this issue, we exploit the multifidelity nature of a PDE simulation and introduce deep coregionalization, a Bayesian nonparametric autoregressive framework for efficient emulation of spatial-temporal fields. To effectively extract the output correlations in the context of multifidelity data, we develop a novel dimension reduction technique, residual principal component analysis. Our model can simultaneously capture the rich output correlations and the fidelity correlations and make high-fidelity predictions with only a small number of expensive, high-fidelity simulation samples. We show the advantages of our model in three canonical PDE models and a fluid dynamics problem. The results show that the proposed method can not only approximate simulation results at significantly lower cost (by about 10%-25%) but also further improve model accuracy.
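
The residual idea behind the paper's dimension reduction can be sketched as follows: build a PCA basis from many cheap low-fidelity fields, then a second, small basis from the parts of the few high-fidelity fields that the first basis cannot explain. The snapshot matrices below are random stand-ins; residual principal component analysis as defined in the paper involves more structure than this caricature.

import numpy as np

def pca_basis(S, k):
    # leading k left singular vectors of a snapshot matrix (columns = fields)
    U, _, _ = np.linalg.svd(S, full_matrices=False)
    return U[:, :k]

# hypothetical snapshot matrices: each column is a flattened spatial field
n, m_lo, m_hi = 500, 40, 8
S_lo = np.random.randn(n, m_lo)                          # many cheap fields
S_hi = S_lo[:, :m_hi] + 0.1 * np.random.randn(n, m_hi)   # few expensive fields

U_lo = pca_basis(S_lo, k=10)
# residuals: parts of the high-fidelity fields unexplained by the cheap basis
R = S_hi - U_lo @ (U_lo.T @ S_hi)
U_res = pca_basis(R, k=4)
# combined basis captures low-fidelity structure plus high-fidelity corrections
basis = np.hstack([U_lo, U_res])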



Y. Xu, V. Keshavarzzadeh, R. M. Kirby, A. Narayan. “A bandit-learning approach to multifidelity approximation,” Subtitled “arXiv preprint arXiv:2103.15342,” 2021.

ABSTRACT

Multifidelity approximation is an important technique in scientific computation and simulation. In this paper, we introduce a bandit-learning approach for leveraging data of varying fidelities to achieve precise estimates of the parameters of interest. Under a linear model assumption, we formulate a multifidelity approximation as a modified stochastic bandit, and analyze the loss for a class of policies that uniformly explore each model before exploiting. Utilizing the estimated conditional mean-squared error, we propose a consistent algorithm, adaptive Explore-Then-Commit (AETC), and establish a corresponding trajectory-wise optimality result. These results are then extended to the case of vector-valued responses, where we demonstrate that the algorithm is efficient without requiring the estimation of high-dimensional parameters. The main advantage of our approach is that we require neither hierarchical model structure nor a priori knowledge of statistical information (e.g., correlations) about or between models. Instead, the AETC algorithm requires only knowledge of which model is a trusted high-fidelity model, along with (relative) computational cost estimates of querying each model. Numerical experiments are provided at the end to support our theoretical findings.
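
A minimal sketch of the explore-then-commit pattern underlying AETC, with made-up models and costs: query every model uniformly during a burn-in phase, regress the trusted output on each cheap output, score each model by a crude proxy for its exploitation loss, and commit the remaining budget to the winner. The paper's actual loss estimate and optimality analysis are more refined than this proxy.

import numpy as np

rng = np.random.default_rng(0)

f0 = lambda x: np.sin(x) + 0.1 * x                         # trusted high-fidelity model
f1 = lambda x: np.sin(x)                                   # cheap, biased model
f2 = lambda x: f0(x) + 0.3 * rng.standard_normal(x.shape)  # cheaper, noisier model
models, costs = [f1, f2], [0.2, 0.05]                      # relative query costs
cost_hi, budget, n_explore = 1.0, 200.0, 20

x_exp = rng.uniform(0, 3, n_explore)   # explore: query every model uniformly
y_hi = f0(x_exp)

best, best_score = None, np.inf
for f, c in zip(models, costs):
    A = np.vstack([f(x_exp), np.ones(n_explore)]).T        # regress trusted on cheap
    coef, *_ = np.linalg.lstsq(A, y_hi, rcond=None)
    mse = np.mean((A @ coef - y_hi) ** 2)
    n_exploit = (budget - n_explore * (cost_hi + c)) / c   # affordable exploit samples
    score = mse + 1.0 / max(n_exploit, 1.0)                # crude exploit-loss proxy
    if score < best_score:
        best, best_score = f, score
# commit: spend the remaining budget querying only the selected cheap model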



V. Zala, R. M. Kirby, A. Narayan. “Structure-preserving Nonlinear Filtering for Continuous and Discontinuous Galerkin Spectral/hp Element Methods,” Subtitled “arXiv preprint arXiv:2106.08316,” 2021.

ABSTRACT

Finite element simulations have been used to solve various partial differential equations (PDEs) that model physical, chemical, and biological phenomena. The resulting discretized solutions to PDEs often do not satisfy requisite physical properties, such as positivity or monotonicity. Such invalid solutions pose both modeling challenges, since the physical interpretation of simulation results is not possible, and computational challenges, since such properties may be required to advance the scheme. We, therefore, consider the problem of computing solutions that preserve these structural solution properties, which we enforce as additional constraints on the solution. We consider in particular the class of convex constraints, which includes positivity and monotonicity. By embedding such constraints as a postprocessing convex optimization procedure, we can compute solutions that satisfy general types of convex constraints. For certain types of constraints (including positivity and monotonicity), the optimization is a filter, i.e., a norm-decreasing operation. We provide a variety of tests on one-dimensional time-dependent PDEs that demonstrate the method's efficacy, and we empirically show that rates of convergence are unaffected by the inclusion of the constraints.
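
For the special case of positivity with mass conservation, the postprocessing step amounts to a Euclidean projection onto a convex set, which here has a closed form up to a scalar multiplier found by bisection. This is a minimal instance of the paper's general convex-constrained formulation, not its full algorithm.

import numpy as np

def positivity_filter(u):
    # Project u onto {v >= 0, sum(v) = sum(u)} in the Euclidean norm.
    # The projection has the form v = max(u + lam, 0); find lam by bisection.
    target = u.sum()
    assert target >= 0, "cannot enforce positivity while conserving negative mass"
    lo, hi = -u.max(), max(0.0, -u.min())
    for _ in range(100):
        lam = 0.5 * (lo + hi)
        if np.maximum(u + lam, 0).sum() > target:
            hi = lam            # total mass too large: decrease the shift
        else:
            lo = lam            # total mass too small: increase the shift
    return np.maximum(u + 0.5 * (lo + hi), 0)

u = np.array([0.5, -0.1, 0.8, -0.05, 0.3])   # nodal values with undershoots
v = positivity_filter(u)                      # nonnegative, same total mass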


2020


T. A. J. Ouermi, R. M. Kirby, M. Berzins. “Numerical Testing of a New Positivity-Preserving Interpolation Algorithm,” Subtitled “arXiv,” 2020.

ABSTRACT

An important component of a number of computational modeling algorithms is an interpolation method that preserves the positivity of the function being interpolated. This report describes the numerical testing of a new positivity-preserving algorithm that is designed to be used when interpolating from a solution defined on one grid to a different spatial grid. The motivating application is a numerical weather prediction (NWP) code that uses spectral elements as the discretization choice for its dynamics core and Cartesian product meshes for the evaluation of its physics routines. This combination of spectral elements, which use nonuniformly spaced quadrature/collocation points, with uniformly spaced Cartesian meshes, together with the need to maintain positivity when moving between them, necessitates our work. This new approach is evaluated against several typical algorithms in use on a range of test problems in one or more space dimensions. The results obtained show that the new method is competitive in terms of observed accuracy while at the same time preserving the underlying positivity of the functions being interpolated.
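
For context, the simplest positivity-preserving baseline of the kind such algorithms are compared against looks like the sketch below: interpolate with a standard (possibly oscillatory) scheme and clip undershoots at zero, which preserves positivity at the cost of local accuracy. The paper's algorithm is designed to avoid exactly this accuracy loss; the functions and grids here are illustrative assumptions.

import numpy as np

# source data on nonuniform (e.g., quadrature) points, positive by construction
x_src = np.sort(np.random.rand(12))
f_src = np.exp(-10 * (x_src - 0.5) ** 2)

x_dst = np.linspace(0, 1, 50)     # uniform Cartesian evaluation grid

# a high-order polynomial fit can undershoot below zero between data points
coeffs = np.polyfit(x_src, f_src, deg=7)
f_poly = np.polyval(coeffs, x_dst)

# naive positivity fix: clip undershoots at zero (loses accuracy near the bound)
f_pos = np.maximum(f_poly, 0.0)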


2018


Y. He, M. Razi, C. Forestiere, L. Dal Negro, R.M. Kirby. “Uncertainty quantification guided robust design for nanoparticles' morphology,” In Computer Methods in Applied Mechanics and Engineering, Elsevier BV, pp. 578--593. July, 2018.
DOI: 10.1016/j.cma.2018.03.027

ABSTRACT

The automatic inverse design of three-dimensional plasmonic nanoparticles enables scientists and engineers to explore a wide design space and to maximize a device's performance. However, due to the large uncertainty in the nanofabrication process, we may not be able to obtain a deterministic value of the objective, and the objective may vary dramatically with respect to a small variation in uncertain parameters. Therefore, we take the uncertainty in simulations into account and adopt a classical robust design model. In addition, we propose an efficient numerical procedure for the robust design that reduces the computational cost incurred by accounting for the uncertainty. Specifically, we use a global sensitivity analysis method to identify the important random variables, treat the unimportant ones as deterministic, and consequently reduce the dimension of the stochastic space. In addition, we apply the generalized polynomial chaos expansion method to construct computationally cheaper surrogate models that approximate and replace the full simulations. This efficient robust design procedure is performed by varying the particles' material among the most commonly used plasmonic materials, such as gold, silver, and aluminum, to obtain different robust optimal shapes for the best enhancement of electric fields.
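
The surrogate step can be sketched in one dimension as follows: sample the uncertain parameter, evaluate a stand-in for the full simulation, fit Legendre (gPC) coefficients by least squares, and read off mean and variance from orthogonality. The model function, degree, and sample count are assumptions for illustration.

import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(1)
model = lambda z: np.exp(0.3 * z) * np.sin(2 * z)   # stand-in for the full simulation

# training samples of one uniform random parameter on [-1, 1]
z = rng.uniform(-1, 1, 60)
y = model(z)

# least-squares fit of Legendre (gPC) coefficients up to degree 6
V = legendre.legvander(z, 6)
coef, *_ = np.linalg.lstsq(V, y, rcond=None)

# surrogate statistics from orthogonality: E[f] = c0, Var[f] = sum c_k^2/(2k+1)
mean = coef[0]
var = np.sum(coef[1:] ** 2 / (2 * np.arange(1, 7) + 1))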



A. Jallepalli, J. Docampo-Sánchez, J.K. Ryan, R. Haimes, R.M. Kirby. “On the treatment of field quantities and elemental continuity in FEM solutions,” In IEEE Transactions on Visualization and Computer Graphics, Vol. 24, No. 1, IEEE, pp. 903--912. Jan, 2018.
DOI: 10.1109/tvcg.2017.2744058

ABSTRACT

As the finite element method (FEM) and the finite volume method (FVM), both traditional and high-order variants, continue their proliferation into various applied engineering disciplines, it is important that the visualization techniques and corresponding data analysis tools that act on the results produced by these methods faithfully represent the underlying data. To state this in another way: the interpretation of data generated by simulation needs to be consistent with the numerical schemes that underpin the specific solver technology. As the verifiable visualization literature has demonstrated: visual artifacts produced by the introduction of either explicit or implicit data transformations, such as data resampling, can sometimes distort or even obfuscate key scientific features in the data. In this paper, we focus on the handling of elemental continuity, which is often only C0 continuous or piecewise discontinuous, when visualizing primary or derived fields from FEM or FVM simulations. We demonstrate that traditional data handling and visualization of these fields introduce visual errors. In addition, we show how the use of the recently proposed line-SIAC filter provides a way of handling elemental continuity issues in an accuracy-conserving manner with the added benefit of casting the data in a smooth context even if the representation is element discontinuous.



V. Keshavarzzadeh, R.M. Kirby, A. Narayan. “Numerical integration in multiple dimensions with designed quadrature,” In CoRR, 2018.

ABSTRACT

We present a systematic computational framework for generating positive quadrature rules in multiple dimensions on general geometries. A direct moment-matching formulation that enforces exact integration on polynomial subspaces yields nonlinear conditions and geometric constraints on nodes and weights. We use penalty methods to address the geometric constraints, and subsequently solve a quadratic minimization problem via the Gauss-Newton method. Our analysis provides guidance on requisite sizes of quadrature rules for a given polynomial subspace, and furnishes useful user-end stability bounds on error in the quadrature rule in the case when the polynomial moment conditions are violated by a small amount due to, e.g., finite precision limitations or stagnation of the optimization procedure. We present several numerical examples investigating optimal low-degree quadrature rules, Lebesgue constants, and 100-dimensional quadrature. Our capstone examples compare our quadrature approach to popular alternatives, such as sparse grids and quasi-Monte Carlo methods, for problems in linear elasticity and topology optimization.
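
A one-dimensional caricature of the moment-matching formulation: solve for nodes and weights so that monomials up to a target degree integrate exactly over [-1, 1], using a Gauss-Newton-type least-squares solve. The paper's framework extends this to multiple dimensions, general geometries, and positivity constraints; here the solve should simply recover the 3-point Gauss-Legendre rule.

import numpy as np
from scipy.optimize import least_squares

deg, n = 5, 3   # exactness degree and node count (3 Gauss nodes handle degree 5)

def residual(p):
    x, w = p[:n], p[n:]
    # exact monomial moments over [-1, 1]: 2/(k+1) for even k, 0 for odd k
    exact = np.array([2.0 / (k + 1) if k % 2 == 0 else 0.0 for k in range(deg + 1)])
    approx = np.array([np.sum(w * x ** k) for k in range(deg + 1)])
    return approx - exact

p0 = np.concatenate([np.linspace(-0.5, 0.5, n), np.full(n, 2.0 / n)])
sol = least_squares(residual, p0)   # Gauss-Newton-type solve of moment conditions
idx = np.argsort(sol.x[:n])
nodes, weights = sol.x[:n][idx], sol.x[n:][idx]
# should recover the 3-point Gauss-Legendre rule (up to node ordering)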



T.A.J. Ouermi, R.M. Kirby, M. Berzins. “Performance Optimization Strategies for WRF Physics Schemes Used in Weather Modeling,” In International Journal of Networking and Computing, Vol. 8, No. 2, IJNC, pp. 301--327. 2018.
DOI: 10.15803/ijnc.8.2_301

ABSTRACT

Performance optimization in the petascale era and beyond into the exascale era has required, and will continue to require, modifications of legacy codes to take advantage of new architectures with large core counts and SIMD units. The Numerical Weather Prediction (NWP) physics codes considered here are optimized using thread-local structures of arrays (SOA). High-level and low-level optimization strategies are applied to the WRF Single-Moment 6-Class Microphysics Scheme (WSM6) and Global Forecast System (GFS) physics codes used in the NEPTUNE forecast code. By building on previous work optimizing WSM6 on the Intel Knights Landing (KNL), it is shown how to further optimize WSM6, GFS physics, and GFS radiation on Intel KNL, Haswell, and potentially on future microarchitectures with many cores and SIMD vector units. The optimization techniques used herein employ thread-local structures of arrays (SOA), the OpenMP directive OMP SIMD, and minor code transformations to enable better utilization of SIMD units, increase parallelism, improve locality, and reduce memory traffic. The optimized versions of WSM6, GFS physics, and GFS radiation run 70x, 27x, and 23x faster (respectively) on KNL and 26x, 18x, and 30x faster (respectively) on Haswell than their respective original serial versions. Although this work targets WRF physics schemes, the findings are transferable to other performance optimization contexts and provide insight into the optimization of codes with complex physical models for present and near-future architectures with many cores and vector units.
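
The data-layout idea at the heart of the SOA strategy translates outside Fortran/OpenMP as well; the NumPy sketch below contrasts an array-of-structures with a structure-of-arrays whose per-field arrays are contiguous and therefore amenable to vectorized (SIMD-style) updates. The field names and the toy saturation update are invented for illustration.

import numpy as np

n = 10**6

# array-of-structures: one record object per grid cell -- poor locality, no SIMD
aos = [{"qv": 1e-3, "qc": 0.0, "t": 280.0} for _ in range(1000)]

# structure-of-arrays: one contiguous array per field -- streams through SIMD units
soa = {"qv": np.full(n, 1e-3),    # water-vapor mixing ratio
       "qc": np.zeros(n),         # cloud water
       "t":  np.full(n, 280.0)}   # temperature

# a vectorized update reads/writes each field as a contiguous stream
wet = soa["qv"] > 5e-4
soa["qc"][wet] += soa["qv"][wet] - 5e-4
soa["qv"][wet] = 5e-4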



Y. Yu, R.M. Kirby, G.E. Karniadakis. “Spectral Element and hp Methods,” In Encyclopedia of Computational Mechanics Second Edition, John Wiley & Sons, Ltd, pp. 1--43. 2018.

ABSTRACT

Spectral/hp element methods provide high-order discretization, which is essential in the long-time integration of advection-diffusion systems and for capturing dynamic instabilities in solids. In this chapter, we review the main formulations for simulations of incompressible and compressible viscous flows as well as for solid mechanics and present several examples, with some emphasis on fluid-structure interactions and interfaces. The first generation of (nodal) spectral elements was limited to relatively simple geometries and smooth solutions. However, the new generation of spectral/hp elements, consisting of both nodal and modal forms, can handle very complex geometries using unstructured grids and can capture strong shocks by employing discontinuous Galerkin methods. New flexible formulations allow simulations of multiphysics problems including extremely complex geometries and multiphase flows. Several implementation strategies have also been developed on the basis of multilevel parallel algorithms that allow dynamic p-refinement at constant wall clock time. After three decades of intense development, spectral element and hp methods are mature and efficient enough to be used effectively in applications of industrial complexity. They provide the capabilities that standard finite element and finite volume methods do but, in addition, exhibit high-order accuracy and error control.



V. Zala, V. Shankar, S.P. Sastry, R.M. Kirby. “Curvilinear Mesh Adaptation Using Radial Basis Function Interpolation and Smoothing,” In Journal of Scientific Computing, Springer Nature, pp. 1--22. April, 2018.
DOI: 10.1007/s10915-018-0711-0

ABSTRACT

We present a new iterative technique based on radial basis function (RBF) interpolation and smoothing for the generation and smoothing of curvilinear meshes from straight-sided or other curvilinear meshes. Our technique approximates the coordinate deformation maps in both the interior and boundary of the curvilinear output mesh by using only scattered nodes on the boundary of the input mesh as data sites in an interpolation problem. Thanks to a new iterative algorithm based on modification of the RBF shape parameter, our technique produces high-quality meshes in the deformed domain even when the deformation maps are singular. Due to the use of RBF interpolation, our technique is applicable to both 2D and 3D curvilinear mesh generation without significant modification.
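
A minimal sketch of the deformation step: boundary nodes and their displacements serve as RBF data sites, and the resulting interpolant is evaluated at every mesh node to propagate the boundary curving into the interior. The Gaussian kernel, shape parameter value, and toy square mesh are assumptions; in practice, fixed boundary portions would also be included as zero-displacement sites, and the paper's iteration adapts the shape parameter itself.

import numpy as np

def rbf_deform(bnd, disp, nodes, eps=3.0):
    # Interpolate boundary displacements to all mesh nodes with Gaussian RBFs;
    # eps is the shape parameter (the quantity adapted iteratively in the paper).
    r2 = ((bnd[:, None, :] - bnd[None, :, :]) ** 2).sum(-1)
    A = np.exp(-(eps ** 2) * r2)                 # interpolation matrix
    coef = np.linalg.solve(A, disp)              # one column per coordinate
    r2n = ((nodes[:, None, :] - bnd[None, :, :]) ** 2).sum(-1)
    return nodes + np.exp(-(eps ** 2) * r2n) @ coef

# hypothetical straight-sided mesh: unit square, curve the top boundary
theta = np.linspace(0, 1, 9)
bnd = np.stack([theta, np.ones_like(theta)], axis=1)       # top-edge nodes
disp = np.stack([np.zeros_like(theta),
                 0.1 * np.sin(np.pi * theta)], axis=1)     # bulge upward

grid = np.stack(np.meshgrid(np.linspace(0, 1, 6),
                            np.linspace(0, 1, 6)), axis=-1).reshape(-1, 2)
curved = rbf_deform(bnd, disp, grid)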


2017


A. Bhaduri, Y. He, M.D. Shields, L. Graham-Brady, R.M. Kirby. “Stochastic collocation approach with adaptive mesh refinement for parametric uncertainty analysis,” In CoRR, 2017.

ABSTRACT

The presence of a high-dimensional stochastic parameter space with discontinuities poses major computational challenges in analyzing and quantifying the effects of the uncertainties in a physical system. In this paper, we propose a stochastic collocation method with adaptive mesh refinement (SCAMR) to deal with high-dimensional stochastic systems with discontinuities. Specifically, the proposed approach uses generalized polynomial chaos (gPC) expansion with a Legendre polynomial basis and solves for the gPC coefficients using the least squares method. It also implements an adaptive mesh (element) refinement strategy that checks for abrupt variations in the output, based on the second-order gPC approximation error, to track discontinuities or non-smoothness. In addition, the proposed method involves a criterion for checking possible dimensionality reduction and, consequently, the decomposition of the full-dimensional problem into a number of lower-dimensional subproblems. Specifically, this criterion checks all the existing interactions between input dimensions of a specific problem based on the high-dimensional model representation (HDMR) method, and therefore automatically provides the subproblems that only involve interacting dimensions. The efficiency of the approach is demonstrated using both smooth and non-smooth function examples with input dimensions up to 300, and the approach is compared against other existing algorithms.
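
The refine-where-non-smooth loop can be caricatured in one input dimension: fit a Legendre gPC expansion on each element by least squares and split the element when the trailing coefficients fail to decay, so that small elements accumulate around the discontinuity. The paper's criterion uses the second-order gPC approximation error together with HDMR-based decomposition; trailing-coefficient magnitude is a simplified stand-in, and the test function is invented.

import numpy as np
from numpy.polynomial import legendre

f = lambda z: np.where(z > 0.2, 1.0, np.sin(3 * z))   # non-smooth stand-in model

def gpc_fit(a, b, deg, n=40):
    z = np.linspace(a, b, n)
    t = 2 * (z - a) / (b - a) - 1                      # map element to [-1, 1]
    c, *_ = np.linalg.lstsq(legendre.legvander(t, deg), f(z), rcond=None)
    return c

def refine(a, b, tol=1e-3, depth=0):
    c = gpc_fit(a, b, deg=4)
    # poor decay of the trailing coefficients flags non-smoothness on the element
    if np.abs(c[-2:]).max() > tol and depth < 8:
        m = 0.5 * (a + b)
        return refine(a, m, tol, depth + 1) + refine(m, b, tol, depth + 1)
    return [(a, b)]

elements = refine(-1.0, 1.0)   # small elements cluster near the jump at z = 0.2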



J. Docampo-Sánchez, J.K. Ryan, M. Mirzargar, R.M. Kirby. “Multi-Dimensional Filtering: Reducing the Dimension Through Rotation,” In SIAM Journal on Scientific Computing, Vol. 39, No. 5, SIAM, pp. A2179--A2200. Jan, 2017.
DOI: 10.1137/16m1097845

ABSTRACT

Over the past few decades there has been a strong effort toward the development of Smoothness-Increasing Accuracy-Conserving (SIAC) filters for discontinuous Galerkin (DG) methods, designed to increase the smoothness and improve the convergence rate of the DG solution through postprocessing. These advantages can be exploited during flow visualization, for example, by applying the SIAC filter to DG data before streamline computations [M. Steffen, S. Curtis, R. M. Kirby, and J. K. Ryan, IEEE Trans. Vis. Comput. Graphics, 14 (2008), pp. 680--692]. However, introducing these filters in engineering applications can be challenging since a tensor product filter grows in support size as the field dimension increases, becoming computationally expensive. As an alternative, [D. Walfisch, J. K. Ryan, R. M. Kirby, and R. Haimes, J. Sci. Comput., 38 (2009), pp. 164--184] proposed a univariate filter implemented along the streamline curves. Until now, this technique remained a numerical experiment. In this paper we introduce the line SIAC filter and explore how the orientation, structure, and filter size affect the order of accuracy and global errors. We present theoretical error estimates showing how line filtering preserves the properties of traditional tensor product filtering, including smoothness and improvement in the convergence rate. Furthermore, numerical experiments are included, exhibiting how these filters achieve the same accuracy at significantly lower computational costs, becoming an attractive tool for the scientific visualization community.
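
Structurally, SIAC filtering is a convolution of the DG field with a kernel built from B-splines. The sketch below shows only that smoothing-convolution skeleton on synthetic data with artificial inter-element jumps; a genuine SIAC kernel is a specific moment-conserving linear combination of B-splines, and the line filter of this paper applies such a univariate kernel along a rotated line through the multidimensional field.

import numpy as np

def bspline_kernel(order, width):
    # sample a centered B-spline by repeated convolution of a box of `width` samples
    box = np.ones(width)
    k = box.copy()
    for _ in range(order - 1):
        k = np.convolve(k, box)
    return k / k.sum()

x = np.linspace(0, 1, 400)
u_exact = np.sin(2 * np.pi * x)
u_dg = u_exact + 0.02 * np.sign(np.sin(40 * np.pi * x))  # fake inter-element jumps

k = bspline_kernel(order=3, width=20)        # support ~ 3 elements of 20 samples
u_filtered = np.convolve(u_dg, k, mode="same")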



C. Gritton, J. Guilkey, J. Hooper, D. Bedrov, R. M. Kirby, M. Berzins. “Using the material point method to model chemical/mechanical coupling in the deformation of a silicon anode,” In Modelling and Simulation in Materials Science and Engineering, Vol. 25, No. 4, pp. 045005. 2017.

ABSTRACT

The lithiation and delithiation of a silicon battery anode is modeled using the material point method (MPM). The main challenge in modeling this process using the MPM is to simulate stress-dependent diffusion coupled with concentration-dependent stress within a material that undergoes large deformations. The MPM is chosen as the numerical method because of its ability to handle large deformations. A method for modeling diffusion within the MPM is described. A stress-dependent model for diffusivity and three different constitutive models that fully couple the equations for stress with the equations for diffusion are considered. Verification tests for the accuracy of the numerical implementations of the models and validation tests against experimental results show the accuracy of the approach. The fully coupled stress-diffusion model implemented in the MPM is then applied to modeling the lithiation and delithiation of silicon nanopillars.
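
The two-way coupling can be caricatured in one dimension with finite differences: concentration induces a (toy) stress, stress modulates the diffusivity, and the concentration field is advanced explicitly. All constants, the stress model, and the boundary conditions below are invented for illustration and bear no relation to the MPM discretization or the silicon constitutive models of the paper.

import numpy as np

# 1D finite-difference caricature of stress-dependent diffusion coupled with
# concentration-dependent stress
n, dx, dt = 100, 1e-2, 1e-5
c = np.zeros(n); c[0] = 1.0            # concentration, lithiation from the left
beta, D0, E = 0.5, 1.0, 1.0            # made-up material constants

for _ in range(2000):
    stress = -E * (c - c.mean())                 # toy concentration-induced stress
    D = D0 * np.exp(beta * stress)               # stress-dependent diffusivity
    flux = -0.5 * (D[1:] + D[:-1]) * np.diff(c) / dx
    c[1:-1] -= dt * np.diff(flux) / dx
    c[0], c[-1] = 1.0, c[-2]                     # fixed inlet, zero-flux outlet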



M. Mirzargar, A. Jallepalli, J.K. Ryan, R.M. Kirby. “Hexagonal Smoothness-Increasing Accuracy-Conserving Filtering,” In Journal of Scientific Computing, Vol. 73, No. 2-3, Springer Nature, pp. 1072--1093. Aug, 2017.
DOI: 10.1007/s10915-017-0517-5

ABSTRACT

Discontinuous Galerkin (DG) methods are a popular class of numerical techniques to solve partial differential equations due to their higher order of accuracy. However, the inter-element discontinuity of a DG solution hinders its utility in various applications, including visualization and feature extraction. This shortcoming can be alleviated by postprocessing of DG solutions to increase the inter-element smoothness. A class of postprocessing techniques proposed to increase the inter-element smoothness is SIAC filtering. In addition to increasing the inter-element continuity, SIAC filtering also raises the convergence rate from order k+1 to order 2k+1. Since the introduction of SIAC filtering for univariate hyperbolic equations by Cockburn et al. (Math Comput 72(242):577–606, 2003), many generalizations of SIAC filtering have been proposed. Recently, the idea of dimensionality reduction through rotation has been the focus of studies in which a univariate SIAC kernel has been used to postprocess a two-dimensional DG solution (Docampo-Sánchez et al. in Multi-dimensional filtering: reducing the dimension through rotation, 2016. arXiv preprint arXiv:1610.02317). However, the scope of theoretical development of multidimensional SIAC filters has never gone beyond the usage of tensor product multidimensional B-splines or the reduction of the filter dimension. In this paper, we define a new SIAC filter called hexagonal SIAC (HSIAC) that uses a nonseparable class of two-dimensional spline functions called hex splines. In addition to relaxing the separability assumption, the proposed HSIAC filter provides more symmetry than its tensor-product counterpart. We prove that the superconvergence property holds for a specific class of structured triangular meshes using HSIAC filtering and provide numerical results to demonstrate and validate our theoretical results.



M. Mirzargar, R.T. Whitaker, R.M. Kirby. “Exploration of Heterogeneous Data Using Robust Similarity,” In CoRR, 2017.

ABSTRACT

Heterogeneous data pose serious challenges to data analysis tasks, including exploration and visualization. Current techniques often utilize dimensionality reduction, aggregation, or conversion to numerical values to analyze heterogeneous data. However, the effectiveness of such techniques in finding subtle structures, such as the presence of multiple modes or of outliers, is hindered by the difficulty of finding the proper subspaces or the prior knowledge needed to reveal the structures. In this paper, we propose a generic similarity-based exploration technique that is applicable to a wide variety of datatypes and their combinations, including heterogeneous ensembles. The proposed concept of similarity has a close connection to statistical analysis and can be deployed for summarization, revealing fine structures such as the presence of multiple modes, and detection of anomalies or outliers. We then propose a visual encoding framework that enables the exploration of a heterogeneous dataset at different levels of detail and provides insightful information about both global and local structures. We demonstrate the utility of the proposed technique using various real datasets, including ensemble data.
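
One classical way to score similarity across mixed datatypes, in the spirit of (but distinct from) the robust similarity proposed here, is a Gower-style average of per-field similarities; the sketch below combines a numeric, a categorical, and an ordinal field. The records and fields are invented for illustration.

import numpy as np

# hypothetical heterogeneous records: numeric, categorical, and ordinal fields
data = [
    {"temp": 21.5, "sensor": "A", "level": 2},
    {"temp": 23.0, "sensor": "B", "level": 3},
    {"temp": 20.1, "sensor": "A", "level": 1},
]
t = np.array([d["temp"] for d in data])
t_range = t.max() - t.min()

def similarity(d1, d2):
    # Gower-style average of per-field similarities, each normalized to [0, 1]
    s_num = 1 - abs(d1["temp"] - d2["temp"]) / t_range
    s_cat = 1.0 if d1["sensor"] == d2["sensor"] else 0.0
    s_ord = 1 - abs(d1["level"] - d2["level"]) / 2     # 2 = ordinal range
    return (s_num + s_cat + s_ord) / 3

S = np.array([[similarity(a, b) for b in data] for a in data])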



T.A.J. Ouermi, A. Knoll, R.M. Kirby, M. Berzins. “OpenMP 4 Fortran Modernization of WSM6 for KNL,” In Proceedings of the Practice and Experience in Advanced Research Computing 2017 on Sustainability, Success and Impact, PEARC17, No. 12, ACM, pp. 12:1--12:8. 2017.
ISBN: 978-1-4503-5272-7
DOI: 10.1145/3093338.3093387

ABSTRACT

Parallel code portability in the petascale era requires modifying existing codes to support new architectures with large core counts and SIMD vector units. OpenMP is a well established and increasingly supported vehicle for portable parallelization. As architectures mature and compiler OpenMP implementations evolve, best practices for code modernization change as well. In this paper, we examine the impact of newer OpenMP features (in particular OMP SIMD) on the Intel Xeon Phi Knights Landing (KNL) architecture, applied in optimizing loops in the single moment 6-class microphysics module (WSM6) in the US Navy's NEPTUNE code. We find that with functioning OMP SIMD constructs, low thread invocation overhead on KNL and reduced penalty for unaligned access compared to previous architectures, one can leverage OpenMP 4 to achieve reasonable scalability with relatively minor reorganization of a production physics code.



T.A.J. Ouermi, A. Knoll, R.M. Kirby, M. Berzins. “Optimization Strategies for WRF Single-Moment 6-Class Microphysics Scheme (WSM6) on Intel Microarchitectures,” In Proceedings of the Fifth International Symposium on Computing and Networking (CANDAR 17), IEEE, 2017. Awarded Best Paper.

ABSTRACT

Optimizations in the petascale era require modifications of existing codes to take advantage of new architectures with large core counts and SIMD vector units. This paper examines high-level and low-level optimization strategies for numerical weather prediction (NWP) codes. These strategies employ thread-local structures of arrays (SOA) and OpenMP directives such as OMP SIMD. These optimization approaches are applied to the Weather Research and Forecasting single-moment 6-class microphysics scheme (WSM6) in the US Navy's NEPTUNE system. The results of this study indicate that the high-level approach with SOA and the low-level OMP SIMD improve thread and vector parallelism by increasing data and temporal locality. The modified version of WSM6 runs 70x faster than the original serial code. This is about 23.3x faster than the performance achieved by Ouermi et al., and 14.9x faster than the performance achieved by Michalakes et al.



M. Rautenhaus, M. Böttinger, S. Siemen, R. Hoffman, R.M. Kirby, M. Mirzargar, N. Rober, R. Westermann. “Visualization in Meteorology---A Survey of Techniques and Tools for Data Analysis Tasks,” In IEEE Transactions on Visualization and Computer Graphics, IEEE, pp. 1--1. 2017.
DOI: 10.1109/tvcg.2017.2779501

ABSTRACT

This article surveys the history and current state of the art of visualization in meteorology, focusing on visualization techniques and tools used for meteorological data analysis. We examine characteristics of meteorological data and analysis tasks, describe the development of computer graphics methods for visualization in meteorology from the 1960s to today, and visit the state of the art of visualization techniques and tools in operational weather forecasting and atmospheric research. We approach the topic from both the visualization and the meteorological side, showing visualization techniques commonly used in meteorological practice, and surveying recent studies in visualization research aimed at meteorological applications. Our overview covers visualization techniques from the fields of display design, 3D visualization, flow dynamics, feature-based visualization, comparative visualization and data fusion, uncertainty and ensemble visualization, interactive visual analysis, efficient rendering, and scalability and reproducibility. We discuss demands and challenges for visualization research targeting meteorological data analysis, highlighting aspects in demonstration of benefit, interactive visual analysis, seamless visualization, ensemble visualization, 3D visualization, and technical issues.


2015


T. Etiene, R.M. Kirby, C. Silva. “An Introduction to Verification of Visualization Techniques,” Morgan & Claypool Publishers, 2015.