SCI Publications
2021
M. K. Ballard, R. Amici, V. Shankar, L. A. Ferguson, M. Braginsky, R. M. Kirby.
Towards an Extrinsic, CG-XFEM Approach Based on Hierarchical Enrichments for Modeling Progressive Fracture, Subtitled arXiv preprint arXiv:2104.14704, 2021.
We propose an extrinsic, continuous-Galerkin (CG), extended finite element method (XFEM) that generalizes the work of Hansbo and Hansbo to allow multiple Heaviside enrichments within a single element in a hierarchical manner. This approach enables complex, evolving XFEM surfaces in 3D that cannot be captured using existing CG-XFEM approaches. We describe an implementation of the method for 3D static elasticity with linearized strain for modeling open cracks as a salient step towards modeling progressive fracture. The implementation includes a description of the finite element model, hybrid implicit/explicit representation of enrichments, numerical integration method, and novel degree-of-freedom (DoF) enumeration algorithm. This algorithm supports an arbitrary number of enrichments within an element, while simultaneously maintaining a CG solution across elements. Additionally, our approach easily allows an implementation suitable for distributed computing systems. Enabled by the DoF enumeration algorithm, the proposed method lays the groundwork for a computational tool that efficiently models progressive fracture. To facilitate a discussion of the complex enrichment hierarchies, we develop enrichment diagrams to succinctly describe and visualize the relationships between the enrichments (and the fields they create) within an element. This also provides a unified language for discussing extrinsic XFEM methods in the literature. We compare several methods, relying on the enrichment diagrams to highlight their nuanced differences.
D. Balouek-Thomert, I. Rodero, M. Parashar.
Evaluating policy-driven adaptation on the Edge-to-Cloud Continuum, In IEEE/ACM HPC for Urgent Decision Making (UrgentHPC), pp. 11-20. 2021.
DOI: 10.1109/UrgentHPC54802.2021.00007
Developing data-driven applications requires developers and service providers to orchestrate data-to-discovery pipelines across distributed data sources and computing units. Realizing such pipelines poses two major challenges: programming analytics that react at runtime to unforeseen events, and adapting the resources and computing paths between the edge and the cloud. While these concerns are interdependent, they must be separated during the design process of the application and the deployment operations of the infrastructure. This work proposes a system stack for the adaptation of distributed analytics across the computing continuum. We implemented this software stack to evaluate its ability to continually balance the cost of computation or data movement against the value of operations to the application objectives. Using a disaster response application, we observe that the system can select appropriate configurations while managing trade-offs between user-defined constraints, quality of results, and resource utilization. The evaluation shows that our model is able to adapt to variations in the data input size, bandwidth, and CPU capacities with minimal deadline violations (close to 10%). These results are encouraging and should facilitate the creation of ad-hoc computing paths for urgent science and time-critical decision-making.
D. Balouek-Thomert, P. Silva, K. Fauvel, A. Costan, G. Antoniu, M. Parashar.
MDSC: Modelling Distributed Stream Processing across the Edge-to-Cloud Continuum, In DML-ICC 2021 workshop (held in conjunction with UCC 2021), December, 2021.
The growth of the Internet of Things is resulting in an explosion of data volumes at the Edge of the Internet. To reduce costs incurred due to data movement and centralized cloud-based processing, it is becoming increasingly important to process and analyze such data closer to the data sources. Exploiting Edge computing capabilities for stream-based processing is however challenging. It requires addressing the complex characteristics and constraints imposed by all the resources along the data path, as well as the large set of heterogeneous data processing and management frameworks. Consequently, the community needs tools that can facilitate the modeling of this complexity and can integrate the various components involved. In this work, we introduce MDSC, a hierarchical approach for modeling distributed stream-based applications on Edge-to-Cloud continuum infrastructures. We demonstrate how MDSC can be applied to a concrete real-life ML-based application - early earthquake warning - to help answer questions such as: when is it worth decentralizing the classification load from the Cloud to the Edge, and how?
M. Berzins.
Symplectic Time Integration Methods for the Material Point Method, Experiments, Analysis and Order Reduction, In WCCM-ECCOMAS2020 virtual Conference, Note: Minor typographical correction in March 2024, January, 2021.
The provision of appropriate time integration methods for the Material Point Method (MPM) involves considering stability, accuracy and energy conservation. A class of methods that addresses many of these issues is the widely used symplectic time integration methods. Such methods have good conservation properties and have the potential to achieve high accuracy. In this work we build on the work in [5] and consider high-order methods for the time integration of the Material Point Method. The results of practical experiments show that while high-order methods in both space and time have good accuracy initially, unless the problem has relatively little particle movement, the accuracy of the methods at later times is closer to that of low-order methods. A theoretical analysis explains these results as being similar to the stage error found in Runge-Kutta methods, though in this case the stage error arises from the MPM differentiations and interpolations from particles to grid and back again, particularly in cases in which there are many grid crossings.
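For orientation, the lowest-order member of this class is the semi-implicit (symplectic) Euler scheme, written here in generic particle form; higher-order symplectic methods compose stages of this kind. This is the standard textbook update, not the paper's specific MPM formulation:

\[
\mathbf{v}^{n+1} = \mathbf{v}^{n} + \Delta t \, \mathbf{a}(\mathbf{x}^{n}), \qquad
\mathbf{x}^{n+1} = \mathbf{x}^{n} + \Delta t \, \mathbf{v}^{n+1}.
\]

Updating the position with the new velocity, rather than the old one, is what preserves the symplectic structure and gives the favorable long-time energy behavior the abstract refers to.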
J. A. Bergquist, W. W. Good, B. Zenger, J. D. Tate, L. C. Rupp, R. S. MacLeod.
The Electrocardiographic Forward Problem: A Benchmark Study, In Computers in Biology and Medicine, Vol. 134, Pergamon, pp. 104476. 2021.
DOI: 10.1016/j.compbiomed.2021.104476
Background
Electrocardiographic forward problems are crucial components for noninvasive electrocardiographic imaging (ECGI) that compute torso potentials from cardiac source measurements. Forward problems have few sources of error as they are physically well posed and supported by mature numerical and computational techniques. However, the residual errors reported from experimental validation studies between forward computed and measured torso signals remain surprisingly high.
Objective
To test the hypothesis that incomplete cardiac source sampling, especially above the atrioventricular (AV) plane, is a major contributor to forward solution errors.
Methods
We used a modified Langendorff preparation suspended in a human-shaped electrolytic torso-tank and a novel pericardiac-cage recording array to thoroughly sample the cardiac potentials. With this carefully controlled experimental preparation, we minimized possible sources of error, including geometric error and torso inhomogeneities. We progressively removed recorded signals from above the atrioventricular plane to determine how the forward-computed torso-tank potentials were affected by incomplete source sampling.
Results
We studied 240 beats in total, recorded from three different activation sequence types (sinus, and posterior and anterior left-ventricular free-wall pacing) in each of two experiments. With complete sampling by the cage electrodes, all correlation metrics between computed and measured torso-tank potentials were above 0.93 (maximum 0.99). The mean root-mean-squared error across all beat types was also low, less than or equal to 0.10 mV. A precipitous drop in forward solution accuracy was observed when we included only cage measurements below the AV plane.
Conclusion
First, our forward computed potentials using complete cardiac source measurements set a benchmark for similar studies. Second, this study validates the importance of complete cardiac source sampling above the AV plane to produce accurate forward computed torso potentials. Testing ECGI systems and techniques with these more complete and highly accurate datasets will improve inverse techniques and noninvasive detection of cardiac electrical abnormalities.
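For orientation (the standard potential-based formulation, not quoted from the paper): the forward problem maps cardiac source potentials to torso potentials through a transfer matrix determined by the torso geometry and conductivities,

\[
\Phi_T = A \, \Phi_C,
\]

where \(\Phi_C\) collects the cardiac (here, cage) potentials, \(\Phi_T\) the torso-surface potentials, and \(A\) the forward transfer matrix. ECGI inverts this relation, which is why benchmarking the forward map against complete source sampling matters for inverse accuracy.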
M. Berzins.
Time Stepping with Space and Time Errors and Stability of the Material Point Method, In Proceedings of VII International Conference on Particle-Based Methods, PARTICLES 2021, Edited by P. Wriggers, M. Bischoff, E. Onate, A. Duster & T. Zohdi, 2021.
The choice of the time step for the Material Point Method (MPM) is often addressed by using a simple stability criterion, such as the speed of sound or a CFL condition. Recently there have been several advances in understanding the stability of MPM, ranging from non-linear stability analysis through to von Neumann-type approaches. While in many instances this works well, it is important to understand how this relates to the overall errors present in the method. Although it has been observed that spatial errors may dominate temporal ones at stable time steps, recent work has made more precise the sources and forms of the different MPM errors. This now makes it possible to understand how the different errors and the stability analysis are connected. At the same time, this requires simple computable estimates of the different errors in the material point method. The use of such estimates makes it possible to connect some of the errors introduced with the stability criteria used. A number of simple computational experiments are used to illustrate the theoretical results.
R. Bhalodia, S. Elhabian, L. Kavan, R. Whitaker.
Leveraging Unsupervised Image Registration for Discovery of Landmark Shape Descriptor, In Medical Image Analysis, Elsevier, pp. 102157. 2021.
In current biological and medical research, statistical shape modeling (SSM) provides an essential framework for the characterization of anatomy/morphology. Such analysis is often driven by the identification of a relatively small number of geometrically consistent features found across the samples of a population. These features can subsequently provide information about the population shape variation. Dense correspondence models can provide ease of computation and yield an interpretable low-dimensional shape descriptor when followed by dimensionality reduction. However, automatic methods for obtaining such correspondences usually require image segmentation followed by significant preprocessing, which is taxing in terms of both computation and human resources. In many cases, the segmentation and subsequent processing require manual guidance and anatomy-specific domain expertise. This paper proposes a self-supervised deep learning approach for discovering landmarks from images that can directly be used as a shape descriptor for subsequent analysis. We use landmark-driven image registration as the primary task to force the neural network to discover landmarks that register the images well. We also propose a regularization term that allows for robust optimization of the neural network and ensures that the landmarks uniformly span the image domain. The proposed method circumvents segmentation and preprocessing and directly produces a usable shape descriptor using just 2D or 3D images. In addition, we propose two variants of the training loss function that allow for prior shape information to be integrated into the model. We apply this framework to several 2D and 3D datasets to obtain their shape descriptors. We assess the efficacy of these shape descriptors in capturing shape information through different shape-driven applications, ranging from shape clustering to severity prediction to outcome diagnosis, depending on the data.
R. Bhalodia, S. Elhabian, J. Adams, W. Tao, L. Kavan, R. Whitaker.
DeepSSM: A Blueprint for Image-to-Shape Deep Learning Models, Subtitled arXiv preprint arXiv:2110.07152, 2021.
Statistical shape modeling (SSM) characterizes anatomical variations in a population of shapes generated from medical images. SSM requires consistent shape representation across samples in a shape cohort. Establishing this representation entails a processing pipeline that includes anatomy segmentation, re-sampling, registration, and non-linear optimization. These shape representations are then used to extract low-dimensional shape descriptors that facilitate subsequent analyses in different applications. However, the current process of obtaining these shape descriptors from imaging data relies on human and computational resources, requiring domain expertise for segmenting anatomies of interest. Moreover, this same taxing pipeline needs to be repeated to infer shape descriptors for new image data using a pre-trained/existing shape model. Here, we propose DeepSSM, a deep learning-based framework for learning the functional mapping from images to low-dimensional shape descriptors and their associated shape representations, thereby inferring statistical representation of anatomy directly from 3D images. Once trained using an existing shape model, DeepSSM circumvents the heavy and manual pre-processing and segmentation and significantly improves the computational time, making it a viable solution for fully end-to-end SSM applications. In addition, we introduce a model-based data-augmentation strategy to address data scarcity. Finally, this paper presents and analyzes two different architectural variants of DeepSSM with different loss functions using three medical datasets and their downstream clinical application. Experiments showcase that DeepSSM performs comparably to or better than the state-of-the-art SSM both quantitatively and on application-driven downstream tasks. Therefore, DeepSSM aims to provide a comprehensive blueprint for deep learning-based image-to-shape models.
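The core mapping DeepSSM learns admits a compact sketch: a 3D CNN regressing low-dimensional shape (e.g., PCA) scores directly from a volume. The layer sizes, descriptor dimension, and input resolution below are illustrative assumptions, not the paper's architecture:

```python
# Hedged sketch of an image-to-shape-descriptor regressor in PyTorch.
import torch
import torch.nn as nn

class ImageToShapeNet(nn.Module):
    def __init__(self, num_descriptors=16):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),                 # global pooling
        )
        self.regressor = nn.Linear(32, num_descriptors)

    def forward(self, volume):                       # volume: (B, 1, D, H, W)
        z = self.features(volume).flatten(1)
        return self.regressor(z)                     # predicted shape scores

net = ImageToShapeNet()
scores = net(torch.randn(2, 1, 64, 64, 64))          # two dummy 64^3 images
loss = nn.functional.mse_loss(scores, torch.zeros_like(scores))
```

Training such a regressor against descriptors produced by an existing shape model is what lets the network replace the segmentation-and-optimization pipeline at inference time.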
H. Bhatia, S. N. Petruzza, R. Anirudh, A. G. Gyulassy, R. M. Kirby, V. Pascucci, P. T. Bremer.
Data-Driven Estimation of Temporal-Sampling Errors in Unsteady Flows, 2021.
While computer simulations typically store data at the highest available spatial resolution, it is often infeasible to do so for the temporal dimension. Instead, the common practice is to store data at regular intervals, the frequency of which is strictly limited by the available storage and I/O bandwidth. However, this manner of temporal subsampling can cause significant errors in subsequent analysis steps. More importantly, since the intermediate data is lost, there is no direct way of measuring this error after the fact. One particularly important use case that is affected is the analysis of unsteady flows using pathlines, as it depends on an accurate interpolation across time. Although the potential problem with temporal undersampling is widely acknowledged, there currently does not exist a practical way to estimate the potential impact. This paper presents a simple-to-implement yet powerful technique to estimate the error in pathlines due to temporal subsampling. Given an unsteady flow, we compute pathlines at the given temporal resolution as well as subsamples thereof. We then compute the error induced due to various levels of subsampling and use it to estimate the error between the given resolution and the unknown ground truth. Using two turbulent flows, we demonstrate that our approach, for the first time, provides an accurate, a posteriori error estimate for pathline computations. This estimate will enable scientists to better understand the uncertainties involved in pathline-based analysis techniques and can lead to new uncertainty visualization approaches using the predicted errors.
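The procedure lends itself to a compact sketch. Below is a hedged, self-contained illustration of the idea using a synthetic analytic flow; the paper works with stored simulation data, where subsampled fields would additionally be interpolated in time, and the field, seed point, and sampling levels here are all invented for illustration:

```python
# Integrate a pathline at the stored temporal resolution and at coarser
# subsamples, then use the inter-level errors to extrapolate the error
# against the (unavailable) ground truth.
import numpy as np

def velocity(p, t):
    # Time-dependent gyre-like velocity field (illustrative only).
    x, y = p
    a = 0.25 * np.sin(2 * np.pi * t)
    return np.array([-np.pi * np.sin(np.pi * (x + a)) * np.cos(np.pi * y),
                      np.pi * np.cos(np.pi * (x + a)) * np.sin(np.pi * y)])

def advect(p0, times):
    # RK4 pathline integration through the given sequence of time samples.
    p = np.array(p0, dtype=float)
    for t0, t1 in zip(times[:-1], times[1:]):
        h = t1 - t0
        k1 = velocity(p, t0)
        k2 = velocity(p + 0.5 * h * k1, t0 + 0.5 * h)
        k3 = velocity(p + 0.5 * h * k2, t0 + 0.5 * h)
        k4 = velocity(p + h * k3, t1)
        p = p + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return p

p0 = (0.3, 0.4)
fine = np.linspace(0.0, 2.0, 513)        # "stored" temporal resolution
errors = []
for level in (2, 4, 8):                  # progressively coarser subsamples
    coarse = fine[::level]
    errors.append(np.linalg.norm(advect(p0, coarse) - advect(p0, fine)))
# If the endpoint error decays smoothly with the sampling rate, fit the
# trend and extrapolate one level beyond 'fine' to estimate the error
# between the stored resolution and the unknown ground truth.
print(errors)
```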
H. Bhatia, D. Hoang, N. Morrical, V. Pascucci, P.T. Bremer, P. Lindstrom.
AMM: Adaptive Multilinear Meshes, Subtitled arXiv preprint arXiv:2007.15219, 2021.
Adaptive representations are increasingly indispensable for reducing the in-memory and on-disk footprints of large-scale data. Usual solutions are designed broadly along two themes: reducing data precision, e.g., through compression, or adapting data resolution, e.g., using spatial hierarchies. Recent research suggests that combining the two approaches, i.e., adapting both resolution and precision simultaneously, can offer significant gains over using them individually. However, there currently exist no practical solutions to creating and evaluating such representations at scale. In this work, we present a new resolution-precision-adaptive representation to support hybrid data reduction schemes and offer an interface to existing tools and algorithms. Through novelties in spatial hierarchy, our representation, Adaptive Multilinear Meshes (AMM), provides considerable reduction in the mesh size. AMM creates a piecewise multilinear representation of uniformly sampled scalar data and can selectively relax or enforce constraints on conformity, continuity, and coverage, delivering a flexible adaptive representation. AMM also supports representing the function using mixed-precision values to further the achievable gains in data reduction. We describe a practical approach to creating AMM incrementally using arbitrary orderings of data and demonstrate AMM on six types of resolution and precision data streams. By interfacing with state-of-the-art rendering tools through VTK, we demonstrate the practical and computational advantages of our representation for visualization techniques. With an open-source release of our tool to create AMM, we make such evaluation of data reduction accessible to the community, which we hope will foster new opportunities and future data reduction schemes.
S. R. Black, A. Janson, M. Mahan, J. Anderson, C. R. Butson.
Identification of Deep Brain Stimulation Targets for Neuropathic Pain After Spinal Cord Injury Using Localized Increases in White Matter Fiber Cross‐Section, In Neuromodulation: Technology at the Neural Interface, John Wiley & Sons, Inc., 2021.
Objectives
The spinal cord injury (SCI) patient population is overwhelmingly affected by neuropathic pain (NP), a secondary condition for which therapeutic options are limited and have a low degree of efficacy. The objective of this study was to identify novel deep brain stimulation (DBS) targets that may theoretically benefit those with NP in the SCI patient population. We hypothesize that localized changes in white matter identified in SCI subjects with NP compared to those without NP could be used to develop an evidence‐based approach to DBS target identification.
K. M. Campbell, H. Dai, Z. Su, M. Bauer, P. T. Fletcher, S. C. Joshi.
Structural Connectome Atlas Construction in the Space of Riemannian Metrics, Subtitled arXiv, 2021.
The structural connectome is often represented by fiber bundles generated from various types of tractography. We propose a method of analyzing connectomes by representing them as a Riemannian metric, thereby viewing them as points in an infinite-dimensional manifold. After equipping this space with a natural metric structure, the Ebin metric, we apply object-oriented statistical analysis to define an atlas as the Fréchet mean of a population of Riemannian metrics. We demonstrate connectome registration and atlas formation using connectomes derived from diffusion tensors estimated from a subset of subjects from the Human Connectome Project.
S. H. Campbell, T. Bidone.
3D Model of Cell Migration and Proliferation in a Tissue Scaffold, In Biophysical Journal, Vol. 120, No. 3, Elsevier, pp. 265a. 2021.
Tissue scaffolds restore tissue functionality without the limitations of transplants. However, successful tissue growth depends on the interplay between scaffold properties and cell activities. It has been previously reported that scaffold porosity and Young's modulus affect cell migration and tissue generation. However, exactly how the geometrical and mechanical properties of a scaffold interplay with cell processes remains poorly understood, yet this understanding is essential for successful tissue growth. We developed a 3D computational model that simulates cell migration and proliferation on a scaffold. The model generates an adjustable 3D porous scaffold environment with a defined pore size and Young's modulus. Cells are treated as explicit spherical particles comparable in size to bone-marrow cells and are initially seeded randomly throughout the scaffold. Cells can create adhesions, proliferate, and independently migrate across pores in a random walk. Cell adhesions during migration follow the molecular-clutch mechanism, where traction force from the cells against the scaffold stiffness reinforces adhesion lifetime up to a threshold. We used the model to test how variations in cell proliferation rate, scaffold Young's modulus, and porosity affect cell migration speed. At a low proliferation rate (1 x 10−7 s−1), the spread of cell speeds is larger than at a high replication rate (1 x 10−6 s−1). A biphasic relation between Young's modulus and cell speed is also observed, reflecting the molecular-clutch mechanism at the level of individual adhesions. These observations are consistent with previous reports regarding fibroblast migration on collagen-glycosaminoglycan scaffolds. Additionally, our model shows that similar cell and pore diameters induce a crowding effect that decreases cell speed. The results from our study provide important insights into the biophysical mechanisms that govern cell motility on scaffolds with different properties for tissue engineering applications.
K.M. Campbell, H. Dai, Z. Su, M. Bauer, P.T. Fletcher, S.C. Joshi.
Integrated Construction of Multimodal Atlases with Structural Connectomes in the Space of Riemannian Metrics, Subtitled arXiv preprint arXiv:2109.09808, 2021.
The structural network of the brain, or structural connectome, can be represented by fiber bundles generated by a variety of tractography methods. While such methods give qualitative insights into brain structure, there is controversy over whether they can provide quantitative information, especially at the population level. In order to enable population-level statistical analysis of the structural connectome, we propose representing a connectome as a Riemannian metric, which is a point on an infinite-dimensional manifold. We equip this manifold with the Ebin metric, a natural metric structure for this space, to get a Riemannian manifold along with its associated geometric properties. We then use this Riemannian framework to apply object-oriented statistical analysis to define an atlas as the Fréchet mean of a population of Riemannian metrics. This formulation ties into the existing framework for diffeomorphic construction of image atlases, allowing us to construct a multimodal atlas by simultaneously integrating complementary white matter structure details from DWMRI and cortical details from T1-weighted MRI. We illustrate our framework with 2D data examples of connectome registration and atlas formation. Finally, we build an example 3D multimodal atlas using T1 images and connectomes derived from diffusion tensors estimated from a subset of subjects from the Human Connectome Project.
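The atlas in this and the related entry above is the Fréchet mean, the standard manifold generalization of the average: given connectomes represented as Riemannian metrics \(g_1, \dots, g_N\) and the geodesic distance \(d_{\mathrm{Ebin}}\) induced by the Ebin metric, the atlas is

\[
\bar{g} = \underset{g}{\arg\min} \; \sum_{i=1}^{N} d_{\mathrm{Ebin}}(g, g_i)^2,
\]

which reduces to the ordinary mean when the distance is Euclidean.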
M. Carlson, X. Zheng, H. Sundar, G. E. Karniadakis, R. M. Kirby.
An open-source parallel code for computing the spectral fractional Laplacian on 3D complex geometry domains, In Computer Physics Communications, Vol. 261, North-Holland, pp. 107695. 2021.
We present a spectral element algorithm and open-source code for computing the fractional Laplacian defined by the eigenfunction expansion on finite 2D/3D complex domains with both homogeneous and nonhomogeneous boundaries. We demonstrate the scalability of the spectral element algorithm on large clusters by constructing the fractional Laplacian based on computed eigenvalues and eigenfunctions using up to thousands of CPUs. To demonstrate the accuracy of this eigen-based approach for computing the fractional Laplacian, we approximate the solutions of the fractional diffusion equation using the computed eigenvalues and eigenfunctions on a 2D quadrilateral, and on a 3D cubic and cylindrical domain, and compare the results with the contrived solutions to demonstrate fast convergence. Subsequently, we present simulation results for a fractional diffusion equation on a hand-shaped domain discretized with 3D hexahedra, as well as on a domain constructed from the Hanford site geometry corresponding to nonzero Dirichlet boundary conditions. Finally, we apply the algorithm to solve the surface quasi-geostrophic (SQG) equation on a 2D square with periodic boundaries. Simulation results demonstrate the accuracy, efficiency, and geometric flexibility of our algorithm and that our algorithm can capture the subtle dynamics of anomalous diffusion modeled by the fractional Laplacian on complex geometry domains. The included open-source code is the first of its kind.
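The eigenfunction-expansion (spectral) definition the abstract refers to is standard: with Laplacian eigenpairs \(-\Delta \phi_k = \lambda_k \phi_k\) on the domain \(\Omega\) under the prescribed homogeneous boundary conditions, the fractional operator of order \(s \in (0,1)\) acts diagonally on the eigenbasis,

\[
(-\Delta)^s u = \sum_{k=1}^{\infty} \lambda_k^{s} \, \hat{u}_k \, \phi_k, \qquad
\hat{u}_k = \int_{\Omega} u \, \phi_k \, dx,
\]

which is why the algorithm's central cost is computing enough eigenpairs of the discretized Laplacian on the complex geometry.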
K. R. Carney, A. M. Khan, S. C. Samson, N. Mittal, S. J. Han, M. C. Mendoza, T. C. Bidone.
Nascent adhesions differentially regulate lamellipodium velocity and persistence, Subtitled bioRxiv, 2021.
DOI: 10.1101/2021.11.15.468602
Cell migration is essential to physiological and pathological biology. Migration is driven by the motion of a leading edge, in which actin polymerization pushes against the edge and adhesions transmit traction to the substrate while membrane tension increases. How the actin and adhesions synergistically control edge protrusion remains elusive. We addressed this question by developing a computational model in which the Brownian ratchet mechanism governs actin filament polymerization against the membrane and the molecular clutch mechanism governs adhesion to the substrate (BR-MC model). Our model predicted that actin polymerization is the most significant driver of protrusion, as actin had a greater effect on protrusion than adhesion assembly. Increasing the lifetime of nascent adhesions also enhanced velocity, but decreased the protrusion's motional persistence, because filaments maintained against the cell edge ceased polymerizing as membrane tension increased. We confirmed the model predictions with measurement of adhesion lifetime and edge motion in migrating cells. Adhesions with longer lifetime were associated with faster protrusion velocity and shorter persistence. Experimentally increasing adhesion lifetime increased velocity but decreased persistence. We propose a mechanism for actin polymerization-driven, adhesion-dependent protrusion in which balanced nascent adhesion assembly and lifetime generates protrusions with the power and persistence to drive migration.
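A hedged sketch of the feedback loop the abstract describes (not the authors' BR-MC code): a Brownian-ratchet load-velocity law slows polymerization as membrane tension accumulates, while a clutch-like engagement fraction grows with adhesion lifetime, so longer-lived adhesions protrude faster but stall sooner. All parameter values below are assumed for illustration:

```python
import numpy as np

kBT, delta = 4.1, 2.7      # pN*nm; nm of edge advance per actin monomer
v_poly0 = 100.0            # nm/s unloaded polymerization speed (assumed)
k_tension = 0.02           # pN of tension gained per nm protruded (assumed)

def protrusion(adhesion_lifetime, k_on=1.0, t_end=60.0, dt=0.01):
    # Clutch engagement: bound fraction of a two-state binding/unbinding cycle.
    engaged = k_on / (k_on + 1.0 / adhesion_lifetime)
    x, tension, persistence = 0.0, 0.0, t_end
    for step in range(int(t_end / dt)):
        v_poly = v_poly0 * np.exp(-tension * delta / kBT)  # Brownian ratchet
        v_edge = engaged * v_poly   # engaged clutches convert polymerization
        x += v_edge * dt            # into forward edge motion
        tension += k_tension * v_edge * dt
        if v_edge < 0.05 * v_poly0 and persistence == t_end:
            persistence = step * dt  # time at which the protrusion stalls
    return x / t_end, persistence

for tau in (1.0, 5.0, 25.0):         # adhesion lifetimes, seconds
    v, p = protrusion(tau)
    print(f"lifetime {tau:5.1f} s -> mean velocity {v:6.1f} nm/s, "
          f"persistence {p:5.1f} s")
```

Running the loop reproduces the qualitative trade-off reported in the abstract: higher engagement yields faster early protrusion but faster tension build-up and hence an earlier stall.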
Y. Chen, C. McNabb, T. Bidone.
Computational Model of E-cadherin Clustering under Cortical Tension, In Biophysical Journal, Vol. 120, No. 3, Elsevier, pp. 236a. 2021.
E-cadherins are adhesion proteins that play a critical role in the formation of cell-cell junctions for several physiological processes, including tissue development and homeostasis. The formation of E-cadherin clusters involves extracellular trans-and cis-associations between cadherin ectodomains and stabilization through intracellular coupling with the contractile actomyosin cortex. The dynamic remodeling of cell-cell junctions largely depends on cortical tension, but previous modeling frameworks did not incorporate this effect. In order to gain insights into the effects of cortical tension on the dynamic properties of E-cadherin clusters, here we developed a computational model based on Brownian dynamics. The model considers individual cadherins as explicit point particles undergoing cycles of lateral diffusion on two parallel surfaces that mimic the membrane of neighboring cells. E-cadherins transit between …
Y. Chen, L. Ji, A. Narayan, Z. Xu.
L1-based reduced over collocation and hyper reduction for steady state and time-dependent nonlinear equations, In Journal of Scientific Computing, Vol. 87, No. 1, Springer US, pp. 1--21. 2021.
The task of repeatedly solving parametrized partial differential equations (pPDEs) in optimization, control, or interactive applications makes it imperative to design highly efficient and equally accurate surrogate models. The reduced basis method (RBM) presents itself as such an option. Accompanied by a mathematically rigorous error estimator, RBM carefully constructs a low-dimensional subspace of the parameter-induced high fidelity solution manifold on which an approximate solution is computed. It can improve efficiency by several orders of magnitude leveraging an offline-online decomposition procedure. However, this decomposition, usually implemented with aid from the empirical interpolation method (EIM) for nonlinear and/or parametric-nonaffine PDEs, can be challenging to implement, or results in severely degraded online efficiency. In this paper, we augment and extend the EIM approach as a direct solver, as opposed to an assistant, for solving nonlinear pPDEs on the reduced level. The resulting method, called Reduced Over-Collocation method (ROC), is stable and capable of avoiding efficiency degradation exhibited in traditional applications of EIM. Two critical ingredients of the scheme are collocation at about twice as many locations as the dimension of the reduced approximation space, and an efficient L1-norm-based error indicator for the strategic selection of the parameter values whose snapshots span the reduced approximation space. Together, these two ingredients ensure that the proposed L1-ROC scheme is both offline- and online-efficient. A distinctive feature is that the efficiency degradation appearing in alternative RBM approaches that utilize EIM for nonlinear and nonaffine problems is circumvented, both in the offline and online stages. Numerical tests on different families of time-dependent and steady-state nonlinear problems demonstrate the high efficiency and accuracy of L1-ROC and its superior stability performance.
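As a toy illustration of an L1 indicator driving greedy basis selection (this sketch scans precomputed snapshots of an invented parametric family; the actual L1-ROC instead collocates the PDE residual at roughly twice as many points as the reduced dimension, without needing full-order solves at every candidate):

```python
# Greedy reduced-basis construction with an L1-style error indicator.
import numpy as np

params = np.linspace(0.1, 5.0, 200)            # training parameter set
x = np.linspace(0.0, 1.0, 400)
snapshots = np.array([np.exp(-mu * x) * np.sin(np.pi * x) for mu in params])

basis = np.empty((0, x.size))
chosen = []
for _ in range(6):                              # build a 6-dim reduced space
    if basis.shape[0] == 0:
        residuals = snapshots
    else:
        Q, _ = np.linalg.qr(basis.T)            # orthonormalize current basis
        residuals = snapshots - (snapshots @ Q) @ Q.T
    indicator = np.abs(residuals).sum(axis=1)   # L1-norm error indicator
    k = int(indicator.argmax())                 # worst-approximated parameter
    chosen.append(params[k])
    basis = np.vstack([basis, snapshots[k]])
print("greedy-selected parameters:", np.round(chosen, 2))
```

The L1 norm is cheap to evaluate and, as in the paper, serves only to rank candidates, so its lack of sharpness relative to a rigorous estimator does not degrade the selected basis much in practice.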
J. Chilleri, Y. He, D. Bedrov, R. M. Kirby.
Optimal allocation of computational resources based on Gaussian process: Application to molecular dynamics simulations, In Computational Materials Science, Vol. 188, Elsevier, pp. 110178. 2021.
Simulation models have been utilized in a wide range of real-world applications for behavior predictions of complex physical systems or material designs of large structures. While extensive simulation is mathematically preferable, external limitations such as available resources are often necessary considerations. With a fixed computational resource (i.e., total simulation time), we propose a Gaussian process-based numerical optimization framework for optimal time allocation over simulations at different locations, so that a surrogate model with uncertainty estimation can be constructed to approximate the full simulation. The proposed framework is demonstrated first via two synthetic problems, and later using a real test case of a glass-forming system with divergent dynamic relaxations where a Gaussian process is constructed to estimate the diffusivity and its uncertainty with respect to the temperature.
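A minimal sketch of the allocation loop under stated assumptions: the toy `simulate` function, whose noise shrinks with invested time, stands in for the molecular dynamics runs, and each budget unit is spent at the location where the Gaussian process surrogate is most uncertain.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)

def simulate(temp, time_alloc):
    # Toy observable: noise shrinks as more simulation time is invested.
    return np.exp(-1.0 / temp) + rng.normal(0.0, 0.2 / np.sqrt(time_alloc))

temps = np.linspace(0.5, 2.0, 8).reshape(-1, 1)   # candidate "locations"
time_spent = np.full(len(temps), 1.0)             # initial screening runs
y = np.array([simulate(t[0], 1.0) for t in temps])

budget = 40.0                                     # fixed total resource
while budget > 0.0:
    gp = GaussianProcessRegressor(RBF(0.5) + WhiteKernel(0.01))
    gp.fit(temps, y)
    _, std = gp.predict(temps, return_std=True)
    k = int(np.argmax(std))                       # most uncertain location
    time_spent[k] += 1.0                          # invest one more time unit
    y[k] = simulate(temps[k, 0], time_spent[k])   # lower-noise re-estimate
    budget -= 1.0

print(dict(zip(temps.ravel().round(2), time_spent)))
```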
D. Dai, Y. Epshteyn, A. Narayan.
Hyperbolicity-Preserving and Well-Balanced Stochastic Galerkin Method for Two-Dimensional Shallow Water Equations, In SIAM Journal on Scientific Computing, Vol. 43, No. 2, Society for Industrial and Applied Mathematics, pp. A929-A952. 2021.
Stochastic Galerkin formulations of the two-dimensional shallow water systems parameterized with random variables may lose hyperbolicity, and hence change the nature of the original model. In this work, we present a hyperbolicity-preserving stochastic Galerkin formulation by carefully selecting the polynomial chaos approximations to the nonlinear terms in the shallow water equations. We derive a sufficient condition to preserve the hyperbolicity of the stochastic Galerkin system which requires only a finite collection of positivity conditions on the stochastic water height at selected quadrature points in parameter space. Based on our theoretical results for the stochastic Galerkin formulation, we develop a corresponding well-balanced hyperbolicity-preserving central-upwind scheme. We demonstrate the accuracy and the robustness of the new scheme on several challenging numerical tests.
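For reference, the deterministic 2D shallow water system with bottom topography \(B(x,y)\) reads, in standard form,

\[
\begin{aligned}
h_t + (hu)_x + (hv)_y &= 0,\\
(hu)_t + \bigl(hu^2 + \tfrac{g}{2}h^2\bigr)_x + (huv)_y &= -g h B_x,\\
(hv)_t + (huv)_x + \bigl(hv^2 + \tfrac{g}{2}h^2\bigr)_y &= -g h B_y,
\end{aligned}
\]

with water height \(h\), velocities \((u,v)\), and gravitational constant \(g\). The nonlinear flux terms \(hu^2\), \(huv\), and \(hv^2\) are the ones whose polynomial chaos approximations must be chosen carefully for the stochastic Galerkin projection to remain hyperbolic.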