
SCI Publications

2021


A. Dubey, M. Berzins, C. Burstedde, M. L. Norman, D. Unat, M. Wahib. “Structured Adaptive Mesh Refinement Adaptations to Retain Performance Portability With Increasing Heterogeneity,” In Computing in Science & Engineering, Vol. 23, No. 5, pp. 62-66. 2021.
ISSN: 1521-9615
DOI: 10.1109/MCSE.2021.3099603

ABSTRACT

Adaptive mesh refinement (AMR) is an important method that enables many mesh-based applications to run at effectively higher resolution within limited computing resources by allowing high resolution only where it is really needed. This advantage comes at a cost, however: greater complexity in the mesh management machinery and challenges with load distribution. With the current trend of increasing heterogeneity in hardware architecture, AMR presents an orthogonal axis of complexity. The usual techniques necessary to obtain reasonable performance, such as asynchronous communication and hierarchy management for parallelism and memory, are very challenging to reason about with AMR. Different groups working with AMR are bringing different approaches to this challenge. Here, we examine the design choices of several AMR codes and also the degree to which demands placed on them by their users influence these choices.



M. D. Foote, P. E. Dennison, P. R. Sullivan, K. B. O'Neill, A. K. Thorpe, D. R. Thompson, D. H. Cusworth, R. Duren, S. Joshi. “Impact of scene-specific enhancement spectra on matched filter greenhouse gas retrievals from imaging spectroscopy,” In Remote Sensing of Environment, Vol. 264, Elsevier, pp. 112574. 2021.

ABSTRACT

Matched filter techniques have been widely used for retrieval of greenhouse gas enhancements from imaging spectroscopy datasets. While multiple algorithmic techniques and refinements have been proposed, the greenhouse gas target spectrum used for concentration enhancement estimation has remained largely unaltered since the introduction of quantitative matched filter retrievals. The magnitude of retrieved methane and carbon dioxide enhancements, and thereby integrated mass enhancements (IME) and estimated flux of point-source emitters, is heavily dependent on this target spectrum. Current standard use of molecular absorption coefficients to create unit enhancement target spectra does not account for absorption by background concentrations of greenhouse gases, solar and sensor geometry, or atmospheric water vapor absorption. We introduce geometric and atmospheric parameters into the generation of scene-specific unit enhancement spectra to provide target spectra that are compatible with all greenhouse gas retrieval matched filter techniques. Specifically, we use radiative transfer modeling to model four parameters that are expected to change between scenes: solar zenith angle, column water vapor, ground elevation, and sensor altitude. These parameter values are well defined, with low variation within a single scene. A benchmark dataset consisting of ten AVIRIS-NG airborne imaging spectrometer scenes was used to compare IME retrieved using a matched filter algorithm. For methane plumes, IME resulting from use of standard, generic enhancement spectra varied from −22 to +28.7% compared to scene-specific enhancement spectra. Due to differences in spectral shape between the generic and scene-specific enhancement spectra, differences in methane plume IME were linked to surface spectral characteristics in addition to geometric and atmospheric parameters. IME differences were much larger for carbon dioxide plumes, with generic enhancement spectra producing integrated mass enhancements −76.1 to −48.1% compared to scene-specific enhancement spectra. Fluxes calculated from these integrated enhancements would vary by the same percentages, assuming equivalent wind conditions. Methane and carbon dioxide IME were most sensitive to changes in solar zenith angle and ground elevation. We introduce an interpolation approach that can efficiently generate scene-specific unit enhancement spectra for given sets of parameters. Scene-specific target spectra can improve confidence in greenhouse gas retrievals and flux estimates across collections of scenes with diverse geometric and atmospheric conditions.
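For orientation, the estimator that these matched filter variants share has a simple closed form; the paper's contribution is how the target spectrum is generated, not the filter itself. Below is a minimal sketch of the classical matched filter retrieval (function and variable names are ours, not from the paper):

```python
import numpy as np

def matched_filter(radiance, target):
    """Classical matched filter retrieval of per-pixel gas enhancement.

    radiance : (n_pixels, n_bands) scene radiances
    target   : (n_bands,) unit enhancement target spectrum -- the quantity
               the paper makes scene-specific via radiative transfer over
               solar zenith angle, column water vapor, ground elevation,
               and sensor altitude
    Returns per-pixel enhancement estimates alpha.
    """
    mu = radiance.mean(axis=0)                    # background mean spectrum
    cov = np.cov(radiance, rowvar=False)          # background covariance
    cov_inv_t = np.linalg.solve(cov, target)      # Sigma^{-1} t
    centered = radiance - mu
    # alpha = (x - mu)^T Sigma^{-1} t / (t^T Sigma^{-1} t)
    return centered @ cov_inv_t / (target @ cov_inv_t)
```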



K. Gadhave, J. Görtler, Z. Cutler, C. Nobre, O. Deussen, M. Meyer, J.M. Phillips, A. Lex. “Predicting intent behind selections in scatterplot visualizations,” In Information Visualization, Vol. 20, No. 4, pp. 207-228. 2021.
DOI: 10.1177/14738716211038604

ABSTRACT

Predicting and capturing an analyst’s intent behind a selection in a data visualization is valuable in two scenarios: First, a successful prediction of a pattern an analyst intended to select can be used to auto-complete a partial selection which, in turn, can improve the correctness of the selection. Second, knowing the intent behind a selection can be used to improve recall and reproducibility. In this paper, we introduce methods to infer an analyst’s intents behind selections in data visualizations, such as scatterplots. We describe intents based on patterns in the data, and identify algorithms that can capture these patterns. Upon an interactive selection, we compare the selected items with the results of a large set of computed patterns, and use various ranking approaches to identify the best pattern for an analyst’s selection. We store annotations and the metadata to reconstruct a selection, such as the type of algorithm and its parameterization, in a provenance graph. We present a prototype system that implements these methods for tabular data and scatterplots. Analysts can select a prediction to auto-complete partial selections and to seamlessly log their intents. We discuss implications of our approach for reproducibility and reuse of analysis workflows. We evaluate our approach in a crowd-sourced study, where we show that auto-completing selections improves accuracy, and that we can accurately capture pattern-based intent.
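As a rough illustration of the ranking step described above, the sketch below compares a brushed selection against a few computed patterns and ranks them by Jaccard similarity. The pattern algorithms and scoring shown here are our stand-ins; the paper evaluates a larger catalog of intents and ranking approaches:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import IsolationForest

def rank_intents(points, selected_idx, n_clusters=5):
    """Rank precomputed patterns against a user's scatterplot selection.

    points: (n, 2) scatterplot coordinates; selected_idx: indices the
    analyst brushed (assumed non-empty).
    """
    selected = set(selected_idx)
    candidates = {}
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(points)
    for c in range(n_clusters):
        candidates[f"cluster-{c}"] = set(np.flatnonzero(labels == c))
    out = IsolationForest().fit_predict(points)   # -1 marks outliers
    candidates["outliers"] = set(np.flatnonzero(out == -1))
    # Jaccard similarity between the selection and each candidate pattern
    scores = {name: len(selected & m) / len(selected | m)
              for name, m in candidates.items() if m}
    return sorted(scores.items(), key=lambda kv: -kv[1])
```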



K. Gadhave, Z.T. Cutler, A. Lex. “Reusing Interactive Analysis Workflows,” Subtitled “OSF Preprints,” 2021.

ABSTRACT

Interactive visual analysis has many advantages, but has the disadvantage that analysis processes and workflows cannot be easily stored and reused, which is in contrast to scripted analysis workflows using a programming language such as Python. In this paper, we introduce methods to semantically capture workflows in interactive visualization systems for different interactions such as selections, filters, categorizing/grouping, labeling, and aggregation. We design these workflows to be robust to updates in the dataset by capturing the semantics of underlying interactions, and, hence, they can be applied to updated datasets. We demonstrate this specification using a prototype that visualizes the data, shows interaction provenance, and allows generating workflows from this provenance. Finally, we introduce a Python library that can consume the workflow and apply it to the datasets, providing a seamless bridge between computational workflows and interactive visualization tools. We demonstrate our techniques using our UI prototype and Jupyter notebooks.
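A minimal sketch of what a semantically captured, replayable workflow can look like is shown below. The schema and operations are our own illustration, not the paper's prototype or its Python library:

```python
import pandas as pd

# Interactions recorded semantically (our own schema): because the steps
# describe *what* was done rather than row indices, they still apply when
# the dataset is updated.
workflow = [
    {"op": "filter", "column": "value", "min": 10.0},                  # range selection
    {"op": "aggregate", "by": "species", "column": "value", "how": "mean"},
]

def apply_workflow(df: pd.DataFrame, steps) -> pd.DataFrame:
    """Replay a recorded interaction workflow on a (possibly updated) dataset."""
    for step in steps:
        if step["op"] == "filter":
            df = df[df[step["column"]] >= step["min"]]
        elif step["op"] == "aggregate":
            df = (df.groupby(step["by"], as_index=False)[step["column"]]
                    .agg(step["how"]))
    return df

# Usage, e.g. inside a Jupyter notebook:
#   result = apply_workflow(updated_df, workflow)
```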



W. W. Good, B. Zenger, J. A. Bergquist, L. C. Rupp, K. K. Gillette, M. A.F. Gsell, G. Plank, R. S. MacLeod. “Quantifying the spatiotemporal influence of acute myocardial ischemia on volumetric conduction velocity,” In Journal of Electrocardiology, Vol. 66, Churchill Livingstone, pp. 86-94. 2021.

ABSTRACT

Introduction
Acute myocardial ischemia occurs when coronary perfusion to the heart is inadequate, which can perturb the highly organized electrical activation of the heart and can result in adverse cardiac events including sudden cardiac death. Ischemia is known to influence the ST and repolarization phases of the ECG, but it also has a marked effect on propagation (QRS); however, studies investigating propagation during ischemia have been limited.

Methods
We estimated conduction velocity (CV) and ischemic stress prior to and throughout 20 episodes of experimentally induced ischemia in order to quantify the progression and correlation of volumetric conduction changes during ischemia. To estimate volumetric CV, we 1) reconstructed the activation wavefront; 2) calculated the elementwise gradient to approximate propagation direction; and 3) estimated conduction speed (CS) with an inverse-gradient technique.
Results
We found that acute ischemia induces significant conduction slowing, reducing the global median speed by 20 cm/s. We observed a biphasic response in CS (acceleration then deceleration) early in some ischemic episodes. Furthermore, we noted a high temporal correlation between ST-segment changes and CS slowing; however, when comparing these changes over space, we found only moderate correlation (corr. = 0.60).
Discussion
This study is the first to report volumetric CS changes (acceleration and slowing) during episodes of acute ischemia in the whole heart. We showed that while CS changes progress in a similar time course to ischemic stress (measured by ST-segment shifts), the spatial overlap is complex and variable, showing extreme conduction slowing both in and around regions experiencing severe ischemia.
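The inverse-gradient step in the Methods has a compact form: conduction speed is the reciprocal of the activation-time gradient magnitude. A minimal sketch on a regular grid (the paper works element-wise on a volumetric mesh; names and the grid setting are ours):

```python
import numpy as np

def conduction_speed(activation_times, spacing):
    """Inverse-gradient conduction speed estimate: CS = 1 / |grad t|.

    activation_times : 3D array of activation times (ms) on a regular grid
    spacing          : (dx, dy, dz) grid spacing in cm
    Returns speed in cm/ms (multiply by 1000 for cm/s).
    """
    grads = np.gradient(activation_times, *spacing)   # dt/dx, dt/dy, dt/dz
    grad_mag = np.sqrt(sum(g**2 for g in grads))
    return 1.0 / np.maximum(grad_mag, 1e-9)           # guard divide-by-zero
```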



W. W. Good, K. Gillette, B. Zenger, J. Bergquist, L. C. Rupp, J. D. Tate, D. Anderson, M. Gsell, G. Plank, R. S. MacLeod. “Estimation and validation of cardiac conduction velocity and wavefront reconstruction using epicardial and volumetric data,” In IEEE Transactions on Biomedical Engineering, IEEE, 2021.
DOI: 10.1109/TBME.2021.3069792

ABSTRACT

Objective: In this study, we have used whole heart simulations parameterized with large animal experiments to validate three techniques (two from the literature and one novel) for estimating epicardial and volumetric conduction velocity (CV). Methods: We used an eikonal-based simulation model to generate ground truth activation sequences with prescribed CVs. Using the sampling density achieved experimentally, we examined the accuracy with which we could reconstruct the wavefront, and then examined the robustness of three CV estimation techniques to reconstruction-related error. We examined triangulation-based, inverse-gradient-based, and streamline-based techniques for estimating CV across the surface and within the volume of the heart. Results: The reconstructed activation times agreed closely with simulated values, with 50-70% of the volumetric nodes and 97-99% of the epicardial nodes within 1 ms of the ground truth. We found close agreement between the CVs calculated using reconstructed versus ground truth activation times, with differences in the median estimated CV on the order of 3-5% volumetrically and 1-2% superficially, regardless of which technique was used. Conclusion: Our results indicate that the wavefront reconstruction and CV estimation techniques are accurate, allowing us to examine changes in propagation induced by experimental interventions such as acute ischemia, ectopic pacing, or drugs. Significance: We implemented, validated, and compared the performance of a number of CV estimation techniques. The CV estimation techniques implemented in this study produce accurate, high-resolution CV fields that can be used to study propagation in the heart experimentally and clinically.
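Of the three techniques compared, the triangulation-style estimate is the easiest to illustrate: fit a local plane to activation times, giving a slowness vector whose inverse norm is the speed. A generic sketch under that reading (the paper's implementation operates on mesh elements rather than arbitrary point sets):

```python
import numpy as np

def triangulation_cv(xyz, times):
    """Local CV estimate from a small set of nodes with activation times.

    Least-squares fit of t ~ x . s + c gives a slowness vector s;
    speed = 1/|s| and propagation direction = s/|s|.
    xyz: (k, 3) node positions; times: (k,) activation times, k >= 4.
    """
    A = np.column_stack([xyz, np.ones(len(xyz))])
    coef, *_ = np.linalg.lstsq(A, times, rcond=None)
    slowness = coef[:3]
    speed = 1.0 / np.linalg.norm(slowness)
    direction = slowness / np.linalg.norm(slowness)
    return speed, direction
```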



A. A. Gooch, S. Petruzza, A. Gyulassy, G. Scorzelli, V. Pascucci, L. Rantham, W. Adcock, C. Coopmans. “Lessons learned towards the immediate delivery of massive aerial imagery to farmers and crop consultants,” In Autonomous Air and Ground Sensing Systems for Agricultural Optimization and Phenotyping VI, Vol. 11747, International Society for Optics and Photonics, pp. 22-34. 2021.
DOI: 10.1117/12.2587694

ABSTRACT

In this paper, we document lessons learned from using ViSOAR Ag Explorer™ in the fields of Arkansas and Utah in the 2018-2020 growing seasons. Our insights come from creating software with fast reading and writing of 2D aerial image mosaics for platform-agnostic collaborative analytics and visualization. We currently enable stitching in the field on a laptop without the need for an internet connection. The full resolution result is then available for instant streaming visualization and analytics via Python scripting. While our software, ViSOAR Ag Explorer™ removes the time and labor software bottleneck in processing large aerial surveys, enabling a cost-effective process to deliver actionable information to farmers, we learned valuable lessons with regard to the acquisition, storage, viewing, analysis, and planning stages of aerial data surveys. Additionally, with the ultimate goal of stitching thousands of images in minutes on board a UAV at the time of data capture, we performed preliminary tests for on-board, real-time stitching and analysis on USU AggieAir sUAS using lightweight computational resources. This system is able to create a 2D map while flying and allow interactive exploration of the full resolution data as soon as the platform has landed or has access to a network. This capability further speeds up the assessment process on the field and opens opportunities for new real-time photogrammetry applications. Flying and imaging over 1500-2000 acres per week provides up-to-date maps that give crop consultants a much broader scope of the field in general as well as providing a better view into planting and field preparation than could be observed from field level. Ultimately, our software and hardware could provide a much better understanding of weed presence and intensity or lack thereof.



W. W. Good, B. Zenger, J. A. Bergquist, L. C. Rupp, K. Gillette, N. Angel, D. Chou, G. Plank, R. S. MacLeod. “Combining endocardial mapping and electrocardiographic imaging (ECGI) for improving PVC localization: A feasibility study,” In Journal of Electrocardiology, 2021.
ISSN: 0022-0736
DOI: 10.1016/j.jelectrocard.2021.08.013

ABSTRACT

Introduction

Accurate reconstruction of cardiac activation wavefronts is crucial for clinical diagnosis, management, and treatment of cardiac arrhythmias. Furthermore, reconstruction of activation profiles within the intramural myocardium has long been impossible because electrical mapping was only performed on the endocardial surface. Recent advancements in electrocardiographic imaging (ECGI) have made endocardial and epicardial activation mapping possible. We propose a novel approach to use both endocardial and epicardial mapping in a combined approach to reconstruct intramural activation times.

Objective

To implement and validate a combined epicardial/endocardial intramural activation time reconstruction technique.
Methods

We used 11 simulations of ventricular activation paced from sites throughout the myocardial wall and extracted endocardial and epicardial activation maps at approximate clinical resolution. From these maps, we interpolated the activation times through the myocardium using thin-plate-spline radial basis functions. We evaluated activation time reconstruction accuracy using root-mean-squared error (RMSE) of activation times and the percent of nodes within 1 ms of the ground truth.
Results

Reconstructed intramural activation times showed an RMSE and percentage of nodes within 1 ms of the ground truth simulations of 3 ms and 70%, respectively. In the worst case, the RMSE and percentage of nodes were 4 ms and 60%, respectively.
Conclusion

We showed that a simple, yet effective combination of clinical endocardial and epicardial activation maps can accurately reconstruct intramural wavefronts. Furthermore, we showed that this approach provided robust reconstructions across multiple intramural stimulation sites.
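The interpolation step in the Methods maps directly onto off-the-shelf tools. A minimal sketch using SciPy's thin-plate-spline RBF interpolator as a stand-in for the authors' implementation (function and argument names are ours):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def intramural_activation(endo_xyz, endo_t, epi_xyz, epi_t, wall_xyz):
    """Interpolate activation times through the myocardial wall from
    combined endocardial + epicardial maps, using thin-plate-spline
    radial basis functions as described in the abstract.

    endo_xyz, epi_xyz : (m, 3) surface node positions
    endo_t, epi_t     : (m,) measured activation times
    wall_xyz          : (n, 3) intramural node positions to reconstruct
    """
    surf_xyz = np.vstack([endo_xyz, epi_xyz])
    surf_t = np.concatenate([endo_t, epi_t])
    rbf = RBFInterpolator(surf_xyz, surf_t, kernel="thin_plate_spline")
    return rbf(wall_xyz)   # estimated intramural activation times
```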



J. K. Holmen, D. Sahasrabudhe, M. Berzins. “A Heterogeneous MPI+PPL Task Scheduling Approach for Asynchronous Many-Task Runtime Systems,” In Proceedings of the Practice and Experience in Advanced Research Computing 2021 on Sustainability, Success and Impact (PEARC21), ACM, 2021.

ABSTRACT

Asynchronous many-task runtime systems and MPI+X hybrid parallelism approaches have shown promise for helping manage the increasing complexity of nodes in current and emerging high performance computing (HPC) systems, including those for exascale. The increasing architectural diversity, however, poses challenges for large legacy runtime systems emphasizing broad support for major HPC systems. Performance portability layers (PPL) have shown promise for helping manage this diversity. This paper describes a heterogeneous MPI+PPL task scheduling approach for combining these promising solutions with additional consideration for parallel third party libraries facing similar challenges to help prepare such a runtime for the diverse heterogeneous systems accompanying exascale computing. This approach is demonstrated using a heterogeneous MPI+Kokkos task scheduler and the accompanying portable abstractions [15] implemented in the Uintah Computational Framework, an asynchronous many-task runtime system, with additional consideration for hypre, a parallel third party library. Results are shown for two challenging problems executing workloads representative of typical Uintah applications. These results show performance improvements up to 4.4x when using this scheduler and the accompanying portable abstractions [15] to port a previously MPI-only problem to Kokkos::OpenMP and Kokkos::CUDA to improve multi-socket, multi-device node use. Good strong-scaling to 1,024 NVIDIA V100 GPUs and 512 IBM POWER9 processors is also shown using MPI+Kokkos::OpenMP+Kokkos::CUDA at scale.



X. Huang, P. Klacansky, S. Petruzza, A. Gyulassy, P.T. Bremer, V. Pascucci. “Distributed merge forest: a new fast and scalable approach for topological analysis at scale,” In Proceedings of the ACM International Conference on Supercomputing, pp. 367-377. 2021.

ABSTRACT

Topological analysis is used in several domains to identify and characterize important features in scientific data, and is now one of the established classes of techniques of proven practical use in scientific computing. The growth in parallelism and problem size tackled by modern simulations poses a particular challenge for these approaches. Fundamentally, the global encoding of topological features necessitates inter-process communication that limits their scaling. In this paper, we extend a new topological paradigm to the case of distributed computing, where the construction of a global merge tree is replaced by a distributed data structure, the merge forest, trading slower individual queries on the structure for faster end-to-end performance and scaling. Empirically, the queries that are most negatively affected also tend to have limited practical use. Our experimental results demonstrate the scalability of both the merge forest construction and the parallel queries needed in scientific workflows, and contrast this scalability with the two established alternatives that construct variations of a global tree.
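For context, the sequential kernel underlying merge trees and the merge forest is a value-ordered sweep with union-find; the paper's contribution is distributing this structure rather than assembling one global tree. A minimal sketch of that kernel (ours, not the paper's code):

```python
def merge_sweep(values, edges):
    """Sweep vertices from high to low scalar value; union-find records
    how superlevel-set components merge. Returns, for each vertex, the
    maximum whose component it belongs to (a merge-based segmentation).

    values: (n,) scalar field samples; edges: iterable of (u, v) mesh edges.
    """
    n = len(values)
    parent = list(range(n))
    peak = list(range(n))                 # maximum at which each component was born

    def find(i):                          # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    nbrs = [[] for _ in range(n)]
    for u, v in edges:
        nbrs[u].append(v)
        nbrs[v].append(u)

    seen = [False] * n
    for v in sorted(range(n), key=lambda i: -values[i]):
        seen[v] = True
        for u in nbrs[v]:
            if seen[u]:
                a, b = find(u), find(v)
                if a == b:
                    continue
                if values[peak[a]] < values[peak[b]]:
                    a, b = b, a           # component with the higher peak survives
                parent[b] = a
    return [peak[find(v)] for v in range(n)]
```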



M. H. Jensen, S. Joshi, S. Sommer. “Bridge Simulation and Metric Estimation on Lie Groups,” Subtitled “arXiv preprint arXiv:2106.03431,” 2021.

ABSTRACT

We present a simulation scheme for simulating Brownian bridges on complete and connected Lie groups. We show how this simulation scheme leads to absolute continuity of the Brownian bridge measure with respect to the guided process measure. This result generalizes the Euclidean result of Delyon and Hu to Lie groups. We present numerical results of the guided process in the Lie group $\mathrm{SO}(3)$. In particular, we apply importance sampling to estimate the metric on $\mathrm{SO}(3)$ using an iterative maximum likelihood method.
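In the Euclidean case that the paper generalizes, the Delyon-Hu guided process adds a pulling drift -(y - v)/(T - t) toward the bridge endpoint. A minimal Euler-Maruyama sketch of that Euclidean analogue (the Lie group scheme in the paper is more involved; names are ours):

```python
import numpy as np

def guided_bridge(b, x0, v, T=1.0, n_steps=1000, rng=None):
    """Simulate the Delyon--Hu guided process
        dY = b(Y) dt - (Y - v)/(T - t) dt + dW,
    whose law is absolutely continuous w.r.t. the bridge from x0 to v.
    b: drift function; x0, v: start and end points (arrays).
    """
    rng = rng or np.random.default_rng()
    dt = T / n_steps
    y = np.array(x0, dtype=float)
    path = [y.copy()]
    for k in range(n_steps - 1):          # stop short of the t = T singularity
        t = k * dt
        drift = b(y) - (y - v) / (T - t)
        y = y + drift * dt + np.sqrt(dt) * rng.standard_normal(y.shape)
        path.append(y.copy())
    path.append(np.array(v, dtype=float)) # bridge is pinned at v
    return np.array(path)
```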



X. Jiang, J. C. Font, J. A. Bergquist, B. Zenger, W. W. Good, D. H. Brooks, R. S. MacLeod, L. Wang. “Deep Adaptive Electrocardiographic Imaging with Generative Forward Model for Error Reduction,” In Functional Imaging and Modeling of the Heart: 11th International Conference, Vol. 12738, Springer Nature, pp. 471. 2021.

ABSTRACT

Accuracy of estimating the heart’s electrical activity with Electrocardiographic Imaging (ECGI) is challenging due to using an error-prone physics-based model (forward model). While obtaining better results than traditional numerical methods that follow the underlying physics, modern deep learning approaches ignore the physics behind the electrical propagation in the body and do not allow the use of patient-specific geometry. We introduce a deep-learning-based ECGI framework capable of understanding the underlying physics, aware of geometry, and adjustable to patient-specific data. Using a variational autoencoder (VAE), we uncover the forward model’s parameter space, and when solving the inverse problem, these parameters will be optimized to reduce the errors in the forward model. In both simulation and real data experiments, we demonstrated the ability of the presented framework to provide accurate reconstruction of the heart’s electrical potentials and localization of the earliest activation sites.
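A rough sketch of how such a VAE-in-the-loop inverse solve could look is below. The decoder, the forward-model interface, the attribute names, and the plain least-squares objective are all our assumptions for illustration, not the paper's implementation:

```python
import torch

def ecgi_inverse(decoder, forward_model, torso_obs, latent_dim, steps=500):
    """Jointly optimize heart potentials x and latent forward-model
    parameters z (spanned by a trained VAE decoder) so that the
    geometry-aware forward operator explains the torso measurements.
    All interfaces here are hypothetical stand-ins.
    """
    z = torch.zeros(latent_dim, requires_grad=True)
    x = torch.zeros(forward_model.n_heart_nodes, requires_grad=True)  # hypothetical attribute
    opt = torch.optim.Adam([z, x], lr=1e-2)
    for _ in range(steps):
        opt.zero_grad()
        H = forward_model(decoder(z))      # forward operator from decoded parameters
        loss = ((H @ x - torso_obs) ** 2).mean() + 1e-3 * (z ** 2).mean()
        loss.backward()
        opt.step()
    return x.detach(), z.detach()
```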



R. Kamali, J. Kump, E. Ghafoori, M. Lange, N. Hu, T. J. Bunch, D. J. Dosdall, R. S. Macleod, R. Ranjan. “Area Available for Atrial Fibrillation to Propagate Is an Important Determinant of Recurrence After Ablation,” In JACC: Clinical Electrophysiology, Elsevier, 2021.

ABSTRACT

This study sought to evaluate atrial fibrillation (AF) ablation outcomes based on scar patterns and contiguous area available for AF wavefronts to propagate.



V. Keshavarzzadeh, M. Alirezaei, T. Tasdizen, R. M. Kirby. “Image-Based Multiresolution Topology Optimization Using Deep Disjunctive Normal Shape Model,” In Computer-Aided Design, Vol. 130, Elsevier, pp. 102947. 2021.

ABSTRACT

We present a machine learning framework for predicting optimized structural topology designs using multiresolution data. Our approach primarily uses optimized designs from inexpensive coarse mesh finite element simulations for model training and generates high resolution images associated with simulation parameters that are not previously used. Our cost-efficient approach enables the designers to effectively search through possible candidate designs in situations where the design requirements rapidly change. The underlying neural network framework is based on a deep disjunctive normal shape model (DDNSM) which learns the mapping between the simulation parameters and segments of multiresolution images. Using this image-based analysis we provide a practical algorithm which enhances the predictability of the learning machine by determining a limited number of important parametric samples (i.e., samples of the simulation parameters) on which the high resolution training data is generated. We demonstrate our approach on benchmark compliance minimization problems including 3D topology optimization, where we show that the high-fidelity designs from the learning machine are close to optimal designs and can be used as effective initial guesses for the large-scale optimization problem.



V. Keshavarzzadeh, R. M. Kirby, A. Narayan. “Multilevel Designed Quadrature for Partial Differential Equations with Random Inputs,” In SIAM Journal on Scientific Computing, Vol. 43, No. 2, Society for Industrial and Applied Mathematics, pp. A1412-A1440. 2021.

ABSTRACT

We introduce a numerical method, multilevel designed quadrature, for computing the statistical solution of partial differential equations with random input data. Similar to multilevel Monte Carlo methods, our method relies on hierarchical spatial approximations in addition to a parametric/stochastic sampling strategy. A key ingredient in multilevel methods is the relationship between the spatial accuracy at each level and the number of stochastic samples required to achieve that accuracy. Our sampling is based on flexible quadrature points that are designed for a prescribed accuracy, which can yield less overall computational cost compared to alternative multilevel methods. We propose a constrained optimization problem that determines the number of samples to balance the approximation error with the computational budget. We further show that the optimization problem is convex and derive analytic formulas for the optimal number of points at each level. We validate the theoretical estimates and the performance of our multilevel method via numerical examples on a linear elasticity and a steady state heat diffusion problem.
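For reference, the multilevel structure the method builds on is the standard telescoping decomposition, with designed quadrature supplying the per-level samples; the notation below is ours:

```latex
% Q_ell = quantity of interest computed on spatial level ell, estimated
% with a designed quadrature rule of n_ell nodes and per-sample cost c_ell.
\mathbb{E}[Q_L] = \mathbb{E}[Q_0] + \sum_{\ell=1}^{L} \mathbb{E}[Q_\ell - Q_{\ell-1}],
\qquad
\min_{\{n_\ell\}} \; \sum_{\ell=0}^{L} n_\ell \, c_\ell
\quad \text{subject to total approximation error} \le \varepsilon .
```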



V. Keshavarzzadeh, R. M. Kirby, A. Narayan. “Robust topology optimization with low rank approximation using artificial neural networks,” In Computational Mechanics, 2021.
DOI: 10.1007/s00466-021-02069-3

ABSTRACT

We present a low rank approximation approach for topology optimization of parametrized linear elastic structures. The parametrization is considered on loading and stiffness of the structure. The low rank approximation is achieved by identifying a parametric connection among coarse finite element models of the structure (associated with different design iterates) and is used to inform the high fidelity finite element analysis. We build an Artificial Neural Network (ANN) map between low resolution design iterates and their corresponding interpolative coefficients (obtained from low rank approximations) and use this surrogate to perform high resolution parametric topology optimization. We demonstrate our approach on robust topology optimization with compliance constraints/objective functions and develop error bounds for the parametric compliance computations. We verify these parametric computations with more challenging quantities of interest such as the p-norm of von Mises stress. To conclude, we use our approach on a 3D robust topology optimization and show significant reduction in computational cost via quantitative measures.



R. Kirby, K. Nottingham, R. Roy, S. Godil, B. Catanzaro. “Guiding Global Placement With Reinforcement Learning,” Subtitled “arXiv preprint arXiv:2109.02631,” 2021.

ABSTRACT

Recent advances in GPU accelerated global and detail placement have reduced the time to solution by an order of magnitude. This advancement allows us to leverage data driven optimization (such as Reinforcement Learning) in an effort to improve the final quality of placement results. In this work we augment state-of-the-art, force-based global placement solvers with a reinforcement learning agent trained to improve the final detail placed Half Perimeter Wire Length (HPWL). We propose novel control schemes with either global or localized control of the placement process. We then train reinforcement learning agents to use these controls to guide placement to improved solutions. In both cases, the augmented optimizer finds improved placement solutions. Our trained agents achieve an average 1% improvement in final detail place HPWL across a range of academic benchmarks and more than 1% in global place HPWL on real industry designs.
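The objective the agents are trained to improve, HPWL, is a standard placement metric and simple to state: for each net, the half-perimeter of the bounding box of its pins, summed over all nets. A minimal sketch (the data layout is our choice):

```python
def hpwl(placement, nets):
    """Half-Perimeter Wire Length of a placement.

    placement : {cell_name: (x, y)} cell positions
    nets      : iterable of lists of cell names connected by each net
    """
    total = 0.0
    for net in nets:
        xs = [placement[c][0] for c in net]
        ys = [placement[c][1] for c in net]
        # half-perimeter of the net's bounding box
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total
```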



D. Kouřil, T. Isenberg, B. Kozlíková, M. Meyer, E. Gröller, I. Viola. “HyperLabels---Browsing of Dense and Hierarchical Molecular 3D Models,” In IEEE Transactions on Visualization and Computer Graphics, IEEE, 2021.
DOI: 10.1109/TVCG.2020.2975583

ABSTRACT

We present a method for the browsing of hierarchical 3D models in which we combine the typical navigation of hierarchical structures in a 2D environment---using clicks on nodes, links, or icons---with a 3D spatial data visualization. Our approach is motivated by large molecular models, for which the traditional single-scale navigational metaphors are not suitable. Multi-scale phenomena, e. g., in astronomy or geography, are complex to navigate due to their large data spaces and multi-level organization. Models from structural biology are in addition also densely crowded in space and scale. Cutaways are needed to show individual model subparts. The camera has to support exploration on the level of a whole virus, as well as on the level of a small molecule. We address these challenges by employing HyperLabels: active labels that---in addition to their annotational role---also support user interaction. Clicks on HyperLabels select the next structure to be explored. Then, we adjust the visualization to showcase the inner composition of the selected subpart and enable further exploration. Finally, we use a breadcrumbs panel for orientation and as a mechanism to traverse upwards in the model hierarchy. We demonstrate our concept of hierarchical 3D model browsing using two exemplary models from meso-scale biology.



A.S. Krishnapriyan, A. Gholami, S. Zhe, R.M. Kirby, M.W. Mahoney. “Characterizing possible failure modes in physics-informed neural networks,” Subtitled “arXiv preprint arXiv:2109.01050,” 2021.

ABSTRACT

Recent work in scientific machine learning has developed so-called physics-informed neural network (PINN) models. The typical approach is to incorporate physical domain knowledge as soft constraints on an empirical loss function and use existing machine learning methodologies to train the model. We demonstrate that, while existing PINN methodologies can learn good models for relatively trivial problems, they can easily fail to learn relevant physical phenomena even for simple PDEs. In particular, we analyze several distinct situations of widespread physical interest, including learning differential equations with convection, reaction, and diffusion operators. We provide evidence that the soft regularization in PINNs, which involves differential operators, can introduce a number of subtle problems, including making the problem ill-conditioned. Importantly, we show that these possible failure modes are not due to the lack of expressivity in the NN architecture, but that the PINN's setup makes the loss landscape very hard to optimize. We then describe two promising solutions to address these failure modes. The first approach is to use curriculum regularization, where the PINN's loss term starts from a simple PDE regularization, and becomes progressively more complex as the NN gets trained. The second approach is to pose the problem as a sequence-to-sequence learning task, rather than learning to predict the entire space-time at once. Extensive testing shows that we can achieve up to 1-2 orders of magnitude lower error with these methods as compared to regular PINN training.
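The convection test case mentioned above makes the soft-constraint setup concrete: the PDE residual u_t + beta * u_x enters the loss, and curriculum regularization trains with easy (small beta) coefficients first. A minimal PyTorch sketch under our own naming and model-input conventions:

```python
import torch

def pinn_loss(model, x, t, beta):
    """Soft-constraint PINN loss for the 1D convection equation
    u_t + beta * u_x = 0 (one of the paper's test problems).
    x, t: 1D tensors of collocation points; model maps (x, t) -> u.
    """
    x = x.requires_grad_(True)
    t = t.requires_grad_(True)
    u = model(torch.stack([x, t], dim=-1)).squeeze(-1)
    u_t, u_x = torch.autograd.grad(u.sum(), (t, x), create_graph=True)
    residual = u_t + beta * u_x
    return (residual ** 2).mean()

# Curriculum sketch: start from an easy PDE and progressively harden it,
# e.g. for beta in [1.0, 5.0, 10.0, 30.0]: train with pinn_loss(..., beta).
```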



L. Kühnel, T. Fletcher, S. Joshi, S. Sommer. “Latent Space Geometric Statistics,” In Pattern Recognition. ICPR International Workshops and Challenges: Virtual Event, January 10–15, 2021, Proceedings, Part VI, Springer International Publishing, pp. 163-178. 2021.

ABSTRACT

Deep generative models, e.g., variational autoencoders and generative adversarial networks, result in latent representation of observed data. The low dimensionality of the latent space provides an ideal setting for analysing high-dimensional data that would otherwise often be infeasible to handle statistically. The linear Euclidean geometry of the high-dimensional data space pulls back to a nonlinear Riemannian geometry on latent space where classical linear statistical techniques are no longer applicable. We show how analysis of data in their latent space representation can be performed using techniques from the field of geometric statistics. Geometric statistics provide generalisations of Euclidean statistical notions including means, principal component analysis, and maximum likelihood estimation of parametric distributions. Introduction to estimation procedures on latent space is considered, and the …
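The nonlinear Riemannian structure the abstract refers to is the pull-back of the Euclidean metric on data space through the decoder; in standard notation (ours, not taken from the paper):

```latex
% Pull-back metric on the latent space Z of a decoder f : Z -> X:
g_z(u, v) = u^{\top} J_f(z)^{\top} J_f(z)\, v,
\qquad
J_f(z) = \frac{\partial f}{\partial z}(z),
% so curve lengths, geodesics, and means in latent space are computed
% with respect to g_z rather than the Euclidean inner product.
```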