
SCI Publications

2024


M. Berzins. “Computational Error Estimation for the Material Point Method in 1D and 2D,” In VIII International Conference on Particle-Based Methods, PARTICLES 2023, 2024.

ABSTRACT

The Material Point Method (MPM) is widely used for challenging applications in engineering and animation, but lags behind some other methods in terms of error analysis and computable error estimates. The complexity and nonlinearity of the equations solved by the method, and its reliance both on a mesh and on moving particles, make error estimation challenging. Preliminary error analysis of a simple MPM method has shown the global error to be first order in space and time for a widely-used variant of the Material Point Method. The time-dependent nature of MPM also complicates matters, as both space and time errors and their evolution must be considered, leading to the use of explicit error transport equations. The preliminary use of an error estimator based on this transport approach has yielded promising results in the 1D case. Another source of error in MPM is grid-crossing error, which can be problematic for large deformations and is identified by the error estimator used. The extension of the error estimation approach to two space dimensions is considered and, together with additional algorithmic and theoretical results, shown to give promising results in preliminary computational experiments.



J. K. Holmen, M. García, A. Bagusetty, V. Madananth, A. Sanderson, M. Berzins. “Making Uintah Performance Portable for Department of Energy Exascale Testbeds,” In Euro-Par 2023: Parallel Processing, pp. 1--12. 2024.

ABSTRACT

To help ease ports to forthcoming Department of Energy (DOE) exascale systems, testbeds have been made available to select users. These testbeds are helpful for preparing codes to run on the same hardware and similar software as in their respective exascale systems. This paper describes how the Uintah Computational Framework, an open-source asynchronous many-task (AMT) runtime system, has been modified to be performance portable across the DOE Crusher, DOE Polaris, and DOE Sunspot testbeds in preparation for portable simulations across the exascale DOE Frontier and DOE Aurora systems. The Crusher, Polaris, and Sunspot testbeds feature the AMD MI250X, NVIDIA A100, and Intel PVC GPUs, respectively. This performance portability has been made possible by extending Uintah’s intermediate portability layer [18] to additionally support the Kokkos::HIP, Kokkos::OpenMPTarget, and Kokkos::SYCL back-ends. This paper also describes notable updates to Uintah’s support for Kokkos, which were required to make this extension possible. Results are shown for a challenging radiative heat transfer calculation, central to the University of Utah’s predictive boiler simulations. These results demonstrate single-source portability across AMD-, NVIDIA-, and Intel-based GPUs using various Kokkos back-ends.



S. Subramaniam, M. Miller, C.R. Johnson, et al. “Grand Challenges at the Interface of Engineering and Medicine,” In IEEE Open Journal of Engineering in Medicine and Biology, Vol. 5, IEEE, pp. 1--13. 2024.
DOI: 10.1109/OJEMB.2024.3351717

ABSTRACT

Over the past two decades, Biomedical Engineering has emerged as a major discipline that bridges societal needs of human health care with the development of novel technologies. Every medical institution is now equipped at varying degrees of sophistication with the ability to monitor human health in both non-invasive and invasive modes. The multiple scales at which human physiology can be interrogated provide a profound perspective on health and disease. We are at the nexus of creating “avatars” (herein defined as an extension of “digital twins”) of human patho/physiology to serve as paradigms for interrogation and potential intervention. Motivated by the emergence of these new capabilities, the IEEE Engineering in Medicine and Biology Society, the Departments of Biomedical Engineering at Johns Hopkins University and Bioengineering at the University of California at San Diego sponsored an interdisciplinary workshop to define the grand challenges that face biomedical engineering and the mechanisms to address these challenges. The Workshop identified five grand challenges with cross-cutting themes and provided a roadmap for new technologies, identified new training needs, and defined the types of interdisciplinary teams needed for addressing these challenges. The themes presented in this paper include: 1) accumedicine through creation of avatars of cells, tissues, organs, and the whole human; 2) development of smart and responsive devices for human function augmentation; 3) exocortical technologies to understand brain function and treat neuropathologies; 4) the development of approaches to harness the human immune system for health and wellness; and 5) new strategies to engineer genomes and cells.


2023


J. Adams, S. Elhabian. “Fully Bayesian VIB-DeepSSM,” Subtitled “arXiv:2305.05797,” 2023.

ABSTRACT

Statistical shape modeling (SSM) enables population-based quantitative analysis of anatomical shapes, informing clinical diagnosis. Deep learning approaches predict correspondence-based SSM directly from unsegmented 3D images but require calibrated uncertainty quantification, motivating Bayesian formulations. Variational information bottleneck DeepSSM (VIB-DeepSSM) is an effective, principled framework for predicting probabilistic shapes of anatomy from images with aleatoric uncertainty quantification. However, VIB is only half-Bayesian and lacks epistemic uncertainty inference. We derive a fully Bayesian VIB formulation from both the probably approximately correct (PAC)-Bayes and variational inference perspectives. We demonstrate the efficacy of two scalable approaches for Bayesian VIB with epistemic uncertainty: concrete dropout and batch ensemble. Additionally, we introduce a novel combination of the two that further enhances uncertainty calibration via multimodal marginalization. Experiments on synthetic shapes and left atrium data demonstrate that the fully Bayesian VIB network predicts SSM from images with improved uncertainty reasoning without sacrificing accuracy.



J. Adams, S. Elhabian. “Can point cloud networks learn statistical shape models of anatomies?,” Subtitled “arXiv:2305.05610,” 2023.

ABSTRACT

Statistical Shape Modeling (SSM) is a valuable tool for investigating and quantifying anatomical variations within populations of anatomies. However, traditional correspondence-based SSM generation methods require a time-consuming re-optimization process each time a new subject is added to the cohort, making the inference process prohibitive for clinical research. Additionally, they require complete geometric proxies (e.g., high-resolution binary volumes or surface meshes) as input shapes to construct the SSM. Unordered 3D point cloud representations of shapes are more easily acquired from various medical imaging practices (e.g., thresholded images and surface scanning). Point cloud deep networks have recently achieved remarkable success in learning permutation-invariant features for different point cloud tasks (e.g., completion, semantic segmentation, classification). However, their application to learning SSM from point clouds is to date unexplored. In this work, we demonstrate that existing point cloud encoder-decoder-based completion networks can provide an untapped potential for SSM, capturing population-level statistical representations of shapes while reducing the inference burden and relaxing the input requirement. We discuss the limitations of these techniques for the SSM application and suggest future improvements. Our work paves the way for further exploration of point cloud deep learning for SSM, a promising avenue for advancing shape analysis literature and broadening SSM to diverse use cases.



J. Adams, S. Elhabian. “Point2SSM: Learning Morphological Variations of Anatomies from Point Cloud,” Subtitled “arXiv:2305.14486,” 2023.

ABSTRACT

We introduce Point2SSM, a novel unsupervised learning approach that can accurately construct correspondence-based statistical shape models (SSMs) of anatomy directly from point clouds. SSMs are crucial in clinical research for analyzing the population-level morphological variation in bones and organs. However, traditional methods for creating SSMs have limitations that hinder their widespread adoption, such as the need for noise-free surface meshes or binary volumes, reliance on assumptions or predefined templates, and simultaneous optimization of the entire cohort leading to lengthy inference times given new data. Point2SSM overcomes these barriers by providing a data-driven solution that infers SSMs directly from raw point clouds, reducing inference burdens and increasing applicability as point clouds are more easily acquired. Deep learning on 3D point clouds has seen recent success in unsupervised representation learning, point-to-point matching, and shape correspondence; however, their application to constructing SSMs of anatomies is largely unexplored. In this work, we benchmark state-of-the-art point cloud deep networks on the task of SSM and demonstrate that they are not robust to the challenges of anatomical SSM, such as noisy, sparse, or incomplete input and significantly limited training data. Point2SSM addresses these challenges via an attention-based module that provides correspondence mappings from learned point features. We demonstrate that the proposed method significantly outperforms existing networks in terms of both accurate surface sampling and correspondence, better capturing population-level statistics.



J. Adams, S.Y. Elhabian. “Benchmarking Scalable Epistemic Uncertainty Quantification in Organ Segmentation,” Subtitled “arXiv:2308.07506,” 2023.

ABSTRACT

Deep learning based methods for automatic organ segmentation have shown promise in aiding diagnosis and treatment planning. However, quantifying and understanding the uncertainty associated with model predictions is crucial in critical clinical applications. While many techniques have been proposed for epistemic or model-based uncertainty estimation, it is unclear which method is preferred in the medical image analysis setting. This paper presents a comprehensive benchmarking study that evaluates epistemic uncertainty quantification methods in organ segmentation in terms of accuracy, uncertainty calibration, and scalability. We provide a comprehensive discussion of the strengths, weaknesses, and out-of-distribution detection capabilities of each method as well as recommendations for future improvements. These findings contribute to the development of reliable and robust models that yield accurate segmentations while effectively quantifying epistemic uncertainty.
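As a point of reference for the kind of estimators benchmarked above, the sketch below (not code from the paper) computes a per-pixel epistemic uncertainty map from an ensemble of segmentation networks using the standard mutual-information decomposition; the array shapes and names are illustrative assumptions.

import numpy as np

def epistemic_uncertainty(member_probs, eps=1e-12):
    """member_probs: (M, C, H, W) softmax outputs from M ensemble members
    (or M Monte Carlo dropout passes). Returns a per-pixel mutual-information
    map: predictive entropy minus expected per-member entropy."""
    mean_p = member_probs.mean(axis=0)                           # (C, H, W)
    pred_entropy = -(mean_p * np.log(mean_p + eps)).sum(axis=0)  # total uncertainty
    member_entropy = -(member_probs * np.log(member_probs + eps)).sum(axis=1)
    expected_entropy = member_entropy.mean(axis=0)               # aleatoric part
    return pred_entropy - expected_entropy                       # epistemic part

# Toy example: 5 members, 4 classes, 64x64 image.
probs = np.random.dirichlet(np.ones(4), size=(5, 64, 64)).transpose(0, 3, 1, 2)
mi_map = epistemic_uncertainty(probs)   # higher values = more model disagreement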



M. Adair, I. Rodero, M. Parashar, D. Melgar. “Accelerating Data-Intensive Seismic Research Through Parallel Workflow Optimization and Federated Cyberinfrastructure,” In Proceedings of the SC '23 Workshops of The International Conference on High Performance Computing, Network, Storage, and Analysis, ACM, pp. 1970--1977. 2023.
DOI: 10.1145/3624062.3624276

ABSTRACT

Earthquake early warning systems use synthetic data from simulation frameworks like MudPy to train models for predicting the magnitudes of large earthquakes. MudPy, although powerful, has limitations: a lengthy simulation time to generate the required data, lack of user-friendliness, and no platform for discovering and sharing its data. We introduce FakeQuakes DAGMan Workflow (FDW), which utilizes Open Science Grid (OSG) for parallel computations to accelerate and streamline MudPy simulations. FDW significantly reduces runtime and increases throughput compared to a single-machine setup. Using FDW, we also explore partitioned parallel HTCondor DAGMan workflows to enhance OSG efficiency. Additionally, we investigate leveraging cyberinfrastructure, such as Virtual Data Collaboratory (VDC), for enhancing MudPy and OSG. Specifically, we simulate using Cloud bursting policies to enforce FDW job-offloading to VDC during OSG peak demand, addressing shared resource issues and user goals; we also discuss VDC’s value in facilitating a platform for broad access to MudPy products.
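For readers unfamiliar with HTCondor DAGMan, the following minimal Python sketch writes a DAG description for the kind of fan-out/fan-in workflow described above; the submit-file names, job labels, and rupture count are hypothetical placeholders, not details taken from the FDW paper.

# Hypothetical sketch: emit a DAGMan file with N independent FakeQuakes-style
# simulation jobs followed by one aggregation job.
N_RUPTURES = 8

with open("fakequakes.dag", "w") as dag:
    for i in range(N_RUPTURES):
        dag.write(f"JOB rupture{i} fakequakes_rupture.sub\n")
        dag.write(f'VARS rupture{i} rupture_id="{i}"\n')
    dag.write("JOB collect collect_waveforms.sub\n")
    # All rupture jobs must finish before the aggregation step runs.
    dag.write("PARENT " + " ".join(f"rupture{i}" for i in range(N_RUPTURES))
              + " CHILD collect\n")

# Submit with: condor_submit_dag fakequakes.dag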



D. Akbaba, D. Lange, M. Correll, A. Lex, M. Meyer. “Troubling Collaboration: Matters of Care for Visualization Design Study,” In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI ’23), pp. 23--28. April, 2023.

ABSTRACT

A common research process in visualization is for visualization researchers to collaborate with domain experts to solve particular applied data problems. While there is existing guidance and expertise around how to structure collaborations to strengthen research contributions, there is comparatively little guidance on how to navigate the implications of, and power produced through, the socio-technical entanglements of collaborations. In this paper, we qualitatively analyze reflective interviews of past participants of collaborations from multiple perspectives: visualization graduate students, visualization professors, and domain collaborators. We juxtapose the perspectives of these individuals, revealing tensions about the tools that are built and the relationships that are formed — a complex web of competing motivations. Through the lens of matters of care, we interpret this web, concluding with considerations that both trouble and necessitate reformation of current patterns around collaborative work in visualization design studies to promote more equitable, useful, and care-ful outcomes.



M. Aliakbari, M.S. Sadrabadi, P. Vadasz, A. Arzani. “Ensemble physics informed neural networks: A framework to improve inverse transport modeling in heterogeneous domains,” In Physics of Fluids, AIP, 2023.

ABSTRACT

Modeling fluid flow and transport in heterogeneous systems is often challenged by unknown parameters that vary in space. In inverse modeling, measurement data are used to estimate these parameters. Due to the spatial variability of these unknown parameters in heterogeneous systems (e.g., permeability or diffusivity), the inverse problem is ill-posed and infinite solutions are possible. Physics-informed neural networks (PINN) have become a popular approach for solving inverse problems. However, in inverse problems in heterogeneous systems, PINN can be sensitive to hyperparameters and can produce unrealistic patterns. Motivated by the concept of ensemble learning and variance reduction in machine learning, we propose an ensemble PINN (ePINN) approach where an ensemble of parallel neural networks is used and each sub-network is initialized with a meaningful pattern of the unknown parameter. Subsequently, these parallel networks provide a basis that is fed into a main neural network that is trained using PINN. It is shown that an appropriately selected set of patterns can guide PINN in producing more realistic results that are relevant to the problem of interest. To assess the accuracy of this approach, inverse transport problems involving unknown heat conductivity, porous media permeability, and velocity vector fields were studied. The proposed ePINN approach was shown to increase the accuracy in inverse problems and mitigate the challenges associated with non-uniqueness.
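Schematically (in our notation, not necessarily the paper's), the ePINN idea can be summarized as follows: each pre-initialized sub-network supplies a candidate spatial pattern of the unknown parameter, a main network learns how to combine them, and the usual PINN loss of data misfit plus PDE residual is minimized:

k(\mathbf{x}) \;\approx\; \mathcal{G}_{\theta}\!\left(N_1(\mathbf{x}),\,\ldots,\,N_M(\mathbf{x})\right),
\qquad
\mathcal{L}(\theta) \;=\; \lVert u_{\mathrm{pred}} - u_{\mathrm{obs}} \rVert^2 \;+\; \lambda\,\lVert \mathcal{R}\!\left(u_{\mathrm{pred}},\, k\right) \rVert^2

Here N_1, …, N_M are the sub-networks initialized with meaningful parameter patterns, G_θ is the main network, and R is the residual of the governing transport equation.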



A. Arzani, L. Yuan, P. Newell, B. Wang. “Interpreting and generalizing deep learning in physics-based problems with functional linear models,” Subtitled “arXiv:2307.04569,” 2023.

ABSTRACT

Although deep learning has achieved remarkable success in various scientific machine learning applications, its black-box nature poses concerns regarding interpretability and generalization capabilities beyond the training data. Interpretability is crucial and often desired in modeling physical systems. Moreover, acquiring extensive datasets that encompass the entire range of input features is challenging in many physics-based learning tasks, leading to increased errors when encountering out-of-distribution (OOD) data. In this work, motivated by the field of functional data analysis (FDA), we propose generalized functional linear models as an interpretable surrogate for a trained deep learning model. We demonstrate that our model could be trained either based on a trained neural network (post-hoc interpretation) or directly from training data (interpretable operator learning). A library of generalized functional linear models with different kernel functions is considered and sparse regression is used to discover an interpretable surrogate model that could be analytically presented. We present test cases in solid mechanics, fluid mechanics, and transport. Our results demonstrate that our model can achieve comparable accuracy to deep learning and can improve OOD generalization while providing more transparency and interpretability. Our study underscores the significance of interpretability in scientific machine learning and showcases the potential of functional linear models as a tool for interpreting and generalizing deep learning.
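As a concrete (and simplified) instance of the surrogate class described above, a functional linear model maps an input field u(s) to an output through an integral of the field against a learned coefficient function,

y \;\approx\; \beta_0 \;+\; \int_{\Omega} \beta(s)\, u(s)\,\mathrm{d}s

where β(s) is expanded in a prescribed library of kernel or basis functions and the expansion coefficients are selected by sparse regression; because the resulting surrogate can be written down and inspected analytically, it offers the interpretability the paper emphasizes.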



T. M. Athawale, C.R. Johnson, S. Sane, D. Pugmire. “Fiber Uncertainty Visualization for Bivariate Data With Parametric and Nonparametric Noise Models,” In IEEE Transactions on Visualization and Computer Graphics, Vol. 29, No. 1, IEEE, pp. 613--623. 2023.

ABSTRACT

Visualization and analysis of multivariate data and their uncertainty are top research challenges in data visualization. Constructing fiber surfaces is a popular technique for multivariate data visualization that generalizes the idea of level-set visualization for univariate data to multivariate data. In this paper, we present a statistical framework to quantify positional probabilities of fibers extracted from uncertain bivariate fields. Specifically, we extend the state-of-the-art Gaussian models of uncertainty for bivariate data to other parametric distributions (e.g., uniform and Epanechnikov) and more general nonparametric probability distributions (e.g., histograms and kernel density estimation) and derive corresponding spatial probabilities of fibers. In our proposed framework, we leverage Green’s theorem for closed-form computation of fiber probabilities when bivariate data are assumed to have independent parametric and nonparametric noise. Additionally, we present a nonparametric approach combined with numerical integration to study the positional probability of fibers when bivariate data are assumed to have correlated noise. For uncertainty analysis, we visualize the derived probability volumes for fibers via volume rendering and by extracting level sets based on probability thresholds. We present the utility of our proposed techniques via experiments on synthetic and simulation datasets.
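In simplified notation (an illustration, not the paper's derivation), the per-point fiber probability under the independent-noise assumption is the mass of the joint range-space density over the trait region R,

P\bigl(\mathbf{x} \in \mathrm{fiber}\bigr) \;=\; \iint_{R} p_1(f_1;\mathbf{x})\; p_2(f_2;\mathbf{x})\,\mathrm{d}f_1\,\mathrm{d}f_2

where p_1 and p_2 are the marginal densities of the two fields at spatial position x; the paper evaluates such integrals in closed form (via Green's theorem) for the parametric and nonparametric marginals it considers, and numerically when the noise is correlated.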



A.Z.B. Aziz, J. Adams, S. Elhabian. “Progressive DeepSSM: Training Methodology for Image-To-Shape Deep Models,” Subtitled “arXiv:2310.01529,” 2023.

ABSTRACT

Statistical shape modeling (SSM) is an enabling quantitative tool to study anatomical shapes in various medical applications. However, directly using 3D images in these applications still has a long way to go. Recent deep learning methods have paved the way for reducing the substantial preprocessing steps to construct SSMs directly from unsegmented images. Nevertheless, the performance of these models is not up to the mark. Inspired by multiscale/multiresolution learning, we propose a new training strategy, progressive DeepSSM, to train image-to-shape deep learning models. The training is performed in multiple scales, and each scale utilizes the output from the previous scale. This strategy enables the model to learn coarse shape features in the first scales and gradually learn detailed fine shape features in the later scales. We leverage shape priors via segmentation-guided multi-task learning and employ deep supervision loss to ensure learning at each scale. Experiments show the superiority of models trained by the proposed strategy from both quantitative and qualitative perspectives. This training methodology can be employed to improve the stability and accuracy of any deep learning method for inferring statistical representations of anatomies from medical images and can be adopted by existing deep learning methods to improve model accuracy and training stability.



J. Baker, E. Cherkaev, A. Narayan, B. Wang. “Learning Proper Orthogonal Decomposition of Complex Dynamics Using Heavy-ball Neural ODEs,” In Journal of Scientific Computing, Vol. 95, No. 14, 2023.

ABSTRACT

Proper orthogonal decomposition (POD) allows reduced-order modeling of complex dynamical systems at a substantial level, while maintaining a high degree of accuracy in modeling the underlying dynamical systems. Advances in machine learning algorithms enable learning POD-based dynamics from data and making accurate and fast predictions of dynamical systems. This paper extends the recently proposed heavy-ball neural ODEs (HBNODEs) (Xia et al., NeurIPS 2021) for learning data-driven reduced-order models (ROMs) in the POD context, in particular, for learning dynamics of time-varying coefficients generated by the POD analysis on training snapshots constructed by solving full-order models. HBNODE enjoys several practical advantages for learning POD-based ROMs with theoretical guarantees, including 1) HBNODE can learn long-range dependencies effectively from sequential observations, which is crucial for learning intrinsic patterns from sequential data, and 2) HBNODE is computationally efficient in both training and testing. We compare HBNODE with other popular ROMs on several complex dynamical systems, including the von Kármán Street flow, the Kurganov-Petrova-Popov equation, and the one-dimensional Euler equations for fluids modeling.
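For context on the POD step referenced above, the following NumPy sketch (not the paper's code) extracts POD modes and the time-varying coefficients that an HBNODE-style model would then learn to evolve; the snapshot matrix, shapes, and mode count are illustrative.

import numpy as np

def pod_basis(snapshots, r):
    """snapshots: (n_dof, n_time) array of full-order solutions.
    Returns the snapshot mean, r POD modes, and the time-varying coefficients."""
    mean = snapshots.mean(axis=1, keepdims=True)
    X = snapshots - mean                          # center the snapshot matrix
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    modes = U[:, :r]                              # dominant spatial modes
    coeffs = modes.T @ X                          # a_k(t): reduced coordinates
    return mean, modes, coeffs

# Example: 1000 spatial DOFs, 200 snapshots, keep 10 modes.
X = np.random.rand(1000, 200)
mean, modes, a = pod_basis(X, r=10)
x_approx = mean + modes @ a                       # rank-10 reconstruction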



J.W. Beiriger, W. Tao, M.K. Bruce, E. Anstadt, C. Christiensen, J. Smetona, R. Whitaker, J. Goldstein. “CranioRate™: An Image-Based, Deep-Phenotyping Analysis Toolset and Online Clinician Interface for Metopic Craniosynostosis,” In Plastic and Reconstructive Surgery, 2023.

ABSTRACT

Introduction:
The diagnosis and management of metopic craniosynostosis involves subjective decision-making at the point of care. The purpose of this work is to describe a quantitative severity metric and point-of-care user interface to aid clinicians in the management of metopic craniosynostosis and to provide a platform for future research through deep phenotyping.

Methods:
Two machine-learning algorithms were developed that quantify the severity of craniosynostosis – a supervised model specific to metopic craniosynostosis (Metopic Severity Score) and an unsupervised model used for cranial morphology in general (Cranial Morphology Deviation). CT imaging from multiple institutions was compiled to establish the spectrum of severity, and a point-of-care tool was developed and validated.

Results:
Over the study period (2019-2021), 254 patients with metopic craniosynostosis and 92 control patients who underwent CT scan between the ages of 6 and 18 months were included. Scans were processed using an unsupervised machine-learning-based dysmorphology quantification tool, CranioRate™. The average Metopic Severity Score (MSS) for normal controls was 0.0±1.0 and for metopic synostosis was 4.9±2.3 (p<0.001). The average Cranial Morphology Deviation (CMD) for normal controls was 85.2±19.2 and for metopic synostosis was 189.9±43.4 (p<0.001). A point-of-care user interface (craniorate.org) has processed 46 CT images from 10 institutions.

Conclusion:
The resulting quantification of severity using MSS and CMD has shown an improved capacity, relative to conventional measures, to automatically classify normal controls versus patients with metopic synostosis. We have mathematically described, in an objective and quantifiable manner, the distribution of phenotypes in metopic craniosynostosis.



M. Berzins. “Error Estimation for the Material Point and Particle in Cell Methods,” In ADMOS 2023, 2023.

ABSTRACT

The Material Point Method (MPM) is widely used for challenging applications in engineering and animation. The complexity of the method makes error estimation challenging. Error analysis of a simple MPM method is undertaken, and the global error is shown to be first order in space and time for a widely-used variant of the method. Computational experiments illustrate the estimated accuracy.



J. A. Bergquist, B. Zenger, L. Rupp, A. Busatto, J. D. Tate, D. H. Brooks, A. Narayan, R. MacLeod. “Uncertainty quantification of the effect of cardiac position variability in the inverse problem of electrocardiographic imaging,” In Physiological Measurement, IOP Science, 2023.
DOI: 10.1088/1361-6579/acfc32

ABSTRACT

Objective: Electrocardiographic imaging (ECGI) is a functional imaging modality that consists of two related problems, the forward problem of reconstructing body surface electrical signals given cardiac bioelectric activity, and the inverse problem of reconstructing cardiac bioelectric activity given measured body surface signals. ECGI relies on a model for how the heart generates bioelectric signals, which is subject to variability in inputs. The study of how uncertainty in model inputs affects the model output is known as uncertainty quantification (UQ). This study establishes, develops, and characterizes the application of UQ to ECGI.

Approach: We establish two formulations for applying UQ to ECGI: a polynomial chaos expansion (PCE) based parametric UQ formulation (PCE-UQ formulation), and a novel UQ-aware inverse formulation which leverages our previously established “joint-inverse” formulation (UQ joint-inverse formulation). We apply these to evaluate the effect of uncertainty in the heart position on the ECGI solutions across a range of ECGI datasets.

Main Results: We demonstrated the ability of our UQ-ECGI formulations to characterize the effect of parameter uncertainty on the ECGI inverse problem. We found that while the PCE-UQ inverse solution provided more complex outputs such as sensitivities and standard deviation, the UQ joint-inverse solution provided a more interpretable output in the form of a single ECGI solution. We find that between these two methods we are able to assess a wide range of effects that heart position variability has on the ECGI solution.

Significance: This study, for the first time, characterizes in detail the application of UQ to the ECGI inverse problem. We demonstrated how UQ can provide insight into the behavior of ECGI using variability in cardiac position as a test case. This study lays the groundwork for future development of UQ-ECGI studies, as well as future development of ECGI formulations which are robust to input parameter variability.
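For orientation (simplified notation, not the paper's), a polynomial chaos expansion represents the ECGI solution as a series in orthonormal polynomials of the uncertain input ξ (here, heart position), from which statistics follow directly from the coefficients:

u(\mathbf{x}, \boldsymbol{\xi}) \;\approx\; \sum_{k=0}^{K} c_k(\mathbf{x})\, \Psi_k(\boldsymbol{\xi}),
\qquad
\mathbb{E}[u] = c_0, \quad \mathrm{Var}[u] = \sum_{k=1}^{K} c_k^2

assuming the Ψ_k are orthonormal with Ψ_0 = 1; sensitivities to individual uncertain inputs can likewise be read off from groups of coefficients.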



T.C. Bidone, D.J. Odde. “Multiscale models of integrins and cellular adhesions,” In Current Opinion in Structural Biology, Vol. 80, Elsevier, 2023.

ABSTRACT

Computational models of integrin-based adhesion complexes have revealed important insights into the mechanisms by which cells establish connections with their external environment. However, how changes in conformation and function of individual adhesion proteins regulate the dynamics of whole adhesion complexes remains largely elusive. This is because of the large separation in time and length scales between the dynamics of individual adhesion proteins (nanoseconds and nanometers) and the emergent dynamics of the whole adhesion complex (seconds and micrometers), and the limitations of molecular simulation approaches in extracting accurate free energies, conformational transitions, reaction mechanisms, and kinetic rates, that can inform mechanisms at the larger scales. In this review, we discuss models of integrin-based adhesion complexes and highlight their main findings regarding: (i) the conformational transitions of integrins at the molecular and macromolecular scales and (ii) the molecular clutch mechanism at the mesoscale. Lastly, we present unanswered questions in the field of modeling adhesions and propose new ideas for future exciting modeling opportunities.



B. Borotikar, T.E.M. Mutsvangwa, S.Y. Elhabian, E. Audenaert. “Editorial: Statistical model-based computational biomechanics: applications in joints and internal organs,” In Frontiers in Bioengineering and Biotechnology, Vol. 11, 2023.
DOI: 10.3389/fbioe.2023.1232464



S. Brink, M. McKinsey, D. Boehme, C. Scully-Allison, I. Lumsden, D. Hawkins, T. Burgess, V. Lama, J. Luettgau, K.E. Isaacs, M. Taufer, O. Pearce. “Thicket: Seeing the Performance Experiment Forest for the Individual Run Trees,” In HPDC ’23, ACM, 2023.

ABSTRACT

Thicket is an open-source Python toolkit for Exploratory Data Analysis (EDA) of multi-run performance experiments. It enables an understanding of optimal performance configuration for large-scale application codes. Most performance tools focus on a single execution (e.g., single platform, single measurement tool, single scale). Thicket bridges the gap to convenient analysis in multi-dimensional, multi-scale, multi-architecture, and multi-tool performance datasets by providing an interface for interacting with the performance data.

Thicket has a modular structure composed of three components. The first component is a data structure for multi-dimensional performance data, which is composed automatically on the portable basis of call trees, and accommodates any subset of dimensions present in the dataset. The second is the metadata, enabling distinction and sub-selection of dimensions in performance data. The third is a dimensionality reduction mechanism, enabling analysis such as computing aggregated statistics on a given data dimension. Extensible mechanisms are available for applying analyses (e.g., top-down on Intel CPUs), data science techniques (e.g., K-means clustering from scikit-learn), modeling performance (e.g., Extra-P), and interactive visualization. We demonstrate the power and flexibility of Thicket through two case studies, first with the open-source RAJA Performance Suite on CPU and GPU clusters and another with a large physics simulation run on both a traditional HPC cluster and an AWS Parallel Cluster instance.