
Image Analysis

SCI's imaging work addresses fundamental questions in 2D and 3D image processing, including filtering, segmentation, surface reconstruction, and shape analysis. In low-level image processing, this effort has produced new nonparametric methods for modeling image statistics, which have resulted in better algorithms for denoising and reconstruction. Work with particle systems has led to new methods for visualizing and analyzing 3D surfaces. Our work in image processing also includes applications of advanced computing to 3D images, which has resulted in new parallel algorithms and real-time implementations on graphics processing units (GPUs). Application areas include medical image analysis, biological image processing, defense, environmental monitoring, and oil and gas.


Ross Whitaker: Segmentation

Sarang Joshi: Shape Statistics, Segmentation, Brain Atlasing

Tolga Tasdizen: Image Processing, Machine Learning

Chris Johnson: Diffusion Tensor Analysis

Shireen Elhabian: Image Analysis, Computer Vision


Publications in Image Analysis:


Grand Challenges at the Interface of Engineering and Medicine
S. Subramaniam, M. Miller, C.R. Johnson, et al. In IEEE Open Journal of Engineering in Medicine and Biology, Vol. 5, IEEE, pp. 1--13. 2024.
DOI: 10.1109/OJEMB.2024.3351717

Over the past two decades Biomedical Engineering has emerged as a major discipline that bridges societal needs of human health care with the development of novel technologies. Every medical institution is now equipped at varying degrees of sophistication with the ability to monitor human health in both non-invasive and invasive modes. The multiple scales at which human physiology can be interrogated provide a profound perspective on health and disease. We are at the nexus of creating “avatars” (herein defined as an extension of “digital twins”) of human patho/physiology to serve as paradigms for interrogation and potential intervention. Motivated by the emergence of these new capabilities, the IEEE Engineering in Medicine and Biology Society, the Departments of Biomedical Engineering at Johns Hopkins University and Bioengineering at University of California at San Diego sponsored an interdisciplinary workshop to define the grand challenges that face biomedical engineering and the mechanisms to address these challenges. The Workshop identified five grand challenges with cross-cutting themes and provided a roadmap for new technologies, identified new training needs, and defined the types of interdisciplinary teams needed for addressing these challenges. The themes presented in this paper include: 1) accumedicine through creation of avatars of cells, tissues, organs and whole human; 2) development of smart and responsive devices for human function augmentation; 3) exocortical technologies to understand brain function and treat neuropathologies; 4) the development of approaches to harness the human immune system for health and wellness; and 5) new strategies to engineer genomes and cells.



Multi-task Training as Regularization Strategy for Seismic Image Segmentation
S. Saha, W. Gazi, R. Mohammed, T. Rapstine, H. Powers, R. Whitaker. In IEEE Geoscience and Remote Sensing Letters, Vol. 20, IEEE, pp. 1--5. 2023.
DOI: 10.1109/LGRS.2023.3328837

This letter proposes multitask learning as a regularization method for segmentation tasks in seismic images. We examine application-specific auxiliary tasks, such as the estimation/detection of horizons, dip angle, and amplitude, that geophysicists consider relevant for identifying channels (a geological feature), which is currently done through painstaking outlining by qualified experts. We show that multitask training improves generalization on test datasets with both very similar and very different structure/statistics. In such settings, we also show that multitask learning outperforms the baseline on unseen datasets.
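As a rough illustration of the idea, a minimal sketch of multi-task training as a regularizer follows; the shared backbone, head shapes, and loss weights here are assumptions for exposition, not the architecture used in the letter.

```python
# Minimal sketch: a shared backbone with one main segmentation head and
# auxiliary regression heads whose losses act as regularizers.
import torch
import torch.nn as nn

class MultiTaskSeismicNet(nn.Module):
    def __init__(self, backbone: nn.Module, feat_ch: int):
        super().__init__()
        self.backbone = backbone                      # shared features: (B, feat_ch, H, W)
        self.seg_head = nn.Conv2d(feat_ch, 2, 1)      # main task: channel segmentation
        self.dip_head = nn.Conv2d(feat_ch, 1, 1)      # auxiliary: dip-angle regression
        self.amp_head = nn.Conv2d(feat_ch, 1, 1)      # auxiliary: amplitude regression

    def forward(self, x):
        f = self.backbone(x)
        return self.seg_head(f), self.dip_head(f), self.amp_head(f)

def multitask_loss(outputs, targets, aux_weight=0.1):
    seg, dip, amp = outputs
    seg_t, dip_t, amp_t = targets
    loss = nn.functional.cross_entropy(seg, seg_t)           # main objective
    loss += aux_weight * nn.functional.mse_loss(dip, dip_t)  # auxiliary tasks act
    loss += aux_weight * nn.functional.mse_loss(amp, amp_t)  # as regularization
    return loss
```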



CLASSMix: Adaptive stain separation-based contrastive learning with pseudo labeling for histopathological image classification
Subtitled “arXiv:2312.06978v2,” B. Zhang, H. Manoochehri, M.M. Ho, F. Fooladgar, Y. Chong, B. Knudsen, D. Sirohi, T. Tasdizen. 2023.

Histopathological image classification is one of the critical aspects of medical image analysis. Due to the high expense associated with labeled data in model training, semi-supervised learning methods have been proposed to alleviate the need for extensively labeled datasets. In this work, we propose a model for semi-supervised classification tasks on digital histopathological Hematoxylin and Eosin (H&E) images. We call the new model Contrastive Learning with Adaptive Stain Separation and MixUp (CLASS-M). Our model is formed by two main parts: contrastive learning between adaptively stain-separated Hematoxylin images and Eosin images, and pseudo-labeling using MixUp. We compare our model with other state-of-the-art models on clear cell renal cell carcinoma (ccRCC) datasets from our institution and The Cancer Genome Atlas Program (TCGA). We demonstrate that our CLASS-M model has the best performance on both datasets. The contributions of different parts of our model are also analyzed.
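The two ingredients named above lend themselves to a short sketch; treating the stain-separated views of a patch as a positive pair, and the temperature and MixUp parameter values, are illustrative assumptions rather than the paper's exact configuration.

```python
# Sketch: NT-Xent contrastive loss between Hematoxylin/Eosin views of the same
# patch, plus MixUp on pseudo-labeled unlabeled patches.
import torch
import torch.nn.functional as F

def nt_xent(z_h, z_e, temperature=0.5):
    """Treat the H and E embeddings of the same patch as a positive pair."""
    z = F.normalize(torch.cat([z_h, z_e]), dim=1)
    sim = z @ z.t() / temperature
    n = z_h.size(0)
    sim.fill_diagonal_(float('-inf'))                 # exclude self-similarity
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets.to(sim.device))

def mixup_pseudo(x_u, pseudo_labels, alpha=0.75):
    """MixUp between unlabeled images and their soft pseudo-labels."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x_u.size(0))
    x_mix = lam * x_u + (1 - lam) * x_u[perm]
    y_mix = lam * pseudo_labels + (1 - lam) * pseudo_labels[perm]
    return x_mix, y_mix
```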



High-Fidelity CT on Rails-Based Characterization of Delivered Dose Variation in Conformal Head and Neck Treatments
H. Dai, V. Sarkar, C. Dial, M.D. Foote, Y. Hitchcock, S. Joshi, B. Salter. In Applied Radiation Oncology, 2023.
DOI: 10.1101/2023.04.07.23288305

Objective: This study aims to characterize dose variations from the original plan for a cohort of patients with head-and-neck cancer (HNC) using high-quality CT on rails (CTOR) datasets and evaluate a predictive model for identifying patients needing replanning.

Materials and Methods: In total, 74 patients with HNC treated on our CTOR-equipped machine were evaluated in this retrospective study. Patients were treated at our facility using in-room, CTOR image guidance—acquiring CTOR kV fan beam CT images on a weekly to near-daily basis. For each patient, a particular day’s delivered treatment dose was calculated by applying the approved, planned beam set to the post image-guided alignment CT image of the day. Total accumulated delivered dose distributions were calculated and compared with the planned dose distribution, and differences were characterized by comparison of dose and biological response statistics.

Results: The majority of patients in the study saw excellent agreement between planned and delivered dose distribution in targets—the mean deviations of dose received by 95% and 98% of the planning target volumes of the cohort are −0.7% and −1.3%, respectively. In critical organs, we saw a +6.5% mean deviation of mean dose in the parotid glands, −2.3% mean deviation of maximum dose in the brainstem, and +0.7% mean deviation of maximum dose in the spinal cord. Of 74 patients, 10 experienced nontrivial variation of delivered parotid dose, which resulted in a normal tissue complication probability (NTCP) increase compared with the anticipated NTCP in the original plan, ranging from 11% to 44%.

Conclusion: We determined that a midcourse evaluation of dose deviation was not effective in predicting the need for replanning for our patient cohorts. The observed nontrivial dose difference to parotid gland delivered dose suggests that even when rigorous, high-quality image guidance is performed, clinically concerning variations to predicted dose delivery can still occur.



Particle-Based Shape Modeling for Arbitrary Regions-of-Interest
H. Xu, A. Morris, S.Y. Elhabian. In Shape in Medical Imaging, Lecture Notes in Computer Science, vol 14350, 2023.

Statistical Shape Modeling (SSM) is a quantitative method for analyzing morphological variations in anatomical structures. These analyses often necessitate building models on targeted anatomical regions of interest to focus on specific morphological features. We propose an extension to particle-based shape modeling (PSM), a widely used SSM framework, that allows shape modeling on arbitrary regions of interest. Existing methods for defining regions of interest are computationally expensive and have topological limitations. To address these shortcomings, we use mesh fields to define free-form constraints, which allow for delimiting arbitrary regions of interest on shape surfaces. Furthermore, we add a quadratic penalty method to the model optimization to enable computationally efficient enforcement of any combination of cutting-plane and free-form constraints. We demonstrate the effectiveness of this method on a challenging synthetic dataset and two medical datasets.
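A quadratic penalty of the kind described can be written compactly; the function below is a minimal sketch, with a hypothetical constraint field g(x) <= 0 marking the feasible region, and is not ShapeWorks' actual API.

```python
# Illustrative quadratic-penalty term for constrained particle optimization:
# a free-form constraint is a scalar mesh field g(x) with g(x) <= 0 inside the
# allowed region; violations are penalized quadratically.
import torch

def quadratic_penalty(particles, constraint_fn, mu):
    """particles: (N, 3) tensor; constraint_fn: g(x) <= 0 feasible; mu: penalty weight."""
    g = constraint_fn(particles)                 # (N,) signed constraint values
    violation = torch.clamp(g, min=0.0)          # keep only the infeasible part
    return 0.5 * mu * (violation ** 2).sum()     # added to the PSM objective
```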



Two-Stage Deep Learning Framework for Quality Assessment of Left Atrial Late Gadolinium Enhanced MRI Images
Subtitled “arXiv:2310.08805v1,” K.M.A. Sultan, B. Orkild, A. Morris, E. Kholmovski, E. Bieging, E. Kwan, R. Ranjan, E. DiBella, S. Elhabian. 2023.

Accurate assessment of left atrial fibrosis in patients with atrial fibrillation relies on high-quality 3D late gadolinium enhancement (LGE) MRI images. However, obtaining such images is challenging due to patient motion, changing breathing patterns, or sub-optimal choice of pulse sequence parameters. Automated assessment of LGE-MRI image diagnostic quality is clinically significant, as it would enhance diagnostic accuracy, improve efficiency, ensure standardization, and contribute to better patient outcomes by providing reliable, high-quality LGE-MRI scans for fibrosis quantification and treatment planning. To address this, we propose a two-stage deep-learning approach for automated LGE-MRI image diagnostic quality assessment. The method includes a left atrium detector to focus on relevant regions and a deep network to evaluate diagnostic quality. We explore two training strategies, multi-task learning and pretraining using contrastive learning, to overcome limited annotated data in medical imaging. With limited data, contrastive pretraining improves F1-score by about 4% and specificity by about 9% compared to multi-task learning.



Review of Multi-Faceted Morphologic Signatures of Actinide Process Materials for Nuclear Forensic Science
L.W. McDonald IV, K. Sentz, A. Hagen, B.W. Chung, T. Tasdizen, et al. In Journal of Nuclear Materials, Elsevier, 2023.

Particle morphology is an emerging signature that has the potential to identify the processing history of unknown nuclear materials. Using readily available scanning electron microscopes (SEM), the morphology of nearly any solid material can be measured within hours. Coupled with robust image analysis and classification methods, the morphological features can be quantified and support identification of the processing history of unknown nuclear materials. The viability of this signature depends on developing databases of morphological features, coupled with a rapid data analysis and accurate classification process. With developed reference methods, datasets, and throughputs, morphological analysis can be applied within days to (i) interdicted bulk nuclear materials (gram to kilogram quantities), and (ii) trace amounts of nuclear materials detected on swipes or environmental samples. This review aims to develop validated and verified analytical strategies for morphological analysis relevant to nuclear forensics.



Progressive DeepSSM: Training Methodology for Image-To-Shape Deep Models
Subtitled “arXiv:2310.01529,” A.Z.B. Aziz, J. Adams, S. Elhabian. 2023.

Statistical shape modeling (SSM) is an enabling quantitative tool to study anatomical shapes in various medical applications. However, directly using 3D images in these applications still has a long way to go. Recent deep learning methods have paved the way for reducing the substantial preprocessing steps to construct SSMs directly from unsegmented images. Nevertheless, the performance of these models is not up to the mark. Inspired by multiscale/multiresolution learning, we propose a new training strategy, progressive DeepSSM, to train image-to-shape deep learning models. The training is performed in multiple scales, and each scale utilizes the output from the previous scale. This strategy enables the model to learn coarse shape features in the first scales and gradually learn detailed fine shape features in the later scales. We leverage shape priors via segmentation-guided multi-task learning and employ deep supervision loss to ensure learning at each scale. Experiments show the superiority of models trained by the proposed strategy from both quantitative and qualitative perspectives. This training methodology can be employed to improve the stability and accuracy of any deep learning method for inferring statistical representations of anatomies from medical images and can be adopted by existing deep learning methods to improve model accuracy and training stability.
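A minimal sketch of the coarse-to-fine idea is given below; the module, scale sizes, and conditioning scheme are illustrative stand-ins, not the published architecture.

```python
# Sketch: predict correspondence points at several scales, coarse to fine,
# feeding each scale's output into the next and supervising every scale.
import torch
import torch.nn as nn

class ProgressiveShapeHead(nn.Module):
    def __init__(self, feat_dim=256, scales=(32, 64, 128)):
        super().__init__()
        self.heads = nn.ModuleList()
        prev = 0
        for n_pts in scales:
            self.heads.append(nn.Linear(feat_dim + prev * 3, n_pts * 3))
            prev = n_pts

    def forward(self, features):
        # features: (B, feat_dim) image encoding
        outputs, prev = [], features.new_zeros(features.size(0), 0)
        for head in self.heads:
            pts = head(torch.cat([features, prev], dim=1))
            outputs.append(pts.view(features.size(0), -1, 3))
            prev = pts                    # condition the next scale on this one
        return outputs                    # deep supervision: a loss at every scale
```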



Improving Robustness for Model Discerning Synthesis Process of Uranium Oxide with Unsupervised Domain Adaptation
C. Ly, C. Nizinski, A. Hagen, L. McDonald IV, T. Tasdizen. In Frontiers in Nuclear Engineering, 2023.

The quantitative characterization of surface structures captured in scanning electron microscopy (SEM) images has proven to be effective for discerning the provenance of an unknown nuclear material. Recently, many works have taken advantage of the powerful performance of convolutional neural networks (CNNs) to provide faster and more consistent characterization of surface structures. However, one inherent limitation of CNNs is their degradation in performance when encountering discrepancy between training and test datasets, which limits their widespread use. The common discrepancy in an SEM image dataset occurs in low-level image information due to user bias in selecting acquisition parameters and microscopes from different manufacturers. Therefore, in this study, we present a domain adaptation framework to improve the robustness of CNNs against discrepancies in low-level image information. Furthermore, our proposed approach makes use of only unlabeled test samples to adapt a pretrained model, which is more suitable for nuclear forensics applications, for which obtaining both training and test datasets simultaneously is a challenge due to data sensitivity. Through extensive experiments, we demonstrate that our proposed approach effectively improves the performance of a model by at least 18% when encountering domain discrepancy, and can be deployed in many CNN architectures.
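One simple flavor of adaptation that uses only unlabeled test samples is re-estimating batch-normalization statistics on the target domain (AdaBN-style); the sketch below illustrates that baseline and is not claimed to be the paper's specific method.

```python
# Sketch: adapt a pretrained CNN to a new SEM domain using unlabeled images
# only, by refreshing batch-norm running statistics; weights are untouched.
import torch

@torch.no_grad()
def adapt_batchnorm_stats(model, target_loader, device="cpu"):
    model.train()                                  # BN layers update running stats
    for m in model.modules():
        if isinstance(m, torch.nn.modules.batchnorm._BatchNorm):
            m.reset_running_stats()                # forget source-domain statistics
    for x in target_loader:                        # unlabeled target images only
        model(x.to(device))                        # forward passes refresh BN stats
    model.eval()
    return model
```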



MedShapeNet - A Large-Scale Dataset of 3D Medical Shapes for Computer Vision
Subtitled “arXiv:2308.16139v3,” J. Li, A. Pepe, C. Gsaxner, G. Luijten, Y. Jin, S. Elhabian, et al. 2023.

We present MedShapeNet, a large collection of anatomical shapes (e.g., bones, organs, vessels) and 3D surgical instrument models. Prior to the deep learning era, the broad application of statistical shape models (SSMs) in medical image analysis is evidence that shapes have been commonly used to describe medical data. Nowadays, however, state-of-the-art (SOTA) deep learning algorithms in medical imaging are predominantly voxel-based. In computer vision, on the contrary, shapes (including voxel occupancy grids, meshes, point clouds, and implicit surface models) are preferred data representations in 3D, as seen from the numerous shape-related publications in premier vision conferences such as the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), as well as the increasing popularity of ShapeNet (about 51,300 models) and Princeton ModelNet (127,915 models) in computer vision research. MedShapeNet is created as an alternative to these commonly used shape benchmarks to facilitate the translation of data-driven vision algorithms to medical applications, and it extends the opportunities to adapt SOTA vision algorithms to solve critical medical problems. Moreover, the majority of the medical shapes in MedShapeNet are modeled directly on the imaging data of real patients, so it complements well existing shape benchmarks consisting of computer-aided design (CAD) models. MedShapeNet currently includes more than 100,000 medical shapes and provides annotations in the form of paired data. It is therefore also a freely available repository of 3D models for extended reality (virtual reality - VR, augmented reality - AR, mixed reality - MR) and medical 3D printing. This white paper describes in detail the motivations behind MedShapeNet, the shape acquisition procedures, the use cases, as well as the usage of the online shape search portal: https://medshapenet.ikim.nrw/



Structural Cycle GAN for Virtual Immunohistochemistry Staining of Gland Markers in the Colon
Subtitled “arXiv:2308.13182,” S. Dubey, T. Kataria, B. Knudsen, S.Y. Elhabian. 2023.

With the advent of digital scanners and deep learning, diagnostic operations may move from a microscope to a desktop. Hematoxylin and Eosin (H&E) staining is one of the most frequently used stains for disease analysis, diagnosis, and grading, but pathologists do need different immunohistochemical (IHC) stains to analyze specific structures or cells. Obtaining all of these stains (H&E and different IHCs) on a single specimen is a tedious and time-consuming task. Consequently, virtual staining has emerged as an essential research direction. Here, we propose a novel generative model, Structural Cycle-GAN (SC-GAN), for synthesizing IHC stains from H&E images, and vice versa. Our method expressly incorporates structural information in the form of edges (in addition to color data) and employs attention modules exclusively in the decoder of the proposed generator model. This integration enhances feature localization and preserves contextual information during the generation process. In addition, a structural loss is incorporated to ensure accurate structure alignment between the generated and input markers. To demonstrate the efficacy of the proposed model, experiments are conducted with two IHC markers emphasizing distinct structures of glands in the colon: the nucleus of epithelial cells (CDX2) and the cytoplasm (CK818). Quantitative metrics such as FID and SSIM are frequently used for the analysis of generative models, but they do not correlate explicitly with higher-quality virtual staining results. Therefore, we propose two new quantitative metrics that correlate directly with the virtual staining specificity of IHC markers.
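An edge-based structural loss of the kind described might look like the following sketch; the Sobel kernels and L1 comparison are assumptions for illustration, not the paper's exact formulation.

```python
# Sketch: penalize the L1 difference between Sobel edge maps of the generated
# and source images so gland structure survives the stain translation.
import torch
import torch.nn.functional as F

def sobel_edges(img):
    """img: (B, 1, H, W) grayscale tensor -> gradient magnitude map."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=img.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    gx = F.conv2d(img, kx, padding=1)
    gy = F.conv2d(img, ky, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)

def structural_loss(generated, source):
    return F.l1_loss(sobel_edges(generated), sobel_edges(source))
```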



Benchmarking Scalable Epistemic Uncertainty Quantification in Organ Segmentation
Subtitled “arXiv:2308.07506,” J. Adams, S.Y. Elhabian. 2023.

Deep learning based methods for automatic organ segmentation have shown promise in aiding diagnosis and treatment planning. However, quantifying and understanding the uncertainty associated with model predictions is crucial in critical clinical applications. While many techniques have been proposed for epistemic or model-based uncertainty estimation, it is unclear which method is preferred in the medical image analysis setting. This paper presents a comprehensive benchmarking study that evaluates epistemic uncertainty quantification methods in organ segmentation in terms of accuracy, uncertainty calibration, and scalability. We provide a comprehensive discussion of the strengths, weaknesses, and out-of-distribution detection capabilities of each method as well as recommendations for future improvements. These findings contribute to the development of reliable and robust models that yield accurate segmentations while effectively quantifying epistemic uncertainty.
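For concreteness, here is a minimal sketch of one scalable epistemic-uncertainty baseline of the sort such benchmarks typically include (Monte Carlo dropout); the paper evaluates several methods, and this is not presented as its implementation.

```python
# Sketch: MC dropout for segmentation; multiple stochastic forward passes give
# a predictive mean and a per-voxel entropy map as the uncertainty estimate.
import torch

def mc_dropout_segmentation(model, x, n_samples=20):
    model.train()          # keeps dropout active (note: also unfreezes batch norm)
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=1)
                             for _ in range(n_samples)])
    mean = probs.mean(dim=0)                                   # predictive segmentation
    entropy = -(mean * mean.clamp_min(1e-8).log()).sum(dim=1)  # per-voxel uncertainty
    return mean, entropy
```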



A Non-Contrast Multi-Parametric MRI Biomarker for Assessment of MR-Guided Focused Ultrasound Thermal Therapies
S. Johnson, B. Zimmerman, H. Odéen, J. Shea, N. Winkler, R. Factor, S. Joshi, A. Payne. In IEEE Transactions on Biomedical Engineering, IEEE, pp. 1--12. 2023.
DOI: 10.1109/TBME.2023.3303445

Objective: We present the development of a non-contrast multi-parametric magnetic resonance (MPMR) imaging biomarker to assess treatment outcomes for magnetic resonance-guided focused ultrasound (MRgFUS) ablations of localized tumors. Images obtained immediately following MRgFUS ablation were inputs for voxel-wise supervised learning classifiers, trained using registered histology as a label for thermal necrosis. Methods: VX2 tumors in New Zealand white rabbit quadriceps were thermally ablated using an MRgFUS system under 3 T MRI guidance. Animals were re-imaged three days post-ablation and euthanized. Histological necrosis labels were created by 3D registration between MR images and digitized H&E segmentations of thermal necrosis to enable voxel-wise classification of necrosis. Supervised MPMR classifier inputs included maximum temperature rise, cumulative thermal dose (CTD), post-FUS differences in T2-weighted images, and apparent diffusion coefficient (ADC) maps. A logistic regression, support vector machine, and random forest classifier were trained in a leave-one-out strategy on test data from four subjects. Results: In the validation dataset, the MPMR classifiers achieved higher recall and Dice than a clinically adopted threshold of 240 cumulative equivalent minutes at 43 °C (CEM43) (0.43) in all subjects. The average Dice scores of overlap with the registered histological label for the logistic regression (0.63) and support vector machine (0.63) MPMR classifiers were within 6% of the acute contrast-enhanced non-perfused volume (0.67). Conclusions: Voxel-wise registration of MPMR data to histological outcomes facilitated supervised learning of an accurate non-contrast MR biomarker for MRgFUS ablations in a rabbit VX2 tumor model.
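The voxel-wise, leave-one-out setup described above can be sketched as follows; the array names and the binary-label assumption are hypothetical, for illustration only.

```python
# Sketch: each voxel contributes a feature vector (temperature rise, CTD,
# T2-weighted change, ADC) with a histology-derived 0/1 necrosis label;
# subjects are held out one at a time and scored with Dice.
import numpy as np
from sklearn.linear_model import LogisticRegression

def loo_voxel_classifier(features_by_subject, labels_by_subject):
    """features_by_subject: list of (n_voxels, 4) arrays; labels: list of (n_voxels,) 0/1."""
    dice_scores = []
    for held_out in range(len(features_by_subject)):
        X_train = np.vstack([f for i, f in enumerate(features_by_subject) if i != held_out])
        y_train = np.concatenate([l for i, l in enumerate(labels_by_subject) if i != held_out])
        clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
        pred = clf.predict(features_by_subject[held_out])
        truth = labels_by_subject[held_out]
        dice = 2 * np.sum(pred * truth) / (pred.sum() + truth.sum())
        dice_scores.append(dice)
    return dice_scores
```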



To pretrain or not to pretrain? A case study of domain-specific pretraining for semantic segmentation in histopathology
Subtitled “arXiv:2307.03275,” T. Kataria, B. Knudsen, S. Elhabian. 2023.

Annotating medical imaging datasets is costly, so fine-tuning (or transfer learning) is the most effective method for digital pathology vision applications such as disease classification and semantic segmentation. However, due to texture bias in models trained on real-world images, transfer learning for histopathology applications might result in underperforming models, which motivates the use of unlabeled histopathology data and self-supervised methods to discover domain-specific characteristics. Here, we tested the premise that histopathology-specific pretrained models provide better initializations for pathology vision tasks, i.e., gland and cell segmentation. In this study, we compare the performance of gland and cell segmentation tasks with domain-specific and non-domain-specific pretrained weights. Moreover, we investigate the data size at which domain-specific pretraining produces a statistically significant difference in performance. In addition, we investigated whether domain-specific initialization improves out-of-domain testing on distinct datasets for the same task. The results indicate that the performance gain from domain-specific pretraining depends on both the task and the size of the training dataset. In instances with limited dataset sizes, a significant improvement in gland segmentation performance was observed, whereas models trained on cell segmentation datasets exhibited no improvement.



ADASSM: Adversarial Data Augmentation in Statistical Shape Models From Images
Subtitled “arXiv:2307.03273v2,” M.S.T. Karanam, T. Kataria, S. Elhabian. 2023.

Statistical shape models (SSM) have been well-established as an excellent tool for identifying variations in the morphology of anatomy across the underlying population. Shape models use a consistent shape representation across all the samples in a given cohort, which helps to compare shapes and identify the variations that can detect pathologies and help in formulating treatment plans. In medical imaging, computing these shape representations from CT/MRI scans requires time-intensive preprocessing operations, including but not limited to anatomy segmentation annotations, registration, and texture denoising. Deep learning models have demonstrated exceptional capabilities in learning shape representations directly from volumetric images, giving rise to highly effective and efficient Image-to-SSM networks. Nevertheless, these models are data-hungry, and due to the limited availability of medical data, deep learning models tend to overfit. Offline data augmentation techniques that use kernel density estimation (KDE)-based methods for generating shape-augmented samples have successfully aided Image-to-SSM networks in achieving accuracy comparable to traditional SSM methods. However, these augmentation methods focus on shape augmentation, whereas deep learning models exhibit an image-based texture bias that results in sub-optimal models. This paper introduces a novel strategy for on-the-fly data augmentation for the Image-to-SSM framework by leveraging data-dependent noise generation, or texture augmentation. The proposed framework is trained as an adversary to the Image-to-SSM network, generating diverse and challenging noisy samples. Our approach achieves improved accuracy by encouraging the model to focus on the underlying geometry rather than relying solely on pixel values.
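An FGSM-style step captures the adversarial, data-dependent noise idea; the sketch below is illustrative, with the step size and loss function left as assumptions rather than the paper's trained adversary.

```python
# Sketch: perturb the input volume in the direction that increases the
# Image-to-SSM loss, so training sees texture-corrupted samples and the
# network must rely on geometry rather than pixel values.
import torch

def adversarial_augment(model, images, target_shapes, loss_fn, epsilon=0.01):
    images = images.clone().requires_grad_(True)
    loss = loss_fn(model(images), target_shapes)
    grad, = torch.autograd.grad(loss, images)         # gradient w.r.t. the input only
    return (images + epsilon * grad.sign()).detach()  # data-dependent noisy sample
```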



Editorial: Image-based computational approaches for personalized cardiovascular medicine: improving clinical applicability and reliability through medical imaging and experimental data
S. Pirola, A. Arzani, C. Chiastra, F. Sturla. In Frontiers in Medical Technology, Vol. 5, 2023.
DOI: 10.3389/fmedt.2023.1222837



Modeling the Shape of the Brain Connectome via Deep Neural Networks
H. Dai, M. Bauer, P.T. Fletcher, S. Joshi. In Information Processing in Medical Imaging, Springer Nature Switzerland, pp. 291--302. 2023.
ISBN: 978-3-031-34048-2

The goal of diffusion-weighted magnetic resonance imaging (DWI) is to infer the structural connectivity of an individual subject's brain in vivo. To statistically study the variability and differences between normal and abnormal brain connectomes, a mathematical model of the neural connections is required. In this paper, we represent the brain connectome as a Riemannian manifold, which allows us to model neural connections as geodesics. This leads to the challenging problem of estimating a Riemannian metric that is compatible with the DWI data, i.e., a metric such that the geodesic curves represent individual fiber tracts of the connectome. We reduce this problem to that of solving a highly nonlinear set of partial differential equations (PDEs) and study the applicability of convolutional encoder-decoder neural networks (CEDNNs) for solving this geometrically motivated PDE. Our method achieves excellent performance in the alignment of geodesics with white matter pathways and tackles a long-standing issue in previous geodesic tractography methods: the inability to recover crossing fibers with high fidelity. Code is available at https://github.com/aarentai/Metric-Cnn-3D-IPMI.
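For background, the geodesics referred to above are curves satisfying the standard geodesic equation for a Riemannian metric g (textbook material, not the paper's specific PDE system):

```latex
\ddot{\gamma}^{k} + \Gamma^{k}_{ij}\,\dot{\gamma}^{i}\dot{\gamma}^{j} = 0,
\qquad
\Gamma^{k}_{ij} = \tfrac{1}{2}\, g^{k\ell}\left(\partial_{i} g_{j\ell} + \partial_{j} g_{i\ell} - \partial_{\ell} g_{ij}\right)
```

Estimating a metric compatible with the DWI data then amounts to choosing g so that these curves follow white matter fiber tracts.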



Point2SSM: Learning Morphological Variations of Anatomies from Point Cloud
Subtitled “arXiv:2305.14486,” J. Adams, S. Elhabian. 2023.

We introduce Point2SSM, a novel unsupervised learning approach that can accurately construct correspondence-based statistical shape models (SSMs) of anatomy directly from point clouds. SSMs are crucial in clinical research for analyzing the population-level morphological variation in bones and organs. However, traditional methods for creating SSMs have limitations that hinder their widespread adoption, such as the need for noise-free surface meshes or binary volumes, reliance on assumptions or predefined templates, and simultaneous optimization of the entire cohort leading to lengthy inference times given new data. Point2SSM overcomes these barriers by providing a data-driven solution that infers SSMs directly from raw point clouds, reducing inference burdens and increasing applicability as point clouds are more easily acquired. Deep learning on 3D point clouds has seen recent success in unsupervised representation learning, point-to-point matching, and shape correspondence; however, their application to constructing SSMs of anatomies is largely unexplored. In this work, we benchmark state-of-the-art point cloud deep networks on the task of SSM and demonstrate that they are not robust to the challenges of anatomical SSM, such as noisy, sparse, or incomplete input and significantly limited training data. Point2SSM addresses these challenges via an attention-based module that provides correspondence mappings from learned point features. We demonstrate that the proposed method significantly outperforms existing networks in terms of both accurate surface sampling and correspondence, better capturing population-level statistics.
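The attention-based correspondence idea can be sketched briefly; the learned query matrix and dimensions below are illustrative stand-ins, not the published Point2SSM architecture.

```python
# Sketch: learned queries attend over per-point features to produce a fixed,
# ordered set of correspondence points from an unordered input cloud.
import torch
import torch.nn as nn

class AttentionCorrespondence(nn.Module):
    def __init__(self, feat_dim=128, n_correspondences=256):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(n_correspondences, feat_dim))

    def forward(self, point_feats, points):
        # point_feats: (B, N, F) learned features; points: (B, N, 3) input cloud
        attn = torch.softmax(self.queries @ point_feats.transpose(1, 2), dim=-1)  # (B, M, N)
        return attn @ points     # (B, M, 3): ordered correspondence points
```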



Image2SSM: Reimagining Statistical Shape Models from Images with Radial Basis Functions
Subtitled “arXiv:2305.11946,” H. Xu, S. Elhabian. 2023.

Statistical shape modeling (SSM) is an essential tool for analyzing variations in anatomical morphology. In a typical SSM pipeline, 3D anatomical images, gone through segmentation and rigid registration, are represented using lower-dimensional shape features, on which statistical analysis can be performed. Various methods for constructing compact shape representations have been proposed, but they involve laborious and costly steps. We propose Image2SSM, a novel deep-learning-based approach for SSM that leverages image-segmentation pairs to learn a radial-basis-function (RBF)-based representation of shapes directly from images. This RBF-based shape representation offers a rich self-supervised signal for the network to estimate a continuous, yet compact representation of the underlying surface that can adapt to complex geometries in a data-driven manner. Image2SSM can characterize populations of biological structures of interest by constructing statistical landmark-based shape models of ensembles of anatomical shapes while requiring minimal parameter tuning and no user assistance. Once trained, Image2SSM can be used to infer low-dimensional shape representations from new unsegmented images, paving the way toward scalable approaches for SSM, especially when dealing with large cohorts. Experiments on synthetic and real datasets show the efficacy of the proposed method compared to the state-of-the-art correspondence-based method for SSM.
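For background on the representation, an implicit surface can be fit with RBFs by interpolating zero values on the surface and signed offsets along normals; the sketch below (using SciPy's RBFInterpolator) illustrates the representation only, not the paper's learning pipeline.

```python
# Sketch: fit an RBF f with f = 0 on the surface and f = ±eps at points offset
# along the normals; the zero level set of f reconstructs the surface.
import numpy as np
from scipy.interpolate import RBFInterpolator

def fit_implicit_surface(surface_pts, normals, eps=0.5):
    """surface_pts, normals: (N, 3) arrays with unit normals."""
    centers = np.vstack([surface_pts,
                         surface_pts + eps * normals,
                         surface_pts - eps * normals])
    values = np.concatenate([np.zeros(len(surface_pts)),
                             np.full(len(surface_pts), eps),
                             np.full(len(surface_pts), -eps)])
    return RBFInterpolator(centers, values, kernel="thin_plate_spline")
```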



Mesh2SSM: From Surface Meshes to Statistical Shape Models of Anatomy
Subtitled “arXiv:2305.07805,” K. Iyer, S. Elhabian. 2023.

Statistical shape modeling is the computational process of discovering significant shape parameters from segmented anatomies captured by medical images (such as MRI and CT scans), which can fully describe subject-specific anatomy in the context of a population. The presence of substantial non-linear variability in human anatomy often makes the traditional shape modeling process challenging. Deep learning techniques can learn complex non-linear representations of shapes and generate statistical shape models that are more faithful to the underlying population-level variability. However, existing deep learning models still have limitations and require established/optimized shape models for training. We propose Mesh2SSM, a new approach that leverages unsupervised, permutation-invariant representation learning to estimate how to deform a template point cloud to subject-specific meshes, forming a correspondence-based shape model. Mesh2SSM can also learn a population-specific template, reducing any bias due to template selection. The proposed method operates directly on meshes and is computationally efficient, making it an attractive alternative to traditional and deep learning-based SSM approaches.
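The template-deformation idea can be sketched with a PointNet-style, permutation-invariant encoder; all modules below are illustrative stand-ins, not the published Mesh2SSM architecture.

```python
# Sketch: encode the subject's points into a permutation-invariant global code,
# then predict per-point displacements of a shared template; the deformed
# template gives correspondences across subjects.
import torch
import torch.nn as nn

class TemplateDeformer(nn.Module):
    def __init__(self, template, feat_dim=128):
        super().__init__()
        self.template = nn.Parameter(template, requires_grad=False)  # (M, 3)
        self.encoder = nn.Sequential(nn.Linear(3, feat_dim), nn.ReLU(),
                                     nn.Linear(feat_dim, feat_dim))
        self.decoder = nn.Sequential(nn.Linear(feat_dim + 3, feat_dim), nn.ReLU(),
                                     nn.Linear(feat_dim, 3))

    def forward(self, subject_pts):
        # subject_pts: (B, N, 3); max pooling makes the code permutation-invariant
        code = self.encoder(subject_pts).max(dim=1).values           # (B, F)
        B, M = subject_pts.size(0), self.template.size(0)
        tpl = self.template.unsqueeze(0).expand(B, M, 3)
        inp = torch.cat([tpl, code.unsqueeze(1).expand(B, M, -1)], dim=-1)
        return tpl + self.decoder(inp)   # deformed template = correspondence model
```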