
Image Analysis

SCI's imaging work addresses fundamental questions in 2D and 3D image processing, including filtering, segmentation, surface reconstruction, and shape analysis. In low-level image processing, this effort has produced new nonparametric methods for modeling image statistics, which have resulted in better algorithms for denoising and reconstruction. Work with particle systems has led to new methods for visualizing and analyzing 3D surfaces. Our work in image processing also includes applications of advanced computing to 3D images, which has resulted in new parallel algorithms and real-time implementations on graphics processing units (GPUs). Application areas include medical image analysis, biological image processing, defense, environmental monitoring, and oil and gas.


Faculty:

Ross Whitaker: Segmentation
Sarang Joshi: Shape Statistics, Segmentation, Brain Atlasing
Tolga Tasdizen: Image Processing, Machine Learning
Chris Johnson: Diffusion Tensor Analysis
Shireen Elhabian: Image Analysis, Computer Vision


Publications in Image Analysis:


A Pathologist-Informed Workflow for Classification of Prostate Glands in Histopathology
A. Ferrero, B. Knudsen, D. Sirohi, R. Whitaker. In Medical Optical Imaging and Virtual Microscopy Image Analysis, Springer Nature Switzerland, pp. 53--62. 2022.
DOI: 10.1007/978-3-031-16961-8_6

Pathologists diagnose and grade prostate cancer by examining tissue from needle biopsies on glass slides. The cancer's severity and risk of metastasis are determined by the Gleason grade, a score based on the organization and morphology of prostate cancer glands. For diagnostic work-up, pathologists first locate glands in the whole biopsy core, and---if they detect cancer---they assign a Gleason grade. This time-consuming process is subject to errors and significant inter-observer variability, despite strict diagnostic criteria. This paper proposes an automated workflow that follows pathologists' modus operandi, isolating and classifying multi-scale patches of individual glands in whole slide images (WSI) of biopsy tissues using distinct steps: (1) two fully convolutional networks segment epithelium versus stroma and gland boundaries, respectively; (2) a classifier network separates benign from cancer glands at high magnification; and (3) an additional classifier predicts the grade of each cancer gland at low magnification. Altogether, this process provides a gland-specific approach for prostate cancer grading that we compare against other machine-learning-based grading methods.
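
As a concrete illustration of the staged design, the following is a minimal PyTorch sketch of a gland-wise pipeline of this form. The model choices (FCN-ResNet50 for the two segmentation steps, ResNet-18 for the two classifiers) and the classify_gland helper are illustrative assumptions rather than the authors' implementation; gland isolation from the segmentation masks and all training are omitted.

    import torch
    from torchvision.models import resnet18
    from torchvision.models.segmentation import fcn_resnet50

    # Step 1: two fully convolutional networks, one for epithelium-vs-stroma
    # and one for gland boundaries (their outputs would be combined to
    # isolate individual glands).
    epithelium_fcn = fcn_resnet50(num_classes=2)
    boundary_fcn = fcn_resnet50(num_classes=2)

    # Step 2: benign-vs-cancer classifier on high-magnification gland patches.
    benign_vs_cancer = resnet18(num_classes=2)
    # Step 3: grade classifier on low-magnification patches (3 classes assumed,
    # standing in for Gleason patterns 3-5).
    grade_classifier = resnet18(num_classes=3)

    def classify_gland(patch_high, patch_low):
        """Classify a single isolated gland from a (1, 3, H, W) patch pair."""
        with torch.no_grad():
            if benign_vs_cancer(patch_high).argmax(1).item() == 0:
                return "benign"
            pattern = grade_classifier(patch_low).argmax(1).item() + 3
        return f"Gleason pattern {pattern}"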



Few-Shot Segmentation of Microscopy Images Using Gaussian Process
S. Saha, O. Choi, R. Whitaker. In Medical Optical Imaging and Virtual Microscopy Image Analysis, Springer Nature Switzerland, pp. 94--104. 2022.
DOI: 10.1007/978-3-031-16961-8_10

Few-shot segmentation has received recent attention because of its promise to segment images containing novel classes based on a handful of annotated examples. Few-shot-based machine learning methods build generic and adaptable models that can quickly learn new tasks. This approach finds potential application in many scenarios that do not benefit from large repositories of labeled data, which strongly impacts the performance of existing data-driven deep-learning algorithms. This paper presents a few-shot segmentation method for microscopy images that combines a neural-network architecture with Gaussian-process (GP) regression. The GP regression is used in the latent space of an autoencoder-based segmentation model to learn the distribution of functions from the encoded image representations to the corresponding representations of the segmentation masks in the support set. This regression analysis serves as the prior for predicting the segmentation mask for the query image. The rich latent representation built by the GP using examples in the support set significantly impacts the performance of the segmentation model, as demonstrated by extensive experimental evaluation.
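
A minimal scikit-learn sketch of the central mechanism, GP regression between latent codes over the support set. The encoders below are trivial flattening stand-ins for the paper's autoencoder, and all array shapes and names are hypothetical.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    # Stand-in encoders: in the actual model these map images and masks into
    # the latent space of an autoencoder-based segmentation network.
    def encode_image(x): return x.reshape(len(x), -1)
    def encode_mask(y):  return y.reshape(len(y), -1)

    support_imgs = np.random.rand(5, 32, 32)            # few annotated shots
    support_masks = (support_imgs > 0.5).astype(float)  # toy masks
    query_img = np.random.rand(1, 32, 32)

    # Fit a GP from image latents to mask latents over the support set, then
    # use the posterior mean as the predicted mask latent for the query; a
    # decoder (omitted) would map it back to a segmentation mask.
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0))
    gp.fit(encode_image(support_imgs), encode_mask(support_masks))
    pred_latent = gp.predict(encode_image(query_img))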



Spatiotemporal Cardiac Statistical Shape Modeling: A Data-Driven Approach
Subtitled “arXiv preprint arXiv:2209.02736,” J. Adams, N. Khan, A. Morris, S. Elhabian. 2022.

Clinical investigations of anatomy’s structural changes over time could greatly benefit from population-level quantification of shape, or spatiotemporal statistical shape modeling (SSM). Such a tool enables characterizing patient organ cycles or disease progression in relation to a cohort of interest. Constructing shape models requires establishing a quantitative shape representation (e.g., corresponding landmarks). Particle-based shape modeling (PSM) is a data-driven SSM approach that captures population-level shape variations by optimizing landmark placement. However, it assumes cross-sectional study designs and hence has limited statistical power in representing shape changes over time. Existing methods for modeling spatiotemporal or longitudinal shape changes require predefined shape atlases and pre-built shape models that are typically constructed cross-sectionally. This paper proposes a data-driven approach inspired by the PSM method to learn population-level spatiotemporal shape changes directly from shape data. We introduce a novel SSM optimization scheme that produces landmarks that are in correspondence both across the population (inter-subject) and across time-series (intra-subject). We apply the proposed method to 4D cardiac data from atrial-fibrillation patients and demonstrate its efficacy in representing the dynamic change of the left atrium. Furthermore, we show that our method outperforms an image-based approach for spatiotemporal SSM with respect to a generative time-series model, the Linear Dynamical System (LDS). An LDS fit using a spatiotemporal shape model optimized via our approach provides better generalization and specificity, indicating that it accurately captures the underlying time dependency.



Statistical Shape Modeling of Biventricular Anatomy with Shared Boundaries
Subtitled “arXiv:2209.02706v1,” K. Iyer, A. Morris, B. Zenger, K. Karnath, B.A. Orkild, O. Korshak, S. Elhabian. 2022.

Statistical shape modeling (SSM) is a valuable and powerful tool to generate a detailed representation of complex anatomy that enables quantitative analysis and the comparison of shapes and their variations. SSM applies mathematics, statistics, and computing to parse the shape into a quantitative representation (such as correspondence points or landmarks) that will help answer various questions about the anatomical variations across the population. Complex anatomical structures have many diverse parts with varying interactions or intricate architecture. For example, the heart is a four-chambered anatomy with several shared boundaries between chambers. Coordinated and efficient contraction of the chambers of the heart is necessary to adequately perfuse end organs throughout the body. Subtle shape changes within these shared boundaries of the heart can indicate potential pathological changes that lead to uncoordinated contraction and poor end-organ perfusion. Early detection and robust quantification could provide insight into ideal treatment techniques and intervention timing. However, existing SSM approaches fall short of explicitly modeling the statistics of shared boundaries. In this paper, we present a general and flexible data-driven approach for building statistical shape models of multi-organ anatomies with shared boundaries that captures morphological and alignment changes of individual anatomies and their shared boundary surfaces throughout the population. We demonstrate the effectiveness of the proposed methods using a biventricular heart dataset by developing shape models that consistently parameterize the cardiac biventricular structure and the interventricular septum (shared boundary surface) across the population data.



Discrete-Time Observations of Brownian Motion on Lie Groups and Homogeneous Spaces: Sampling and Metric Estimation
M.H. Jensen, S. Joshi, S. Sommer. In Algorithms, Vol. 15, No. 8, 2022.
ISSN: 1999-4893
DOI: 10.3390/a15080290

We present schemes for simulating Brownian bridges on complete and connected Lie groups and homogeneous spaces. We use this to construct an estimation scheme for recovering an unknown left- or right-invariant Riemannian metric on the Lie group from samples. We subsequently show how pushing forward the distributions generated by Brownian motions on the group results in distributions on homogeneous spaces that exhibit a non-trivial covariance structure. The pushforward measure gives rise to new non-parametric families of distributions on commonly occurring spaces such as spheres and symmetric positive tensors. We extend the estimation scheme to fit these distributions to homogeneous-space-valued data. We demonstrate both the simulation schemes and estimation procedures on Lie groups and homogeneous spaces, including SPD(3) = GL⁺(3)/SO(3) and S² = SO(3)/SO(2).
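
For intuition, here is a minimal SciPy sketch of simulating Brownian motion on the Lie group SO(3) by composing group exponentials of small random Lie-algebra increments (an Euler-Maruyama scheme on the group). The bridge conditioning and metric estimation developed in the paper are omitted.

    import numpy as np
    from scipy.spatial.transform import Rotation

    def brownian_motion_so3(n_steps=1000, dt=1e-3, seed=0):
        """R_{k+1} = R_k * exp(sqrt(dt) * xi), with xi random in so(3) ~ R^3."""
        rng = np.random.default_rng(seed)
        R = Rotation.identity()
        path = [R]
        for _ in range(n_steps):
            xi = np.sqrt(dt) * rng.standard_normal(3)  # Lie-algebra increment
            R = R * Rotation.from_rotvec(xi)           # group exponential step
            path.append(R)
        return path

    endpoint = brownian_motion_so3()[-1]
    print(endpoint.as_matrix())                        # a random element of SO(3)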



Relating Metopic Craniosynostosis Severity to Intracranial Pressure
J.D. Blum, J. Beiriger, C. Kalmar, R.A. Avery, S. Lang, D.F. Villavisanis, L. Cheung, D.Y. Cho, W. Tao, R. Whitaker, S.P. Bartlett, J.A. Taylor, J.A. Goldstein, J.W. Swanson. In The Journal of Craniofacial Surgery, 2022.
DOI: 10.1097/SCS.0000000000008748

Purpose:

A subset of patients with metopic craniosynostosis is noted to have elevated intracranial pressure (ICP). However, it is not known whether the propensity for elevated ICP is influenced by the severity of metopic cranial dysmorphology.

Methods:

Children with nonsyndromic single-suture metopic synostosis were prospectively enrolled and underwent optical coherence tomography to measure optic nerve head morphology. Preoperative head computed tomography scans were assessed for endocranial bifrontal angle as well as scaled metopic synostosis severity score (MSS) and cranial morphology deviation score determined by CranioRate, an automated severity classifier.
Results:

Forty-seven subjects were enrolled between 2014 and 2019, at an average age of 8.5 months at preoperative computed tomography and 11.8 months at index procedure. Fourteen patients (29.7%) had elevated optical coherence tomography parameters suggestive of elevated ICP at the time of surgery. Ten patients (21.3%) had been diagnosed with developmental delay, eight of whom demonstrated elevated ICP. There were no significant associations between measures of metopic severity and ICP. Metopic synostosis severity score and endocranial bifrontal angle were inversely correlated, as expected (r=−0.545, P<0.001). A negative correlation was noted between MSS and formally diagnosed developmental delay (r=−0.387, P=0.008). Likewise, negative correlations between age at procedure and both MSS and cranial morphology deviation score were observed (r=−0.573, P<0.001 and r=−0.312, P=0.025, respectively).
Conclusions:

Increased metopic severity was not associated with elevated ICP at the time of surgery. Patients who underwent later surgical correction showed milder phenotypic dysmorphology with an increased incidence of developmental delay.



Localization supervision of chest x-ray classifiers using label-specific eye-tracking annotation
Subtitled “arXiv:2207.09771,” R. Lanfredi, J.D. Schroeder, T. Tasdizen. 2022.

Convolutional neural networks (CNNs) have been successfully applied to chest x-ray (CXR) images. Moreover, annotated bounding boxes have been shown to improve the interpretability of a CNN in terms of localizing abnormalities. However, only a few relatively small CXR datasets containing bounding boxes are available, and collecting them is very costly. Conveniently, eye-tracking (ET) data can be collected in a non-intrusive way during the clinical workflow of a radiologist. We use ET data recorded from radiologists while dictating CXR reports to train CNNs. We extract snippets from the ET data by associating them with the dictation of keywords and use them to supervise the localization of abnormalities. We show that this method improves a model's interpretability without impacting its image-level classification.
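
A hedged PyTorch sketch of the general recipe: combine the usual image-level classification loss with a localization term that pulls the model's spatial activation map toward the gaze-derived heatmap. The loss form and all names here are illustrative assumptions, not the paper's exact formulation.

    import torch
    import torch.nn.functional as F

    def combined_loss(class_logits, labels, activation_map, gaze_heatmap, lam=0.5):
        """Classification loss plus eye-tracking localization supervision.
        activation_map is (B, 1, h, w); gaze_heatmap is (B, 1, H, W)."""
        cls = F.binary_cross_entropy_with_logits(class_logits, labels)
        # Resize the gaze heatmap to the activation map's resolution and
        # normalize both to spatial probability maps before comparing them.
        gaze = F.interpolate(gaze_heatmap, size=activation_map.shape[-2:],
                             mode="bilinear", align_corners=False)
        gaze = gaze / (gaze.sum(dim=(-2, -1), keepdim=True) + 1e-8)
        act = torch.softmax(activation_map.flatten(-2), dim=-1)
        act = act.view_as(activation_map)
        return cls + lam * F.mse_loss(act, gaze)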



Integrating atomistic simulations and machine learning to design multi-principal element alloys with superior elastic modulus
M. Grant, M. R. Kunz, K. Iyer, L. I. Held, T. Tasdizen, J. A. Aguiar, P. P. Dholabhai. In Journal of Materials Research, Springer International Publishing, pp. 1--16. 2022.

Multi-principal element, high entropy alloys (HEAs) are an emerging class of materials that have found applications across the board. Owing to the multitude of possible candidate alloys, exploration and compositional design of HEAs for targeted applications is challenging since it necessitates a rational approach to identify compositions exhibiting enriched performance. Here, we report an innovative framework that integrates molecular dynamics and machine learning to explore a large chemical-configurational space for evaluating elastic modulus of equiatomic and non-equiatomic HEAs along primary crystallographic directions. Vital thermodynamic properties and machine learning features have been incorporated to establish fundamental relationships correlating Young’s modulus with Gibbs free energy, valence electron concentration, and atomic size difference. In HEAs, as the number of elements increases …
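
The features named in the abstract (Gibbs free energy, valence electron concentration, atomic size difference) suggest a standard supervised-regression setup on simulation outputs. A toy scikit-learn sketch of that shape, with made-up data standing in for molecular-dynamics results:

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(0)
    X = rng.random((200, 3))          # columns: [G_mix, VEC, delta] (toy values)
    E = 150 + 80 * X[:, 1] - 60 * X[:, 2] + rng.normal(0, 5, 200)  # toy modulus

    model = GradientBoostingRegressor().fit(X[:150], E[:150])
    print("held-out R^2:", round(model.score(X[150:], E[150:]), 3))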



Characterization of uncertainties and model generalizability for convolutional neural network predictions of uranium ore concentrate morphology
C. A. Nizinski, C. Ly, C. Vachet, A. Hagen, T. Tasdizen, L. W. McDonald. In Chemometrics and Intelligent Laboratory Systems, Vol. 225, Elsevier, pp. 104556. 2022.
ISSN: 0169-7439
DOI: https://doi.org/10.1016/j.chemolab.2022.104556

As the capabilities of convolutional neural networks (CNNs) for image classification tasks have advanced, interest in applying deep learning techniques for determining the natural and anthropogenic origins of uranium ore concentrates (UOCs) and other unknown nuclear materials by their surface morphology characteristics has grown. But before CNNs can join the nuclear forensics toolbox alongside more traditional analytical techniques – such as scanning electron microscopy (SEM), X-ray diffractometry, mass spectrometry, radiation counting, and any number of spectroscopic methods – a deeper understanding of “black box” image classification will be required. This paper explores uncertainty quantification for convolutional neural networks and their ability to generalize to out-of-distribution (OOD) image data sets. For prediction uncertainty, Monte Carlo (MC) dropout and random image crops as variational inference techniques are implemented and characterized. Convolutional neural networks and classifiers using image features from unsupervised vector-quantized variational autoencoders (VQ-VAE) are trained using SEM images of pure, unaged, unmixed uranium ore concentrates considered “unperturbed.” OOD data sets are developed containing perturbations from the training data with respect to the chemical and physical properties of the UOCs or data collection parameters; predictions made on the perturbation sets identify where significant shortcomings exist in the current training data and techniques used to develop models for classifying uranium process history, and provide valuable insights into how datasets and classification models can be improved for better generalizability to out-of-distribution examples.
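
A minimal PyTorch sketch of MC-dropout prediction uncertainty as described: dropout stays active at inference and the class probabilities are averaged over stochastic forward passes, with their spread serving as the uncertainty estimate. The tiny CNN is a placeholder, not the paper's classifier.

    import torch
    import torch.nn as nn

    class DropoutCNN(nn.Module):
        def __init__(self, n_classes=4, p=0.5):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(8), nn.Flatten(),
                nn.Dropout(p), nn.Linear(16 * 64, n_classes))
        def forward(self, x):
            return self.net(x)

    def mc_dropout_predict(model, x, n_samples=50):
        model.train()  # keeps dropout on (safe here: the model has no BatchNorm)
        with torch.no_grad():
            probs = torch.stack([torch.softmax(model(x), dim=1)
                                 for _ in range(n_samples)])
        return probs.mean(0), probs.std(0)   # predictive mean and uncertainty

    mean, std = mc_dropout_predict(DropoutCNN(), torch.randn(2, 1, 64, 64))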



3D Photography to Quantify the Severity of Metopic Craniosynostosis
M. K. Bruce, W. Tao, J. Beiriger, C. Christensen, M. J. Pfaff, R. Whitaker, J. A. Goldstein. In The Cleft Palate-Craniofacial Journal, SAGE Publications, 2022.

Objective

This study aims to determine the utility of 3D photography for evaluating the severity of metopic craniosynostosis (MCS) using a validated, supervised machine learning (ML) algorithm.

Design/Setting/Patients

This single-center retrospective cohort study included patients who were evaluated at our tertiary care center for MCS from 2016 to 2020 and underwent both head CT and 3D photography within a 2-month period.
Main Outcome Measures

The analysis method builds on our previously established ML algorithm for evaluating MCS severity using skull shape from CT scans. In this study, we extend the model to analyze 3D photographs and correlate the severity scores from the two imaging modalities.
Results

Fourteen patients met inclusion criteria; 64.3% were male (n = 9). The mean age in years at 3D photography and CT imaging was 0.97 and 0.94, respectively. Ten patient images were obtained preoperatively, and 4 patients did not require surgery. The severity prediction of the ML algorithm correlates closely when comparing the 3D photographs to CT bone data (Spearman correlation coefficient [SCC] r = 0.75; Pearson correlation coefficient [PCC] r = 0.82).

Conclusion

The results of this study show that 3D photography is a valid alternative to CT for evaluation of head shape in MCS. Its use will provide an objective, quantifiable means of assessing outcomes in a rigorous manner while decreasing radiation exposure in this patient population.
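
The two reported coefficients are straightforward to compute with SciPy; the scores below are made-up placeholders rather than study data.

    from scipy import stats

    # Per-patient severity scores from the CT-based and photo-based models
    # (hypothetical values for illustration).
    ct_scores    = [6.1, 2.3, 8.7, 4.4, 5.0, 7.9, 1.2, 3.8]
    photo_scores = [5.8, 2.9, 8.1, 4.9, 4.2, 7.2, 1.9, 3.5]

    scc, _ = stats.spearmanr(ct_scores, photo_scores)
    pcc, _ = stats.pearsonr(ct_scores, photo_scores)
    print(f"Spearman r = {scc:.2f}, Pearson r = {pcc:.2f}")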



Deep Learning the Shape of the Brain Connectome
Subtitled “arXiv preprint arXiv:2203.06122, 2022,” H. Dai, M. Bauer, P.T. Fletcher, S.C. Joshi. 2022.

To statistically study the variability and differences between normal and abnormal brain connectomes, a mathematical model of the neural connections is required. In this paper, we represent the brain connectome as a Riemannian manifold, which allows us to model neural connections as geodesics. We show for the first time how one can leverage deep neural networks to estimate a Riemannian metric of the brain that can accommodate fiber crossings and is a natural modeling tool to infer the shape of the brain from DWMRI. Our method achieves excellent performance in geodesic-white-matter-pathway alignment and tackles the long-standing issue in previous methods: the inability to recover the crossing fibers with high fidelity.



Google Street View Images as Predictors of Patient Health Outcomes, 2017–2019
Q. C. Nguyen, T. Belnap, P. Dwivedi, A. Hossein Nazem Deligani, A. Kumar, D. Li, R. Whitaker, J. Keralis, H. Mane, X. Yue, T. T. Nguyen, T. Tasdizen, K. D. Brunisholz. In Big Data and Cognitive Computing, Vol. 6, No. 1, Multidisciplinary Digital Publishing Institute, 2022.

Collecting neighborhood data can be both time- and resource-intensive, especially across broad geographies. In this study, we leveraged 1.4 million publicly available Google Street View (GSV) images from Utah to construct indicators of the neighborhood built environment and evaluate their associations with 2017–2019 health outcomes of approximately one-third of the population living in Utah. The use of electronic medical records allows for the assessment of associations between neighborhood characteristics and individual-level health outcomes while controlling for predisposing factors, which distinguishes this study from previous GSV studies that were ecological in nature. Among 938,085 adult patients, we found that individuals living in communities in the highest tertiles of green streets and non-single-family homes have 10–27% lower diabetes, uncontrolled diabetes, hypertension, and obesity, but higher substance use disorders—controlling for age, White race, Hispanic ethnicity, religion, marital status, health insurance, and area deprivation index. Conversely, the presence of visible utility wires overhead was associated with 5–10% more diabetes, uncontrolled diabetes, hypertension, obesity, and substance use disorders. Our study found that non-single-family and green streets were related to a lower prevalence of chronic conditions, while visible utility wires and single-lane roads were connected with a higher burden of chronic conditions. These contextual characteristics can better help healthcare organizations understand the drivers of their patients’ health by further considering patients’ residential environments, which present both …



Adversarially Robust Classification by Conditional Generative Model Inversion
Subtitled “arXiv preprint arXiv:2201.04733,” M. Alirezaei, T. Tasdizen. 2022.

Most adversarial attack defense methods rely on obfuscating gradients. These methods are successful in defending against gradient-based attacks; however, they are easily circumvented by attacks which either do not use the gradient or by attacks which approximate and use the corrected gradient. Defenses that do not obfuscate gradients such as adversarial training exist, but these approaches generally make assumptions about the attack such as its magnitude. We propose a classification model that does not obfuscate gradients and is robust by construction without assuming prior knowledge about the attack. Our method casts classification as an optimization problem where we "invert" a conditional generator trained on unperturbed, natural images to find the class that generates the closest sample to the query image. We hypothesize that a potential source of brittleness against adversarial attacks is the high-to-low-dimensional nature of feed-forward classifiers which allows an adversary to find small perturbations in the input space that lead to large changes in the output space. On the other hand, a generative model is typically a low-to-high-dimensional mapping. While the method is related to Defense-GAN, the use of a conditional generative model and inversion in our model instead of the feed-forward classifier is a critical difference. Unlike Defense-GAN, which was shown to generate obfuscated gradients that are easily circumvented, we show that our method does not obfuscate gradients. We demonstrate that our model is extremely robust against black-box attacks and has improved robustness against white-box attacks compared to naturally trained, feed-forward classifiers.
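
A minimal PyTorch sketch of classification by conditional-generator inversion as described: for each candidate class, optimize a latent code so the generator reconstructs the query, then pick the class with the lowest reconstruction error. The generator G, latent dimension, and optimizer settings are assumptions.

    import torch

    def classify_by_inversion(G, x, n_classes, latent_dim=64, steps=200, lr=0.05):
        """G(z, y) is a pretrained conditional generator; x is one query image."""
        losses = []
        for y in range(n_classes):
            z = torch.zeros(1, latent_dim, requires_grad=True)
            label = torch.tensor([y])
            opt = torch.optim.Adam([z], lr=lr)
            for _ in range(steps):                  # "invert" the generator
                opt.zero_grad()
                loss = ((G(z, label) - x) ** 2).mean()
                loss.backward()
                opt.step()
            losses.append(loss.item())              # reconstruction error for y
        return min(range(n_classes), key=lambda y: losses[y])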



Translational computer science at the scientific computing and imaging institute
C. R. Johnson. In Journal of Computational Science, Vol. 52, pp. 101217. 2021.
ISSN: 1877-7503
DOI: https://doi.org/10.1016/j.jocs.2020.101217

The Scientific Computing and Imaging (SCI) Institute at the University of Utah evolved from the SCI research group, started in 1994 by Professors Chris Johnson and Rob MacLeod. Over time, research centers funded by the National Institutes of Health, Department of Energy, and State of Utah significantly spurred growth, and SCI became a permanent interdisciplinary research institute in 2000. The SCI Institute is now home to more than 150 faculty, students, and staff. The history of the SCI Institute is underpinned by a culture of multidisciplinary, collaborative research, which led to its emergence as an internationally recognized leader in the development and use of visualization, scientific computing, and image analysis research to solve important problems in a broad range of domains in biomedicine, science, and engineering. A particular hallmark of SCI Institute research is the creation of open source software systems, including the SCIRun scientific problem-solving environment, Seg3D, ImageVis3D, Uintah, ViSUS, Nektar++, VisTrails, FluoRender, and FEBio. At this point, the SCI Institute has made more than 50 software packages broadly available to the scientific community under open-source licensing and supports them through web pages, documentation, and user groups. While the vast majority of academic research software is written and maintained by graduate students, the SCI Institute employs several professional software developers to help create, maintain, and document robust, tested, well-engineered open source software. The story of how and why we worked, and often struggled, to make professional software engineers an integral part of an academic research institute is crucial to the larger story of the SCI Institute’s success in translational computer science (TCS).



Comparing radiologists’ gaze and saliency maps generated by interpretability methods for chest x-rays
Subtitled “arXiv:2112.11716v1,” R.B. Lanfredi, A. Arora, T. Drew, J.D. Schroeder, T. Tasdizen. 2021.

The interpretability of medical image analysis models is considered a key research field. We use a dataset of eye-tracking data from five radiologists to compare the outputs of interpretability methods against the heatmaps representing where radiologists looked. We conduct a class-independent analysis of the saliency maps generated by two methods selected from the literature: Grad-CAM and attention maps from an attention-gated model. For the comparison, we use shuffled metrics, which avoid biases from fixation locations. We achieve scores comparable to an interobserver baseline in one shuffled metric, highlighting the potential of saliency maps from Grad-CAM to mimic a radiologist’s attention over an image. We also divide the dataset into subsets to evaluate in which cases similarities are higher.
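
Shuffled metrics score a saliency map on the fixations recorded for its own image while drawing negatives from fixations recorded on other images, which discounts generic location bias such as the center bias. A generic sketch of a shuffled AUC (not necessarily the exact metric used in the paper):

    import numpy as np
    from sklearn.metrics import roc_auc_score

    def shuffled_auc(saliency, fixations, other_fixations):
        """saliency: (H, W) map; fixations: (N, 2) row/col indices on this
        image; other_fixations: (M, 2) indices pooled from other images."""
        pos = saliency[fixations[:, 0], fixations[:, 1]]
        neg = saliency[other_fixations[:, 0], other_fixations[:, 1]]
        y = np.r_[np.ones(len(pos)), np.zeros(len(neg))]
        return roc_auc_score(y, np.r_[pos, neg])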



Bridge Simulation on Lie Groups and Homogeneous Spaces with Application to Parameter Estimation
Subtitled “arXiv:2112.00866,” M. Højgaard Jensen, L. Hilgendorf, S. Joshi, S. Sommer. 2021.



Prediction of Femoral Head Coverage from Articulated Statistical Shape Models of Patients with Developmental Dysplasia of the Hip
P. R. Atkins, P. Agrawal, J. D. Mozingo, K. Uemura, K. Tokunaga, C. L. Peters, S. Y. Elhabian, R. T. Whitaker, A. E. Anderson. In Journal of Orthopaedic Research, Wiley, 2021.
DOI: 10.1002/jor.25227

Developmental dysplasia of the hip (DDH) is commonly described as reduced femoral head coverage due to anterolateral acetabular deficiency. Although reduced coverage is the defining trait of DDH, more subtle and localized anatomic features of the joint are also thought to contribute to symptom development and degeneration. These features are challenging to identify using conventional approaches. Herein, we assessed the morphology of the full femur and hemi-pelvis using an articulated statistical shape model (SSM). The model determined the morphological and pose-based variations associated with DDH in a population of Japanese females and established which of these variations predict coverage. Computed tomography images of 83 hips from 47 patients were segmented for input into a correspondence-based SSM. The dominant modes of variation in the model initially represented scale and pose. After removal of these factors through individual bone alignment, femoral version and neck-shaft angle, pelvic curvature, and acetabular version dominated the observed variation. Femoral head oblateness and prominence of the acetabular rim and various muscle attachment sites of the femur and hemi-pelvis were found to predict 3D CT-based coverage measurements (R² = 0.5-0.7 for the full bones, R² = 0.9 for the joint).
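
The "dominant modes of variation" of a correspondence-based SSM are typically the principal components of the stacked correspondence points. A minimal scikit-learn sketch with random stand-in data (the subject count echoes the paper's 83 hips; the number of correspondence points is made up):

    import numpy as np
    from sklearn.decomposition import PCA

    corr = np.random.rand(83, 1024, 3)        # (subjects, points, xyz) stand-in
    X = corr.reshape(len(corr), -1)           # flatten each shape to one row

    pca = PCA(n_components=10).fit(X)
    print("variance explained:", np.round(pca.explained_variance_ratio_[:4], 3))

    # Reconstruct a shape two standard deviations along the first mode.
    k, c = 0, 2.0
    shape = (pca.mean_ + c * np.sqrt(pca.explained_variance_[k])
             * pca.components_[k]).reshape(1024, 3)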



Validation of Artificial Intelligence Severity Assessment in Metopic Craniosynostosis
A. Junn, J. Dinis, S. C. Hauc, M. K. Bruce, K. E. Park, W. Tao, C. Christensen, R. Whitaker, J. A. Goldstein, M. Alperovich. In The Cleft Palate-Craniofacial Journal, SAGE Publications, 2021.
DOI: https://doi.org/10.1177/10556656211061021

Objective
Several severity metrics have been developed for metopic craniosynostosis, including a recent machine learning-derived algorithm. This study assessed the diagnostic concordance between machine learning and previously published severity indices.

Design
Preoperative computed tomography (CT) scans of patients who underwent surgical correction of metopic craniosynostosis were quantitatively analyzed for severity. Each scan was manually measured to derive manual severity scores and also received a scaled metopic severity score (MSS) assigned by the machine learning algorithm. Regression analysis was used to correlate manually captured measurements to MSS. ROC analysis was performed for each severity metric, and the metrics were compared on how accurately they distinguished cases of metopic synostosis from controls.
Results
In total, 194 CT scans were analyzed, 167 with metopic synostosis and 27 controls. The mean scaled MSS for the patients with metopic synostosis was 6.18 ± 2.53 compared to 0.60 ± 1.25 for controls. Multivariable regression analyses yielded an R-square of 0.66, with significant manual measurements of endocranial bifrontal angle (EBA) (P = 0.023), posterior angle of the anterior cranial fossa (P < 0.001), temporal depression angle (P = 0.042), age (P < 0.001), biparietal distance (P < 0.001), interdacryon distance (P = 0.033), and orbital width (P < 0.001). ROC analysis demonstrated a high diagnostic value of the MSS (AUC = 0.96, P < 0.001), which was comparable to other validated indices including the adjusted EBA (AUC = 0.98), EBA (AUC = 0.97), and biparietal/bitemporal ratio (AUC = 0.95).
Conclusions
The machine learning algorithm offers an objective assessment of morphologic severity that provides a reliable composite impression of severity. The generated score is comparable to other severity indices in ability to distinguish cases of metopic synostosis from controls.
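
AUC values like those reported come directly from scikit-learn; the scores below are synthetic draws matching the reported group means and standard deviations, for illustration only.

    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    mss = np.r_[rng.normal(6.18, 2.53, 167),   # metopic synostosis cases
                rng.normal(0.60, 1.25, 27)]    # controls
    y = np.r_[np.ones(167), np.zeros(27)]
    print("MSS AUC =", round(roc_auc_score(y, mss), 2))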



Determining the Composition of a Mixed Material with Synthetic Data
C. Ly, C. A. Nizinski, A. Toydemir, C. Vachet, L. W. McDonald, T. Tasdizen. In Microscopy and Microanalysis, Cambridge University Press, pp. 1--11. 2021.
DOI: 10.1017/S1431927621012915

Determining the composition of a mixed material is an open problem that has attracted the interest of researchers in many fields. In our recent work, we proposed a novel approach to determine the composition of a mixed material using convolutional neural networks (CNNs). In machine learning, a model “learns” a specific task for which it is designed through data. Hence, obtaining a dataset of mixed materials is required to develop CNNs for the task of estimating the composition. However, the proposed method instead creates synthetic data of mixed materials generated using only images of the pure materials present in those mixtures. Thus, it eliminates the prohibitive cost and tedious process of collecting images of mixed materials. The motivation for this study is to provide mathematical details of the proposed approach in addition to extensive experiments and analyses. We examine the approach on two datasets to demonstrate the ease of extending the proposed approach to any mixtures. We perform experiments to demonstrate that the proposed approach can accurately determine the presence of the materials and estimate the composition of a mixed material with sufficient precision. Moreover, we provide analyses to strengthen the validation and benefits of the proposed approach.
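
A toy sketch of the synthetic-mixture idea: composite patches sampled from images of the pure materials according to target fractions. The patch-tiling scheme is an illustrative simplification, not the paper's exact generation procedure.

    import numpy as np

    def synthesize_mixture(pure_images, fractions, patch=32, grid=8, seed=0):
        """Tile a synthetic image of a mixture: each tile is cropped from one
        pure material's image, chosen according to the composition fractions."""
        rng = np.random.default_rng(seed)
        out = np.zeros((grid * patch, grid * patch))
        for i in range(grid):
            for j in range(grid):
                src = pure_images[rng.choice(len(pure_images), p=fractions)]
                r = rng.integers(0, src.shape[0] - patch)
                c = rng.integers(0, src.shape[1] - patch)
                out[i*patch:(i+1)*patch, j*patch:(j+1)*patch] = \
                    src[r:r+patch, c:c+patch]
        return out

    pure = [np.random.rand(256, 256) for _ in range(2)]  # stand-in pure images
    mixed = synthesize_mixture(pure, fractions=[0.7, 0.3])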



Computational Image Techniques for Analyzing Lanthanide and Actinide Morphology
C. A. Nizinski, C. Ly, L. W. McDonald IV, T. Tasdizen. In Rare Earth Elements and Actinides: Progress in Computational Science Applications, Ch. 6, pp. 133-155. 2021.
DOI: 10.1021/bk-2021-1388.ch006

This chapter introduces computational image analysis techniques and how they may be used for material characterization as it pertains to lanthanide and actinide chemistry. Specifically, the underlying theory behind particle segmentation, texture analysis, and convolutional neural networks for material characterization is briefly summarized. The variety of particle segmentation techniques that have been used to effectively measure the size and shape of morphological features from scanning electron microscope images is discussed. In addition, the extraction of image texture features via gray-level co-occurrence matrices and angle measurement techniques is described and demonstrated. To conclude, the application of convolutional neural networks to lanthanide and actinide materials science challenges is described, with applications to image classification, feature extraction, and prediction of a material's morphology.
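
The gray-level co-occurrence matrix (GLCM) features described are available in scikit-image; a minimal sketch on a stand-in image:

    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    img = (np.random.rand(128, 128) * 255).astype(np.uint8)  # stand-in for SEM

    # Co-occurrence matrices at two offsets and two angles, then summarized
    # with standard Haralick-style texture properties.
    glcm = graycomatrix(img, distances=[1, 2], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    for prop in ("contrast", "homogeneity", "energy", "correlation"):
        print(prop, graycoprops(glcm, prop).mean())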