2023
M. Hu, K. Zhang, Q. Nguyen, T. Tasdizen.
The effects of passive design on indoor thermal comfort and energy savings for residential buildings in hot climates: A systematic review, In Urban Climate, Vol. 49, pp. 101466. 2023.
DOI: https://doi.org/10.1016/j.uclim.2023.101466
In this study, a systematic review and meta-analysis were conducted to identify, categorize, and investigate the effectiveness of passive cooling strategies (PCSs) for residential buildings. Forty-two studies published between 2000 and 2021 were reviewed; they examined the effects of PCSs on indoor temperature decrease, cooling load reduction, energy savings, and thermal comfort hour extension. In total, 30 passive strategies were identified and classified into three categories: design approach, building envelope, and passive cooling system. The review found that using various passive strategies can achieve, on average, (i) an indoor temperature decrease of 2.2 °C, (ii) a cooling load reduction of 31%, (iii) energy savings of 29%, and (iv) a thermal comfort hour extension of 23%. Moreover, the five most effective passive strategies were identified, as were the differences between hot-dry and hot-humid climates.
M. Shao, T. Tasdizen, S. Joshi.
Analyzing the Domain Shift Immunity of Deep Homography Estimation, Subtitled arXiv:2304.09976v1, 2023.
Homography estimation is a basic image-alignment method in many applications. Recently, with the development of convolutional neural networks (CNNs), some learning-based approaches have shown great success in this task. However, performance across different domains has never been researched. Unlike other common tasks (e.g., classification, detection, segmentation), CNN-based homography estimation models show a domain shift immunity, which means a model can be trained on one dataset and tested on another without any transfer learning. To explain this unusual performance, we need to determine how CNNs estimate homography. In this study, we first show the domain shift immunity of different deep homography estimation models. We then use a shallow network with a specially designed dataset to analyze the features used for estimation. The results show that networks use low-level texture information to estimate homography. We also design experiments comparing performance across different texture densities and across distorted image features on common datasets to confirm our findings. Based on these findings, we provide an explanation of the domain shift immunity of deep homography estimation.
2022
M. Alirezaei, T. Tasdizen.
Adversarially Robust Classification by Conditional Generative Model Inversion, Subtitled arXiv preprint arXiv:2201.04733, 2022.
Most adversarial attack defense methods rely on obfuscating gradients. These methods are successful in defending against gradient-based attacks; however, they are easily circumvented by attacks which either do not use the gradient or by attacks which approximate and use the corrected gradient. Defenses that do not obfuscate gradients such as adversarial training exist, but these approaches generally make assumptions about the attack such as its magnitude. We propose a classification model that does not obfuscate gradients and is robust by construction without assuming prior knowledge about the attack. Our method casts classification as an optimization problem where we "invert" a conditional generator trained on unperturbed, natural images to find the class that generates the closest sample to the query image. We hypothesize that a potential source of brittleness against adversarial attacks is the high-to-low-dimensional nature of feed-forward classifiers which allows an adversary to find small perturbations in the input space that lead to large changes in the output space. On the other hand, a generative model is typically a low-to-high-dimensional mapping. While the method is related to Defense-GAN, the use of a conditional generative model and inversion in our model instead of the feed-forward classifier is a critical difference. Unlike Defense-GAN, which was shown to generate obfuscated gradients that are easily circumvented, we show that our method does not obfuscate gradients. We demonstrate that our model is extremely robust against black-box attacks and has improved robustness against white-box attacks compared to naturally trained, feed-forward classifiers.
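The inversion step described above can be illustrated with a toy sketch. The following assumes a linear per-class generator in place of the paper's trained conditional GAN; all dimensions and names are illustrative. Classification picks the class whose generator reconstructs the query image most closely.

```python
import numpy as np

rng = np.random.default_rng(0)
n_classes, z_dim, x_dim = 3, 4, 16

# Toy conditional "generator": one linear map per class (a stand-in for a cGAN).
W = rng.normal(size=(n_classes, x_dim, z_dim))
b = rng.normal(size=(n_classes, x_dim))

def invert(x, c, steps=200):
    """Gradient descent on z to minimize ||G_c(z) - x||^2; returns best loss."""
    lr = 1.0 / (2.0 * np.linalg.norm(W[c], 2) ** 2)  # safe step for this quadratic
    z = np.zeros(z_dim)
    for _ in range(steps):
        r = W[c] @ z + b[c] - x           # reconstruction residual
        z -= lr * 2.0 * W[c].T @ r        # gradient of squared error w.r.t. z
    r = W[c] @ z + b[c] - x
    return float(r @ r)

def classify(x):
    # Predict the class whose generator reconstructs x with the lowest error.
    return int(np.argmin([invert(x, c) for c in range(n_classes)]))

# Sanity check: a sample drawn from class 1's generator should recover class 1.
x = W[1] @ rng.normal(size=z_dim) + b[1]
print(classify(x))
```

Because the "classifier" is an optimization over a low-to-high-dimensional generator rather than a feed-forward map, there is no single gradient path from input to label for an attacker to exploit, which is the intuition the abstract describes.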
M. Alirezaei, Q.C. Nguyen, R. Whitaker, T. Tasdizen.
Multi-Task Classification for Improved Health Outcome Prediction Based on Environmental Indicators, In IEEE Access, 2022.
DOI: 10.1109/ACCESS.2023.3295777
The influence of the neighborhood environment on health outcomes has been widely recognized in various studies. Google street view (GSV) images offer a unique and valuable tool for evaluating neighborhood environments on a large scale. By annotating the images with labels indicating the presence or absence of certain neighborhood features, we can develop classifiers that can automatically analyze and evaluate the environment. However, labeling GSV images on a large scale is a time-consuming and labor-intensive task. Considering these challenges, we propose using a multi-task classifier to improve training a classifier with limited supervised GSV data. Our multi-task classifier utilizes readily available, inexpensive online images collected from Flickr as a related classification task. The hypothesis is that a classifier trained on multiple related tasks is less likely to overfit to small amounts of training data and generalizes better to unseen data. We leverage the power of multiple related tasks to improve the classifier’s overall performance and generalization capability. Here we show that, with the proposed learning paradigm, predicted labels for GSV test images are more accurate. Across different environment indicators, the accuracy, F1 score and balanced accuracy increase by up to 6% in the multi-task learning framework compared to its single-task learning counterpart. The enhanced accuracy of the predicted labels obtained through the multi-task classifier contributes to a more reliable and precise regression analysis determining the correlation between predicted built environment indicators and health outcomes. The R2 values calculated for different health outcomes improve by up to 4% using multi-task learning detected indicators.
M. Grant, M. R. Kunz, K. Iyer, L. I. Held, T. Tasdizen, J. A. Aguiar, P. P. Dholabhai.
Integrating atomistic simulations and machine learning to design multi-principal element alloys with superior elastic modulus, In Journal of Materials Research, Springer International Publishing, pp. 1--16. 2022.
Multi-principal element, high entropy alloys (HEAs) are an emerging class of materials that have found applications across the board. Owing to the multitude of possible candidate alloys, exploration and compositional design of HEAs for targeted applications is challenging since it necessitates a rational approach to identify compositions exhibiting enriched performance. Here, we report an innovative framework that integrates molecular dynamics and machine learning to explore a large chemical-configurational space for evaluating elastic modulus of equiatomic and non-equiatomic HEAs along primary crystallographic directions. Vital thermodynamic properties and machine learning features have been incorporated to establish fundamental relationships correlating Young’s modulus with Gibbs free energy, valence electron concentration, and atomic size difference. In HEAs, as the number of elements increases …
R. Lanfredi, J.D. Schroeder, T. Tasdizen.
Localization supervision of chest x-ray classifiers using label-specific eye-tracking annotation, Subtitled arXiv:2207.09771, 2022.
Convolutional neural networks (CNNs) have been successfully applied to chest x-ray (CXR) images. Moreover, annotated bounding boxes have been shown to improve the interpretability of a CNN in terms of localizing abnormalities. However, only a few relatively small CXR datasets containing bounding boxes are available, and collecting them is very costly. Conveniently, eye-tracking (ET) data can be collected in a non-intrusive way during the clinical workflow of a radiologist. We use ET data recorded from radiologists while dictating CXR reports to train CNNs. We extract snippets from the ET data by associating them with the dictation of keywords and use them to supervise the localization of abnormalities. We show that this method improves a model's interpretability without impacting its image-level classification.
Q. C. Nguyen, T. Belnap, P. Dwivedi, A. Hossein Nazem Deligani, A. Kumar, D. Li, R. Whitaker, J. Keralis, H. Mane, X. Yue, T. T. Nguyen, T. Tasdizen, K. D. Brunisholz.
Google Street View Images as Predictors of Patient Health Outcomes, 2017–2019, In Big Data and Cognitive Computing, Vol. 6, No. 1, Multidisciplinary Digital Publishing Institute, 2022.
Collecting neighborhood data can be both time- and resource-intensive, especially across broad geographies. In this study, we leveraged 1.4 million publicly available Google Street View (GSV) images from Utah to construct indicators of the neighborhood built environment and evaluate their associations with 2017–2019 health outcomes of approximately one-third of the population living in Utah. The use of electronic medical records allows for the assessment of associations between neighborhood characteristics and individual-level health outcomes while controlling for predisposing factors, which distinguishes this study from previous GSV studies that were ecological in nature. Among 938,085 adult patients, we found that individuals living in communities in the highest tertiles of green streets and non-single-family homes have 10–27% lower diabetes, uncontrolled diabetes, hypertension, and obesity, but higher substance use disorders—controlling for age, White race, Hispanic ethnicity, religion, marital status, health insurance, and area deprivation index. Conversely, the presence of visible utility wires overhead was associated with 5–10% more diabetes, uncontrolled diabetes, hypertension, obesity, and substance use disorders. Our study found that non-single-family and green streets were related to a lower prevalence of chronic conditions, while visible utility wires and single-lane roads were connected with a higher burden of chronic conditions. These contextual characteristics can better help healthcare organizations understand the drivers of their patients’ health by further considering patients’ residential environments, which present both …
C. A. Nizinski, C. Ly, C. Vachet, A. Hagen, T. Tasdizen, L. W. McDonald.
Characterization of uncertainties and model generalizability for convolutional neural network predictions of uranium ore concentrate morphology, In Chemometrics and Intelligent Laboratory Systems, Vol. 225, Elsevier, pp. 104556. 2022.
ISSN: 0169-7439
DOI: https://doi.org/10.1016/j.chemolab.2022.104556
As the capabilities of convolutional neural networks (CNNs) for image classification tasks have advanced, interest in applying deep learning techniques for determining the natural and anthropogenic origins of uranium ore concentrates (UOCs) and other unknown nuclear materials by their surface morphology characteristics has grown. But before CNNs can join the nuclear forensics toolbox alongside more traditional analytical techniques – such as scanning electron microscopy (SEM), X-ray diffractometry, mass spectrometry, radiation counting, and any number of spectroscopic methods – a deeper understanding of “black box” image classification will be required. This paper explores uncertainty quantification for convolutional neural networks and their ability to generalize to out-of-distribution (OOD) image data sets. For prediction uncertainty, Monte Carlo (MC) dropout and random image crops as variational inference techniques are implemented and characterized. Convolutional neural networks and classifiers using image features from unsupervised vector-quantized variational autoencoders (VQ-VAE) are trained using SEM images of pure, unaged, unmixed uranium ore concentrates considered “unperturbed.” OOD data sets are developed containing perturbations from the training data with respect to the chemical and physical properties of the UOCs or data collection parameters; predictions made on the perturbation sets identify where significant shortcomings exist in the current training data and techniques used to develop models for classifying uranium process history, and provide valuable insights into how datasets and classification models can be improved for better generalizability to out-of-distribution examples.
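The Monte Carlo dropout procedure mentioned above can be sketched in a few lines. This toy numpy version stands in for the paper's CNNs: dropout is kept active at test time, and predictive uncertainty is read off the spread of many stochastic forward passes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed toy classifier weights (a stand-in for a trained CNN's dense layers).
W1 = rng.normal(scale=0.5, size=(32, 10))
W2 = rng.normal(scale=0.5, size=(4, 32))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def forward(x, p_drop=0.5):
    """One stochastic forward pass with dropout kept ON at test time."""
    h = np.maximum(0.0, W1 @ x)               # ReLU hidden layer
    mask = rng.random(h.shape) >= p_drop      # Bernoulli dropout mask
    h = h * mask / (1.0 - p_drop)             # inverted-dropout scaling
    return softmax(W2 @ h)

def mc_dropout_predict(x, n_samples=200):
    """Mean prediction and per-class std over the stochastic passes."""
    probs = np.stack([forward(x) for _ in range(n_samples)])
    return probs.mean(axis=0), probs.std(axis=0)

mean_p, std_p = mc_dropout_predict(rng.normal(size=10))
print(mean_p.round(3), std_p.round(3))
```

A large per-class standard deviation flags a low-confidence prediction, which is how such sampling can surface out-of-distribution inputs the model should not be trusted on.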
A. Quistberg, C.I. Gonzalez, P. Arbeláez, O.L. Sarmiento, L. Baldovino-Chiquillo, Q. Nguyen, T. Tasdizen, L.A.G. Garcia, D. Hidalgo, S.J. Mooney, A.V.D. Roux, G. Lovasi.
430 Training neural networks to identify built environment features for pedestrian safety, In Injury Prevention, Vol. 28, No. 2, BMJ, pp. A65. 2022.
DOI: 10.1136/injuryprev-2022-safety2022.194
Background
We used panoramic images and neural networks to measure street-level built environment features with relevance to pedestrian safety.
Methods
Street-level features were identified from a systematic literature search and local experience in Bogota, Colombia (study location). Google Street View© panoramic images were sampled from 10,810 intersection and street segment locations, including 2,642 where pedestrian collisions occurred during 2015–2019; the most recent, nearest (<25 meters) available image was selected for each sampled intersection or segment. Human raters annotated image features which were used to train neural networks. Neural networks and human raters were compared across all features using estimated mean Average Recall (mAR) and mean Average Precision (mAP). Feature prevalence was compared between pedestrian and non-pedestrian collision locations.
Results
Thirty features were identified related to roadway (e.g., medians), crossing areas (e.g., crosswalk), traffic control (e.g., pedestrian signal), and roadside (e.g., trees), with streetlights the most frequently detected object (N=10,687 images). Neural networks achieved an mAR of 15.4 (versus 25.4 for human raters) and an mAP of 16.0. Bus lanes, pedestrian signals, and pedestrian bridges were significantly more prevalent at pedestrian collision locations, whereas speed bumps, school zones, sidewalks, trees, potholes and streetlights were significantly more prevalent at non-pedestrian collision locations.
Conclusion
Neural networks have substantial potential to obtain timely, accurate built environment data crucial to improve road safety. Training images need to be well-annotated to ensure accurate object detection and completeness.
Learning Outcomes
1) Describe how neural networks can be used for road safety research; 2) Describe challenges of using neural networks.
2021
V. Keshavarzzadeh, M. Alirezaei, T. Tasdizen, R. M. Kirby.
Image-Based Multiresolution Topology Optimization Using Deep Disjunctive Normal Shape Model, In Computer-Aided Design, Vol. 130, Elsevier, pp. 102947. 2021.
We present a machine learning framework for predicting optimized structural topology designs using multiresolution data. Our approach primarily uses optimized designs from inexpensive coarse mesh finite element simulations for model training and generates high-resolution images associated with simulation parameters that are not previously used. Our cost-efficient approach enables designers to effectively search through possible candidate designs in situations where the design requirements rapidly change. The underlying neural network framework is based on a deep disjunctive normal shape model (DDNSM) which learns the mapping between the simulation parameters and segments of multiresolution images. Using this image-based analysis we provide a practical algorithm which enhances the predictability of the learning machine by determining a limited number of important parametric samples (i.e., samples of the simulation parameters) on which the high-resolution training data is generated. We demonstrate our approach on benchmark compliance minimization problems including 3D topology optimization, where we show that the high-fidelity designs from the learning machine are close to optimal designs and can be used as effective initial guesses for the large-scale optimization problem.
R. B. Lanfredi, M. Zhang, W. F. Auffermann, J. Chan, P. T. Duong, V. Srikumar, T. Drew, J. D. Schroeder, T. Tasdizen.
REFLACX, a dataset of reports and eye-tracking data for localization of abnormalities in chest x-rays, Subtitled arXiv:2109.14187, 2021.
Deep learning has shown recent success in classifying anomalies in chest x-rays, but datasets are still small compared to natural image datasets. Supervision of abnormality localization has been shown to improve trained models, partially compensating for dataset sizes. However, explicitly labeling these anomalies requires an expert and is very time-consuming. We propose a method for collecting implicit localization data using an eye tracker to capture gaze locations and a microphone to capture a dictation of a report, imitating the setup of a reading room, and potentially scalable for large datasets. The resulting REFLACX (Reports and Eye-Tracking Data for Localization of Abnormalities in Chest X-rays) dataset was labeled by five radiologists and contains 3,032 synchronized sets of eye-tracking data and timestamped report transcriptions. We also provide bounding boxes around lungs and heart and validation labels consisting of ellipses localizing abnormalities and image-level labels. Furthermore, a small subset of the data contains readings from all radiologists, allowing for the calculation of inter-rater scores.
R.B. Lanfredi, A. Arora, T. Drew, J.D. Schroeder, T. Tasdizen.
Comparing radiologists’ gaze and saliency maps generated by interpretability methods for chest x-rays, Subtitled arXiv:2112.11716v1, 2021.
The interpretability of medical image analysis models is considered a key research field. We use a dataset of eye-tracking data from five radiologists to compare the outputs of interpretability methods against the heatmaps representing where radiologists looked. We conduct a class-independent analysis of the saliency maps generated by two methods selected from the literature: Grad-CAM and attention maps from an attention-gated model. For the comparison, we use shuffled metrics, which avoid biases from fixation locations. We achieve scores comparable to an interobserver baseline in one shuffled metric, highlighting the potential of saliency maps from Grad-CAM to mimic a radiologist’s attention over an image. We also divide the dataset into subsets to evaluate in which cases similarities are higher.
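The comparison of saliency maps against gaze heatmaps can be illustrated with a simple correlation metric. This sketch uses Pearson correlation with a mismatched-gaze baseline as a stand-in for the paper's shuffled metrics, and all maps here are synthetic Gaussian bumps, not real data.

```python
import numpy as np

def cc(saliency, gaze):
    """Pearson correlation between a saliency map and a gaze heatmap."""
    s = (saliency - saliency.mean()) / saliency.std()
    g = (gaze - gaze.mean()) / gaze.std()
    return float((s * g).mean())

# Toy maps: gaze concentrated in one region; the saliency map overlaps it.
yy, xx = np.mgrid[0:64, 0:64]
gaze = np.exp(-(((yy - 16) ** 2 + (xx - 16) ** 2) / 200.0))
saliency = np.exp(-(((yy - 20) ** 2 + (xx - 20) ** 2) / 300.0))
# Gaze taken from a *different* image serves as a chance-level baseline,
# analogous in spirit to shuffled metrics that discount fixation-location bias.
other_gaze = np.exp(-(((yy - 48) ** 2 + (xx - 48) ** 2) / 200.0))

aligned = cc(saliency, gaze)
shuffled = cc(saliency, other_gaze)
print(aligned > shuffled)
```

A saliency method that mimics radiologists' attention should score well above this mismatched baseline, which is the comparison the abstract describes.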
C. Ly, C. Nizinski, C. Vachet, L. McDonald, T. Tasdizen.
Learning to Estimate the Composition of a Mixture with Synthetic Data, In Microscopy and Microanalysis, 2021.
Identifying the precise composition of a mixed material is important in various applications. For instance, in nuclear forensics analysis, knowing the process history of unknown or illicitly trafficked nuclear materials when they are discovered is desirable to prevent future losses or theft of material from the processing facilities. Motivated by this open problem, we describe a novel machine learning approach to determine the composition of a mixture from SEM images. In machine learning, the training data distribution should reflect the distribution of the data the model is expected to make predictions for, which can pose a hurdle. However, a key advantage of our proposed framework is that it requires reference images of pure material samples only. Removing the need for reference samples of various mixed material compositions reduces the time and monetary cost associated with reference sample preparation and imaging. Moreover, our proposed framework can determine the composition of a mixture composed of chemically similar materials, whereas other elemental analysis tools such as powder X-ray diffraction (p-XRD) have trouble doing so. For example, p-XRD is unable to discern mixtures composed of triuranium octoxide (U3O8) synthesized from different synthetic routes such as uranyl peroxide (UO4) and ammonium diuranate (ADU) [1]. In contrast, our proposed framework can easily determine the composition of uranium oxide mixtures synthesized from different synthetic routes, as we illustrate in the experiments.
C. Ly, C. A. Nizinski, A. Toydemir, C. Vachet, L. W. McDonald, T. Tasdizen.
Determining the Composition of a Mixed Material with Synthetic Data, In Microscopy and Microanalysis, Cambridge University Press, pp. 1--11. 2021.
DOI: 10.1017/S1431927621012915
Determining the composition of a mixed material is an open problem that has attracted the interest of researchers in many fields. In our recent work, we proposed a novel approach to determine the composition of a mixed material using convolutional neural networks (CNNs). In machine learning, a model “learns” a specific task for which it is designed through data. Hence, obtaining a dataset of mixed materials is required to develop CNNs for the task of estimating the composition. However, the proposed method instead creates synthetic data of mixed materials using only images of the pure materials present in those mixtures. Thus, it eliminates the prohibitive cost and tedious process of collecting images of mixed materials. The motivation for this study is to provide mathematical details of the proposed approach in addition to extensive experiments and analyses. We examine the approach on two datasets to demonstrate the ease of extending the proposed approach to any mixtures. We perform experiments to demonstrate that the proposed approach can accurately determine the presence of the materials and estimate the precise composition of a mixed material. Moreover, we provide analyses to strengthen the validation and benefits of the proposed approach.
Q. C Nguyen, J. M. Keralis, P. Dwivedi, A. E. Ng, M. Javanmardi, S. Khanna, Y. Huang, K. D. Brunisholz, A. Kumar, T. Tasdizen.
Leveraging 31 Million Google Street View Images to Characterize Built Environments and Examine County Health Outcomes, In Public Health Reports, Vol. 136, No. 2, SAGE Publications, pp. 201-211. 2021.
DOI: https://doi.org/10.1177/0033354920968799
Objectives
Built environments can affect health, but data in many geographic areas are limited. We used a big data source to create national indicators of neighborhood quality and assess their associations with health.
C. A. Nizinski, C. Ly, L. W. McDonald IV, T. Tasdizen.
Computational Image Techniques for Analyzing Lanthanide and Actinide Morphology, In Rare Earth Elements and Actinides: Progress in Computational Science Applications, Ch. 6, pp. 133-155. 2021.
DOI: 10.1021/bk-2021-1388.ch006
This chapter introduces computational image analysis techniques and how they may be used for material characterization as it pertains to lanthanide and actinide chemistry. Specifically, the underlying theory behind particle segmentation, texture analysis, and convolutional neural networks for material characterization is briefly summarized. The variety of particle segmentation techniques that have been used to effectively measure the size and shape of morphological features from scanning electron microscope images is discussed. In addition, the extraction of image texture features via gray-level co-occurrence matrices and angle measurement techniques is described and demonstrated. To conclude, the application of convolutional neural networks to lanthanide and actinide materials science challenges is described, with applications to image classification, feature extraction, and prediction of a material's morphology discussed.
N. Ramesh, T. Tasdizen.
Detection and segmentation in microscopy images, In Computer Vision for Microscopy Image Analysis, Academic Press, pp. 43-71. 2021.
DOI: 10.1016/B978-0-12-814972-0.00003-5
The plethora of heterogeneous data generated using modern microscopy imaging techniques eliminates the possibility of manual image analysis for biologists. Consequently, reliable and robust computerized techniques are critical to analyze microscopy data. Detection problems in microscopy images focus on accurately identifying the objects of interest in an image that can be used to investigate hypotheses about developmental or pathological processes and can be indicative of prognosis in patients. Detection is also considered to be the preliminary step for solving subsequent problems, such as segmentation and tracking for various biological applications. Segmentation of the desired structures and regions in microscopy images requires pixel-level labels to uniquely identify the individual structures and regions with contours for morphological and physiological analysis. Distributions of features extracted from the segmented regions can be used to compare normal versus disease or normal versus wild-type populations. Segmentation can be considered as a precursor for solving classification, reconstruction, and tracking problems in microscopy images. In this chapter, we discuss how the field of microscopic image analysis has progressed over the years, starting with traditional approaches and then followed by the study of learning algorithms. Because there is a lot of variability in microscopy data, it is essential to study learning algorithms that can adapt to these changes. We focus on deep learning approaches with convolutional neural networks (CNNs), as well as hierarchical methods for segmentation and detection in optical and electron microscopy images. Limited training data is one of the most significant problems; hence, we explore solutions to learn better models with minimal user annotations.
2020
C. Ly, C. Vachet, I. Schwerdt, E. Abbott, A. Brenkmann, L.W. McDonald, T. Tasdizen.
Determining uranium ore concentrates and their calcination products via image classification of multiple magnifications, In Journal of Nuclear Materials, 2020.
Many tools, such as mass spectrometry, X-ray diffraction, X-ray fluorescence, ion chromatography, etc., are currently available to scientists investigating interdicted nuclear material. These tools provide an analysis of physical, chemical, or isotopic characteristics of the seized material to identify its origin. In this study, a novel technique that characterizes physical attributes is proposed to provide insight into the processing route of unknown uranium ore concentrates (UOCs) and their calcination products. In particular, this study focuses on the characteristics of the surface structure captured in scanning electron microscopy (SEM) images at different magnification levels. Twelve common commercial processing routes of UOCs and their calcination products are investigated. Multiple-input single-output (MISO) convolutional neural networks (CNNs) are implemented to differentiate the processing routes. The proposed technique can determine the processing route of a given sample in under a second running on a graphics processing unit (GPU) with an accuracy of more than 95%. The accuracy and speed of this proposed technique enable nuclear scientists to provide the preliminary identification results of interdicted material in a short time period. Furthermore, this proposed technique uses a predetermined set of magnifications, which in turn eliminates the human bias in selecting the magnification during the image acquisition process.
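The multiple-input single-output structure can be sketched as follows. Random projections stand in for the convolutional branches of the actual MISO network, and the magnification labels and dimensions are illustrative only: one input per magnification level, fused into a single processing-route prediction.

```python
import numpy as np

rng = np.random.default_rng(0)
n_routes = 12  # twelve commercial processing routes, per the study

# One feature extractor per magnification level (random projections stand in
# for the convolutional branches of the actual MISO network).
mags = {"10kx": (32, 32), "25kx": (32, 32), "50kx": (32, 32)}
branches = {m: rng.normal(scale=0.05, size=(16, h * w)) for m, (h, w) in mags.items()}
W_out = rng.normal(scale=0.05, size=(n_routes, 16 * len(mags)))

def predict_route(images):
    """Multiple inputs (one SEM image per magnification), single output."""
    feats = [np.maximum(0.0, branches[m] @ images[m].ravel()) for m in mags]
    logits = W_out @ np.concatenate(feats)   # fuse the per-magnification features
    e = np.exp(logits - logits.max())
    return e / e.sum()                       # probability for each processing route

sample = {m: rng.random(shape) for m, shape in mags.items()}
p = predict_route(sample)
print(p.argmax(), round(float(p.sum()), 6))
```

Fixing the magnification set in advance, as the abstract notes, means the same branch always sees the same imaging scale, removing analyst discretion from image acquisition.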
2019
A. B. Hanson, R. N. Lee, C. Vachet, I. J. Schwerdt, T. Tasdizen, L. W. McDonald IV.
Quantifying Impurity Effects on the Surface Morphology of α-U3O8, In Analytical Chemistry, 2019.
DOI: 10.1021/acs.analchem.9b02013
The morphological effect of impurities on α-U3O8 has been investigated. This study provides the first evidence that the presence of impurities can alter nuclear material morphology, and these changes can be quantified to aid in revealing processing history. Four elements (Ca, Mg, V, and Zr) were introduced into the uranyl peroxide synthesis route and studied individually within the α-U3O8. Six total replicates were synthesized, and replicates 1–3 were filtered and washed with Millipore water (18.2 MΩ) to remove any residual nitrates. Replicates 4–6 were filtered but not washed to determine the amount of impurities removed during washing. Inductively coupled plasma mass spectrometry (ICP-MS) was employed at key points during the synthesis to quantify incorporation of the impurity. Each sample was characterized using powder X-ray diffraction (p-XRD), high-resolution scanning electron microscopy (HRSEM), and SEM with energy dispersive X-ray spectroscopy (SEM-EDS). p-XRD was utilized to evaluate any crystallographic changes due to the impurities; HRSEM imagery was analyzed with Morphological Analysis for MAterials (MAMA) software and machine learning classification for quantification of the morphology; and SEM-EDS was utilized to locate the impurity within the α-U3O8. All samples were found to be quantifiably distinguishable, further demonstrating the utility of quantitative morphology as a signature for the processing history of nuclear material.
S. T. Heffernan, N. Ly, B. J. Mower, C. Vachet, I. J. Schwerdt, T. Tasdizen, L. W. McDonald IV.
Identifying surface morphological characteristics to differentiate between mixtures of U3O8 synthesized from ammonium diuranate and uranyl peroxide, In Radiochimica Acta, 2019.
In the present study, surface morphological differences of mixtures of triuranium octoxide (U3O8), synthesized from uranyl peroxide (UO4) and ammonium diuranate (ADU), were investigated. The purity of each sample was verified using powder X-ray diffractometry (p-XRD), and scanning electron microscopy (SEM) images were collected to identify unique morphological features. The U3O8 from ADU and from UO4 was found to be morphologically distinct. Qualitatively, both particle types have similar features, being primarily circular in shape. Using the morphological analysis of materials (MAMA) software, particle shape and size were quantified. UO4 was found to produce U3O8 particles three times the area of those produced from ADU. With the starting morphologies quantified, U3O8 samples from ADU and UO4 were physically mixed in known quantities. SEM images were collected of the mixed samples, and the MAMA software was used to quantify particle attributes. As U3O8 particles from ADU were distinct from those from UO4, the composition of the mixtures could be quantified using SEM imaging coupled with particle analysis. This provides a novel means of quantifying the processing histories of mixtures of uranium oxides. Machine learning was also used to help further quantify characteristics in the image database through direct classification and particle segmentation using deep learning techniques based on convolutional neural networks (CNNs). These techniques were shown to distinguish the mixtures with high accuracy and to reveal significant morphological differences between the mixtures. Results from this study demonstrate the power of quantitative morphological analysis for determining the processing history of nuclear materials.