Interpretable AI for Radiology

Interpretable deep learning for radiology: combining clinical workflow signals, localization supervision, and expert-aligned explanations so imaging models are accurate and inspectable.

Interpretable AI in radiology requires more than accurate predictions. In chest x-ray analysis, models should also indicate whether they are relying on the same kinds of evidence that radiologists use and whether their decisions are grounded in anatomically meaningful image patterns. Our work addresses this through two complementary directions: radiologist eye tracking and counterfactual image generation. Eye-tracking data collected during routine report dictation provide scalable, label-specific localization signals that help train models to highlight abnormalities more faithfully, while counterfactual methods generate disease-related image changes that make model evidence easier to inspect than standard saliency maps alone. Together, these approaches aim to make radiology AI more transparent, clinically grounded, and better suited for trust, validation, and bias discovery.

Technically, we combine weak supervision from human behavior with generative models of disease evidence. In one line of work, gaze traces are aligned with timestamped report dictation to extract label-specific eye-tracking maps, which are then used to supervise localization-aware chest x-ray models without hurting classification accuracy. In another, adversarial generative models create counterfactual transformations that reveal what image changes correspond to differences in disease presence or severity, either through additive effect maps or deformation fields that better capture anatomical shape change. This project focuses on radiology, but the broader interpretability theme also extends to pathology, where related work such as HistoEM ranks and visualizes discriminative gland-level features to relate predictions back to tissue morphology.
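As a minimal sketch of the alignment idea (not the project's actual pipeline; the fixation/sentence formats and the `label_specific_gaze_maps` helper are illustrative assumptions), timestamped gaze fixations could be aggregated into label-specific heatmaps by weighting each fixation by its temporal overlap with the dictation window of each labeled report sentence:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def label_specific_gaze_maps(fixations, sentences, image_shape, sigma=30.0):
    """Build per-label gaze heatmaps by aligning fixation timestamps
    with the dictation window of each labeled report sentence.

    fixations: list of (t_start, t_end, x, y) in seconds / pixel coords
    sentences: list of (t_start, t_end, labels), where labels is a set of
               abnormality names mentioned in that dictated sentence
    """
    maps = {}
    for s_start, s_end, labels in sentences:
        for f_start, f_end, x, y in fixations:
            # Temporal overlap between the fixation and the sentence window
            overlap = min(f_end, s_end) - max(f_start, s_start)
            if overlap <= 0:
                continue
            for label in labels:
                m = maps.setdefault(label, np.zeros(image_shape, dtype=np.float64))
                m[int(y), int(x)] += overlap  # weight by dwell time
    # Smooth each map and normalize to [0, 1] so it can serve as a
    # soft localization target alongside the classification loss.
    for label, m in maps.items():
        m = gaussian_filter(m, sigma=sigma)
        maps[label] = m / m.max() if m.max() > 0 else m
    return maps
```

The resulting maps could then be used as weak localization targets, e.g. by penalizing divergence between a model's attention or saliency output and the gaze heatmap for the corresponding label.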


Status: active

Topics

Interpretability, Radiology, Localization, Multimodal

Collaborators

  • William F. Auffermann
  • Vivek Srikumar
  • Trafton Drew
  • Joyce D. Schroeder
  • Beatrice S. Knudsen

Related publications

HistoEM: A Pathologist-Guided and Explainable Workflow for Histopathological Image Analysis Using Expectation Maximization

Alessandro Ferrero, Elham Ghelichkhan, Hamid Manoochehri, Man Minh Ho, Daniel J Albertson, Benjamin J Brintz, Tolga Tasdizen, Ross T Whitaker, Beatrice S Knudsen

Modern Pathology 2024


Localization supervision of chest x-ray classifiers using label-specific eye-tracking annotation

Ricardo Bigolin Lanfredi, Joyce D. Schroeder, Tolga Tasdizen

Frontiers in Radiology 2023

REFLACX, a dataset of reports and eye-tracking data for localization of abnormalities in chest x-rays

Ricardo Bigolin Lanfredi, Mingyuan Zhang, William F. Auffermann, Jessica Chan, Phuong-Anh T. Duong, Vivek Srikumar, Trafton Drew, Joyce D. Schroeder, Tolga Tasdizen

Scientific Data 2022


Interpretation of Disease Evidence for Medical Images Using Adversarial Deformation Fields

Ricardo Bigolin Lanfredi, Joyce D. Schroeder, Clement Vachet, Tolga Tasdizen

MICCAI 2020 (LNCS 12262)


Adversarial Regression Training for Visualizing the Progression of Chronic Obstructive Pulmonary Disease with Chest X-Rays

Ricardo Bigolin Lanfredi, Joyce D. Schroeder, Clement Vachet, Tolga Tasdizen

MICCAI 2019 (LNCS 11769)