Learning with Limited Supervision

Methods for training deep networks when annotated data is scarce, expensive, or unreliable, including semi-supervised, self-supervised, and weakly supervised learning.

Many important vision problems must be solved with supervision that is sparse, coarse, or costly to obtain. Our work in limited supervision is motivated by this mismatch. Rather than treating labeled data as the only source of learning signal, we develop methods that use unlabeled data, structural constraints, and carefully designed invariances to learn useful representations and decision boundaries with far less annotation. This perspective has shaped influential early work in semi-supervised deep learning based on prediction consistency under stochastic transformations and perturbations, and it continues to inform our more recent work in weakly supervised and contrastive learning for scientific and medical imaging.
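The consistency idea mentioned above can be sketched in a few lines: penalize disagreement between a network's predictions on two stochastically perturbed views of the same unlabeled input. This is an illustrative minimal sketch, not the exact formulation from the published work; the random perturbations here stand in for dropout and data augmentation, and the squared-difference penalty is one common choice.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def consistency_loss(logits_a, logits_b):
    """Mean squared difference between class predictions for two
    stochastic views of the same unlabeled batch. No labels needed."""
    p_a, p_b = softmax(logits_a), softmax(logits_b)
    return float(np.mean((p_a - p_b) ** 2))

# Toy example: the same unlabeled batch passed through a "network"
# under two different random perturbations (stand-ins for
# dropout / stochastic augmentation).
base = rng.normal(size=(4, 3))
view_a = base + 0.1 * rng.normal(size=base.shape)
view_b = base + 0.1 * rng.normal(size=base.shape)

loss = consistency_loss(view_a, view_b)
```

Minimizing this term pushes the model toward predictions that are invariant to the perturbations, which lets unlabeled data shape the decision boundary.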

Methodologically, we focus on learning objectives that match the supervision actually available. In histopathology, this means contrastive strategies that leverage stain structure, pseudo-labels, and slide-level information to learn strong local features without dense patch annotations. In electron microscopy, it means combining a small amount of supervision with hierarchical consistency constraints so that unlabeled data can guide accurate segmentation. Across these directions, the broader goal is the same: to build data-efficient models that learn from structure, not just labels, and that remain effective in real-world domains where annotation is expensive and incomplete.
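One ingredient named above, pseudo-labeling, can be sketched simply: treat the model's own confident predictions on unlabeled data as training targets and discard the uncertain ones. This is a hedged illustration under assumed names and a 0.9 confidence threshold, not the specific scheme used in the papers below.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def pseudo_label(logits, threshold=0.9):
    """Return predicted classes and a mask selecting only the
    predictions confident enough to serve as pseudo-labels."""
    probs = softmax(logits)
    labels = probs.argmax(axis=-1)
    mask = probs.max(axis=-1) >= threshold
    return labels, mask

# Toy batch: the first example is confidently class 2,
# the second is ambiguous and gets filtered out.
logits = np.array([[0.0, 1.0, 6.0],
                   [0.5, 0.4, 0.6]])
labels, mask = pseudo_label(logits, threshold=0.9)
```

The retained (label, example) pairs can then be mixed into the supervised loss, which is one standard way slide-level or patch-level signal is propagated without dense annotation.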

Methods

Semi-supervised learning
Self-supervised learning
Contrastive learning
Pseudo-labeling
Weak supervision

Application Areas

Medical imaging
Histopathology
Scientific imaging

Selected Publications

Weakly Supervised Contrastive Learning for Histopathology Patch Embeddings

Bodong Zhang, Xiwen Li, Hamid Manoochehri, Xiaoya Tang, Deepika Sirohi, Beatrice S. Knudsen, Tolga Tasdizen

arXiv preprint 2026

CLASS-M: Adaptive stain separation-based contrastive learning with pseudo-labeling for histopathological image classification

Bodong Zhang, Hamid Manoochehri, Man Minh Ho, Fahimeh Fooladgar, Yosep Chong, Beatrice S. Knudsen, Deepika Sirohi, Tolga Tasdizen

Medical Image Analysis 2025

Improving uranium oxide pathway discernment and generalizability using contrastive self-supervised learning

Jakob Johnson, Luther McDonald, Tolga Tasdizen

Computational Materials Science 2024

Regularization With Stochastic Transformations and Perturbations for Deep Semi-Supervised Learning

Mehdi Sajjadi, Mehran Javanmardi, Tolga Tasdizen

Advances in Neural Information Processing Systems (NeurIPS) 2016

SSHMT: Semi-supervised Hierarchical Merge Tree for Electron Microscopy Image Segmentation

Ting Liu, Miaomiao Zhang, Mehran Javanmardi, Nisha Ramesh, Tolga Tasdizen

European Conference on Computer Vision (ECCV) 2016