Learning with Limited Supervision
Methods for training deep networks when annotated data is scarce, expensive, or unreliable, including semi-supervised, self-supervised, and weakly supervised learning.
Many important vision problems must be solved with supervision that is sparse, coarse, or costly to obtain, while deep networks are typically trained on large, densely annotated datasets. Our work in limited supervision is motivated by this mismatch. Rather than treating labeled data as the only source of learning signal, we develop methods that use unlabeled data, structural constraints, and carefully designed invariances to learn useful representations and decision boundaries with far less annotation. This perspective shaped our influential early work on semi-supervised deep learning based on prediction consistency under stochastic transformations and perturbations, and it continues to inform our more recent work on weakly supervised and contrastive learning for scientific and medical imaging.
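To make the consistency principle concrete, the sketch below shows one common form of this objective in PyTorch. It is a minimal illustration, not the exact published loss: `model` and `augment` are hypothetical placeholders for any network with stochastic layers (e.g., dropout) and any random input transformation, and the squared distance between softmax outputs is one of several disagreement measures used in practice.

```python
import torch
import torch.nn.functional as F

def consistency_loss(model, x_unlabeled, augment):
    # Two independent stochastic augmentations of the same unlabeled batch;
    # stochastic layers inside the model (e.g., dropout) add further noise.
    logits_a = model(augment(x_unlabeled))
    logits_b = model(augment(x_unlabeled))
    # Penalize disagreement between the two predictive distributions.
    return F.mse_loss(F.softmax(logits_a, dim=1),
                      F.softmax(logits_b, dim=1))
```

In a typical semi-supervised training step, a term like this on the unlabeled batch is simply added, with a weighting coefficient, to the standard cross-entropy loss on the small labeled batch, so the unlabeled data shapes the decision boundary without ever receiving labels.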
Methodologically, we focus on learning objectives that match the supervision actually available. In histopathology, this means contrastive strategies that leverage stain structure, pseudo-labels, and slide-level information to learn strong local features without dense patch annotations. In electron microscopy, it means combining a small amount of supervision with hierarchical consistency constraints so that unlabeled data can guide accurate segmentation. Across these directions, the broader goal is the same: to build data-efficient models that learn from structure, not just labels, and that remain effective in real-world domains where annotation is expensive and incomplete.
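As a generic illustration of how pseudo-labels can supply supervision for unlabeled data (a simplified sketch, not the CLASS-M objective itself), the snippet below has confident predictions on one augmented view of an unlabeled batch serve as hard labels for a second view; `weak_aug`, `strong_aug`, and `threshold` are illustrative placeholders.

```python
import torch
import torch.nn.functional as F

def pseudo_label_loss(model, x_unlabeled, weak_aug, strong_aug, threshold=0.95):
    # Predictions on a weakly augmented view supply hard pseudo-labels,
    # kept only where the model is already confident.
    with torch.no_grad():
        probs = F.softmax(model(weak_aug(x_unlabeled)), dim=1)
        confidence, pseudo = probs.max(dim=1)
        mask = (confidence >= threshold).float()
    # Train the model to reproduce those labels on a strongly augmented view.
    logits = model(strong_aug(x_unlabeled))
    per_example = F.cross_entropy(logits, pseudo, reduction="none")
    return (per_example * mask).mean()
```

The confidence threshold is the key design choice: it trades the amount of unlabeled data used against the risk of reinforcing the model's own mistakes, which matters most in domains like histopathology where errors are costly.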
Selected Publications
CLASS-M: Adaptive stain separation-based contrastive learning with pseudo-labeling for histopathological image classification
Improving uranium oxide pathway discernment and generalizability using contrastive self-supervised learning
Regularization With Stochastic Transformations and Perturbations for Deep Semi-Supervised Learning