2021
DINO: Emerging Properties in Self-Supervised Vision Transformers
How self-distillation with no labels produces Vision Transformer attention maps that automatically segment objects — without any pixel-level supervision.