BYOL: Bootstrap Your Own Latent
How self-supervised learning works without negative pairs — a predictor and momentum target network are all you need to prevent representation collapse.
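The blurb above names BYOL's two collapse-preventing ingredients: a predictor that exists only on the online branch, and a target network updated as an exponential moving average (EMA) of the online weights. A minimal numpy sketch of how they fit together, with toy linear "encoders" standing in for real networks (all names, sizes, and the tau value are illustrative, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize(x):
    # L2-normalize rows so the loss below is a cosine similarity.
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

# Online and target encoders start identical; the predictor exists
# only on the online branch (this asymmetry is what prevents collapse).
W_online = rng.normal(size=(8, 4))
W_target = W_online.copy()
W_pred = rng.normal(size=(4, 4))

def ema_update(online, target, tau=0.99):
    # Target weights are a momentum (EMA) copy of the online weights;
    # no gradients ever flow into the target network.
    return tau * target + (1 - tau) * online

def byol_loss(x1, x2):
    # Online branch: encode view 1, then predict the target's output.
    p = normalize(x1 @ W_online @ W_pred)
    # Target branch: encode view 2; treated as a constant (stop-gradient).
    z = normalize(x2 @ W_target)
    # Negative cosine similarity, rescaled to [0, 4], averaged over the batch.
    return 2 - 2 * (p * z).sum(axis=-1).mean()

x = rng.normal(size=(16, 8))                      # a toy batch
x1 = x + 0.1 * rng.normal(size=x.shape)           # "augmented" view 1
x2 = x + 0.1 * rng.normal(size=x.shape)           # "augmented" view 2
loss = byol_loss(x1, x2)
W_target = ema_update(W_online, W_target)
```

In training, only `W_online` and `W_pred` would receive gradients; `W_target` changes solely through the EMA step.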
MoCo: Momentum Contrast
How a momentum-updated encoder and a dictionary queue make contrastive learning practical — large dictionaries with consistent keys, no large-batch requirement.
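The mechanism described above can be sketched in numpy: a FIFO queue of past keys serves as negatives, and the key encoder is a momentum copy of the query encoder so queued keys stay consistent. Sizes, the momentum value, and the toy linear encoders are illustrative only (MoCo's actual queue holds tens of thousands of keys):

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(0)
dim, queue_size = 4, 8               # toy sizes, far smaller than MoCo's

def normalize(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

# Query and key encoders as toy weight matrices; the key encoder is a
# slow-moving (momentum/EMA) copy of the query encoder.
W_q = rng.normal(size=(6, dim))
W_k = W_q.copy()
queue = deque(maxlen=queue_size)     # FIFO dictionary of past keys

def momentum_update(m=0.999):
    global W_k
    W_k = m * W_k + (1 - m) * W_q

def moco_logits(x_q, x_k, tau=0.07):
    q = normalize(x_q @ W_q)         # query from view 1
    k = normalize(x_k @ W_k)         # positive key from view 2
    negatives = np.array(queue) if queue else np.empty((0, dim))
    # One positive logit followed by one logit per queued negative key.
    logits = np.concatenate([[q @ k], negatives @ q]) / tau
    queue.append(k)                  # enqueue newest key; oldest drops off
    return logits

x = rng.normal(size=6)
for _ in range(3):
    logits = moco_logits(x + 0.1 * rng.normal(size=6),
                         x + 0.1 * rng.normal(size=6))
    momentum_update()
```

The contrastive loss would then be a cross-entropy over `logits` with the positive always at index 0; because the queue decouples dictionary size from batch size, no large batches are needed.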
SimCLR: A Simple Framework for Contrastive Learning
How a simple framework — augmentation, shared encoder, projection head, and contrastive loss — set a new standard for self-supervised visual representation learning.
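The contrastive loss in that framework is the normalized-temperature cross-entropy (NT-Xent): each augmented view must pick out its partner among all other views in the batch. A hedged numpy sketch, taking the projection-head outputs as given (batch size, dimensions, and temperature are illustrative):

```python
import numpy as np

def nt_xent(z1, z2, tau=0.5):
    # z1, z2: (N, d) projections of two augmented views of the same N images.
    z = np.concatenate([z1, z2], axis=0)              # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # cosine similarity prep
    n = len(z1)
    sim = z @ z.T / tau                               # temperature-scaled sims
    np.fill_diagonal(sim, -np.inf)                    # exclude self-similarity
    # The positive for row i is its other view: i+N for the first half,
    # i-N for the second half.
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # Cross-entropy: -log softmax(sim)[i, pos[i]], averaged over all 2N rows.
    log_z = np.log(np.exp(sim).sum(axis=1))
    return (log_z - sim[np.arange(2 * n), pos]).mean()

rng = np.random.default_rng(0)
z1 = rng.normal(size=(8, 16))
z2 = z1 + 0.1 * rng.normal(size=(8, 16))   # a correlated "second view"
loss = nt_xent(z1, z2)
```

Every non-positive pair in the batch acts as a negative, which is why the original recipe benefits from large batches.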
VICReg: Variance-Invariance-Covariance Regularization
How variance, invariance, and covariance regularization enables self-supervised representation learning without negative pairs or momentum encoders.
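The three terms named above each have a direct expression: invariance pulls the two embeddings of an image together, variance keeps every embedding dimension's standard deviation above a threshold, and covariance decorrelates dimensions. A numpy sketch under assumed default weights (the 25/25/1 coefficients follow the common setting, but treat them as illustrative):

```python
import numpy as np

def vicreg_loss(z1, z2, lam=25.0, mu=25.0, nu=1.0, eps=1e-4):
    # z1, z2: (N, d) embeddings of two views of the same N images.
    # Invariance: mean-squared distance between paired embeddings.
    inv = ((z1 - z2) ** 2).mean()

    # Variance: hinge keeping each dimension's std above 1,
    # which prevents the embeddings from collapsing to a constant.
    def var_term(z):
        std = np.sqrt(z.var(axis=0) + eps)
        return np.maximum(0.0, 1.0 - std).mean()

    # Covariance: push off-diagonal covariance entries toward zero
    # so dimensions carry non-redundant information.
    def cov_term(z):
        zc = z - z.mean(axis=0)
        c = (zc.T @ zc) / (len(z) - 1)
        off_diag = c - np.diag(np.diag(c))
        return (off_diag ** 2).sum() / z.shape[1]

    return (lam * inv
            + mu * (var_term(z1) + var_term(z2))
            + nu * (cov_term(z1) + cov_term(z2)))

rng = np.random.default_rng(0)
z1 = rng.normal(size=(32, 8))
z2 = z1 + 0.05 * rng.normal(size=(32, 8))  # a correlated "second view"
loss = vicreg_loss(z1, z2)
```

Because collapse is ruled out by the variance term alone, no negative pairs, stop-gradients, or momentum encoders are required.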