
Contrastive Learning with SimCLR V1/V2 and Some Intriguing Properties

COMPUTER VISION TALKS 2,765 4 years ago

Abstract: SimCLR is a simple framework for contrastive learning of visual representations. It considerably outperforms previous methods for self-supervised and semi-supervised learning on ImageNet. A linear classifier trained on self-supervised representations learned by SimCLR achieves 76.5% top-1 accuracy, a 7% relative improvement over the previous state of the art, matching the performance of a supervised ResNet-50. When fine-tuned on only 1% of the labels, it achieves 85.8% top-5 accuracy, outperforming AlexNet with 100× fewer labels. I will also talk about (1) SimCLRv2, an extension of the SimCLR framework for improved semi-supervised learning, and (2) some intriguing properties of contrastive losses.

Short bio: Ting Chen is a research scientist on the Google Brain team. He joined Google after obtaining his Ph.D. from the University of California, Los Angeles. His main research interest is (unsupervised) representation learning.
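For context on the contrastive objective the talk refers to: SimCLR trains by pulling together embeddings of two augmented views of the same image and pushing apart embeddings of different images, using the NT-Xent (normalized temperature-scaled cross-entropy) loss. Below is a minimal NumPy sketch of that loss; the function name, the batch layout (the positive for row i is row i+N), and the temperature value are illustrative choices, not taken from the talk itself.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent loss as used in SimCLR (illustrative sketch).

    z1, z2: (N, d) arrays of embeddings for two augmented views
    of the same N images; row i of z1 and row i of z2 form a
    positive pair, all other rows act as negatives.
    """
    N = z1.shape[0]
    z = np.concatenate([z1, z2], axis=0)              # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # L2-normalize rows
    sim = z @ z.T / temperature                       # cosine similarities
    np.fill_diagonal(sim, -np.inf)                    # exclude self-pairs
    # The positive partner of index i is i + N (and vice versa).
    pos_idx = np.concatenate([np.arange(N, 2 * N), np.arange(N)])
    # Cross-entropy of each row against its positive partner.
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * N), pos_idx].mean()
```

When the two views embed identically, positives sit at the similarity maximum and the loss is low; as the views decorrelate, the loss rises, which is what drives the encoder toward augmentation-invariant representations.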
