Piotr Dollár

Masked autoencoders for vision (MAE)

Co-authored MAE (Masked Autoencoders), a strong template for scalable self-supervised vision pretraining.
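For context, MAE masks a large random fraction of image patches (typically 75%), feeds only the visible patches to the encoder, and has a lightweight decoder reconstruct the rest. A minimal sketch of the random-masking step, assuming flattened patch embeddings; the function name and shapes are illustrative, not from this page:

```python
import numpy as np

def random_mask(patches, mask_ratio=0.75, rng=None):
    """MAE-style masking: keep a random subset of patches.

    patches: (num_patches, dim) array of flattened image patches.
    Returns (kept_patches, kept_idx, masked_idx).
    """
    rng = rng or np.random.default_rng(0)
    n = patches.shape[0]
    num_keep = int(n * (1 - mask_ratio))  # e.g. keep 25% of patches
    perm = rng.permutation(n)             # random shuffle of patch indices
    kept_idx, masked_idx = perm[:num_keep], perm[num_keep:]
    return patches[kept_idx], kept_idx, masked_idx

# Example: a 14x14 grid of 196 patches with dim 768, masking 75%
patches = np.zeros((196, 768))
kept, kept_idx, masked_idx = random_mask(patches)
# The encoder sees only the kept 25%; the decoder reconstructs the masked 75%.
```

Skipping masked patches in the encoder is what makes the recipe cheap to scale: the encoder's cost drops roughly in proportion to the mask ratio.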

Research Areas

Vision · Self-supervised · Transformers