
Johannes Welbl

Compute-optimal scaling for LLM training

Co-authored the Chinchilla paper (Hoffmann et al., 2022), an anchor for how to allocate compute when training LLMs.
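The paper's central finding can be stated in one rule of thumb: for a fixed training budget of roughly C ≈ 6·N·D FLOPs (N parameters, D tokens), loss is minimized by scaling parameters and tokens together, which works out to about 20 training tokens per parameter. A minimal sketch of that arithmetic (function names are illustrative, not from the paper):

```python
def chinchilla_optimal_tokens(n_params: float, tokens_per_param: float = 20.0) -> float:
    """Rule-of-thumb training tokens for a compute-optimal model (Chinchilla)."""
    return tokens_per_param * n_params

def training_flops(n_params: float, n_tokens: float) -> float:
    """Standard approximation for dense-transformer training cost: C ~= 6 * N * D."""
    return 6.0 * n_params * n_tokens

# Example: a 70B-parameter model (Chinchilla's own size)
n = 70e9
d = chinchilla_optimal_tokens(n)   # ~1.4e12 tokens, matching Chinchilla's 1.4T
c = training_flops(n, d)           # ~5.9e23 FLOPs
```

Under this rule, the Chinchilla model (70B parameters, 1.4T tokens) matches the compute budget of the much larger Gopher (280B parameters) while reaching lower loss.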


Research Areas

DeepMind, Chinchilla, Scaling, Compute-optimal