
Jordan Hoffmann

Compute-optimal scaling for LLM training

Co-authored the Chinchilla paper, an anchor result for how to allocate a fixed compute budget between model size and training data when training LLMs.
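As a rough illustration (not taken from this page), the Chinchilla paper's headline rule of thumb can be sketched in a few lines: with training cost approximated as C ≈ 6·N·D FLOPs and the compute-optimal token count at roughly D ≈ 20·N, a FLOP budget determines both the parameter count and the token count. The helper name below is hypothetical.

```python
import math

def chinchilla_optimal(flop_budget: float) -> tuple[float, float]:
    """Rough compute-optimal split under two common approximations
    from the Chinchilla paper: training cost C ~= 6 * N * D FLOPs,
    and compute-optimal data D ~= 20 * N tokens.

    Substituting D = 20N into C = 6*N*D gives C = 120 * N**2,
    so N = sqrt(C / 120) and D = 20 * N.
    """
    n_params = math.sqrt(flop_budget / 120)
    n_tokens = 20 * n_params
    return n_params, n_tokens

# Chinchilla itself was trained with roughly 5.8e23 FLOPs;
# this recovers its ~70B parameters and ~1.4T tokens.
n, d = chinchilla_optimal(5.8e23)
print(f"params ~ {n:.2e}, tokens ~ {d:.2e}")
```

Plugging in a budget of about 5.8e23 FLOPs recovers the paper's actual configuration (around 70B parameters trained on around 1.4T tokens), which is a useful sanity check on the approximation.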


Research Areas

DeepMind, Chinchilla, Scaling, Compute-optimal
Jordan Hoffmann - AI Researcher Profile | 500AI