
Karen Simonyan

Compute-optimal scaling for LLM training

Co-authored the Chinchilla paper, an anchor for deciding how to spend compute when training LLMs.

Highlights

Focus: Compute-optimal scaling for LLM training
Why it matters: Co-authored the Chinchilla paper (Hoffmann et al., 2022, "Training Compute-Optimal Large Language Models"), which showed that model size and training-token count should be scaled in roughly equal proportion for a fixed compute budget. It remains a standard reference for deciding how to spend compute when training LLMs.
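The compute-optimal recipe above can be sketched numerically. A minimal sketch, assuming the standard approximation that training cost is C ≈ 6·N·D FLOPs (N parameters, D tokens) and the Chinchilla rule of thumb of roughly 20 training tokens per parameter; the function name and the exact 20:1 ratio are illustrative assumptions, not the paper's fitted scaling-law coefficients:

```python
import math

def chinchilla_optimal(flops_budget, tokens_per_param=20.0):
    """Estimate a compute-optimal (params, tokens) split for a FLOP budget.

    Assumes C ~= 6 * N * D training FLOPs and D ~= tokens_per_param * N
    (the ~20 tokens/param rule of thumb), so C = 6 * tokens_per_param * N**2.
    """
    n_params = math.sqrt(flops_budget / (6.0 * tokens_per_param))
    n_tokens = tokens_per_param * n_params
    return n_params, n_tokens

# Chinchilla itself: ~70B params on ~1.4T tokens, i.e. a ~5.9e23 FLOP budget.
n, d = chinchilla_optimal(5.88e23)  # ~7.0e10 params, ~1.4e12 tokens
```

Under these assumptions, doubling the compute budget grows both the model and the dataset by about sqrt(2), rather than putting all the extra compute into a larger model.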

Research Areas

DeepMind · Chinchilla · Scaling · Compute-optimal