Lab & Ecosystem
Researchers shaping frontier multimodal, RL, and scientific AI systems across the DeepMind lineage.
Within 500AI, Google DeepMind is most legible through researchers like Demis Hassabis, Pushmeet Kohli, and David Silver.
This cluster is especially tied to Multimodal, Agents & Reasoning, and Reinforcement Learning. Frequent institution signals include Google DeepMind, Google, and DeepMind. Recurring entry points include Gemini: A Family of Highly Capable Multimodal Models and Flamingo: a Visual Language Model for Few-Shot Learning.
Snapshot
Researchers: 1,185
Related topics: 8
Starting points: 8
Developed dossiers: 35
Useful lenses pulled from the strongest researcher profiles in this cluster.
Deep reinforcement learning
Via Demis Hassabis
Applying frontier AI to science and public-interest problems
Via Pushmeet Kohli
AlphaGo and game-playing systems
Via David Silver
Gemini
Via Elena Buchatskaya
Chinchilla and compute-optimal scaling (see the sketch after this list)
Via Diego de las Casas
Gopher-era large-language-model work
Via Eric Noland
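For readers new to the Chinchilla lens above: the compute-optimal result from Training Compute-Optimal Large Language Models is often summarized by the rule of thumb that training compute is roughly C ≈ 6·N·D FLOPs for N parameters and D tokens, and that N and D should grow in roughly equal proportion, landing near 20 tokens per parameter. The helper below is a minimal sketch of that heuristic under those commonly cited approximations, not code from any DeepMind system.

```python
import math

def chinchilla_optimal(compute_flops: float, tokens_per_param: float = 20.0):
    """Approximate compute-optimal model/data split under the Chinchilla
    heuristic: C ~ 6 * N * D with D ~ tokens_per_param * N,
    so N ~ sqrt(C / (6 * tokens_per_param))."""
    n_params = math.sqrt(compute_flops / (6.0 * tokens_per_param))
    n_tokens = tokens_per_param * n_params
    return n_params, n_tokens

# Example: a budget of ~5.76e23 FLOPs recovers the familiar
# Chinchilla-scale answer of roughly 70B parameters and 1.4T tokens.
n, d = chinchilla_optimal(5.76e23)
print(f"params ~ {n:.2e}, tokens ~ {d:.2e}")
```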
Frequent institutions showing up across linked profiles in this ecosystem.
Google DeepMind
Google
DeepMind
Repeatedly linked papers, projects, and repositories across this lab cluster.
Gemini: A Family of Highly Capable Multimodal Models
Linked by 1,113 profiles in this cluster
Flamingo: a Visual Language Model for Few-Shot Learning
Linked by 20 profiles in this cluster
Gemini: A Family of Highly Capable Multimodal Models
Linked by 19 profiles in this cluster
A Generalist Agent
Linked by 18 profiles in this cluster
Training Compute-Optimal Large Language Models
Linked by 18 profiles in this cluster
Scaling Language Models: Methods, Analysis & Insights from Training Gopher
Linked by 15 profiles in this cluster
Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm
Linked by 6 profiles in this cluster
AlphaStar: Mastering the real-time strategy game StarCraft II
Linked by 2 profiles in this cluster
Source clusters that repeatedly anchor researcher pages in this ecosystem.
Gemini: A Family of Highly Capable Multimodal Models
Used across 1,113 researcher pages in this lab cluster
Flamingo: a Visual Language Model for Few-Shot Learning
Used across 20 researcher pages in this lab cluster
A Generalist Agent
Used across 18 researcher pages in this lab cluster
Training Compute-Optimal Large Language Models
Used across 18 researcher pages in this lab cluster
Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm
Used across 6 researcher pages in this lab cluster
Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model
Used across 2 researcher pages in this lab cluster
A stronger first pass through Google DeepMind, ranked by profile depth, evidence, and editorial importance.
Deep RL, scientific AI, leadership
Important both as a researcher and as an institution builder whose long-running agenda tied deep RL, multimodal systems, and scientific AI into one coherent lab strategy.
Robotics, vision, structured prediction
A strong person to follow if you want to understand how frontier AI gets pushed into science, security, and trustworthy deployment rather than staying inside benchmark culture.
Deep RL, planning, games
A central figure in modern reinforcement learning whose work turned deep RL from an exciting idea into a line of systems that repeatedly reset expectations.
Compute-optimal scaling for LLM training
Worth tracking for the DeepMind thread that links large-model scaling research to the multimodal Gemini stack, rather than treating those as separate eras.
Compute-optimal scaling for LLM training
A useful profile for the DeepMind researchers who helped carry the lab’s language-model program from scaling-law work into Gemini rather than appearing only on the final product layer.
Compute-optimal scaling for LLM training
A useful profile for the quieter contributor layer behind DeepMind’s frontier language-model systems, especially across Chinchilla and Gemini.
Compute-optimal scaling for LLM training
Worth tracking for the contributor layer inside DeepMind’s language-model program rather than only the most visible public faces of Gemini and Chinchilla.
Compute-optimal scaling for LLM training
A useful profile for the research layer behind DeepMind’s large-model program, especially across the line from Gopher and Chinchilla into Gemini.
Compute-optimal scaling for LLM training
A useful page for the less public but still important DeepMind contributors behind frontier language-model scaling and Gemini.
1,185 linked profiles.