Lab & Ecosystem
Researchers behind GPT-era training, post-training, multimodal systems, and model evaluation work.
Within 500AI, OpenAI is most legible through researchers like Alec Radford, Ilya Sutskever, and John Schulman.
This cluster is especially tied to Evaluation & Benchmarks, Code Models, and Post-Training & Alignment. Frequent institution signals include OpenAI, Boston College, and the University of Siena. Recurring entry points include the GPT-4 Technical Report and Evaluating Large Language Models Trained on Code.
Snapshot
Researchers: 289
Related topics: 8
Starting points: 8
Developed dossiers: 10
Useful lenses pulled from the strongest researcher profiles in this cluster.
Frequent institutions showing up across linked profiles in this ecosystem.
Repeatedly linked papers, projects, and repositories across this lab cluster.
GPT-4 Technical Report
Linked by 207 profiles in this cluster
Evaluating Large Language Models Trained on Code
Linked by 37 profiles in this cluster
Training language models to follow instructions with human feedback
Linked by 17 profiles in this cluster
Language Models are Few-Shot Learners (GPT-3)
Linked by 15 profiles in this cluster
Learning Transferable Visual Models From Natural Language Supervision
Linked by 5 profiles in this cluster
Robust Speech Recognition via Large-Scale Weak Supervision
Linked by 4 profiles in this cluster
Whisper (GitHub)
Linked by 4 profiles in this cluster
Hierarchical Text-Conditional Image Generation with CLIP Latents
Linked by 3 profiles in this cluster
Source clusters that repeatedly anchor researcher pages in this ecosystem.
GPT-4 Technical Report
Used across 204 researcher pages in this lab cluster
Evaluating Large Language Models Trained on Code
Used across 36 researcher pages in this lab cluster
Training language models to follow instructions with human feedback
Used across 17 researcher pages in this lab cluster
Language Models are Few-Shot Learners (GPT-3)
Used across 15 researcher pages in this lab cluster
Learning Transferable Visual Models From Natural Language Supervision
Used across 5 researcher pages in this lab cluster
Robust Speech Recognition via Large-Scale Weak Supervision
Used across 4 researcher pages in this lab cluster
A stronger first pass through OpenAI, ranked by profile depth, evidence, and editorial importance.
Generative pretraining, multimodal models
Important because several of the modern foundation-model playbooks trace back to work he helped drive, especially around generative pretraining and multimodal transfer.
Deep learning, large-scale training
A defining figure of the deep-learning era whose influence comes from both landmark technical contributions and his role in setting the ambition level of frontier-model labs.
Reinforcement learning, post-training
A key bridge between reinforcement-learning methodology and the post-training techniques now used to shape assistant behavior.
Instruction-following via RLHF (InstructGPT)
A useful person to follow for the OpenAI thread that runs from dexterous robotics into later evaluation and capability-measurement work on large language models.
Large-scale language modeling
One of the clearest researchers to study for the GPT-3 era, especially around few-shot learning, scaling behavior, and what larger language models made possible in practice.
Instruction tuning and RLHF
A good person to follow if you care about what deployment-minded safety work looks like inside a frontier lab, especially around moderation, image systems, and system-card style evaluation.
Instruction following, alignment
A useful person to study for the policy-and-deployment side of frontier AI, especially where product releases need a more explicit hazard and misuse analysis.
Instruction following, post-training
A useful anchor for understanding the practical scaling-law and GPT-3 era, especially the people who turned broad intuition about scale into concrete training decisions.
Instruction-following via RLHF (InstructGPT)
A high-signal person to follow for the evaluation and verification side of alignment, especially where language models are pushed to produce answers that can actually be checked rather than merely sounding plausible.
289 linked profiles.