Lianmin Zheng

Fast, cheap LLM serving (PagedAttention)

Co-authored vLLM, a widely used serving stack for efficient LLM inference.

Research Areas

vLLM · Serving Systems