
Nicholas Carlini

Adversarial ML, security of deployed models

High-signal work on real failure modes: adversarial examples, model extraction, and practical security of deployed models.


Research Areas

Security · Safety · Red teaming