Full-Time

Security Lead, Agentic Red Team at Google DeepMind

Company Google DeepMind
Sector Technology
Posted Posted 1 days ago

Job Description

Job Title: Security Lead, Agentic Red Team

We're a team of scientists, engineers, and machine learning experts working together to advance the state of the art in artificial intelligence. Our mission is to close the 'Agentic Launch Gap'; the critical window where novel AI capabilities outpace traditional security reviews.

As the Security Lead for the Agentic Red Team, you will direct a specialized unit of AI Researchers and Offensive Security Engineers focused on adversarial AI and agentic exploitation. Operating as a technical player-coach, you will architect complex, multi-turn attack scenarios while managing cross-functional partnerships with Product Area leads and Google security to influence launch criteria.

Key Responsibilities:

  • Direct Agile Offensive Security: Lead a specialized red team focused on rapid, high-impact engagements targeting production-level AI models and systems.
  • Perform Complex AI Exploitation: Develop and carry out advanced attack sequences that focus on vulnerabilities unique to GenAI, such as escalating privileges through tool usage, poisoning data, and executing multi-turn prompt injections.
  • Design Automated Validation Systems: Collaborate with Google teams to engineer 'Auto RedTeaming' solutions that transform manual vulnerability discoveries into robust, automated regression testing frameworks.
  • Engineer Technical Countermeasures: Create innovative defense-in-depth frameworks and control systems to mitigate agentic logic errors and non-deterministic model behaviors.
  • Manage Threat Intelligence Assets: Develop and oversee an evolving inventory of exploit primitives and agent-specific attack patterns used to establish release criteria and evaluate model security benchmarks.
  • Establish Security Scope: Collaborate with Google for conventional infrastructure protection, allowing the team to concentrate solely on agentic logic, model inference, and AI-centric exploits.

About You:

  • Bachelor's degree in Computer Science, Information Security, or equivalent practical experience.
  • Experience in Red Teaming, Offensive Security, or Adversarial Machine Learning.
  • Deep technical understanding of LLM architectures and agentic workflows (e.g., chain-of-thought reasoning, tool usage).
  • Proven ability to work in a consulting capacity with product teams, driving security improvements in fast-paced release cycles.
  • Experience managing or technically leading small, high-performance engineering teams.

In addition, the following would be an advantage:

  • Hands-on experience developing exploits for GenAI models (e.g., prompt injection, adversarial examples, training data extraction).
  • Familiarity with AI safety benchmarks and evaluation frameworks.
  • Experience writing code (Python, Go, or C++) to build automated security tools or fuzzers.
  • Ability to communicate complex probabilistic risks to executive stakeholders and engineering teams effectively.

The US base salary range for this full-time position is between $248,000 – $349,000 + bonus + equity + benefits.

XML job scraping automation by YubHub

Similar Jobs

Full-Time

Model Behavior Tutor – Social Cognition & EQ

xAI
Remote
More Info
Full-Time

Model Behavior Tutor – Epistemic Rigor & Truthfulness

xAI
Remote
More Info
Full-Time

Member of Technical Staff – Grok Chat Model

xAI
Palo Alto, CA
More Info
Full-Time

Member of Technical Staff – X Platform Security

xAI
Palo Alto, CA
More Info
Full-Time

IT Systems Engineer

xAI
Palo Alto, CA
More Info
Full-Time

Senior IT Systems Engineer

xAI
Palo Alto, CA
More Info

Receive the latest articles in your inbox

Join the Houtini Newsletter

Practical AI tools, local LLM updates, and MCP workflows straight to your inbox.