Menu
Full-Time

Research Engineer / Scientist, Alignment Science at Anthropic

Company Anthropic
Location London
Salary Competitive salary
Posted Posted 1 days ago

Job Description

Opening. This role exists to build and run elegant and thorough machine learning experiments to help us understand and steer the behavior of powerful AI systems.

What you'll do

You'll contribute to exploratory experimental research on AI safety, with a focus on risks from powerful future systems.

  • Testing the robustness of our safety techniques by training language models to subvert our safety techniques, and seeing how effective they are at subverting our interventions.
  • Run multi-agent reinforcement learning experiments to test out techniques like AI Debate.
  • Build tooling to efficiently evaluate the effectiveness of novel LLM-generated jailbreaks.
  • Write scripts and prompts to efficiently produce evaluation questions to test models' reasoning abilities in safety-relevant contexts.

What you need

  • Significant software, ML, or research engineering experience
  • Some experience contributing to empirical AI research projects
  • Some familiarity with technical AI safety research

Why this matters

One paragraph about career impact and value.

AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.

Similar Jobs

Full-Time

Website Engineering

ElevenLabs
London
More Info
Full-Time

Audio Engineer

ElevenLabs
empty string
More Info
Full-Time

Affiliate & Influencer Marketing Manager

ElevenLabs
United States
More Info
Full-Time

Sales Development Representative

ElevenLabs
San Francisco
More Info
Full-Time

Enterprise Solutions Engineer

ElevenLabs
San Francisco
More Info
Full-Time

Full-Stack Engineer (Back-End Leaning)

ElevenLabs
United Kingdom
More Info
Apply Now