This role exists to build and run elegant and thorough machine learning experiments to help us understand and steer the behavior of powerful AI systems.
What you'll do
You'll contribute to exploratory experimental research on AI safety, with a focus on risks from powerful future systems (like those we would designate as ASL-3 or ASL-4 under our Responsible Scaling Policy).
- Test the robustness of our safety techniques by training language models to subvert them, and measure how effective those adversarial models are at circumventing our interventions.
- Run multi-agent reinforcement learning experiments to test techniques like AI Debate.
- Build tooling to efficiently evaluate the effectiveness of novel LLM-generated jailbreaks.
- Write scripts and prompts to efficiently produce evaluation questions to test models’ reasoning abilities in safety-relevant contexts.
- Contribute ideas, figures, and writing to research papers, blog posts, and talks.
- Run experiments that feed into key AI safety efforts at Anthropic, like the design and implementation of our Responsible Scaling Policy.
What you need
- Significant software, ML, or research engineering experience
- Some experience contributing to empirical AI research projects
- Some familiarity with technical AI safety research
- Willingness to pick up slack, even when the work falls outside your job description
- Care about the impacts of AI
Why this matters
AI systems are rapidly becoming more capable, and the techniques we use to understand and steer their behavior need to keep pace. The experiments you run in this role directly inform how we evaluate and mitigate risks from powerful future systems, including the design and implementation of our Responsible Scaling Policy, and your findings will feed into key AI safety efforts across Anthropic.