Opening
This role is a critical part of our research team, focused on ensuring the safety of self-improving, highly autonomous AI systems.
What you'll do
As a Research Engineer on our team, you'll build and evaluate model organisms of autonomous systems and develop the defensive agents needed to counter them.
- Design and build autonomous AI systems that can use tools and operate across diverse environments—creating model organisms that help us understand and defend against advanced adversarial AI
- Create evals and training environments to understand agent behavior and shape it in desirable ways
- Develop defensive agents that can detect, disrupt, or outcompete adversarial AI systems in realistic scenarios
What you need
- Strong software engineering skills, particularly in Python
- Experience building and working with LLM-based agents or autonomous systems
Why this matters
Our work will inform decisions at the highest levels of the company, contribute to public demonstrations that shape policy discourse, and help build technical defenses that could matter enormously as AI systems become more capable.