About the role
The Knowledge Work team builds the training environments and evaluations that make Claude effective at real-world professional workflows: searching, analysing, and creating across the tools and documents knowledge workers use every day.
As that work scales, the systems behind it need to be as rigorous as the research itself. We are looking for a Research Engineer to own the reliability, observability, and infrastructure foundation that the team's research depends on.
You will be responsible for ensuring our training and evaluation runs remain stable, well-instrumented, and high-quality as they grow in scale and complexity. A core part of this role is shifting reliability work from reactive to proactive: hardening systems, stress-testing at realistic scale, and building the observability and tooling that surface problems early, so researchers can stay focused on research rather than incident response.
You will be the team's stable, context-rich owner for environment health and evaluation integrity, and the primary point of contact for partner teams when issues arise.
While you'll work closely with researchers building new training environments, the priority for this role is the reliability those environments depend on. It's best suited to an engineer who finds real ownership and impact in making critical systems dependable, and in being the person behind trustworthy evaluation results the entire organisation relies on.
Key Responsibilities:
- Serve as the dedicated reliability owner for the Knowledge Work training environments, providing continuity of context and reducing the operational overhead of rotating ownership
- Own a clean, canonical set of evaluation tools and processes for Knowledge Work capabilities, including the process used for model releases
- Build and automate observability, dashboards, and operational tooling for our training environments and evaluation systems, with an emphasis on high signal-to-noise: a small set of trusted metrics and alerts rather than sprawling instrumentation
- Proactively harden environments and evaluation systems through load testing, fault injection, and stress testing at realistic scale, so failures surface early rather than during critical training work
- Act as the primary point of contact for partner training and infrastructure teams when issues in our environments arise, and drive incidents to resolution
- Reduce the operational burden on researchers so they can stay focused on research
Minimum Qualifications:
- Highly experienced Python engineer who ships reliable, well-instrumented code that teammates trust in production
- Demonstrated experience operating ML or distributed systems at scale, including significant on-call and incident-response experience
- Strong SRE or production-engineering mindset: reaching for SLOs, load tests, and failure injection before reaching for more dashboards
- Foundational ML knowledge sufficient to understand what a training environment or evaluation is actually measuring, and recognise when an evaluation has become stale or gameable
- Able to read research code and reason about evaluation integrity
Preferred Qualifications:
- 5+ years of experience operating ML or distributed systems at scale
- Experience building or operating RL environments, agent harnesses, or LLM evaluation frameworks
- Familiarity with reward modelling, evaluation design, or detecting and mitigating reward hacking
- Experience with observability stacks (metrics, tracing, structured logging) and operational dashboard tooling
- Background in chaos engineering, fault injection, or large-scale load testing
- Experience with data quality pipelines, drift detection, or evaluation-set curation and versioning
- Familiarity with large-scale training or inference infrastructure (schedulers, multi-agent orchestration, sandboxed execution)
- Prior experience as a dedicated reliability or operations owner embedded within a research team
Logistics
Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience
Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience
Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren’t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
How we’re different
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact, advancing our long-term goals of steerable, trustworthy AI, rather than work on smaller and more specific puzzles.
Come work with us!
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, including a comprehensive health insurance package, 401(k) matching, and generous paid time off.