Job Posting
Researcher, Frontier Biological and Chemical Risks
Location: San Francisco
Employment Type: Full time
Department: Safety Systems
Compensation
- Estimated Base Salary $295K – $445K
The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.
- Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts
- Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)
- 401(k) retirement plan with employer match
- Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)
- Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees
- 13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)
- Mental health and wellness support
- Employer-paid basic life and disability coverage
- Annual learning and development stipend to fuel your professional growth
- Daily meals in our offices, and meal delivery credits as eligible
- Relocation support for eligible employees
- Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.
More details about our benefits are available to candidates during the hiring process.
This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.
About the Team
The Preparedness team is an important part of the Safety Systems org at OpenAI, and is guided by OpenAI’s Preparedness Framework.
Frontier AI models have the potential to benefit all of humanity, but also pose increasingly severe risks. To ensure that AI promotes positive change, the Preparedness team helps us prepare for the development of increasingly capable frontier AI models. This team is tasked with identifying, tracking, and preparing for catastrophic risks related to frontier AI models.
The mission of the Preparedness team is to:
- Closely monitor and predict the evolving capabilities of frontier AI systems, with an eye towards misuse risks whose impact could be catastrophic to our society
- Ensure we have concrete procedures, infrastructure and partnerships to mitigate these risks and to safely handle the development of powerful AI systems
Preparedness tightly connects capability assessment, evaluations, internal red teaming, and mitigations for frontier models, as well as overall coordination on AGI preparedness. This is fast-paced, exciting work with far-reaching importance for the company and for society.
About the Role
We are looking to hire exceptional research engineers who can push the boundaries of our frontier models. Specifically, we are looking for people who will help us shape our empirical grasp of the whole spectrum of AI safety concerns and own individual threads within this endeavor end-to-end.
You will own the scientific validity of our frontier preparedness capability evaluations: designing new evals grounded in real threat models (including high-consequence domains like CBRN as well as cyber and other frontier-risk areas), and maintaining existing evals so they don't go stale or silently regress. You'll define datasets, graders, rubrics, and threshold guidance, and produce auditable artifacts (evaluation cards, capability reports, system-card inputs) that leadership can trust during high-stakes launches.
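To make the dataset/grader/rubric framing concrete, here is a minimal sketch of a rubric-scored capability eval, assuming rubric scores are normalized and aggregated against an escalation threshold. Every name here (EvalItem, grade, CAPABILITY_THRESHOLD) is hypothetical, not OpenAI's internal tooling:

```python
from dataclasses import dataclass

# Hypothetical sketch: a dataset item pairs an elicitation prompt with a rubric.
@dataclass
class EvalItem:
    prompt: str           # elicitation prompt grounded in a real threat model
    rubric: str           # what a concerning completion looks like
    max_score: int = 3    # rubric scale: 0 (no uplift) .. 3 (actionable uplift)

def grade(completion: str, item: EvalItem) -> int:
    """Toy grader: keyword matching stands in for a model- or expert-based rubric grader."""
    concerning = ["synthesis route", "precursor", "enrichment"]
    return min(item.max_score, sum(kw in completion.lower() for kw in concerning))

def capability_score(completions: list[str], items: list[EvalItem]) -> float:
    """Aggregate normalized rubric scores into one tracked capability metric."""
    scores = [grade(c, i) / i.max_score for c, i in zip(completions, items)]
    return sum(scores) / len(scores)

# Assumed: a threshold that, if crossed, triggers escalation and review.
CAPABILITY_THRESHOLD = 0.2
```

In practice the grader would be a trained model or expert review, and the threshold would tie back to defined risk levels; the keyword matcher above only keeps the sketch self-contained.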
In this role, you'll:
- Work on identifying emerging AI safety risks and new methodologies for exploring the impact of these risks
- Build (and then continuously refine) evaluations of frontier AI models that assess the extent of identified risks (a minimal regression-guard sketch follows this list)
- Design and build scalable systems and processes that can support these kinds of evaluations
- Contribute to the refinement of risk management and the overall development of "best practice" guidelines for AI safety evaluations
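On "continuously refine": one hedged sketch of what a regression guard for a versioned eval suite might look like, assuming per-eval baseline scores and a drift tolerance. The eval names, scores, and tolerance below are all illustrative assumptions:

```python
# Last signed-off score per eval (in practice these would live in a results store).
BASELINES = {"cbrn_uplift_v2": 0.15, "cyber_ctf_v1": 0.40}

def flag_regressions(results: dict[str, float], tolerance: float = 0.05) -> list[str]:
    """Return eval names whose new score drifted more than `tolerance` from baseline."""
    return [
        name for name, score in results.items()
        if name in BASELINES and abs(score - BASELINES[name]) > tolerance
    ]

# Example: a nightly run whose cyber score moved enough to warrant human review.
print(flag_regressions({"cbrn_uplift_v2": 0.16, "cyber_ctf_v1": 0.48}))
# -> ['cyber_ctf_v1']
```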
You might thrive in this role if you:
- Are passionate and knowledgeable about short-term and long-term AI safety risks
- Demonstrate the ability to think outside the box and have a robust "red-teaming mindset"
- Have experience in ML research engineering, ML observability and monitoring, creating large language model-enabled applications, and/or another technical domain applicable to AI risk
- Are able to operate effectively in a dynamic and extremely fast-paced research environment, as well as to scope and deliver projects end-to-end
It would be great if you also have:
- First-hand experience in red-teaming systems, be it computer systems or otherwise
- A nuanced understanding of the societal aspects of AI deployment
- Excellent communication skills and the ability to work cross-functionally
This role may require access to technology or technical data controlled under the U.S. Export Administration Regulations or International Traffic in Arms Regulations. Therefore, this role is restricted to individuals described in paragraph (a)(1) of the definition of “U.S. person” in the U.S. Export Administration Regulations, 15 C.F.R. § 772.1, and in the International Traffic in Arms Regulations, 22 C.F.R. § 120.62. U.S. persons are U.S. citizens, U.S. legal permanent residents, individuals granted asylum status in the United States, and individuals who are not U.S. citizens but are lawfully admitted for permanent residence in the United States.