Full Time

Researcher, Loss of Control at OpenAI

Company OpenAI
Location San Francisco
Salary $295K – $445K
How You'll Work onsite
Level senior
Sector Technology
Posted 0 days ago

Job Description

Compensation

Estimated Base Salary $295K – $445K

The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.

  • Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts
  • Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)
  • 401(k) retirement plan with employer match
  • Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)
  • Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees
  • 13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)
  • Mental health and wellness support
  • Employer-paid basic life and disability coverage
  • Annual learning and development stipend to fuel your professional growth
  • Daily meals in our offices, and meal delivery credits as eligible
  • Relocation support for eligible employees
  • Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.

About the team

The Safety Systems org ensures that OpenAI’s most capable models can be responsibly developed and deployed. We build evaluations, safeguards, and safety frameworks that help our models behave as intended in real-world settings.

About the role

As frontier AI systems become more capable, they are increasingly able to pursue long-horizon goals, use tools, adapt to feedback, and operate with greater autonomy. These advances create enormous potential benefits, but they also introduce the risk that models may behave in ways that are misaligned, deceptive, or difficult to supervise or contain. Reducing loss of control risk is therefore a core challenge for safely developing and deploying advanced AI systems.

As a Researcher for loss of control mitigations, you will help design and implement an end-to-end mitigation stack to reduce the risk of intentionally subversive or insufficiently controllable model behavior across OpenAI’s products and internal deployments. This role requires strong technical depth and close cross-functional collaboration to ensure safeguards are enforceable, scalable, and effective. You’ll contribute directly to building protections that remain robust as model capabilities, deployment patterns, and threat models evolve.

In this role, you will:

  • Design and implement mitigation components for loss of control risk (spanning prevention, monitoring, detection, containment, and enforcement) under the guidance of senior technical and risk leadership.
  • Integrate safeguards across product and research surfaces in partnership with product, engineering, and research teams, helping ensure protections are consistent, low-latency, and resilient as usage and model autonomy increase.
  • Evaluate technical trade-offs within the loss of control domain (coverage, robustness, latency, model utility, and operational complexity) and propose pragmatic, testable solutions.
  • Collaborate closely with risk modeling, evaluations, and policy partners to align mitigation design with anticipated failure modes and high-severity threat scenarios, including deceptive alignment, hidden subgoals, reward hacking, and attempts to evade oversight.
  • Execute rigorous testing and red-teaming workflows, helping stress-test the mitigation stack against increasingly capable and potentially subversive model behaviors (such as sandbagging, monitor evasion, exploit-seeking, unsafe tool use, or strategic deception) and iterate based on findings.

You might thrive in this role if you:

  • Have a passion for AI safety and are motivated to make cutting-edge AI models safer for real-world use.
  • Bring demonstrated experience in deep learning and transformer models.
  • Are proficient with frameworks such as PyTorch or TensorFlow.
  • Possess a strong foundation in data structures, algorithms, and software engineering principles.
  • Are familiar with methods for training and fine-tuning large language models, including distillation, supervised fine-tuning, and policy optimization.
  • Excel at working collaboratively with cross-functional teams across research, policy, product, and engineering.
  • Have significant experience designing and evaluating technical safeguards, control mechanisms, or monitoring systems for advanced AI behavior.
  • (Nice to have) Bring background knowledge in alignment, control, interpretability, robustness, adversarial ML, or related fields.

About OpenAI

OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.

