Full-Time

Manager, Protection Scientist Engineer, Intelligence and Investigations at OpenAI

Company OpenAI
Location San Francisco
Salary Competitive salary
Posted 1 day ago

Job Description

Manager, Protection Scientist Engineer, Intelligence and Investigations

Location

San Francisco

Employment Type

Full time

Department

Intelligence & Investigations

Compensation

  • $288K – $425K • Offers Equity

The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.

  • Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts

  • Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)

  • 401(k) retirement plan with employer match

  • Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)

  • Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees

  • 13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)

  • Mental health and wellness support

  • Employer-paid basic life and disability coverage

  • Annual learning and development stipend to fuel your professional growth

  • Daily meals in our offices, and meal delivery credits as eligible

  • Relocation support for eligible employees

  • Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.

More details about our benefits are available to candidates during the hiring process.

This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.

About the Team

OpenAI’s mission is to ensure that general-purpose artificial intelligence benefits all of humanity. We believe that achieving our goal requires real-world deployment and iterative updates based on what we learn.

The Intelligence and Investigations team supports this by identifying and investigating misuses of our products – especially new types of abuse. This enables our partner teams to develop data-backed product policies and build scaled safety mitigations. Precisely understanding abuse allows us to safely enable users to build useful things with our products.

About the Role

Protection Science Engineering is an interdisciplinary role mixing data science, machine learning, investigation, and policy/protocol development. As a manager of Protection Scientist Engineers, you will lead a small but growing team of PSEs who design and build systems to proactively identify and enforce against abuse of OpenAI’s products. This includes ensuring we have robust abuse monitoring in place for new products, sustaining monitoring for existing products, and prototyping and incubating systems of defense against our highest-risk harms. The team also responds to and investigates critical escalations, especially those that are not caught by our existing safety systems. The team, and you, will leverage expert understanding of our products and data, and work cross-functionally with product, policy, and scaled engineering teams.

This role is based in our San Francisco office and may involve resolving urgent escalations outside of normal work hours. Some investigations and work may involve sensitive content, including sexual, child safety, violent, or otherwise-disturbing material.

In this role, you will:

  • Lead, manage, and support an interdisciplinary and technical team across US time zones, working with them to chart longer-term strategies.

  • Leverage your expertise in designing, launching, and improving systems of defense to mentor the team and develop both technical and organizational solutions.

  • Identify and create opportunities to make the team’s core work more efficient and effective by understanding both cross-functional and technical challenges.

  • Review, and at times design and participate in, the implementation of abuse detection, review, and enforcement for new product launches and major harms.

  • Work with Product, Policy, Ops, and Investigative teams to understand key risks and how to address them, and with Engineering teams to ensure we have sufficient data and scaled tooling.

  • Communicate the work of the team, at times externally, and coordinate with cross-functional partners to scope and prioritize work across abuse response.

You might thrive in this role if you:

  • Have at least two years of experience managing or tech-leading teams that fight threats at a tech company and/or government organization.

  • Have deep experience with technical analysis and harm detection, especially using SQL and Python.

  • Have experience in trust and safety and/or have worked closely with policy, enforcement, and engineering teams. An investigative mindset is key.

  • Have experience with basic data engineering, such as building core tables or writing data pipelines in production, and with machine learning principles and execution. Basic software development skills are a plus, as this role writes productionized code.

  • Have experience scaling and automating processes, especially with language models.

About OpenAI

OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core.

