Job Posting: Model Policy Manager, Youth Well-being
Location: San Francisco
Employment Type: Full time
Department: Safety Systems
Compensation
Estimated base salary: $207K – $295K • Offers Equity
The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.
- Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts
- Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)
- 401(k) retirement plan with employer match
- Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)
- Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees
- 13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)
- Mental health and wellness support
- Employer-paid basic life and disability coverage
- Annual learning and development stipend to fuel your professional growth
- Daily meals in our offices, and meal delivery credits as eligible
- Relocation support for eligible employees
- Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.
More details about our benefits are available to candidates during the hiring process.
This role is at-will, and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.
About the Team
The Safety Systems team is at the forefront of OpenAI's mission to build and deploy safe AGI, driving our commitment to AI safety and fostering a culture of trust and transparency.
The Model Policy team aligns model behavior with desired human values and norms. We co-design policy with and for models, driving rapid, data-informed iteration on policy taxonomies and defining evaluation criteria for foundation models’ ability to reason about safety. Key focus areas include catastrophic risk, mental health, teen safety, and multimodal safety.
About the Role
Providing access to powerful AI models introduces a host of challenging questions about model safety: How do we define safe model behavior? To what end? How do we do this in a way that is actionable, objective, and replicable?
This is a senior role in which you’ll help shape policy creation and development at OpenAI and make an impact by helping ensure that our groundbreaking technologies do not create harm. The ideal candidate can identify and develop cohesive, thoughtful taxonomies of harm on high-risk topics with a sense of urgency. They can balance internal and external input in making complex decisions, carefully think through trade-offs, and write principled, enforceable policies based on our values. Importantly, this role is embedded in our research teams and directly informs model training.
This role is based in San Francisco, CA. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.
In this role, you’ll:
- Design model policies that govern safe model behavior in an objective and defensible way: How should the model respond in risky or unsafe scenarios? What does “unsafe” mean? How do we achieve safety while preserving beneficial model capabilities?
- Develop taxonomies that inform data collection campaigns, model behavior, and monitoring strategies, striking a balance between maximizing utility and preventing catastrophic risk.
- Lead prioritization of safety efforts across the company for new model launches, understanding and addressing technical and business trade-offs.
- Develop a broad range of subject matter expertise while maintaining agility across topics.
- Work across many internal teams, which requires strong organizational acumen and confident decision-making.
You might thrive in this role if you:
- Have extensive experience researching LLMs, ML, AI, tech policy, or moral reasoning, and/or enjoy classification problems.
- Have extensive experience defining, refining and enforcing policies for ML models across training, evaluation, and deployment.
- Understand the practical challenges of translating policy into model behavior across the full training stack, and can incorporate these constraints into policy design.
- Can reason about the benefits and risks of open-ended problem spaces, generate novel approaches under ambiguity, and take full ownership of end-to-end solutions from concept through execution.
Most relevant publications:
- Introducing HealthBench
- Preparing for future AI capabilities in biology
- Safety evaluations hub
- OpenAI GPT-5 System Card
- Evaluating Fairness in ChatGPT
- Improving Model Safety Behavior with Rule-Based Rewards
- OpenAI Model Spec
About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that artificial general intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences of individuals from all walks of life.