Full-Time

Strategic Risk Analyst at OpenAI

Company: OpenAI
Location: San Francisco
Salary: Competitive
Posted: 1 day ago

Job Description

Strategic Risk Analyst

Location: San Francisco

Employment Type: Full time

Location Type: Hybrid

Department: Intelligence & Investigations

Compensation: $198K – $320K • Offers Equity

The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits:

  • Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts

  • Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)

  • 401(k) retirement plan with employer match

  • Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)

  • Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees

  • 13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)

  • Mental health and wellness support

  • Employer-paid basic life and disability coverage

  • Annual learning and development stipend to fuel your professional growth

  • Daily meals in our offices, and meal delivery credits as eligible

  • Relocation support for eligible employees

  • Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.

More details about our benefits are available to candidates during the hiring process.

This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.

About the team

The Intelligence and Investigations team seeks to rapidly identify and mitigate abuse and strategic risks to ensure a safe online ecosystem. We are dedicated to identifying emerging abuse trends, analysing risks, and working with our internal and external partners to implement effective mitigation strategies to protect against misuse. Our efforts contribute to OpenAI's overarching goal of developing AI that benefits humanity.

We are building a horizontal “radar” for AI abuse and strategic risk—correlating internal signals, external intelligence, and real-world events into clear, actionable priorities for OpenAI’s safety and product decision-makers.

About the role

As a Strategic Risk Analyst, you will help develop and maintain our central view of strategic risk across OpenAI’s products and platforms. You will synthesise internal abuse patterns, upstream and external intelligence, and product and conversational signals into decision-ready risk insights, recurring briefs, and practical prioritisation inputs.

You will partner closely with investigators, engineers, and policy and trust and safety counterparts, as well as measurement and forecasting teammates, to translate messy signals into structured judgments (including assumptions and confidence), ranked priorities, and actionable recommendations. This is an opportunity to do high-leverage analysis in a fast-moving environment, where crisp thinking and communication directly shape safety decisions, mitigations, and product readiness.

In this role, you will

  • Monitor and analyse internal risk signals (abuse telemetry, investigations outputs, model and product signals) to identify trends, shifts in tactics, and new abuse patterns.

  • Conduct upstream and external scanning (OSINT, ecosystem developments, real-world events) and distil implications for OpenAI’s products and threat landscape.

  • Identify and deep dive into harms and misuse across products and channels, turning messy signals into clear analytic findings.

  • Connect individual incidents into system-level narratives about actors, incentives, product design weaknesses, and cross-product spillover—pressure-testing hypotheses early.

  • Produce concise, decision-ready risk briefs and intelligence estimates with explicit assumptions, confidence levels, and what would change the assessment.

  • Convert analysis into clear, ranked priorities and actionable recommendations that product, safety, and policy teams can execute on.

  • Define and track key risk indicators and outcome metrics to evaluate whether mitigations are working and drive course corrections when needed.

  • Build early-warning and monitoring capabilities with data, engineering, and visualisation partners, including dashboards that highlight leading indicators and unusual changes.

  • Contribute to product readiness and launch reviews; develop reusable playbooks, FAQs, and briefing materials that help teams respond consistently.

  • Drive cross-functional alignment by tailoring readouts to investigations, engineering, policy, trust and safety, and product stakeholders—and ensuring decisions and follow-ups are crisp.

You might thrive in this role if you have

  • Significant experience (typically 5+ years) in trust and safety, integrity, security, policy analysis, or intelligence work.

  • Demonstrated ability to analyse complex online harms and AI-enabled misuse (e.g., harassment, coordinated abuse, scams, synthetic media, influence operations, brand safety issues) and convert analysis into concrete, prioritised recommendations.

  • Strong analytical craft: you can identify weak signals, form hypotheses, test them quickly, state assumptions explicitly, and communicate confidence and uncertainty clearly.

  • Comfort working across qualitative and quantitative inputs, including (1) casework,

