We're looking for someone to join our Safety team and own key outcomes across policy, automation, and enterprise guardrails. You'll design integrity policies aligned with global regulations, and shape how enterprises implement guardrails when building on our APIs.
What you'll do
- Design and evolve safety policies for audio AI, image/video AI, and agentic safety, aligned with ISO 42001, the EU AI Act, the DSA, US state laws, and global regulatory developments
- Build scalable, AI-powered systems and workflows that dramatically reduce response times and increase policy coverage
What you need
- Broad experience across Trust & Safety spanning policy, operations, investigations, and content moderation, rather than a single specialty
- Deep familiarity with the global AI regulatory landscape: the EU AI Act, the DSA, US state laws, and emerging frameworks