We're seeking a Safeguards Enforcement Lead to join our team at Anthropic. In this position, you will play a critical role in ensuring the reliability, interpretability, and steerability of our AI systems.
Our team is a rapidly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems. We want AI to be safe and beneficial for our users and for society as a whole.
Responsibilities:
- Develop and implement safeguards to prevent misuse of our AI systems
- Collaborate with cross-functional teams to ensure compliance with regulatory requirements
- Analyze data to identify potential risks and develop mitigation strategies
- Work with stakeholders to communicate risks and benefits of our AI systems
Benefits:
- Competitive salary and benefits package
- Opportunity to work with a talented team of researchers and engineers
- Professional development opportunities
Requirements:
- Bachelor's degree in Computer Science, Philosophy, or related field
- 5+ years of experience in a related field
- Strong understanding of AI safety and ethics
- Excellent communication and collaboration skills
Preferred Qualifications:
- Master's degree in Computer Science, Philosophy, or related field
- Experience with machine learning and deep learning
- Familiarity with regulatory requirements for AI systems
Note: This job description is not an exhaustive list of responsibilities. You may be expected to perform other duties as required by the company.