This role is a stepping stone toward our overall goal of mechanistically understanding neural networks.
What you'll do
As a manager on the Interpretability team, you'll support a team of expert researchers and engineers who are trying to understand, at a deep mechanistic level, how modern large language models work internally. Few things can accelerate this work more than great managers. Your work as a manager will be critical to ensuring that our fast-growing team meets its ambitious safety research goals over the coming years.
- Partner with a research lead on direction, project planning and execution, hiring, and people development
- Set and maintain a high bar for execution speed and quality, including identifying improvements to processes that help the team operate effectively
- Coach and support team members to have more impact and develop in their careers
- Drive the team's recruiting efforts, including hiring planning, process improvements, and sourcing and closing
- Help identify and support opportunities for collaboration with other teams across Anthropic
- Communicate team updates and results to other teams and leadership
- Maintain a deep understanding of the team's technical work and its implications for AI safety