We are seeking strong Research Scientists with expertise in AI research and experience in interdisciplinary sociotechnical modeling to join a multimodal safety research effort within Google DeepMind's Frontier AI unit.
This role requires a passion for understanding and modeling the interactions between AI and society, a strong awareness of the AI alignment and safety landscape, and a penchant for developing novel ideas, methods, interfaces, and tools.
As a Research Scientist at Google DeepMind, you will join a team working to supercharge exploration, assessment, and steering of evolving AI behaviours, with a focus on subjective and creative tasks. You will tackle the underlying research questions to improve collaborative specification of alignment objectives and assessment of adherence to desired behaviours.
Key responsibilities include generating new research ideas, executing on cutting-edge research, communicating findings, collaborating with other researchers, and driving technical projects.
To be successful in this role, you will need a PhD in Computer Science, Machine Learning, or a related technical field, a strong publication record at top machine learning conferences, and demonstrated hands-on experience developing multimodal AI models and systems.
In addition, experience with large-scale vision language models, fine-tuning and post-training LLMs using RL, and developing agentic AI solutions to complex problems would be an advantage.