Opening
This role is part of the Finetuning Alignment team, which spearheads the development of techniques to minimize hallucinations and improve truthfulness in language models. Your work will focus on building robust systems that are accurate, that reflect their true level of confidence across domains, and that avoid being deceptive or misleading.
What you'll do
As a Research Scientist/Engineer focused on honesty within the Finetuning Alignment team, you'll design and implement novel data curation pipelines to identify, verify, and filter training data for accuracy relative to the model's knowledge. You'll develop specialized classifiers that detect potential hallucinations or miscalibrated claims made by the model, create and maintain comprehensive honesty benchmarks and evaluation frameworks, and implement techniques to ground model outputs in verified information.
What you need
- An MS or PhD in Computer Science, ML, or a related field
- Strong programming skills in Python
- Industry experience with language model finetuning and classifier training
- Proficiency in experimental design and statistical analysis for measuring improvements in calibration and accuracy