We are seeking a technically rigorous and driven AI Research Engineer to join our Enterprise Evaluations team. This high-impact role is critical to our mission of delivering the industry's leading GenAI Evaluation Suite. You will be a hands-on contributor to the core systems that ensure the safety, reliability, and continuous improvement of LLM-powered workflows and agents for the enterprise.
What you'll do
- Partner with Scale’s Operations team and enterprise customers to translate ambiguous requirements into structured evaluation data, guiding the creation and maintenance of gold-standard human-rated datasets and expert rubrics that anchor AI evaluation systems.
- Analyze feedback and collected data to identify patterns, refine evaluation frameworks, and establish iterative improvement loops that enhance the quality and relevance of human-curated assessments.
- Design, research, and develop LLM-as-a-Judge autorater frameworks and AI-assisted evaluation systems, including models that critique, grade, and explain agent outputs (e.g., RLAIF, model-judging-model setups) as well as scalable evaluation pipelines and diagnostic tools (see the sketch after this list).
- Pursue research initiatives that explore new methodologies for automatically analyzing, evaluating, and improving the behavior of enterprise agents, pushing the boundaries of how AI systems are assessed and optimized in real-world contexts.
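For a concrete sense of the autorater work, here is a minimal illustrative sketch of an LLM-as-a-Judge grading loop. The `call_llm` client, rubric format, and JSON output schema are hypothetical stand-ins, not Scale's production stack.

```python
"""Minimal LLM-as-a-Judge sketch: grade an agent response against a rubric,
one criterion at a time, keeping both a score and a critique."""
import json
from dataclasses import dataclass

# Illustrative prompt template; a real rubric would be far more detailed.
JUDGE_PROMPT = """You are grading an AI agent's response against a rubric.
Rubric criterion: {criterion}
Task: {task}
Agent response: {response}
Return JSON: {{"score": <integer 1-5>, "critique": "<one-sentence explanation>"}}"""


@dataclass
class Judgment:
    criterion: str
    score: int
    critique: str


def call_llm(prompt: str) -> str:
    """Placeholder for a real model call (internal gateway or vendor SDK)."""
    raise NotImplementedError


def judge_response(task: str, response: str, rubric: list[str]) -> list[Judgment]:
    """Score one agent response against every rubric criterion."""
    judgments = []
    for criterion in rubric:
        raw = call_llm(JUDGE_PROMPT.format(criterion=criterion, task=task, response=response))
        parsed = json.loads(raw)
        judgments.append(Judgment(criterion, int(parsed["score"]), parsed["critique"]))
    return judgments
```

In practice, a pipeline like this would run over large batches of agent transcripts and feed diagnostic dashboards; the sketch only shows the per-response grading step.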
What you need
- Bachelor’s degree in Computer Science, Electrical Engineering, a related field, or equivalent practical experience.
- 2+ years of experience in Machine Learning or Applied Research, focused on applied ML systems or evaluation infrastructure.
- Hands-on experience with Large Language Models (LLMs) and Generative AI in professional or research environments.
- Strong understanding of frontier model evaluation methodologies and the current research landscape.
- Proficiency in Python and major ML frameworks (e.g., PyTorch, TensorFlow).
- A solid foundation in engineering and statistical analysis, with experience developing data-driven methods for assessing model quality (a minimal example follows this list).
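As one example of the kind of data-driven quality check this role involves, the sketch below measures how well an autorater agrees with gold-standard human ratings. The score arrays are illustrative placeholders; only the standard SciPy and scikit-learn calls are assumed.

```python
"""Compare autorater scores against human gold ratings on the same responses."""
from scipy.stats import spearmanr
from sklearn.metrics import cohen_kappa_score

# Hypothetical 1-5 ratings for the same set of agent responses.
human_scores = [5, 3, 4, 2, 5, 1, 4, 3]
judge_scores = [4, 3, 4, 2, 5, 2, 5, 3]

# Rank correlation: does the judge order responses the way humans do?
rho, p_value = spearmanr(human_scores, judge_scores)

# Quadratically weighted kappa: chance-corrected agreement that penalizes
# large disagreements more heavily than off-by-one ones.
kappa = cohen_kappa_score(human_scores, judge_scores, weights="quadratic")

print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f}), weighted kappa = {kappa:.2f}")
```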
Why this matters
Enterprise customers can only deploy LLM-powered workflows and agents with confidence when they can measure how those systems behave. The datasets, autoraters, and pipelines you build in this role form the backbone of the industry's leading GenAI Evaluation Suite, directly shaping how safely and reliably those systems improve over time.