Full-Time

AI Research Engineer, Enterprise Evaluations at Scale AI

Company: Scale AI
Location: San Francisco, CA; New York, NY
Salary: Competitive
Posted: 1 day ago

Job Description

We are seeking a technically rigorous and driven AI Research Engineer to join our Enterprise Evaluations team. This high-impact role is critical to our mission of delivering the industry's leading GenAI Evaluation Suite. You will be a hands-on contributor to the core systems that ensure the safety, reliability, and continuous improvement of LLM-powered workflows and agents for the enterprise.

What you'll do

  • Partner with Scale’s Operations team and enterprise customers to translate ambiguity into structured evaluation data, guiding the creation and maintenance of gold-standard human-rated datasets and expert rubrics that anchor AI evaluation systems.
  • Analyze feedback and collected data to identify patterns, refine evaluation frameworks, and establish iterative improvement loops that enhance the quality and relevance of human-curated assessments.
  • Design, research, and develop LLM-as-a-Judge autorater frameworks and AI-assisted evaluation systems. This includes creating models that critique, grade, and explain agent outputs (e.g., RLAIF, model-judging-model setups), along with scalable evaluation pipelines and diagnostic tools; a minimal sketch of this pattern follows this list.
  • Pursue research initiatives that explore new methodologies for automatically analyzing, evaluating, and improving the behavior of enterprise agents, pushing the boundaries of how AI systems are assessed and optimized in real-world contexts.
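
To make the LLM-as-a-Judge idea concrete, here is a minimal sketch in Python. It is illustrative only, not a description of Scale's actual evaluation stack: the rubric fields, prompt wording, and JSON verdict format are assumptions, and `call_judge_model` is a hypothetical callable the caller supplies to wrap whatever LLM backend is in use.

```python
import json
from dataclasses import dataclass
from typing import Callable


@dataclass
class RubricCriterion:
    """One dimension of an expert rubric, e.g. factual accuracy or instruction-following."""
    name: str
    description: str
    max_score: int = 5


def build_judge_prompt(task: str, agent_output: str, rubric: list[RubricCriterion]) -> str:
    """Assemble a grading prompt that asks the judge model for a JSON verdict."""
    criteria = "\n".join(
        f"- {c.name} (0-{c.max_score}): {c.description}" for c in rubric
    )
    return (
        "You are grading an AI agent's response against a rubric.\n\n"
        f"Task given to the agent:\n{task}\n\n"
        f"Agent response:\n{agent_output}\n\n"
        f"Rubric:\n{criteria}\n\n"
        'Reply with JSON only: {"scores": {"<criterion>": <int>, ...}, "explanation": "<string>"}'
    )


def grade(
    task: str,
    agent_output: str,
    rubric: list[RubricCriterion],
    call_judge_model: Callable[[str], str],  # hypothetical: wraps the LLM backend in use
) -> dict:
    """Run one judge call; return per-criterion scores, an explanation, and a normalized total."""
    raw = call_judge_model(build_judge_prompt(task, agent_output, rubric))
    verdict = json.loads(raw)  # a real pipeline would add retry and parse-failure handling here
    total = sum(int(v) for v in verdict["scores"].values())
    possible = sum(c.max_score for c in rubric)
    return {
        "scores": verdict["scores"],
        "explanation": verdict["explanation"],
        "normalized_score": total / possible,
    }
```

A production version would add batching, deterministic judge settings, and calibration against the human-rated gold datasets described above, but the core loop (rubric in, structured verdict out) looks roughly like this.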

What you need

  • Bachelor’s degree in Computer Science, Electrical Engineering, a related field, or equivalent practical experience.
  • 2+ years of experience in Machine Learning or Applied Research, focused on applied ML systems or evaluation infrastructure.
  • Hands-on experience with Large Language Models (LLMs) and Generative AI in professional or research environments.
  • Strong understanding of frontier model evaluation methodologies and the current research landscape.
  • Proficiency in Python and major ML frameworks (e.g., PyTorch, TensorFlow).
  • Solid engineering and statistical analysis foundation, with experience developing data-driven methods for assessing model quality (a small example of this kind of analysis follows this list).
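
On the statistical side, a common first check is how well an autorater tracks the human gold labels it is meant to approximate. The sketch below is a minimal illustration rather than any specific Scale tooling: it computes the judge-versus-human agreement rate and a percentile-bootstrap confidence interval for it, and the label format and sample data are assumptions.

```python
import random


def agreement_rate(judge_labels: list[str], human_labels: list[str]) -> float:
    """Fraction of items where the autorater matches the human gold label."""
    assert len(judge_labels) == len(human_labels)
    matches = sum(j == h for j, h in zip(judge_labels, human_labels))
    return matches / len(judge_labels)


def bootstrap_ci(
    judge_labels: list[str],
    human_labels: list[str],
    n_resamples: int = 10_000,
    alpha: float = 0.05,
    seed: int = 0,
) -> tuple[float, float]:
    """Percentile-bootstrap confidence interval for the judge-vs-human agreement rate."""
    rng = random.Random(seed)
    pairs = list(zip(judge_labels, human_labels))
    stats = []
    for _ in range(n_resamples):
        sample = [rng.choice(pairs) for _ in pairs]  # resample items with replacement
        stats.append(sum(j == h for j, h in sample) / len(sample))
    stats.sort()
    lo = stats[int((alpha / 2) * n_resamples)]
    hi = stats[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi


# Illustrative data: the judge agrees with human raters on 87 of 100 items.
judge = ["pass"] * 87 + ["fail"] * 13
human = ["pass"] * 100
print(agreement_rate(judge, human), bootstrap_ci(judge, human))
```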
