Full-Time

AI Infrastructure Engineer, Core Infrastructure at Scale

Company: Scale
Sector: Technology
Posted: 1 day ago

Job Description

As an AI Infrastructure Engineer on the Core Infrastructure team, you will design and build the next generation of foundational systems that power all ML infrastructure compute at scale. Our platform is responsible for orchestrating workloads across heterogeneous compute environments, optimising for reliability, cost efficiency, and developer velocity.

In this role, you will:

- Design and maintain fault-tolerant, cost-efficient systems that manage compute allocation, scheduling, and autoscaling across clusters and clouds.
- Build common abstractions and APIs that unify job submission, telemetry, and observability across serving and training workloads.
- Develop systems for usage metering, cost attribution, and quota management, enabling transparency and control over compute budgets.
- Improve the reliability and efficiency of large-scale GPU workloads through better scheduling, bin-packing, preemption, and resource sharing.
- Partner with ML engineers and API teams to identify bottlenecks and define long-term architectural standards.
- Lead projects end-to-end, from requirements gathering and design to rollout and monitoring, in a cross-functional environment.

Ideally, you have:

- 4+ years of experience building large-scale backend or distributed systems.
- Strong programming skills in Python, Go, or Rust, and familiarity with modern cloud-native architecture.
- Experience with containers and orchestration tools (Kubernetes, Docker) and Infrastructure as Code (Terraform).
- Familiarity with schedulers or workload management systems (e.g., Kubernetes controllers, Slurm, Ray, internal job queues).
- An understanding of observability and reliability practices (metrics, tracing, alerting, SLOs).
- A track record of improving system efficiency, reliability, or developer velocity in production environments.

Nice to have:

- Experience with multi-tenant compute platforms or internal PaaS.
- Knowledge of GPU scheduling, cost modelling, or hybrid cloud orchestration.
- Familiarity with LLM or ML training workloads, though deep ML expertise is not required.


