Inference Technical Lead, Sora
Location: San Francisco
Employment Type: Full time
Location Type: Hybrid
Department: Research
Compensation: $380K • Offers Equity
The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.
Benefits
- Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts
- Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)
- 401(k) retirement plan with employer match
- Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)
- Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees
- 13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)
- Mental health and wellness support
- Employer-paid basic life and disability coverage
- Annual learning and development stipend to fuel your professional growth
- Daily meals in our offices, and meal delivery credits as eligible
- Relocation support for eligible employees
- Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided
About the Team
The Sora team is pioneering multimodal capabilities for OpenAI’s foundation models. We’re a hybrid research and product team focused on integrating multimodal functionalities into our AI products, ensuring they are reliable, user-friendly, and aligned with our mission of broad societal benefit.
About the Role
We’re looking for an Inference Technical Lead to improve model serving efficiency for Sora. This is a high-impact role where you’ll drive initiatives to optimize inference performance and scalability. You’ll also engage in model design, helping our researchers develop inference-friendly models.
This role is critical to scaling the team’s broader goals: by building a stronger technical foundation, you’ll directly enable leadership to focus on higher-leverage initiatives.
Responsibilities
- Lead engineering efforts to improve model serving, inference performance, and system efficiency
- Drive kernel-level and data-movement optimizations to improve system throughput and reliability
- Partner closely with research and product teams to ensure our models perform effectively at scale
- Design, build, and improve critical serving infrastructure to support Sora’s growth and reliability needs
Requirements
- Deep expertise in model performance optimization, particularly at the inference layer
- A strong background in kernel-level systems, data movement, and low-level performance tuning
- Excitement about scaling high-performing AI systems that serve real-world, multimodal workloads
- The ability to navigate ambiguity, set technical direction, and drive complex initiatives to completion
This role is based in San Francisco, CA. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.
About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.