Full-Time

Hardware / Software CoDesign Engineer at OpenAI

Company: OpenAI
Location: San Francisco
Salary: Competitive
Posted: 1 day ago

Job Description

Hardware / Software CoDesign Engineer

Location

San Francisco

Employment Type

Full time

Location Type

Hybrid

Department

Scaling

Compensation

  • $342K – $555K • Offers Equity

The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.

Benefits

  • Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts

  • Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)

  • 401(k) retirement plan with employer match

  • Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)

  • Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees

  • 13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)

  • Mental health and wellness support

  • Employer-paid basic life and disability coverage

  • Annual learning and development stipend to fuel your professional growth

  • Daily meals in our offices, and meal delivery credits as eligible

  • Relocation support for eligible employees

  • Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.

About the Team

OpenAI’s Hardware organization develops silicon and system-level solutions designed for the unique demands of advanced AI workloads. The team is responsible for building the next generation of AI-native silicon while working closely with software and research partners to co-design hardware tightly integrated with AI models. In addition to delivering production-grade silicon for OpenAI’s supercomputing infrastructure, the team also creates custom design tools and methodologies that accelerate innovation and enable hardware optimized specifically for AI.

About the Role

As an Engineer on our hardware optimization and co-design team, you will co-design future hardware from different vendors for programmability and performance. You will work with our kernel, compiler, and machine learning engineers to understand their unique needs related to ML techniques, algorithms, numerical approximations, programming expressivity, and compiler optimizations. You will communicate these constraints to various vendors to develop and influence future hardware architectures toward efficient training and inference on our models. If you are excited about efficiently distributing a large language model across devices, tackling and optimizing system-wide and rack-wide networking bottlenecks, tailoring the compute pipeline and memory hierarchy of the hardware platform, simulating workloads at different abstraction levels, and working closely with our partners, this is the perfect opportunity!

In this role, you will:

  • Co-design future hardware for programmability and performance with our hardware vendors

  • Assist hardware vendors in developing optimal kernels and add support for them in our compiler

  • Develop performance estimates for critical kernels for different hardware configurations and drive decisions on compute core and memory hierarchy features

  • Build system performance models at different abstraction levels and carry out analysis to drive decisions on scale-up, scale-out, and front-end networking

  • Work with machine learning engineers, kernel engineers and compiler developers to understand their vision and needs from high performance accelerators

  • Manage communication and coordination with internal and external partners

  • Influence hardware partners’ roadmaps to optimize their products for OpenAI’s workloads

  • Evaluate potential partners’ accelerators and platforms

  • As the scope of the role and team grows, understand and influence hardware partners’ roadmaps for our datacenter networks, racks, and buildings

You might thrive in this role if you have:

  • 4+ years of industry experience, including experience harnessing compute at scale and optimizing ML platform code to run efficiently on target hardware

  • Strong experience in software/hardware co-design

  • Deep understanding of GPU and/or other AI accelerators

  • Experience with CUDA, Triton, or a related accelerator programming language

  • Experience driving machine learning accuracy with low-precision formats

  • Experience with system performance modeling and analysis to optimize ML model deployment

  • Strong coding skills in C/C++ and Python

  • Familiarity with the fundamentals of deep learning computing and chip architecture/microarchitecture

These attributes are nice to have:

  • PhD in Computer Science and Engineering with a specialization in Computer Architecture, Parallel Computing, Compilers, or other Systems areas

  • Strong understanding of LLMs and challenges related to their training and inference

