Meta's AI Training and Inference Infrastructure is growing exponentially to support ever-increasing AI use cases. This creates a dramatic scaling challenge that our engineers tackle daily. We need to build and evolve the network infrastructure that connects myriad training accelerators, such as GPUs, together. In addition, we need to ensure that the network runs smoothly and meets the stringent performance and availability requirements of RDMA workloads, which expect a lossless fabric interconnect. To improve the performance of these systems, we constantly look for opportunities across the stack: network fabric, host networking, communication libraries, and scheduling infrastructure.
What you'll do
Be an active member of a multi-disciplinary team developing solutions for large-scale training systems
Responsible for the overall performance of the communication system, including performance benchmarking, monitoring and troubleshooting production issues
Identify potential performance issues across the stack: communication libraries, RDMA transport, host networking, scheduling, and network fabric. Develop and deploy innovative solutions to address them
What you need
Currently has, or is in the process of obtaining, a Bachelor's degree in Computer Science, Computer Engineering, a relevant technical field, or equivalent practical experience. Degree must be completed prior to joining Meta
BS, MS, or PhD in a relevant field (EE, CS) with 2+ years of work experience
Experience using communication libraries such as MPI, NCCL, and UCX
Experience developing, evaluating, and debugging host networking protocols such as RDMA
Experience with triaging performance issues in complex scale-out distributed applications
Must obtain work authorization in country of employment at the time of hire and maintain ongoing work authorization during employment