We are looking for an experienced Cloud Solution Architect to help customers adopt GPU hardware and software, and to build and deploy Machine Learning (ML), Deep Learning (DL), and data analytics solutions on various cloud computing platforms.
As a Solutions Architect, you will engage directly with developers, researchers, and data scientists at some of NVIDIA’s most strategic technology customers, and work directly with business and engineering teams on product strategy.
Key Responsibilities:
- Help cloud customers design, deploy, and maintain scalable, GPU-accelerated inference pipelines for large language models (LLMs) and generative AI workloads on cloud ML services and Kubernetes.
- Drive performance tuning with TensorRT/TensorRT-LLM, vLLM, Dynamo, and Triton Inference Server to improve GPU utilization and model efficiency.
- Collaborate with cross-functional teams (engineering, product) and provide technical mentorship to cloud customers implementing AI inference at scale.
- Build custom proofs of concept (PoCs) for solutions that address customers’ critical business needs using NVIDIA hardware and software technology.
- Partner with Sales Account Managers and Developer Relations Managers to identify and secure new business opportunities for NVIDIA ML/DL products and other software solutions.
- Prepare and deliver technical content to customers, including presentations on purpose-built solutions and workshops on NVIDIA products and solutions.
- Conduct regular technical customer meetings covering project/product roadmaps, feature discussions, and introductions to new technologies. Establish close technical ties with customers to facilitate rapid resolution of customer issues.
Requirements:
- BS/MS/PhD in Electrical/Computer Engineering, Computer Science, Statistics, Physics, or other Engineering fields or equivalent experience.
- 3+ years in solutions architecture with a proven track record of moving AI inference from PoC to production in cloud computing environments such as AWS, GCP, or Azure.
- 3+ years of hands-on experience with deep learning frameworks such as PyTorch and TensorFlow.
- Excellent knowledge of the theory and practice of LLM and DL inference
- Strong fundamentals in programming, optimizations, and software design, especially in Python
- Experience with containerization and orchestration technologies such as Docker and Kubernetes, and with monitoring and observability solutions for AI deployments.
- Knowledge of inference technologies such as NVIDIA NIM, TensorRT-LLM, Dynamo, Triton Inference Server, and vLLM.
- Strong problem-solving and debugging skills in GPU environments.
- Excellent presentation, communication, and collaboration skills.
Nice to Have:
- AWS, GCP or Azure Professional Solution Architect Certification.
- Experience optimizing and deploying large Mixture-of-Experts (MoE) LLMs at scale.
- Active contributions to open-source AI inference projects (e.g., vLLM, TensorRT-LLM, Dynamo, SGLang, Triton, or similar).
- Experience with multi-GPU, multi-node inference technologies such as tensor parallelism/expert parallelism, disaggregated serving, LWS, MPI, EFA/InfiniBand, and NVLink/PCIe.
- Experience developing and integrating monitoring and alerting solutions using Prometheus, Grafana, and NVIDIA DCGM, and with GPU performance analysis tools such as NVIDIA Nsight Systems.