We are seeking a dynamic and experienced Generative AI Solution Architect with specialised expertise in training Large Language Models (LLMs) and Agentic AI. As a key member of our AI Solutions team, you will play a pivotal role in architecting and delivering cutting-edge solutions that leverage the power of NVIDIA's generative AI technologies.
Your primary responsibilities will include:
- Architecting end-to-end generative AI solutions with a focus on LLM, agentic, and RAG workflows.
- Engaging directly with customers to understand their language-related business challenges and requirements, and designing tailored solutions.
- Supporting sales and business development teams in pre-sales activities, including technical presentations and demonstrations of LLM and RAG capabilities.
- Working closely with NVIDIA engineering teams to provide feedback and contribute to the evolution of generative AI technologies.
- Leading workshops and design sessions to define and refine generative AI solutions focused on LLMs and RAG workflows.
- Leading the training and optimisation of Large Language Models on NVIDIA's hardware and software platforms, and implementing strategies for efficient, effective LLM training that achieves optimal performance.
- Designing and implementing RAG-based workflows to enhance content generation and information retrieval, and working closely with customers to integrate those workflows into their applications and systems.
- Staying abreast of the latest developments in language models and generative AI technologies.
- Providing technical leadership and guidance on best practices for training LLMs and implementing RAG-based solutions.
To be successful in this role, you will need to have:
- A B.Tech, Master's, or Ph.D. in Computer Science, Artificial Intelligence, or equivalent experience.
- 8+ years of hands-on experience in a technical role focused on generative AI, with a strong emphasis on training Large Language Models (LLMs).
- A proven track record of successfully deploying and optimising LLMs for inference in production environments.
- An in-depth understanding of state-of-the-art language models, including but not limited to GPT-3, BERT, or similar architectures.
- Expertise in training and fine-tuning LLMs using popular frameworks such as TensorFlow, PyTorch, or Hugging Face Transformers.
- Proficiency in model deployment and optimisation techniques for efficient inference on various hardware platforms, with a focus on GPUs.
- Strong knowledge of GPU cluster architecture and the ability to leverage parallel processing for accelerated model training and inference.
- Excellent communication and collaboration skills, with the ability to articulate complex technical concepts to both technical and non-technical stakeholders.
- Experience leading workshops and training sessions and presenting technical solutions to diverse audiences.
We would especially love to hear from you if you also have:

- A proven ability to optimise LLMs for inference speed, memory efficiency, and resource utilisation.
- Familiarity with containerisation technologies (e.g., Docker) and orchestration tools (e.g., Kubernetes) for scalable, efficient model deployment.
- A deep understanding of GPU cluster architecture, parallel computing, and distributed computing concepts.
- Hands-on experience with NVIDIA GPU technologies and GPU cluster management, including the ability to design and implement scalable, efficient workflows for LLM training and inference on GPU clusters.