We're tapping into the unlimited potential of AI to define the next era of computing. As an NVIDIAN, you'll be immersed in a diverse, supportive environment where everyone is inspired to do their best work.
As a new graduate, you'll help build the agentic infrastructure powering test automation and quality workflows for the NVIDIA Omniverse platform. This is a rare chance to start your career at the intersection of AI agents and production software quality. You will learn to build the tests and tools other engineers depend on to ship quickly and confidently.
Responsibilities:
- Build multi-agent pipelines for automated test generation, log analysis, failure triage, and bug-filing workflows, working alongside senior engineers on well-scoped pieces of the system
- Contribute to evaluation systems that measure agent output quality: writing test cases, analyzing failure patterns, and extending eval frameworks under senior mentorship
- Add instrumentation, logging, and monitoring to agentic workflows so failures are visible and debuggable, learning the systems thinking that makes infrastructure trustworthy
- Grow your judgment on where LLMs help and where they fail, and learn, with mentorship, how to build solutions that account for both
Requirements:
- Pursuing or recently completed a Bachelor's Degree in Computer Science or equivalent
- Strong Python fundamentals: able to write clean, testable code and reason about structure beyond single scripts
- Hands-on exposure to AI-native development workflows such as Claude Code, Cursor, Codex, or prompt engineering, gained through coursework, internships, hackathons, or personal projects
- At least one project, open-source contribution, or coursework example where you integrated an LLM into a working end-to-end system
- Foundational understanding of software testing, CI/CD concepts, or quality engineering principles
- Awareness of common LLM failure modes (hallucination, context limits, tool misuse) and curiosity about how to mitigate them