NVIDIA has been transforming computer graphics, PC gaming, and accelerated computing for more than 25 years. It's a unique legacy of innovation that's fueled by great technology and amazing people. Today, we're tapping into the unlimited potential of AI to define the next era of computing, an era in which our GPUs act as the brains of computers, robots, and self-driving cars that can understand the world. Doing what's never been done before takes vision, innovation, and the world's best talent. As an NVIDIAN, you'll be immersed in a diverse, supportive environment where everyone is inspired to do their best work. Come join the team and see how you can make a lasting impact on the world.

NVIDIA is driving AI and high-performance computing forward. DGX Cloud delivers a fully managed AI platform on major cloud providers, optimizing AI workloads on high-performance NVIDIA infrastructure. Join NVIDIA's DGX Cloud team as a Senior Site Reliability Engineer to keep high-performance DGX Cloud clusters running reliably for AI researchers and enterprise clients worldwide.

What you'll be doing:

- Build, implement, and support the operational and reliability aspects of large-scale Kubernetes clusters, with a focus on performance at scale, real-time monitoring, logging, and alerting
- Define SLOs/SLIs, monitor error budgets, and streamline reporting
- Support services before they launch through system design consulting, developing software tools, platforms, and frameworks, capacity management, and launch reviews
- Maintain services once they are live by measuring and monitoring availability, latency, and overall system health
- Operate and optimize GPU workloads across AWS, GCP, Azure, OCI, and private clouds
- Scale systems sustainably through mechanisms such as automation, and evolve them by pushing for changes that improve reliability and velocity
- Lead triage and root-cause analysis of high-severity incidents
- Practice balanced incident response and blameless postmortems
- Participate in an on-call rotation to support production services

What we need to see:

- BS in Computer Science or a related technical field, or equivalent experience
- 10+ years of experience operating production services
- Expert-level knowledge of Kubernetes administration, containerization, and microservices architecture
- Experience with infrastructure automation tools (e.g., Terraform, Ansible, Chef, Puppet)
- Proficiency in at least one high-level programming language (e.g., Python, Go)
- In-depth knowledge of Linux operating systems, networking fundamentals (TCP/IP), and cloud security standards
- Strong grasp of SRE principles, including SLOs, SLIs, error budgets, and incident handling
- Experience building and operating comprehensive observability stacks (monitoring, logging, tracing) using tools such as OpenTelemetry, Prometheus, Grafana, the ELK Stack, Lightstep, or Splunk

Ways to stand out from the crowd:

- Experience operating GPU-accelerated clusters with KubeVirt in production
- Experience applying generative-AI techniques to reduce operational toil
- Experience automating incident remediation with tools such as Shoreline or StackStorm