NVIDIA is a pioneer in accelerated computing, known for inventing the GPU and driving breakthroughs in gaming, computer graphics, high-performance computing, and artificial intelligence. As an AI/ML HPC Cluster Engineer, you will manage large-scale HPC systems, ensuring efficient resource utilization and supporting researchers with their workloads.
Support day-to-day operations of production on-premises and multi-cloud AI/HPC clusters, ensuring system health, user satisfaction, and efficient resource utilization
Directly administer internal research clusters, performing upgrades, incident response, and reliability improvements
Develop and improve our GPU-accelerated computing ecosystem, including building scalable automation solutions
Maintain heterogeneous AI/ML clusters on-premises and in the cloud
Support our researchers in running their workloads, including performance analysis and optimization
Analyze and optimize cluster efficiency, job fragmentation, and GPU waste to meet internal SLA targets
Support root cause analysis and suggest corrective actions; proactively find and fix issues before they impact users
Triage and support postmortems for reliability incidents affecting users or infrastructure
Participate in a shared on-call rotation supported by strong automation, clear paths for responding to critical issues, and well-defined incident workflows
Qualifications
Required
Bachelor's degree in Computer Science, Electrical Engineering, or a related field, or equivalent experience
Minimum 2 years of experience administering multi-node compute infrastructure
Background in managing AI/HPC job schedulers such as Slurm, Kubernetes, PBS, RTDA, BCM (formerly known as Bright), or LSF
Proficient in administering CentOS/RHEL and/or Ubuntu Linux distributions