Senior Software Engineer, At Scale Compute Analysis
$152,000 - $241,500/year
Role Details
NVIDIA has been transforming computer graphics, PC gaming, and accelerated computing for more than 25 years. It’s a unique legacy of innovation that’s fueled by great technology—and amazing people. Today, we’re tapping into the unlimited potential of AI to define the next era of computing. An era in which our GPU acts as the brains of computers, robots, and self-driving cars that can understand the world. Doing what’s never been done before takes vision, innovation, and the world’s best talent. As an NVIDIAN, you’ll be immersed in a diverse, supportive environment where everyone is inspired to do their best work. Come join the team and see how you can make a lasting impact on the world.
Join a team that analyzes large-scale datacenter workloads on GPU-accelerated clusters. You will turn telemetry and workload data into clear findings and visuals, partnering with OS, container, GPU, and systems engineers. When useful, you will apply machine learning and deep learning techniques for categorization and forecasting, and integrate them into tools the team actually uses.
*What you’ll be doing:*
- Analyze large-scale workloads and infrastructure signals to find application and platform improvement opportunities.
- Work with high-dimensional data: spot trends, tie changes to known events, summarize conclusions, and communicate results to engineers and leadership.
- Partner with the team to clarify questions, scope analyses, and document methods so others can extend your work.
- Build and maintain practical visualizations and lightweight implementations (e.g., ML/DL models for classification and prediction) inside existing software workflows.
*What we need to see:*
- 5+ years analyzing complex datasets, debugging data issues, and communicating trends clearly.
- BS or MS in Engineering, Mathematics, Physics, Computer Science, or equivalent experience.
- Strong Python and JavaScript skills.
- Comfortable being responsible for an analysis end-to-end.
- Hands-on use of telemetry/observability stacks (e.g., Grafana, Elasticsearch, Splunk).
- Demonstrated grasp of core ML concepts; a quick learner with strong analytical and problem-solving skills.
- Strong collaboration and communication skills.
*Ways to stand out from the crowd:*
- Experience with TensorFlow or PyTorch.
- Background in Linux and HPC, large-scale, or performance-sensitive environments.
- Experience visualizing high-dimensional problems.
- A diligent, action-biased analysis style.
NVIDIA is widely considered to be one of the technology world’s most desirable employers. We have some of the most forward-thinking and hardworking people in the world working for us. If you're creative and autonomous, we want to hear from you!
NVIDIA also offers a comprehensive benefits package. We provide health care coverage, dental and vision, a 401(k) with company matching and after-tax contributions, an Employee Stock Purchase Program (ESPP), an Employee Assistance Program (EAP), company-paid holidays, paid sick leave, vacation leave, professional time off, and life and disability protection.
Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is 152,000 USD - 241,500 USD.
You will also be eligible for equity and benefits.
Applications for this job will be accepted at least until April 27, 2026.
This posting is for an existing vacancy.
NVIDIA uses AI tools in its recruiting processes.
NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.
About NVIDIA
NVIDIA is a leading designer of graphics processing units (GPUs) and system-on-chip units, powering gaming, professional visualization, data centers, and artificial intelligence workloads.
Industry: Semiconductors & AI Computing