Solutions Architect, Inference Deployments

NVIDIA

US, CA, Santa Clara, United States of America

Posted 1 day, 14 hours ago

$152,000 - $241,500/year

Job Description


Time type: Full time


Job requisition ID: JR2014105

We’re forming a team of innovators to roll out and enhance AI inference solutions at scale, built on NVIDIA’s GPU technology and Kubernetes. As a Solutions Architect focused on inference, you’ll collaborate closely with our engineering and DevOps teams, and with customers, to develop enterprise AI solutions. Together, we'll deliver generative AI to production!

What you'll be doing:

  • Build inference pipelines with tools like NVIDIA Dynamo, distributing tasks among GPU workers to improve efficiency.
  • Collaborate with DevOps teams to orchestrate disaggregated inference using Kubernetes for complex workloads.
  • Accelerate inference pipelines using TensorRT-LLM, vLLM, SGLang, and other backends to ensure seamless integration with disaggregated inference.
  • Provide mentorship and technical leadership to customers and internal teams, guiding them through the deployment of disaggregated inference systems and resolving complex issues.

What we need to see:

  • 5+ years in solutions architecture with a proven track record of deploying distributed systems and AI inference workloads on Kubernetes.
  • Experience with one of NVIDIA Dynamo, Triton Inference Server, or TensorRT-LLM for model optimization and serving.
  • GPU orchestration using NVIDIA GPU Operator, NIM Operator, and Multi-Instance GPU (MIG) partitioning.
  • Experience solving sophisticated GPU allocation problems and working with memory hierarchies (HBM, DRAM, SSD) and low-latency networking (RDMA, UCX).
  • Demonstrated success in tuning large language models for low-latency inference in enterprise environments.
  • BS in CS/Engineering or equivalent experience.

Ways to stand out from the crowd:

  • Prior experience deploying NVIDIA inference technologies such as Dynamo, NIM, NIXL, and Grove.
  • Deep understanding of transformer neural networks and inference acceleration techniques such as quantization, speculative decoding, and wide expert parallelism (WideEP).
  • NVIDIA Certified AI Engineer or similar credentials.
  • Contributions to open-source projects including NVIDIA Dynamo, vLLM, KServe, or SGLang.

Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is 152,000 USD - 241,500 USD.

You will also be eligible for equity and benefits.

Applications for this job will be accepted at least until March 3, 2026.

This posting is for an existing vacancy.

NVIDIA uses AI tools in its recruiting processes.

NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.


About Us


NVIDIA is the world leader in accelerated computing.

NVIDIA pioneered accelerated computing to tackle challenges no one else can solve. Our work in AI and digital twins is transforming the world's largest industries and profoundly impacting society.

