Machine Learning Infrastructure Engineers

Shopify


Job Description

Machine Learning Infrastructure Engineers build and operate the end-to-end platform that powers AI: from data ingestion and training to large-scale, low-latency inference. They design high-performance, GPU-accelerated systems on Kubernetes, craft self-serve developer experiences, and ship the paved roads that let ML teams move fast, safely, and at global scale. Some companies separate ML Infra, ML Platform, and MLOps; at Shopify, we call all of this ML Infrastructure. Our agile team flexes its experience and solves problems across all three domains.

Responsibilities:

  • Build and operate ML control planes, APIs, CLIs, SDKs, and self-serve golden paths
  • Design and optimize multi-tenant GPU Kubernetes clusters, including autoscaling, scheduling, packing, and utilization
  • Own model lifecycle: training orchestration/experiments, registries/versioning, CI/CD, canary/blue-green, and safe rollback
  • Build real-time serving stacks (KServe/Seldon/TensorFlow Serving) and end-to-end pipelines for batch and streaming
  • Design feature platforms and engineer storage/data movement for datasets, features, and artifacts tuned for cost/performance
  • Implement observability and SLOs across pipelines, training, and inference; automate remediation and capacity planning
  • Partner with ML, data, and product teams to unblock delivery and accelerate idea-to-impact

Qualifications:

  • Proven platform/infrastructure engineering experience with a track record of shipping production systems and code
  • Deep Kubernetes/containerization expertise for ML workloads (operators, Helm, service mesh/gRPC) and multi-tenant clusters
  • Hands-on experience running GPU infrastructure at scale (NVIDIA ecosystem; scheduling/packing/optimization)
  • Strong distributed systems and API/service design fundamentals; experience with high-scale inference
  • Proficiency with infrastructure-as-code and automation (Terraform, Helm, GitOps) on major clouds (GCP/AWS/Azure)
  • Observability expertise (Prometheus/Grafana) and SLO-driven operations for ML systems
  • Proficient in Python/Go/Java; experience building developer tooling and self-service platforms

Nice to Haves:

  • Model serving and lifecycle tooling: KServe/Seldon/TensorFlow Serving, Kubeflow, MLflow/W&B, model registries, DVC
  • Feature store experience (Feast/Tecton) with online/offline parity and SLAs
  • Data infrastructure familiarity (Kafka, Spark/Flink) and stateful stores (Redis/MySQL); CI/CD for online/batch inference
  • Model performance optimization (batching, caching, quantization, distillation) and hardware-aware tuning
  • Experience with experimentation/A/B testing platforms and online evaluation frameworks

At Shopify, we pride ourselves on moving quickly, not just in shipping but in our hiring process as well. If you're ready to apply, please be prepared to interview with us within the week. Our goal is to complete the entire interview loop within 30 days. You will be expected to complete a live pair-programming session; come prepared with your own IDE.

This role may require on-call work.


About Shopify

Shopify is a global commerce company providing a leading e-commerce platform and ecosystem of tools that allows businesses of all sizes to build, manage, and grow their online and physical retail operations.

Industry: E-Commerce Technology & Payments