GPU Software Architecture Engineer, Graphics, Games, & ML

Apple Inc

Cupertino, California, USA Posted 15 days ago

$181,100 - $318,400/year

Role Details

In this role, you'll be at the forefront of architecting and building our next-generation distributed ML infrastructure, tackling the complex challenge of orchestrating massive network models across server clusters to power Apple Intelligence at unprecedented scale. You'll design sophisticated parallelization strategies that split models across many GPUs, optimizing every layer of the stack, from low-level memory access patterns to high-level distributed algorithms, to achieve maximum hardware utilization while minimizing latency for real-time user experiences.

You'll work at the intersection of cutting-edge ML systems and hardware acceleration, collaborating directly with silicon architects to influence future GPU designs based on your deep understanding of inference workload characteristics, while simultaneously building the production systems that will serve billions of requests daily. This is a hands-on technical leadership position where you'll not only architect these systems but also dive deep into performance profiling, implement novel optimization techniques, and solve unprecedented scaling challenges as you help define the future of AI experiences delivered through Apple's secure cloud infrastructure.
Responsibilities

- Design and implement tensor/data/expert parallelism strategies for large language model inference across distributed server cluster environments
- Drive hardware and software roadmap decisions for ML acceleration
- Design architectures that achieve peak compute utilization and optimal memory throughput
- Develop and optimize distributed inference systems with a focus on latency, throughput, and resource efficiency across multiple nodes
- Architect scalable ML serving infrastructure supporting dynamic model sharding, load balancing, and fault tolerance
- Collaborate with hardware teams on next-generation accelerator requirements and with software teams on framework integration
- Lead performance analysis and optimization of ML workloads, identifying bottlenecks in compute, memory, and network subsystems
- Drive adoption of advanced parallelization techniques, including pipeline parallelism, expert parallelism, and other emerging approaches

Minimum Qualifications

- 10+ years of experience in GPU programming (CUDA, ROCm) and high-performance computing, successfully optimizing large-scale parallel workloads
- Strong experience with inter-node communication technologies (InfiniBand, RDMA, NCCL) in the context of ML training/inference
- Excellent systems programming skills in C/C++
- Deep understanding of distributed systems and parallel computing architectures
- Understanding of how tensor frameworks (PyTorch, JAX, TensorFlow) are used in distributed training/inference
- Familiarity with the model development lifecycle, from trained model to large-scale production inference deployment
- Proven track record in ML infrastructure at scale
- Bachelor's degree in Computer Science, Engineering, Mathematics, or a related technical field

Preferred Qualifications

- Python experience
- PhD in Computer Science, Engineering, Mathematics, or a related technical field
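For readers unfamiliar with the parallelism strategies named above, here is a minimal illustrative sketch (not part of the posting) of column-wise tensor parallelism for a single linear layer, simulated on CPU with NumPy; in a real system each shard would live on its own GPU and a collective library such as NCCL would perform the gather step.

```python
import numpy as np

# Illustrative sketch: column-wise tensor parallelism for one linear layer.
# Each simulated "device" owns a slice of the weight matrix's output columns
# and computes only its portion of the result.

rng = np.random.default_rng(0)
batch, d_in, d_out, num_devices = 4, 8, 16, 4

x = rng.standard_normal((batch, d_in))   # activations, replicated on all devices
w = rng.standard_normal((d_in, d_out))   # full weight matrix

# Shard the weights column-wise: d_out / num_devices output features per device.
w_shards = np.split(w, num_devices, axis=1)

# Each device runs a local matmul on its shard; no communication needed here.
partial_outputs = [x @ w_k for w_k in w_shards]

# All-gather: concatenate the per-device slices to reassemble the full output.
y_parallel = np.concatenate(partial_outputs, axis=1)

# The sharded computation matches the single-device reference exactly.
y_reference = x @ w
assert np.allclose(y_parallel, y_reference)
```

Row-wise sharding of the weight matrix works analogously but replaces the concatenation with an all-reduce (sum) over partial results, which is why the two variants are typically interleaved to halve communication in transformer MLP blocks.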


About Apple Inc

Apple Inc. is a multinational technology company known for designing and manufacturing consumer electronics, software, and online services, including the iPhone, Mac, iPad, and App Store.

Industry: Consumer Electronics & Software