Research Scientist / Engineer, Foundation Model Evaluation

Apple Inc

Cupertino, California, USA

$181,100 - $318,400/year

Role Details

This is a hands-on role focused on the models that power Apple products used daily by over a billion people. You will design evaluation systems where the outcome is not just a score but an actionable signal - one that drives model improvement and predicts real user experience. Working alongside model training and product teams, you will close the loop between evaluation and improvement.

Our work spans three areas:
• Frontier capability assessment: benchmarking against the state of the art in reasoning, code, knowledge, and agentic workflows
• Product-aligned evaluation: measuring model quality in ways that reflect real user experience
• Evaluation-to-training integration: feeding actionable insights back into the model development cycle

You may focus on one area or work across multiple, depending on your background and interests.

We build frontier foundation models that power intelligent experiences at Apple. Our team works across the full training lifecycle, from pre-training foundation models to developing mid-training approaches that bridge general capability and task-specific performance. What makes our work distinct is that we engineer models specifically for Apple silicon, optimized for experiences that are private, personal, and deeply integrated into the OS. We are solving frontier problems in reward modeling to resist reward hacking, handling sparse and delayed rewards in agentic settings, and aligning models reliably across the spectrum from open-ended creative tasks to precise, action-taking workflows. If you're drawn to hard problems where the research and the product are inseparable, this is the team.

Key Responsibilities

• Benchmark Design & Development: Design and implement evaluation benchmarks, metrics, and test suites that rigorously measure model capabilities across reasoning, knowledge, code, and agentic workflows.
• Product-Aligned Evaluation: Develop evaluation methods that capture how models behave in real product settings, and validate that evaluation metrics predict user-perceived quality and product outcomes.
• Evaluation Methodology Research & Tooling: Research and apply state-of-the-art evaluation techniques, including scoring frameworks, model-based judging, and contamination-resistant benchmark design. Build reusable tools, scorer libraries, and analysis frameworks that scale across the team's benchmark portfolio.
• Experimental Analysis: Design and execute rigorous experiments comparing model capabilities, engage with third-party vendors on benchmarking, and perform detailed gap analysis to guide model development priorities.
• Cross-Team Collaboration: Work closely with model training, training data, and product teams to ensure evaluation insights inform training strategies, data decisions, and product quality improvements.
Minimum Qualifications

• 3+ years of experience in AI model evaluation, NLP, or a related area (e.g., natural language generation, information retrieval, or conversational AI)
• Strong fundamentals in machine learning, natural language processing, and statistical analysis
• Proficiency in Python and experience with ML frameworks (PyTorch, JAX, or equivalent)
• Demonstrated ability to translate research insights into practical implementations
• Strong experimental design skills: ability to design rigorous comparisons and draw valid conclusions from results
• Clear technical communication: ability to distill evaluation results into actionable recommendations for cross-functional partners
• MS or PhD in Computer Science, Machine Learning, Natural Language Processing, or a related technical field; equivalent practical experience will be considered

Preferred Qualifications

• PhD in Computer Science, Machine Learning, NLP, or a related field
• Direct experience evaluating large language models, e.g., benchmark design or model-based judging
• Track record of collaborating with model training and data teams to turn evaluation findings into training improvements
• Experience building reusable evaluation tooling or analysis frameworks adopted across teams
• Familiarity with human evaluation methodology and experience partnering with annotation teams or vendors to assess model quality


About Apple Inc

Apple Inc. is a multinational technology company known for designing and manufacturing consumer electronics, software, and online services, including the iPhone, Mac, iPad, and App Store.

Industry: Consumer Electronics & Software