Research Scientist, Applied Machine Learning Security (Agent Systems), SEAR

Apple Inc

Cupertino, California, USA

Posted 14 days ago

$181,100 - $318,400/year

Role Details

This role focuses on applied security research for production ML systems, with an emphasis on agentic and tool-using models deployed at scale. You will lead research efforts that surface real security risks in shipped or near-shipped systems, and you will drive mitigations that integrate cleanly into Apple’s ML platforms and products. You will operate at the boundary between research, platform engineering, and product security, conducting original research grounded in real system behavior and translating it into concrete design changes, launch requirements, and long-term hardening strategies. Impact is measured by risk reduction in production, not by theoretical results alone.

Key Responsibilities

- Lead applied research on production agent systems: Conduct original security research on deployed agentic ML systems that interact with tools, APIs, memory, workflows, and sensitive data. Identify and characterize vulnerabilities such as indirect prompt injection, tool misuse, privilege escalation, goal hijacking, and cross-context data leakage, and develop defenses validated under production constraints.
- Design realistic adversarial evaluations: Build and maintain adversarial testing frameworks that reflect real attacker incentives and system complexity, including multi-step, cross-tool, and persistence-based attacks that surface failure modes missed by standard evaluations.
- Drive defenses into shipping systems: Develop mitigations that are compatible with production requirements around latency, reliability, debuggability, and privacy. Influence architectural choices such as capability scoping, isolation boundaries, execution control, and runtime enforcement.
- Own threat models for agent deployments: Define trust boundaries and threat models for agentic ML across Apple platforms and services, and translate them into actionable security requirements and release criteria.
- Bridge research and engineering: Partner closely with ML platform teams, product engineering, and product security to ensure research insights become design guidance, test infrastructure, and, where appropriate, launch blockers.
- Provide technical leadership: Set standards for applied ML security research, mentor other researchers, and influence how agent systems are reviewed, built, and released across the organization.

Qualifications

- Ph.D. or equivalent experience in machine learning, security, systems, or a related field.
- Demonstrated experience in applied ML security, adversarial ML, or systems security with real-world impact.
- Strong experimental and engineering skills, with an emphasis on reproducibility and operational relevance.
- Experience researching or securing LLM-based or tool-augmented ML systems.
- Ability to work fluidly across research, engineering, and security review processes.
- Track record of influencing production systems through research-driven insights.
- Publications in top venues are a plus, but production impact is the primary signal.


About Apple Inc

Apple Inc. is a multinational technology company known for designing and manufacturing consumer electronics, software, and online services, including the iPhone, Mac, iPad, and App Store.

Industry: Consumer Electronics & Software