Applied Machine Learning Research Engineer - Multimodal LLMs for Human Understanding
$147,400 - $272,100/year
Role Details
You’ll work on groundbreaking research projects to advance our AI and computer vision capabilities, contribute to both foundational research and practical applications of multimodal large language models, and design, implement, and evaluate algorithms and models for human understanding. You have a strong background in developing multimodal large language models that integrate diverse data modalities such as text, image, video, and audio. You’ll have the opportunity to collaborate with cross-functional teams, including researchers, data scientists, software engineers, human interface designers, and application domain experts. You’ll stay up to date on the latest advancements in AI, machine learning, and computer vision and apply this knowledge to drive innovation within the company.

Requirements
- Experience developing and training/tuning multimodal LLMs
- Programming skills in Python
- Master’s degree with a minimum of 3 years of relevant industry experience
- Expertise in one or more of: computer vision, NLP, multimodal fusion, generative AI
- Experience with at least one deep learning framework, such as JAX or PyTorch
- Publication record in relevant venues
- PhD in Computer Science, Electrical Engineering, or a related field with a focus on AI, machine learning, or computer vision
About Apple Inc
Apple Inc. is a multinational technology company known for designing and manufacturing consumer electronics, software, and online services, including the iPhone, Mac, iPad, and App Store.
Industry: Consumer Electronics & Software