Howard Hughes Medical Institute (HHMI)


About

Howard Hughes Medical Institute (HHMI) is one of the largest private biomedical research organizations in the world, funding basic research and science education to advance human health and knowledge.

Industry: Biomedical Research & Science Education

Open Positions (7)

AI Engineer - Metabolic Sensor Design

Janelia Research Campus, United States of America Posted 40 days ago

Primary Work Address: 19700 Helix Drive, Ashburn, VA, 20147

Current HHMI Employees, click here to apply via your Workday account.

Intro:

AI@HHMI: HHMI is investing $500 million over the next 10 years to support AI-driven projects and to embed AI systems throughout every stage of the scientific process in labs across HHMI. The AI initiative will be centered at HHMI’s Janelia Research Campus. Janelia has been at the forefront of AI-driven research in biology for more than 15 years. Its forward-thinking structure, centralized funding, and collaborative culture make it ideally suited to take this bold leap forward. To learn more about the initiative, visit ai.hhmi.org.

Please include a cover letter with your application detailing your qualifications and experience as they relate to this position. This should include a description of a deep learning project that you have executed, ideally a creative use of a transformer-based or related architecture that you trained yourself. If it is in the sequence or protein structure domain, even better! If possible, include a link to a code repository. If you are a contributor to a joint project, that is wonderful, but please describe specifics of your contribution to the project. Briefly discuss the results of the project as well as limitations and challenges you encountered.

Also, please include a link to your GitHub profile and/or links to relevant projects at the bottom of your cover letter.

About the role:

The CombinAItorial Sensor Design project is part of HHMI’s AI for Science Initiative (ai.hhmi.org) and brings together expertise in protein engineering, advanced microscopy, and machine learning. Our goal is to develop a protein biosensor optimization pipeline that integrates high-throughput functional screening with predictive deep learning to accelerate the directed evolution of protein biosensors for visualizing dynamic biochemical processes in living cells.

We are seeking a highly skilled AI Software Engineer to join our team and play a crucial role in advancing our AI-driven scientific initiatives. In this position, you will be responsible for developing and maintaining the computational infrastructure essential for AI-powered biosensor optimization. You will collaborate with data scientists and experimentalists to build robust data flows from optical pooled screening outputs through model training and deployment. You will implement cutting-edge tools for predicting fluorescence properties and biochemical performance from protein sequence and structure, and use generative models to propose new biosensor candidate sequences for testing in the lab. This role requires deep knowledge of the underlying models as well as practical implementation skills to achieve maximum biological impact. You will lead comparative studies, implement novel architectures, and ensure all work meets the highest standards of reproducible, open science. The role requires close collaboration with our microscopy, sequencing, and protein engineering teams to ensure seamless integration of computational and experimental workflows.

Strong programming skills in Python, PyTorch, and JAX are required, along with the ability to reason about neural network behavior from first principles. We seek candidates who can think critically about model design, understand how architectural choices and regularization affect model behavior, and design rigorous experiments to evaluate models. Domain expertise in sequence or protein structure analysis will be highly valued. Because this is a team project, we value a clean shared codebase and git-based collaborative workflows. Familiarity with protein modeling or machine learning frameworks such as AlphaFold, ESM, Chai-1, or Boltz-1 is highly valued. We are looking for candidates with experience in ML model deployment, workflow orchestration, and high-throughput data processing, as well as experience working with large biological datasets in GPU-based computing environments. 
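For context, the sequence-based property prediction described above can be illustrated with a minimal PyTorch sketch. Everything here is hypothetical — the dimensions, model, and example sequence are made up for illustration and are not HHMI's actual pipeline:

```python
# Illustrative only: a tiny transformer regressor mapping an amino-acid
# sequence to a scalar property (e.g., a hypothetical brightness score).
import torch
import torch.nn as nn

AA = "ACDEFGHIKLMNPQRSTVWY"  # the 20 canonical amino acids
aa_to_idx = {a: i for i, a in enumerate(AA)}

class SequencePropertyModel(nn.Module):
    def __init__(self, d_model=64, nhead=4, nlayers=2):
        super().__init__()
        self.embed = nn.Embedding(len(AA), d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, nlayers)
        self.head = nn.Linear(d_model, 1)  # scalar property prediction

    def forward(self, tokens):  # tokens: (batch, seq_len)
        h = self.encoder(self.embed(tokens))
        return self.head(h.mean(dim=1)).squeeze(-1)  # mean-pool, regress

def encode(seq):
    return torch.tensor([[aa_to_idx[a] for a in seq]])

model = SequencePropertyModel()
pred = model(encode("MKGEELFTGVVPILVELDGD"))  # one prediction per sequence
```

In practice a model like this would more likely start from a pre-trained protein language model (e.g., ESM) rather than a randomly initialized encoder.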

What we provide:

  • A competitive compensation package, with comprehensive health and welfare benefits.
  • A supportive team environment that promotes collaboration and knowledge sharing.
  • The opportunity to engage with world-class researchers, software engineers, and AI/ML experts, contribute to impactful science, and be part of a dynamic community committed to advancing humanity’s understanding of fundamental scientific questions.
  • Amenities that enhance work-life balance such as on-site childcare, free gyms, available on-campus housing, social and dining spaces, and convenient shuttle bus service to Janelia from the Washington D.C. metro area.

What you’ll do:

  • Develop and maintain computational infrastructure and predictive tools for AI biosensor optimization, including tool development for modeling fluorescence properties and biochemical performance from sequence and structure.
  • Design and execute rigorous comparative experiments between model architectures.
  • Collaborate with other team members to ensure seamless integration of computational and experimental aspects of the project.
  • Apply machine learning, AI techniques, and software engineering best practices to deliver scalable, maintainable, and reproducible AI systems for protein engineering.
  • Carefully document data, code, and processing pipelines to enable seamless reproduction and extension of research results.
  • Actively contribute to advancements in the field and continuously update your skillset with the latest developments in AI research and technologies.
  • Collaborate with interdisciplinary teams, potentially mentor junior engineers, and direct or assist in directing the work of others to meet project goals while advising stakeholders on data strategies and best practices.

What you bring:

  • Minimum requirements: PhD in Bioengineering, Computer Science, Data Science, Statistics, Applied Mathematics, or a related field; or an equivalent combination of education and relevant experience.
  • 3+ years of experience in developing and fine-tuning deep learning models.
  • Strong programming skills in Python, PyTorch, and JAX.
  • Familiarity with protein modeling deep learning frameworks (e.g., AlphaFold, ESM, Chai-1, Boltz-1).
  • Familiarity with computer vision deep learning frameworks (e.g., SAM, cellpose).
  • Experience with ML model deployment, workflow orchestration, and high-throughput data processing.

Physical Requirements:

Remaining in a normal seated or standing position for extended periods of time; reaching and grasping by extending hand(s) or arm(s); dexterity to manipulate objects with fingers, for example using a keyboard; communication skills using the spoken word; ability to see and hear within normal parameters; ability to move about workspace. The position requires mobility, including the ability to move materials weighing up to several pounds (such as a laptop computer or tablet).

Persons with disabilities may be able to perform the essential duties of this position with reasonable accommodation. Requests for reasonable accommodation will be evaluated on an individual basis.

Please Note:

This job description sets forth the job’s principal duties, responsibilities, and requirements; it should not be construed as an exhaustive statement, however. Unless they begin with the word “may,” the Essential Duties and Responsibilities described above are “essential functions” of the job, as defined by the Americans with Disabilities Act.

Compensation Range

AI Engineer II: $123,125.60 (minimum) - $153,907.00 (midpoint) - $200,079.10 (maximum)

AI Engineer III: $149,515.20 (minimum) - $186,894.00 (midpoint) - $242,962.20 (maximum)

AI Engineer IV: $184,453.60 (minimum) - $230,567.00 (midpoint) - $299,737.10 (maximum)

Pay Type: Salary

HHMI’s salary structure is developed based on relevant job market data. HHMI considers a candidate's education, previous experiences, knowledge, skills and abilities, as well as internal consistency when making job offers. Typically, a new hire for this position in this location is compensated between the minimum and the midpoint of the salary range.

#LI-BG1

Compensation and Benefits

Our employees are compensated from a total rewards perspective in many ways for their contributions to our mission, including competitive pay, exceptional health benefits, retirement plans, time off, and a range of recognition and wellness programs. Visit our Benefits at HHMI site to learn more.

HHMI is an Equal Opportunity Employer

We use E-Verify to confirm the identity and employment eligibility of all new hires.

AI Engineer - Vision Foundation Model Pretraining

Janelia Research Campus, United States of America Posted 40 days ago

Primary Work Address: 19700 Helix Drive, Ashburn, VA, 20147


TLDR: Build the model backbone for the next era of AI-powered spatial biology.

Please include a cover letter with your application detailing your qualifications and experience for this position. Describe a deep learning project you have executed—ideally a creative use of a vision transformer, U-Net architecture, or diffusion model that you trained yourself. Projects in computer vision for microscopy image analysis are especially relevant. Include a link to a code repository if possible. If you contributed to a joint project, please describe your specific contributions. Briefly discuss the project's results, limitations, and challenges you encountered. Finally, include a link to your GitHub profile, personal website, or similar, and/or links to relevant projects at the bottom of your cover letter.

About the role:

AI@HHMI: HHMI is investing $500 million over the next 10 years to support AI-driven projects and to embed AI systems throughout every stage of the scientific process in labs across HHMI. The Foundational Microscopy Image Analysis (MIA) project sits at the heart of AI@HHMI. Our ambition is big: to create one of the world’s most comprehensive, multimodal 3D/4D microscopy datasets and use it to power a vision foundation model capable of accelerating discovery across the life sciences.

We are seeking a highly skilled AI Research Engineer to join our team and advance our AI-driven scientific initiatives. You will develop and deploy a self-supervised pre-training pipeline for learning from a large-scale microscopy dataset. You will work with expert computational scientists, data engineers, and experimentalists to train models whose foundational embeddings can be used across a wide range of microscopy modalities and applications. In collaboration with other engineers and scientists, you will apply these models to scalable vision tasks such as instance segmentation, tracking, and classification, and use probabilistic models to produce uncertainty-aware predictions across scales. This role requires deep knowledge of the underlying models and practical implementation skills to maximize biological impact. You will lead rigorous model evaluations, implement novel architectures, and ensure all work meets the highest standards of reproducible open science. Success in this role requires close collaboration with our microscopy experts, cellular biologists, neuroscientists, and computer scientists to ensure models can be deployed in large-scale, real-world data scenarios.

Strong programming skills in Python, PyTorch, and/or JAX are required, along with the ability to reason about neural network behavior from first principles. The role also requires knowledge of microscopy data formats and tools such as Zarr and Neuroglancer. We seek candidates who can think critically about model design, understand how architectural choices and regularization affect model behavior, and design rigorous experiments to evaluate models. Domain expertise in microscopy image analysis is not necessary but will be highly valued. Because this is a team project, we value a clean shared codebase and git-based collaborative workflows. Familiarity with state-of-the-art vision frameworks such as DINOv3, SAM, Cellpose, or Vision Transformers is required. We are looking for candidates with experience in ML model deployment, workflow orchestration, and high-throughput data processing, as well as experience working with large biological datasets in scalable GPU-based computing environments.
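Self-supervised pre-training of this kind is often framed as masked-patch reconstruction. The sketch below is a hedged illustration only: a tiny convolutional network stands in for a real ViT backbone, and the tile sizes and masking ratio are made up:

```python
# Illustrative masked-patch self-supervised objective on fake microscopy
# tiles: zero out random patches, reconstruct, score MSE on masked regions.
import torch
import torch.nn as nn

def mask_patches(img, patch=16, ratio=0.5):
    """Zero out a random fraction of non-overlapping square patches."""
    b, c, h, w = img.shape
    ph, pw = h // patch, w // patch
    keep = torch.rand(b, 1, ph, pw) > ratio            # True = visible patch
    mask = keep.repeat_interleave(patch, 2).repeat_interleave(patch, 3)
    return img * mask, mask

backbone = nn.Sequential(          # stand-in for a real ViT encoder/decoder
    nn.Conv2d(1, 32, 3, padding=1), nn.GELU(),
    nn.Conv2d(32, 1, 3, padding=1),
)

imgs = torch.rand(4, 1, 64, 64)    # fake single-channel microscopy tiles
masked, mask = mask_patches(imgs)
recon = backbone(masked)
loss = ((recon - imgs)[~mask] ** 2).mean()  # loss only on masked patches
loss.backward()
```

A real pipeline (e.g., a masked autoencoder) would drop masked tokens in the encoder and reconstruct them with a lightweight decoder, but the objective above captures the core idea.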

What we provide:

  • A competitive compensation package, with comprehensive health and welfare benefits.
  • A supportive team environment that promotes collaboration and knowledge sharing.
  • Access to a world-class computational infrastructure and high-quality datasets.
  • The opportunity to engage with world-class researchers, software engineers, and AI/ML experts, contribute to impactful science, and be part of a dynamic community committed to advancing humanity’s understanding of fundamental scientific questions.
  • Amenities that enhance work-life balance, such as on-site childcare, free gyms, available on-campus housing, social and dining spaces, and convenient shuttle bus service to Janelia from the Washington D.C. metro area.
  • Opportunity to partner with frontier AI labs on scientific applications of AI (see https://www.anthropic.com/news/anthropic-partners-with-allen-institute-and-howard-hughes-medical-institute).

What you’ll do:

  • Research and explore the model design space for vision foundation models of multi-modal biological microscopy data.
  • Build a self-supervised pre-training pipeline on a large-scale foundational dataset of multi-modal biological microscopy data.
  • Design and execute rigorous experiments to evaluate model performance on a wide distribution of microscopy images and model architectures.
  • Collaborate with interdisciplinary teams, potentially mentor junior engineers, and direct or assist in directing the work of others to meet project goals while advising stakeholders on data strategies and best practices.
  • Deploy models both at Janelia and in the broader scientific community and ensure downstream usability.

What you bring:

  • Master's or PhD degree in Computer Science, Applied Mathematics, Computational Neuroscience, or a related field—or an equivalent combination of education and relevant experience.
  • 3+ years of experience training and evaluating deep learning architectures such as Transformers or U-Nets, particularly on image or point cloud data.
  • Strong programming skills in Python, PyTorch, and JAX. Skills in JavaScript are a plus.
  • Familiarity with computational tools for microscopy and connectomics data (Cellpose, CAVE, Flood-Filling Networks, Neuroglancer, Zarr).
  • Familiarity with state-of-the-art (self-supervised) computer vision algorithms (e.g., DINO, Masked Autoencoders, SAM).
  • Experience with ML model deployment, workflow orchestration, and high-throughput data processing and model training.
  • Keen interest in working in a truly interdisciplinary environment and learning about cellular/molecular biology (e.g., transcriptomics) and neuroscience.


Compensation Range

AI Engineer I: $96,325.60 (minimum) - $120,407.00 (midpoint) - $156,529.10 (maximum)

AI Engineer II: $123,125.60 (minimum) - $153,907.00 (midpoint) - $200,079.10 (maximum)

AI Engineer III: $149,515.20 (minimum) - $186,894.00 (midpoint) - $242,962.20 (maximum)

AI Engineer IV: $184,453.60 (minimum) - $230,567.00 (midpoint) - $299,737.10 (maximum)


Scientific Computing Associate - VR Technologies for Social Neuroscience

Janelia Research Campus, United States of America Posted 40 days ago


Primary Work Address: 19700 Helix Drive, Ashburn, VA, 20147


Please include a cover letter with your application. Be sure to highlight your coding experience and explain how your enthusiasm and ability to learn quickly can help you succeed in this role—even if you don’t meet every listed requirement.

About the Role:

The Scientific Computing Associate (SCA) position is an alternative to traditional scientific roles (e.g., postdoc) and provides an ideal environment for establishing a career in computational research or software engineering. The position develops qualifications and experience in computational research and professional software engineering within a research environment, preparing the candidate to pursue a future career in science or industry. The SCA position is a time-limited appointment of 24 months, with discretionary renewal for a final 12-month term (36 months maximum).

We are seeking a talented and motivated computational scientist to develop and deploy cutting-edge experimental platforms that integrate dynamic virtual reality environments with precise neural and behavioral measurements in animal subjects (fish and flies) for the study of social learning and collective behaviors. This will require synchronization of and logging from many system components (video acquisition, animal tracking and pose estimation, microscopy image acquisition and/or physiological recordings, video game engines, multiple displays, etc.) as well as development of geometrically precise, reconfigurable, closed-loop virtual social paradigms that can be reproduced across animal subjects. By integrating real-time behavioral and neural measurements with virtual social environments, we enable neuroscientists to measure and model social behaviors in new and creative ways.

You’ll work in close collaboration with Scientific Computing, MCN-NET, and the Schulze and Otopalik Labs. As part of a highly interdisciplinary and collaborative team of computational scientists, software and AI engineers, and neuroscientists, you’ll have access to high-performance workstations, CPU/GPU clusters, and experimental systems tailored for fish and fly research. This role will necessarily involve both software development and software-hardware integration, with potential opportunities to collect key initial datasets and contribute to publication(s) with the Schulze and Otopalik labs.

What we provide:

  • A supportive team environment that promotes collaboration and knowledge sharing.
  • The opportunity to engage with world-class researchers, software engineers and AI/ML experts, contribute to impactful science, and be part of a dynamic community committed to advancing humanity’s understanding of fundamental scientific questions.
  • Amenities that enhance work-life balance, such as on-site childcare, free gyms, available on-campus housing, social and dining spaces, and convenient shuttle bus service to Janelia from the Washington D.C. metro area.

What you’ll do:

Develop Software Architecture for Fish & Fly Experimental Systems

  • Synchronize clocks across heterogeneous hardware- and software-timed data streams.
  • Build a robust synchronization layer to ensure <8 ms end-to-end latency for sub-frame accuracy in high-frequency behavioral streams.
  • Implement rigorous metadata and I/O logging to guarantee reproducible analysis pipelines.
  • Debug driver/firmware bottlenecks in DAQs, GPUs, and cameras (the system interfaces with diverse hardware: cameras, DAQ boards, GPUs, head-mounted displays, and lasers/scanners).
  • Optimize performance for low-latency, precise, and dynamic virtual environments.
  • Develop reproducible open- and closed-loop virtual social environments and interactions using virtual fish and flies with hard-coded, dynamic, or agent-based movement rules, working in close collaboration with experimentalists.

Document and Generalize Software Modules for Widespread Use

  • Organize software packages that generalize across fish and fly experimental setups with intuitive user interfaces that can be implemented within and beyond the Janelia Research Campus.
  • Compose documentation for reproducibility, clear metadata standards, and user-friendly interfaces (modular APIs and wrappers so experimentalists can use GUIs/editors instead of diving into C#/low-level code).
  • Maintain strong Git-based version control workflows and containerization for reproducible deployments.
  • Potential opportunity for publication of this suite of tools in a methodological journal.

What you bring:

  • A degree in computational sciences or equivalent (M.Sc. or Ph.D.).
  • Experience with C# and Python, including asynchronous and multithreaded programming.
  • Experience with real-time programming.
  • Experience with machine learning, big data, and/or signal processing preferred.
  • Experience in solving complex problems independently.
  • Good communication skills, comfortable working collaboratively in a team environment.
  • Experience with any of the following will be extremely useful: messaging frameworks (e.g., sockets; currently a UDP client is used to communicate between Bonsai RX, animal tracking software, DAQs, and Unity), ROS-like systems, machine vision, GPU programming, shared memory, agent-based modeling, and/or game development.
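The UDP message passing mentioned in the last bullet can be sketched in a few lines. The port number and packet schema below are hypothetical, chosen only to illustrate the kind of timestamped tracking message a tracker process might stream to a game engine listening on localhost:

```python
# Hypothetical sketch: stream a timestamped animal-pose packet over UDP.
import json
import socket
import time

PORT = 9001  # hypothetical game-engine-side listener port

def make_packet(frame, x, y, heading):
    """Timestamped tracking message; monotonic clock aids latency accounting."""
    return json.dumps({
        "frame": frame,
        "t_mono": time.monotonic(),   # for end-to-end latency measurement
        "x": x, "y": y, "heading": heading,
    }).encode()

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
packet = make_packet(frame=1042, x=12.5, y=7.3, heading=90.0)
sock.sendto(packet, ("127.0.0.1", PORT))  # fire-and-forget, non-blocking
sock.close()
```

UDP's fire-and-forget semantics suit high-frequency closed-loop streams where a stale packet is worth less than the next fresh one, which is one reason setups like the one described favor it over TCP.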


Compensation Range

A Scientific Computing Associate is compensated at a rate of $90,000.00 annually at HHMI's Janelia Research Campus.


AI Engineer - Multi-Modal Microscopy Representation Alignment & Post-Training

Janelia Research Campus, United States of America Posted 40 days ago

Primary Work Address: 19700 Helix Drive, Ashburn, VA, 20147


TLDR: Build the model backbone for the next era of AI-powered spatial biology.

Please include a cover letter with your application detailing your qualifications and experience for this position. Describe a deep learning project you have executed, ideally a creative use of supervised fine-tuning of a pre-trained vision transformer, U-Net architecture, or a related topic. Projects in computer vision for microscopy image analysis are especially relevant. Include a link to a code repository if possible. If you contributed to a joint project, please describe your specific contributions. Briefly discuss the project's results, limitations, and challenges you encountered. Finally, include a link to your GitHub profile, personal website, or similar, and/or links to relevant projects at the bottom of your cover letter.

About the role:

AI@HHMI: HHMI is investing $500 million over the next 10 years to support AI-driven projects and to embed AI systems throughout every stage of the scientific process in labs across HHMI. The Foundational Microscopy Image Analysis (MIA) project sits at the heart of AI@HHMI. Our ambition is big: to create one of the world’s most comprehensive, multimodal 3D/4D microscopy datasets and use it to power a vision foundation model capable of accelerating discovery across the life sciences.

We are seeking a highly skilled AI Research Engineer to join our team and advance our AI-driven scientific initiatives. You will build methods for supervised adaptation of pre-trained microscopy vision models and for cross-modality representation learning and alignment. You will build robust pipelines that adapt foundation models to specialized microscopy tasks and develop algorithms that align image-level embeddings across modalities (e.g., fluorescence ↔ electron microscopy ↔ brightfield ↔ …).
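Cross-modality alignment of image-level embeddings is commonly done with a symmetric contrastive (CLIP/InfoNCE-style) objective, where matched pairs across modalities are pulled together and mismatched pairs pushed apart. A minimal sketch with placeholder embeddings and dimensions (not the project's actual encoders):

```python
# Illustrative symmetric contrastive loss between paired embeddings from
# two microscopy modalities (e.g., fluorescence and EM views of one sample).
import torch
import torch.nn.functional as F

def clip_loss(z_a, z_b, temperature=0.07):
    """Symmetric InfoNCE: row i of z_a should match row i of z_b."""
    z_a = F.normalize(z_a, dim=-1)
    z_b = F.normalize(z_b, dim=-1)
    logits = z_a @ z_b.T / temperature          # (batch, batch) similarities
    targets = torch.arange(z_a.size(0))         # diagonal = matched pairs
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.T, targets)) / 2

z_fluor = torch.randn(8, 256)   # placeholder fluorescence-encoder embeddings
z_em = torch.randn(8, 256)      # placeholder EM-encoder embeddings (paired rows)
loss = clip_loss(z_fluor, z_em)
```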

In collaboration with other engineers and scientists, you will apply these models to scalable vision tasks such as instance segmentation, tracking, and classification, and use probabilistic models to produce uncertainty-aware predictions across scales. This role requires deep knowledge of the underlying models and practical implementation skills to maximize biological impact. You will lead rigorous model evaluations, implement novel architectures, and ensure all work meets the highest standards of reproducible open science. Success in this role requires close collaboration with our microscopy experts, cellular biologists, neuroscientists, and computer scientists to ensure models can be deployed in large-scale, real-world data scenarios.

Strong programming skills in Python, PyTorch, and/or JAX are required, along with the ability to reason about neural network behavior from first principles. The role also requires knowledge of microscopy data formats and tools such as Zarr and Neuroglancer. We seek candidates who can think critically about model design, understand how architectural choices and regularization affect model behavior, and design rigorous experiments to evaluate models. Domain expertise in microscopy image analysis is not necessary but will be highly valued. Because this is a team project, we value a clean shared codebase and git-based collaborative workflows. Familiarity with state-of-the-art vision frameworks such as DINOv3, SAM, Cellpose, or Vision Transformers is required. We are looking for candidates with experience in ML model deployment, workflow orchestration, and high-throughput data processing, as well as experience working with large biological datasets in scalable GPU-based computing environments.

What we provide:

  • A competitive compensation package, with comprehensive health and welfare benefits.
  • A supportive team environment that promotes collaboration and knowledge sharing.
  • Access to a world-class computational infrastructure and large, high-quality datasets.
  • The opportunity to engage with world-class researchers, software engineers and AI/ML experts, contribute to impactful science, and be part of a dynamic community committed to advancing humanity’s understanding of fundamental scientific questions.
  • Amenities that enhance work-life balance such as on-site childcare, free gyms, available on-campus housing, social and dining spaces, and convenient shuttle bus service to Janelia from the Washington D.C. metro area.
  • Opportunity to partner with frontier AI labs on scientific applications of AI (see https://www.anthropic.com/news/anthropic-partners-with-allen-institute-and-howard-hughes-medical-institute).

What you’ll do:

  • Lead the post-training process for our multi-modal microscopy vision foundation model, including model fine-tuning on annotated data and alignment of learned representations across multiple microscopy modalities.
  • Design and execute rigorous experiments to evaluate model performance on a wide distribution of microscopy images and model architectures.
  • Collaborate with scientists at Janelia and in the broader academic community to integrate our model into their workflows across a wide variety of vision tasks.
  • Collaborate with interdisciplinary teams, potentially mentor junior engineers, and direct or assist in directing the work of others to meet project goals while advising stakeholders on data strategies and best practices.

What you bring:

  • Master's or PhD degree in Computer Science, Applied Mathematics, Computational Neuroscience, or a related field—or an equivalent combination of education and relevant experience.
  • 3+ years of experience with fine-tuning spatial transformer networks, contrastive learning, model distillation, RLHF, and/or cross-modal alignment methods.
  • Familiarity with state-of-the-art vision fine-tuning methods such as low-rank adaptation (LoRA) and linear probing.
  • Strong programming skills in Python, PyTorch, and JAX. Skills in JavaScript are a plus.
  • Familiarity with computational tools for microscopy and connectomics data (Cellpose, CAVE, Flood-Filling Networks, Neuroglancer, Zarr).
  • Experience with ML model deployment, workflow orchestration, and high-throughput data processing and model training.
  • Keen interest in working in a truly interdisciplinary environment and learning about cellular/molecular biology (e.g., transcriptomics) and neuroscience.
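Low-rank adaptation, one of the fine-tuning methods listed above, can be illustrated with a minimal PyTorch wrapper. The ranks and shapes below are generic placeholders; the zero-initialized B matrix follows the standard LoRA recipe, so the adapted layer starts out identical to the frozen base:

```python
# Illustrative LoRA adapter: frozen linear layer plus a trainable
# low-rank update, y = W x + (alpha/r) * B A x.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r=8, alpha=16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():   # freeze pretrained weights
            p.requires_grad = False
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(128, 64))
out = layer(torch.randn(2, 128))
# Only the low-rank factors train: r*(in + out) parameters instead of in*out.
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
```

Linear probing, by contrast, freezes the whole backbone and trains only a new task head on top of its embeddings.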


Compensation Range

AI Engineer I: $96,325.60 (minimum) - $120,407.00 (midpoint) - $156,529.10 (maximum)

AI Engineer II: $123,125.60 (minimum) - $153,907.00 (midpoint) - $200,079.10 (maximum)

AI Engineer III: $149,515.20 (minimum) - $186,894.00 (midpoint) - $242,962.20 (maximum)

AI Engineer IV: $184,453.60 (minimum) - $230,567.00 (midpoint) - $299,737.10 (maximum)

Pay Type: Salary

HHMI’s salary structure is developed based on relevant job market data. HHMI considers a candidate's education, previous experiences, knowledge, skills and abilities, as well as internal consistency when making job offers. Typically, a new hire for this position in this location is compensated between the minimum and the midpoint of the salary range.

#LI-BG1

Compensation and Benefits

Our employees are compensated from a total rewards perspective in many ways for their contributions to our mission, including competitive pay, exceptional health benefits, retirement plans, time off, and a range of recognition and wellness programs. Visit our Benefits at HHMI site to learn more.

HHMI is an Equal Opportunity Employer

We use E-Verify to confirm the identity and employment eligibility of all new hires.

Data Engineer - Training Pipelines & Inference

Janelia Research Campus, United States of America Posted 40 days ago

Primary Work Address: 19700 Helix Drive, Ashburn, VA, 20147

Current HHMI Employees, click here to apply via your Workday account.

TLDR: Build the data backbone for the next era of AI-powered spatial biology.

Please include a cover letter with your application detailing your qualifications and experience for this position. Describe a deep learning project you have executed. Projects in computer vision for microscopy image analysis are especially relevant. Include a link to a code repository if possible. If you contributed to a joint project, please describe your specific contributions. Briefly discuss the project's results, limitations, and challenges you encountered. Finally, include a link to your GitHub profile, personal website, or similar, and/or links to any relevant projects at the bottom of your cover letter.

About the Role:

AI@HHMI: HHMI is investing $500 million over the next 10 years to support AI-driven projects and to embed AI systems throughout every stage of the scientific process in labs across HHMI. The Foundational Microscopy Image Analysis (MIA) project sits at the heart of AI@HHMI. Our ambition is big: to create one of the world’s most comprehensive, multimodal 3D/4D microscopy datasets and use it to power a vision foundation model capable of accelerating discovery across the life sciences.

We're seeking a skilled Data Engineer to drive scientific innovation through robust data infrastructure, model training, and inference systems. You'll design, develop, and optimize scalable data pipelines and build multi-node GPU training and inference pipelines for foundational models. You'll also develop tools for ingesting, transforming, and integrating large, heterogeneous microscopy image datasets—including writing production-quality Python code to parse, validate, and transform microscopy data from published research papers, public databases, and internal repositories.

This role requires technical excellence in data engineering and the ability to understand biological research contexts to ensure data integrity and scientific validity. Your work will directly support computational research initiatives, including machine learning and AI applications.

You'll collaborate closely with multidisciplinary teams of computational and experimental scientists to define and implement best practices in data engineering, ensuring data quality, accessibility, and reproducibility. You'll maintain detailed documentation, potentially mentor junior engineers, and automate workflows to streamline the path from raw data to scientific insight.

What we provide:

  • A competitive compensation package, with comprehensive health and welfare benefits.
  • A supportive team environment that promotes collaboration and knowledge sharing.
  • The opportunity to engage with world-class researchers, software engineers and AI/ML experts, contribute to impactful science, and be part of a dynamic community committed to advancing humanity’s understanding of fundamental scientific questions.
  • Amenities that enhance work-life balance such as on-site childcare, free gyms, available on-campus housing, social and dining spaces, and convenient shuttle bus service to Janelia from the Washington D.C. metro area.
  • Opportunity to partner with frontier AI labs on scientific applications of AI (see https://www.anthropic.com/news/anthropic-partners-with-allen-institute-and-howard-hughes-medical-institute).

What you’ll do:

  • Design and implement scalable, robust data, model training and inference pipelines for foundational microscopy datasets & vision foundation models. Deploy such pipelines on multi-node GPU environments and make data & trained models publicly available.
  • Stay up to date with scientific literature to understand data context and processing requirements.
  • Document data provenance and transformation steps comprehensively.
  • Apply statistical tools and programming languages (e.g., Python, R) to analyze large datasets, develop custom functions, and extract actionable insights through effective visualization.
  • Establish and maintain data standards, formats, workflows, and documentation to ensure data quality, accessibility, and reproducibility across projects.
  • Collaborate with interdisciplinary teams, potentially mentor junior engineers, and direct or assist in directing the work of others to meet project goals while advising stakeholders on data strategies and best practices.

What you bring:

  • Bachelor’s degree in Computer Science, Data Science, Statistics, Applied Mathematics, or a related field with 3+ years of experience applying and customizing data mining, model training, and inference methods and techniques. An equivalent combination of education and relevant experience will be considered.
  • Experience with data formats such as Zarr, Parquet, HDF5, and efficient IO (e.g., webdataset).
  • Experience with volumetric 3D/4D microscopy data analysis tools.
  • Experience with high-performance compute environments (cloud-based and Slurm/LSF clusters) and model deployment platforms (e.g., Kubernetes, AWS SageMaker, Google Vertex AI, HF Inference).
  • Experience with distributed data processing, multi-node GPU processing, and ML development frameworks such as PyTorch and/or JAX.
  • Excellent technical documentation and communication skills.
  • Experience in building scalable data solutions, working with big data technologies, and ensuring data quality and accessibility.
  • Expertise in utilizing data visualization libraries and software (e.g., Matplotlib, R, Jupyter notebooks).
  • Detail-oriented, creative, and organized team player with strong communication skills and a collaborative mindset.
  • Able to effectively manage time, prioritize tasks, and clearly convey complex data concepts to technical and non-technical audiences.

Physical Requirements:

Remaining in a normal seated or standing position for extended periods of time; reaching and grasping by extending hand(s) or arm(s); dexterity to manipulate objects with fingers, for example using a keyboard; communication skills using the spoken word; ability to see and hear within normal parameters; ability to move about workspace. The position requires mobility, including the ability to move materials weighing up to several pounds (such as a laptop computer or tablet).

Persons with disabilities may be able to perform the essential duties of this position with reasonable accommodation. Requests for reasonable accommodation will be evaluated on an individual basis.

Please Note:

This job description sets forth the job’s principal duties, responsibilities, and requirements; it should not be construed as an exhaustive statement, however.  Unless they begin with the word “may,” the Essential Duties and Responsibilities described above are “essential functions” of the job, as defined by the Americans with Disabilities Act.

Compensation Range

Data Engineer I: $86,181.60 (minimum) - $107,727.00 (midpoint) - $140,045.10 (maximum)

Data Engineer II: $98,039.20 (minimum) - $122,549.00 (midpoint) - $159,313.70 (maximum)

Data Engineer III: $112,629.60 (minimum) - $140,787.00 (midpoint) - $183,023.10 (maximum)

Pay Type: Salary

HHMI’s salary structure is developed based on relevant job market data. HHMI considers a candidate's education, previous experiences, knowledge, skills and abilities, as well as internal consistency when making job offers. Typically, a new hire for this position in this location is compensated between the minimum and the midpoint of the salary range.

#LI-BG1

Compensation and Benefits

Our employees are compensated from a total rewards perspective in many ways for their contributions to our mission, including competitive pay, exceptional health benefits, retirement plans, time off, and a range of recognition and wellness programs. Visit our Benefits at HHMI site to learn more.

HHMI is an Equal Opportunity Employer

We use E-Verify to confirm the identity and employment eligibility of all new hires.

Data Engineer - Foundational Microscopy Data

Janelia Research Campus, United States of America Posted 40 days ago

Primary Work Address: 19700 Helix Drive, Ashburn, VA, 20147

Current HHMI Employees, click here to apply via your Workday account.

TLDR: Build the data backbone for the next era of AI-powered spatial biology.

Please include a cover letter with your application detailing your qualifications and experience for this position. Describe a deep learning project you have executed. Projects in computer vision for microscopy image analysis are especially relevant. Include a link to a code repository if possible. If you contributed to a joint project, please describe your specific contributions. Briefly discuss the project's results, limitations, and challenges you encountered. Finally, include a link to your GitHub profile, personal website, or similar, and/or links to any relevant projects at the bottom of your cover letter.

About the Role:

AI@HHMI: HHMI is investing $500 million over the next 10 years to support AI-driven projects and to embed AI systems throughout every stage of the scientific process in labs across HHMI. The Foundational Microscopy Image Analysis (MIA) project sits at the heart of AI@HHMI. Our ambition is big: to create one of the world’s most comprehensive, multimodal 3D/4D microscopy datasets and use it to power a vision foundation model capable of accelerating discovery across the life sciences.

We're seeking a skilled Data Engineer to drive scientific innovation through robust data infrastructure. You'll build a large-scale foundational microscopy image dataset and develop scalable data processing pipelines. This includes collaborating with internal and external partners on data sharing and writing production-quality Python code to parse, validate, and transform microscopy image data from published research papers, public databases, and internal repositories.

This role requires technical excellence in data engineering and the ability to communicate clearly and proactively with collaborators who contribute multimodal microscopy data to the project. Your work will directly support computational research initiatives, including machine learning and AI applications.

Working closely with multidisciplinary teams of computational and experimental scientists, you'll help define and implement best practices in data engineering—ensuring data quality, accessibility, and reproducibility. You'll maintain detailed documentation, potentially mentor junior engineers, and automate workflows to streamline the path from raw data to scientific insight.

What we provide:

  • A competitive compensation package, with comprehensive health and welfare benefits.
  • A supportive team environment that promotes collaboration and knowledge sharing.
  • The opportunity to engage with world-class researchers, software engineers and AI/ML experts, contribute to impactful science, and be part of a dynamic community committed to advancing humanity’s understanding of fundamental scientific questions.
  • Amenities that enhance work-life balance such as on-site childcare, free gyms, available on-campus housing, social and dining spaces, and convenient shuttle bus service to Janelia from the Washington D.C. metro area.
  • Opportunity to partner with frontier AI labs on scientific applications of AI (see https://www.anthropic.com/news/anthropic-partners-with-allen-institute-and-howard-hughes-medical-institute).

What you’ll do:

  • Use AI coding agents to develop ad-hoc APIs to mine diverse microscopy datasets from public and internal sources.
  • Work with internal and external experimental labs to collect large multi-modal microscopy image datasets.
  • Collect and curate multi-modal foundational datasets for 3D and 4D microscopy data and other modalities.
  • Continuously assess the quality and assure the correctness of the aggregated data.
  • Collaborate closely with experimental scientists and shared resources teams to develop efficient annotation and metadata workflows.
  • Design and implement scalable, robust data pipelines for microscopy data using workflow managers that perform data validation and quality control at every pipeline stage through tests and clear data visualization.
  • Stay up to date with scientific literature to understand data context and processing requirements.
  • Document data provenance and transformation steps comprehensively.
  • Apply statistical tools and programming languages (e.g., Python, R) to analyze large datasets, develop custom functions, and extract actionable insights through effective visualization.
  • Establish and maintain data standards, formats, workflows, and documentation to ensure data quality, accessibility, and reproducibility across projects.
  • Make the foundational microscopy dataset accessible to collaborators and the public as open data and open-source services, and act as a point of contact for engineers and researchers who would like to use the dataset.
  • Collaborate with interdisciplinary teams, potentially mentor junior engineers, and direct or assist in directing the work of others to meet project goals while advising stakeholders on data strategies and best practices.
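The per-stage validation and QC described above boils down to a check that runs on every ingested record before it enters the aggregate dataset. A minimal sketch in plain Python; the field names (`dataset_id`, `voxel_size_nm`, etc.) are hypothetical stand-ins, not HHMI's actual metadata schema:

```python
# Illustrative required fields for an ingested microscopy metadata record.
REQUIRED = {"dataset_id", "modality", "voxel_size_nm", "shape"}

def validate_record(rec):
    """Pipeline-stage QC check: return a list of problems found in one
    metadata record. An empty list means the record passes this stage."""
    problems = [f"missing field: {k}" for k in sorted(REQUIRED - rec.keys())]
    if "shape" in rec and any(s <= 0 for s in rec["shape"]):
        problems.append("non-positive dimension in shape")
    if "voxel_size_nm" in rec and rec["voxel_size_nm"] <= 0:
        problems.append("voxel size must be positive")
    return problems

record = {"dataset_id": "d1", "modality": "EM",
          "voxel_size_nm": 4.0, "shape": (512, 512, 512)}
assert validate_record(record) == []
```

In a workflow manager, a stage like this gates each transformation step, and the returned problem list feeds both the rejection log and the QC dashboards mentioned in the pipeline bullet.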

What you bring:

  • Bachelor’s degree in Computer Science, Data Science, Statistics, Applied Mathematics, or a related field with 3+ years of experience applying and customizing data mining and data analysis methods and techniques. An equivalent combination of education and relevant experience will be considered.
  • Experience with data formats such as Zarr, Parquet, and HDF5 and efficient IO (e.g., webdataset).
  • Experience with volumetric 3D/4D microscopy data analysis tools.
  • Experience with high-performance compute environments (cloud-based and Slurm/LSF clusters).
  • Clear, proactive, and efficient communication style to manage multiple needs and stakeholders involved in the creation of our foundational microscopy dataset.
  • Excellent technical documentation and communication skills.
  • Expertise in utilizing data visualization libraries and software (e.g., Matplotlib, R, Jupyter notebooks).
  • Detail-oriented, creative, and organized team player with strong communication skills and a collaborative mindset.
  • Able to effectively manage time, prioritize tasks, and clearly convey complex data concepts to technical and non-technical audiences.

Physical Requirements:

Remaining in a normal seated or standing position for extended periods of time; reaching and grasping by extending hand(s) or arm(s); dexterity to manipulate objects with fingers, for example using a keyboard; communication skills using the spoken word; ability to see and hear within normal parameters; ability to move about workspace. The position requires mobility, including the ability to move materials weighing up to several pounds (such as a laptop computer or tablet).

Persons with disabilities may be able to perform the essential duties of this position with reasonable accommodation. Requests for reasonable accommodation will be evaluated on an individual basis.

Please Note:

This job description sets forth the job’s principal duties, responsibilities, and requirements; it should not be construed as an exhaustive statement, however.  Unless they begin with the word “may,” the Essential Duties and Responsibilities described above are “essential functions” of the job, as defined by the Americans with Disabilities Act.

Compensation Range

Data Engineer I: $86,181.60 (minimum) - $107,727.00 (midpoint) - $140,045.10 (maximum)

Data Engineer II: $98,039.20 (minimum) - $122,549.00 (midpoint) - $159,313.70 (maximum)

Data Engineer III: $112,629.60 (minimum) - $140,787.00 (midpoint) - $183,023.10 (maximum)

Pay Type: Salary

HHMI’s salary structure is developed based on relevant job market data. HHMI considers a candidate's education, previous experiences, knowledge, skills and abilities, as well as internal consistency when making job offers. Typically, a new hire for this position in this location is compensated between the minimum and the midpoint of the salary range.

#LI-BG1

Compensation and Benefits

Our employees are compensated from a total rewards perspective in many ways for their contributions to our mission, including competitive pay, exceptional health benefits, retirement plans, time off, and a range of recognition and wellness programs. Visit our Benefits at HHMI site to learn more.

HHMI is an Equal Opportunity Employer

We use E-Verify to confirm the identity and employment eligibility of all new hires.

Director, Cybersecurity

Headquarters, United States of America Posted 40 days ago


Primary Work Address: 4000 Jones Bridge Road, Chevy Chase, MD, 20815

Current HHMI Employees, click here to apply via your Workday account.

HHMI is focused on supporting and moving science forward in a variety of different ways ranging from conducting basic biomedical research, empowering educators, inspiring students, developing the next generation of scientists – even stretching into film and media production.  Our Headquarters is in the greater Washington, DC metro area and is home to over 300 employees with expertise in investments, communications, digital production, biomedical sciences, and everything in between.  The work housed here supports and augments the groundbreaking research conducted in HHMI labs across the nation.  As HHMI scientists continue to push boundaries in laboratories and classrooms, you can be sure that your contributions while working here are making a difference.

Summary:

Howard Hughes Medical Institute (HHMI) advances scientific discovery and education in the life sciences. The Technology & Systems Management (TSM) team supports that mission by delivering secure, resilient, and forward-looking technology solutions across the Institute.

We are seeking a Director, Cybersecurity to lead HHMI’s enterprise information security program and strengthen the Institute’s overall security posture in an evolving threat landscape.

The Director, Cybersecurity serves as the Institute’s senior cybersecurity leader and trusted advisor to the CTO and executive leadership on risk posture and emerging threats. This role is responsible for ensuring the confidentiality, integrity, and availability of digital assets across enterprise systems, infrastructure, and applications.

The Director leads internal cybersecurity and identity and access management (IAM) teams, partners with an external SOC/MSSP for continuous monitoring and response, and collaborates across TSM and Institute leadership to embed security into technology strategy and operations. This role also works closely with Risk and Compliance and the Office of General Counsel to align cybersecurity governance with regulatory requirements and the protection of sensitive research and regulated data.

This position reports to the Chief Technology Officer and is based at HHMI’s headquarters in Chevy Chase, Maryland. It follows a hybrid schedule with three in-office days per week and will have occasional travel to our Janelia Research Campus in Ashburn, VA.

What You’ll Get:

  • Mission-Focused Work: The opportunity to safeguard world-class scientific research by leading security efforts in a research-intensive, innovation-driven environment

  • Strategic Partnership in Cutting-Edge Work: Working directly with senior leadership to shape enterprise-wide strategy and influence AI governance and emerging technology security.

  • Competitive Total Rewards Package: Comprehensive healthcare, generous retirement contributions, paid leave, and additional programs that support well-being and professional development.

What You’ll Do:

  • Develop, implement, and continuously evolve a comprehensive cybersecurity strategy aligned with organizational priorities and risk appetite.

  • Serve as senior advisor to executive leadership on cybersecurity risk, posture, and emerging threats.

  • In coordination with the EverydayAI team, lead development of governance frameworks and security practices for emerging technologies, including artificial intelligence and machine learning systems.

  • Lead and develop cybersecurity and IAM teams across two locations, setting priorities, guiding technical direction, and fostering professional growth.

  • Oversee enterprise security operations, including monitoring, vulnerability management, threat intelligence, and incident response.

  • Direct and optimize relationships with external SOC and managed security partners to ensure effective 24/7 coverage.

  • Partner with Risk and Compliance, the Office of General Counsel and other stakeholders to develop and enforce security policies, standards, and procedures; lead internal assessments and coordinate external audits.

  • Establish and communicate security metrics to senior leadership that reflect performance, maturity, and risk reduction.

  • Embed security principles into infrastructure, applications, and business systems design, including secure architecture, network segmentation, and identity and access management best practices.

  • Provide strategic guidance and leadership for a team responsible for internal security/access assessments, coordinating external audits, and supporting regulatory and compliance initiatives across financial systems and other technology areas.

  • Lead enterprise incident response and recovery efforts, and develop and test disaster recovery and business continuity plans from a security perspective.

  • Oversee cybersecurity budgeting, including operational expenses, service agreements, equipment, and special projects.

What You Bring:

Education & Certifications

  • Bachelor’s degree

  • CISSP, CISM, CISA, or equivalent advanced security certification

Experience

  • 12+ years of progressive experience in information security

  • 5+ years of leadership experience managing teams and vendors

  • Knowledge of emerging technologies, including Artificial Intelligence

Skills & Expertise

  • Deep understanding of cybersecurity frameworks (NIST, CIS Controls) and risk management methodologies

  • Experience with SOC operations, IAM platforms, cloud security, and endpoint protection technologies

  • Strong understanding of identity governance, privileged access management, and authentication technologies

  • Experience developing security governance frameworks for AI/ML systems and third-party AI tools

  • Proven ability to build high-performing teams and foster a culture of accountability, transparency, and continuous improvement

  • Excellent communication skills with the ability to translate technical risks into business context

  • Demonstrated problem-solving ability with strong communication, interpersonal, and organizational skills, and a high level of initiative

Physical Requirements:

Remaining in a normal seated or standing position for extended periods of time; reaching and grasping by extending hand(s) or arm(s); dexterity to manipulate objects with fingers, for example using a keyboard; communication skills using the spoken word; ability to see and hear within normal parameters; ability to move about workspace. The position requires mobility, including the ability to move materials weighing up to several pounds (such as a laptop computer or tablet).

Persons with disabilities may be able to perform the essential duties of this position with reasonable accommodation. Requests for reasonable accommodation will be evaluated on an individual basis.

Please Note:

This job description sets forth the job’s principal duties, responsibilities, and requirements; it should not be construed as an exhaustive statement, however.  Unless they begin with the word “may,” the Essential Duties and Responsibilities described above are “essential functions” of the job, as defined by the Americans with Disabilities Act. #LI-EG1

Compensation and Benefits

Our employees are compensated from a total rewards perspective in many ways for their contributions to our mission, including competitive pay, exceptional health benefits, retirement plans, time off, and a range of recognition and wellness programs. Visit our Benefits at HHMI site to learn more.

Compensation Range

$213,319.20 (minimum) - $266,649.00 (midpoint) - $346,643.70 (maximum)

Pay Type:

Annual

HHMI’s salary structure is developed based on relevant job market data. HHMI considers a candidate's education, previous experiences, knowledge, skills and abilities, as well as internal consistency when making job offers. Typically, a new hire for this position in this location is compensated between the minimum and the midpoint of the salary range.

HHMI is an Equal Opportunity Employer

We use E-Verify to confirm the identity and employment eligibility of all new hires.