High Performance Computing

Partnering with scientists to accelerate scientific discovery

The Minerva supercomputer is maintained by High Performance Computing (HPC). Created in 2012 and upgraded several times since (most recently in Nov. 2024 and Feb. 2026), Minerva delivers over 20 petaflops of computing power. It comprises 25,584 Intel Platinum computing cores across several processor generations (2.1 GHz, 2.3 GHz, 2.6 GHz, and 2.9 GHz; 48, 64, 96, or 112 cores per node, with two sockets per node) with 1.5 or 2 terabytes (TB) of memory per node; 408 graphics processing units (GPUs), including 48 NVIDIA B200s, 236 NVIDIA H100s, 32 NVIDIA L40S, 44 NVIDIA A100s, and 48 NVIDIA V100s; 452 TB of total memory; and 32 petabytes of spinning storage accessed via IBM’s Spectrum Scale/General Parallel File System (GPFS). Minerva has contributed to over 2,100 peer-reviewed publications since 2012. More details here.

Announcements


NEW!! Spring 2026 AIR·MS Training Sessions

We will be holding three AIR·MS training sessions this spring. These sessions will introduce you to the AIR·MS environment and AI-related tools through live demonstrations you can follow along with. Each session will be offered in a hybrid format, with our team onsite to provide support and answer questions. Material will be provided one week prior to each session for registered users.

Register for and attend the Spring 2026 training sessions and use the AIR·MS tools, and you will be eligible to receive a free exclusive AIR·MS jacket!

 

Session 1: Getting Started with AIR·MS: Health Data Fundamentals

Tuesday April 7 (9:00 am – 10:00 am) | Hybrid In-Person & Zoom

PLEASE NOTE: THE MATERIAL FOR AIR·MS TRAINING SESSION 1 IS HERE (2026 UPDATED MATERIAL COMING SOON).

THE DIRECT LINK TO THE UPDATED JUPYTER NOTEBOOK IS HERE

You will need prior access to Minerva and the AIR·MS OMOP De-ID dataset to participate in certain parts of this training. If you don’t have access yet, you can follow these steps (NB: to access SailPoint you will need to be connected to the Mount Sinai network, either by being onsite or by using the VPN):

  1. Check that you have an ISMMS (School Network) Account. If you don’t, you can request it on SailPoint by selecting “School Network Account”
  2. If you haven’t yet, request a Minerva account by filling out this form. Please note that if you are not a PI, your PI should provide approval via email
  3. Request access to AIR·MS OMOP De-ID dataset via SailPoint by selecting “AIR.MS Production MSDW OMOP De-ID (MSSM)”. A detailed how to guide can be found here.

More details of how to gain access can be found here: AIR‧MS: Getting Started | Scientific Computing and Data

  • In person attendees: In person sessions will take place in Annenberg Classroom 10-70.
  • Virtual attendees: Zoom links will be provided following registration for each session.

In session 1, participants will learn how to:

  • Understand data flow from EPIC to AIR·MS.
  • Avoid common pitfalls in working with clinical data.
  • Perform exploratory data analysis (EDA) in AIR·MS.

Includes a live demo notebook shared in advance. 

Audience: Researchers, data scientists, and clinicians new to AIR·MS. 
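To give a flavor of the EDA segment, here is a minimal, self-contained Python sketch that computes summary statistics over a small table of invented records (hypothetical data only, not the AIR·MS OMOP De-ID dataset; the session itself uses a Jupyter notebook against the de-identified warehouse):

```python
from statistics import mean, median
from collections import Counter

# Hypothetical, invented records standing in for de-identified encounter data.
encounters = [
    {"age": 54, "sex": "F", "dept": "Cardiology"},
    {"age": 61, "sex": "M", "dept": "Cardiology"},
    {"age": 37, "sex": "F", "dept": "Neurology"},
    {"age": 45, "sex": "M", "dept": "Neurology"},
    {"age": 72, "sex": "F", "dept": "Cardiology"},
]

# Basic exploratory summaries: cohort size, age distribution, and counts
# per category -- the same first-look questions you would ask of real data.
ages = [row["age"] for row in encounters]
print("n =", len(encounters))
print("mean age =", round(mean(ages), 1))
print("median age =", median(ages))
print("encounters per department:", Counter(row["dept"] for row in encounters))
```

The same pattern (cohort size, distributions, category counts) scales up to pandas DataFrames in the live demo notebook.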

 

Session 2: From AI Agents to AIR·MS: Leveraging Large Language Models

Tuesday April 14 (9:00 am – 10:00 am) | Hybrid In-Person & Zoom 

PLEASE NOTE: THE MATERIAL FOR AIR·MS TRAINING SESSION 2 IS HERE (2026 UPDATED MATERIAL COMING SOON).

THE DIRECT LINK TO THE UPDATED JUPYTER NOTEBOOK IS HERE

REMINDER: Participants will need to request access to Minerva and AIR·MS by Friday, April 3rd at the latest so that there is sufficient time to process applications. Participants will also need a working Symantec VIP two-factor authentication token.

Discover how on-premises open-source LLMs can support data exploration in AIR·MS. This session covers: 

  • ChatAI demo. 
  • Using Ollama and AIR·MS within Python.
  • Introduction to SQL.
  • How LLMs can help write and refine SQL queries. 
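To preview the SQL portion, the sketch below runs a simple aggregate query using Python’s built-in sqlite3 module. The table name, columns, and data are invented for illustration only; in the session, an on-premises LLM (via Ollama) helps draft and refine queries like this against the actual AIR·MS tables:

```python
import sqlite3

# In-memory toy database; table and columns are invented for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE diagnosis (patient_id INTEGER, code TEXT)")
conn.executemany(
    "INSERT INTO diagnosis VALUES (?, ?)",
    [(1, "E11"), (2, "E11"), (3, "I10"), (1, "I10"), (4, "E11")],
)

# The kind of query an LLM assistant might draft from a natural-language ask:
# "count distinct patients per diagnosis code, most common first."
rows = conn.execute(
    """
    SELECT code, COUNT(DISTINCT patient_id) AS n_patients
    FROM diagnosis
    GROUP BY code
    ORDER BY n_patients DESC, code
    """
).fetchall()
print(rows)  # [('E11', 3), ('I10', 2)]
```

Note the use of COUNT(DISTINCT patient_id): counting rows instead of distinct patients is exactly the kind of subtle pitfall the session covers.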

Audience: Introductory users interested in AI tools for health data. 

 

Session 3: Advanced AIR·MS: Deep Dive into Data Modalities and AI/Machine Learning (ML) Applications

Tuesday April 21 (9:00 am – 10:00 am) | Hybrid In-Person & Zoom 

Explore advanced AIR·MS capabilities for multimodal research, with lightning talks on OMOP, pathology, radiology, echocardiography, and Mount Sinai Million.

Audience: Advanced researchers and technical users. 

Please register ahead of time for sessions. Direct any questions through our new ticketing support system.

 


UPDATE: Scheduled Maintenance on Minerva DMZ Web Server, Posit-Connect, and Open OnDemand Has Been Completed As Planned

The scheduled maintenance on our DMZ Web Server, Posit-Connect, and Open OnDemand servers has been completed as planned.

 


Postponed: Onboarding Minerva’s New Multi-Factor Authentication

  • The Digital and Technology Partners (DTP) department is phasing out the Symantec VIP token as the two-factor authentication method for Minerva.
  • We will switch to Microsoft Authenticator as the new MFA method at 9:00 AM on March 26, 2026.
  • A Minerva test server is now available for HPC users to begin onboarding with the new MFA method.
 
Please email us at hpchelp@hpc.mssm.edu if you have any problems.

 
Minerva/HPC Spring Training 2026: 

 


UPDATE! AIR·MS USER TICKETING SUPPORT SYSTEM!

  • Moving forward, requests for technical support with AIR‧MS will no longer be accepted by email and must go through the AIR‧MS ticketing support system for users at this link:
  • https://hpims.atlassian.net/servicedesk/customer/portal/67
  • Please note: This ticketing system pertains ONLY to AIR‧MS. Any High Performance Computing-related issues should still go through hpchelp@hpc.mssm.edu

 


REMINDER: Minerva Data Backup Policy

  • This is a routine reminder about the Minerva data backup policy.
  • We do not back up user files. Please archive or back up your important files yourself.
  • Please do not set the permissions of your Minerva files to rwx (read, write, and execute) for everyone/others.
    • This can result in file deletion by others. Please double-check your file permissions on Minerva, especially for your project directory.
  • If you have any questions or need help managing your file permissions, please contact hpchelp@hpc.mssm.edu immediately.
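As an illustration of the permissions check above, this stdlib-only Python sketch flags entries under a directory whose mode grants write access to “others” (the `world_writable` helper and the example path are ours, not an HPC-provided tool):

```python
import os
import stat

def world_writable(root="."):
    """Return paths under *root* whose mode grants write to 'others' (o+w)."""
    flagged = []
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            path = os.path.join(dirpath, name)
            mode = os.stat(path).st_mode
            if mode & stat.S_IWOTH:  # the 'others'-write permission bit
                flagged.append(path)
    return flagged

# To tighten a file that is currently open to everyone:
# os.chmod("myfile", 0o750)   # rwxr-x--- : no access at all for 'others'

if __name__ == "__main__":
    for path in world_writable("."):
        print("world-writable:", path)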

 


RESEARCH ALERT!! New Guidance Document on Data Access and Sharing and NIST SP 800-171 Compliance

  • December 30th, 2025
  • At the December 2025 GCO Grants Forum, the Grants and Contracts Office (GCO) announced that a new guidance document would be issued to assist investigators with compliance related to genomic data access and sharing. This guidance aligns with NIH Notice NOT-OD-24-157, which mandates that all data accessed from NIH-controlled repositories, including dbGaP, must be supported within a NIST SP 800-171 compliant environment. The designated ISMMS NIST SP 800-171 environment is Minerva.
  • This guidance document entitled, “GCO Requirements for Restricted Access to Incoming Data from a Federal Genomic Data Repository or a…” is NOW AVAILABLE!! This document outlines the steps needed for institutional approval of the agreements, namely Data Use Certifications, Data Use Agreements, Data Transfer Use Agreements (DTUAs), and/or Data Provision Agreements (DPAs). The document is organized in the following sections:
  • REDCap attestation form instructions confirming data storage in Minerva.
  • Updating the eDMS Conflict of Interest (COI) Triggering Event (TE) for New Individuals Meeting the COI Regulatory Definition of Investigator.
  • InfoEd Submission if the Data Sharing Is Not in Support of An Already Existing Project Submitted to the GCO.
  • Information Required by Your Authorized Organization Representative (AOR) Prior to Signing/Certifying the Agreement.
  • Unauthorized Sharing of and Access to Data.
  • Adding Individuals to an Agreement.
  • External Collaborators and Investigators with Multiple Research Appointments.
  • Storage and End of Usage Data Destruction or Return Terms.
  • Procedure to Access Genomic Data from Federal Data Repositories.
  • Resources.
  • Please review this document carefully and incorporate its requirements into your research practices.
    Please contact your designated GCO Contracts Specialist or AOR with any questions and for further guidance.

 


NOVEMBER 2025 RESEARCH ALERT

Please Note the Following from the Grants and Contracts Office (GCO) Concerning Unauthorized Sharing and Access of Incoming Data with Others and Additional Compliance with Your Agreement for Data Access:

  • The following applies to Principal Investigators (PIs) whose research involves accessing data from an external entity.
  • Unless explicitly stated otherwise, data providers and repositories restrict data access to the individuals listed in the agreement for data access (Agreement). Your access to data does not automatically grant permission to share it with others in your lab.
  • Any individual who needs access must be explicitly listed as a recipient in the agreement either by name or by inclusion in an authorized personnel category (e.g., recipient’s faculty, employees, fellows, students, and agents (“Recipient Personnel”)). It is a serious violation with potential financial and legal consequences if you share data with individuals who are not authorized. To add ISMMS individuals, follow the steps in accordance with the requirements of the data repository and the terms of your Agreement.
  • The PI is responsible for ensuring that all individuals with access to the data are aware of the Agreement terms, especially any restrictions on further data access and sharing.
  • External collaborators need to request access to the data repository through their own institution, and must establish the Agreement through that institution.
  • If you have a multiple research appointment, the data access is for you at ISMMS only. Do not send the data to your other institution or store it on their servers.
  • Likewise, do not send data to a personal email account or device.
  • In addition to restrictions on data sharing and access, you must ensure compliance with all other terms of the Agreement, especially those related to end of usage data destruction or return. Failure to comply may constitute a serious violation.
Please contact your designated GCO Contracts Specialist or AOR with any questions and for further guidance.

Minerva High Performance Computing 2025 Charge Rate

The 2025 charge rate for the Minerva High Performance Computing (HPC) service is now $155/TiB/year, effective on September 1, 2025. The new rate will be reflected in the December quarterly charges. For questions, please reach out to ranjini.kottaiyan@mssm.edu.


July 2025 – National Institutes of Health (NIH) S10 Instrumentation Grant Award

Dean Patricia Kovatch, head of Scientific Computing and Data at the Icahn School of Medicine at Mount Sinai, was awarded $2M from the National Institutes of Health (NIH) as part of an S10 Instrumentation Grant (for advanced instrumentation) to provide state-of-the-art GPU capability and capacity.

These funds will support the deployment of 48 NVIDIA B200 GPUs in six DGX compute nodes, with the following specifications:

  • 8x NVLinked B200 GPUs per node, with 192 GB of memory per GPU, for a total of 48 B200 GPUs and 9 TB of GPU memory.
  • 112 Intel Xeon Platinum 8570 2.1 GHz cores, 2 TB of memory, and 25 TB of high-speed NVMe local storage per node, for a total of 672 cores and 12 TB of host memory.
  • The B200 introduces the new FP4 (4-bit floating point) format, enabling the Minerva computing infrastructure to deliver nearly an exaflop of FP4 performance for AI inference, as shown in the NVIDIA B200 DGX datasheet.

This award was only possible with the support of the many researchers at Mount Sinai, and Icahn School of Medicine leadership.


Top 10 Users

01 February 2026 through 28 February 2026

PI | Department | Total Hours
Raj, Towfique | Neuroscience | 967,420
Roussos, Panagiotis | Neuroscience | 829,632
Buxbaum, Joseph | Psychiatry | 746,327
Filizola, Marta | Structural and Chemical Biology | 712,668
Davis, Lea | AI and Human Health | 666,223
Campanella, Gabriele | AI and Human Health | 637,479
Shen, Li | Neuroscience | 594,771
Lowther, Chelsea | Genetics and Genomic Sciences | 526,226
Tsankov, Alexander | Genetics and Genomic Sciences | 485,569
Shi, Yi | Pharmacological Sciences | 426,302

 

Minerva High Performance Computer

Leverage the compute power of Minerva to advance your science

Technical Specifications

Over 20 petaflops of compute power, 452 TB of random access memory (RAM), 32 petabytes of spinning storage, and over 25,000 cores. See more.

Chimera Partition


  • 4 login nodes – Intel Xeon Platinum 8168 24C, 2.7 GHz – 384 GB memory
  • 275 compute nodes* – Intel 8168 24C, 2.7 GHz – 192 GB memory
    • 13,152 cores (48 per node, 2 sockets per node)
  • 37 high-memory nodes – Intel 8168/8268 24C, 2.7 GHz/2.9 GHz – 1.5 TB memory
  • 48 V100 GPUs in 12 nodes – Intel 6142 16C, 2.6 GHz – 384 GB memory – 4x V100 (16 GB) GPUs
  • 44 A100 GPUs in 8 nodes – Intel 8268 24C, 2.9 GHz – 384 GB memory – 4x A100 (40 GB) GPUs
    • 1.92 TB SSD (1.8 TB usable) per node
  • 10 gateway nodes
  • New NFS storage (for users’ home directories) – 192 TB raw / 160 TB usable RAID6
  • Mellanox EDR InfiniBand fat-tree fabric (100 Gb/s)
BODE2 Partition (Decommissioned)


(Note: this partition was recently decommissioned in 2024.)

$2M S10 BODE2 awarded by NIH (Grant PI: Patricia Kovatch)

  • 3,744 cores in 78 nodes (48 cores per node; 2.9 GHz Intel Cascade Lake 8268 processors)
  • 192 GB of memory per node
  • 240 GB of SSDs per node
  • 15 TB total memory
  • Before decommissioning, this partition was open to all NIH-funded projects
CATS Partition


$2M CATS awarded by NIH (Grant PI: Patricia Kovatch)

  • 3,520 cores in 55 nodes (64 cores per node; 2.6 GHz Intel Ice Lake processors)
  • 1.5 TB of memory per node
  • 82.5 TB memory (collectively)
  • This partition is open to eligible NIH funded projects

Account Request

All Minerva users, including external collaborators, must have an account to access the system. See more.

Mount Sinai User

Request a Minerva User Account. You’ll need your Sinai Username, PI name, and Department.

External Collaborators

Request an External Collaborator User Account. PIs can request an account for non-Mount Sinai users.

Group Collaborator

Request a Group Collaboration. Collaboration accounts for group-related activities require PI approval.

Project Allocation

Request for Project Allocation. Request allocation on Minerva for a new or existing project.

Connect to Minerva

Minerva uses the Secure Shell (SSH) protocol and two-factor authentication. Minerva is HIPAA compliant. See more.

Quick Start Guide

Connect to Minerva from on-site or off-site, utilizing Unix or Windows. See more by clicking here.

Acceptable Use Policy

When using resources at Icahn School of Medicine at Mount Sinai, all users agree to abide by specified user responsibilities. Click here to see more.

Usage Fee Policy

Please refer to our comprehensive fee schedule based on the resources used. See more.

  • The 2024 charging rate is $119/terabyte/year, calculated monthly at a rate of $9.92/terabyte/month
  • Charges are determined yearly by the Mount Sinai Compliance and Finance Departments and include all Minerva services, i.e., CPU and GPU utilization, the storage itself, archive storage, etc.
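The relationship between the annual and monthly rates is simple arithmetic; a quick sketch (rates taken from the fee policy above, the 40 TB example lab is hypothetical):

```python
# 2024 storage rate: $119 per terabyte per year, billed monthly.
annual_rate = 119.00
monthly_rate = round(annual_rate / 12, 2)
print(monthly_rate)  # 9.92

# Hypothetical example: a lab storing 40 TB for one month.
print(round(40 * monthly_rate, 2))  # 396.8
```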

We are HIPAA Compliant

All users are required to read the HIPAA policy and complete the Minerva HIPAA compliance form on an annual basis. Click here to read more about HIPAA compliance.

Research Data

Utilize existing data, or supplement your research with additional data from the Mount Sinai Health System.

Mount Sinai Data Warehouse

The Mount Sinai Data Warehouse (MSDW) collects clinical and operational data for use in clinical and translational research, as well as quality and improvement initiatives. MSDW provides researchers access to data on patients in the Mount Sinai Health System, drawing from over 11 million patients with an encounter in Epic EHR.

More about MSDW

Data Ark: Data Commons

The Data Ark: Mount Sinai Data Commons is located on Minerva. The number, type, and diversity of restricted and unrestricted data sets on the Data Ark are increasing on an ongoing basis. Rapidly access high-quality data to increase your sample size; our diverse patient population is ideal for testing the generalizability of your results.

More about Data Ark

Acknowledge Mount Sinai in Your Work

Utilizing the S10 BODE and CATS partitions requires acknowledgment of NIH support in your publications. To assist, we have provided the exact wording of the acknowledgments required by NIH for your use.

Supported by grant UL1TR004419 from the National Center for Advancing Translational Sciences, National Institutes of Health.