High Performance Computing

Partnering with scientists to accelerate scientific discovery

The Minerva supercomputer is maintained by High Performance Computing (HPC). Created in 2012 and upgraded several times since (most recently in Nov. 2024), Minerva now provides over 11 petaflops of computing power. It consists of 24,912 Intel Xeon Platinum computing cores across several processor generations running at 2.3 GHz, 2.6 GHz, and 2.9 GHz (96, 64, or 48 cores per node, with two sockets per node and up to 1.5 terabytes (TB) of memory per node); 356 graphical processing units (GPUs), including 236 Nvidia H100s, 32 Nvidia L40S, 40 Nvidia A100s, and 48 Nvidia V100s; 440 TB of total memory; and 32 petabytes of spinning storage accessed via IBM’s Spectrum Scale/General Parallel File System (GPFS). Minerva has contributed to over 1,900 peer-reviewed publications since 2012. More details here.

Announcements


 

De-identified Digital Pathology Training, Tuesday, December 2nd, 10:30-11:30 am EST.

>2.6 MILLION De-identified Mount Sinai Digital Pathology Slides Are Now Available for Research!

  • Mount Sinai is providing the largest pathology repository in the United States.
  • The collection encompasses biopsies, resections, and autopsies from virtually every organ system and disease.
  • This training session shows you how to include de-identified digital pathology slides in your research.
  • This work is supported by the Department of Pathology, Molecular and Cell-Based Medicine; the Windreich Department of Artificial Intelligence and Human Health; Scientific Computing and Data; and Digital Technology Partners.

UPDATE: Slides and a video recording of this session are now available to view here.


 

NOVEMBER 2025 RESEARCH ALERT

Please note the following guidance from the Grants and Contracts Office (GCO) concerning unauthorized sharing of and access to incoming data, and compliance with your agreement for data access:

  • The following applies to Principal Investigators (PIs) whose research involves accessing data from an external entity.
  • Unless explicitly stated otherwise, data providers and repositories restrict data access to the individuals listed in the agreement for data access (Agreement). Your access to data does not automatically grant permission to share it with others in your lab.
  • Any individual who needs access must be explicitly listed as a recipient in the agreement either by name or by inclusion in an authorized personnel category (e.g., recipient’s faculty, employees, fellows, students, and agents (“Recipient Personnel”)). It is a serious violation with potential financial and legal consequences if you share data with individuals who are not authorized. To add ISMMS individuals, follow the steps in accordance with the requirements of the data repository and the terms of your Agreement.
  • The PI is responsible for ensuring that all individuals with access to the data are aware of the Agreement terms, especially any restrictions on further data access and sharing.
  • External collaborators need to request access to the data repository through their own institution; the collaborator’s institution must establish its own Agreement.
  • If you hold research appointments at multiple institutions, your data access applies to your work at ISMMS only. Do not send the data to your other institution or store it on its servers.
  • Likewise, do not send data to a personal email account or device.
  • In addition to restrictions on data sharing and access, you must ensure compliance with all other terms of the Agreement, especially those related to data destruction or return at the end of use. Failure to comply may constitute a serious violation.
Please contact your designated GCO Contracts Specialist or AOR with any questions and for further guidance.
 

 
 
UPDATE: MATERIALS NOW AVAILABLE!
Fall 2025 High Performance Computing (HPC) and Data Ark Town Hall – Friday, November 21st, 12:00 PM (noon) – 1:00 PM Eastern
 
The Fall 2025 Minerva HPC and Data Ark Town Hall materials are now available on the training web page.
For any questions, please contact us at hpchelp@hpc.mssm.edu.

 

REMINDER: Minerva Data Backup Policy

This is a routine reminder of the Minerva data backup policy.

  • We do not back up user files. Please archive or back up your important files yourself.
  • Please do not set the permissions of your Minerva files to rwx (read, write, and execute) for everyone/others.
    • World-open permissions can result in your files being modified or deleted by others. Please double-check your file permissions on Minerva, especially for your project directory; a short sketch for auditing permissions follows below.
If you have any questions or need help managing your file permissions, please contact hpchelp@hpc.mssm.edu.
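As an illustration only (this is a minimal sketch, not an official HPC tool; the equivalent check can also be done with standard Unix commands such as find and chmod), a short Python script like the following can flag entries under a project directory that are writable by everyone/others:

```python
import os
import stat
import sys

def audit_world_writable(root):
    """Walk a directory tree and report entries that are writable by 'others'."""
    flagged = []
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            path = os.path.join(dirpath, name)
            try:
                mode = os.lstat(path).st_mode
            except OSError:
                continue  # skip entries that vanish or cannot be read
            if mode & stat.S_IWOTH:  # world-writable bit is set
                flagged.append(path)
                # Optional fix: clear write/execute for 'others', keeping owner/group bits.
                # os.chmod(path, mode & ~(stat.S_IWOTH | stat.S_IXOTH))
    return flagged

if __name__ == "__main__":
    root = sys.argv[1] if len(sys.argv) > 1 else "."
    for path in audit_world_writable(root):
        print(path)
```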

 

Town Hall: New GPUs/AI resources available on Empire AI Computing Center (https://www.empireai.edu/)

SLIDES NOW AVAILABLE HERE AND VIDEO RECORDING HERE.

A Town Hall on the new GPUs/AI resources available on Empire AI Computing Center (https://www.empireai.edu/) was held on October 10 at 12:00 PM (noon).

  • Topics covered:
    • What is Empire AI
    • Empire AI GPU Hardware Resources
    • How to Access
    • Discussion & Questions

 

NEW: AIR·MS Fall Training – Recordings and slides now available!

Three new AIR·MS training sessions took place this fall, introducing users to the AIR·MS environment and AI-related tools through live demonstrations.

Further training sessions will commence in Spring 2026. Please be sure to check back here for details when they are announced.

Recordings and slides from the Fall sessions are now available for viewing here.


 

NEW: Minerva High Performance Computing Fall Training

The Minerva High Performance Computing Fall Training has concluded. Recordings and slides from each session are available here.

Further training sessions will commence in Spring 2026. Please be sure to check back here for details when they are announced.


 

Minerva High Performance Computing 2025 Charge Rate

The 2025 charge rate for the Minerva High Performance Computing (HPC) service is now $155/TiB/year, effective on September 1, 2025. The new rate will be reflected in the December quarterly charges. For questions, please reach out to ranjini.kottaiyan@mssm.edu.


 

July 2025 – National Institutes of Health (NIH) S10 Instrumentation Grant Award

Dean Patricia Kovatch, head of Scientific Computing and Data at the Icahn School of Medicine at Mount Sinai, was awarded $2M from the National Institutes of Health (NIH) as part of an S10 Instrumentation Grant (for advanced instrumentation) to provide state-of-the-art GPU capability and capacity.

These funds will support the acquisition of 48 NVIDIA B200 GPUs housed in 6 DGX compute nodes, with the following specifications:

  • 8x NVLink-connected B200 GPUs per node with 192 GB of memory per GPU, for a total of 48 B200 GPUs and roughly 9 TB of GPU memory.
  • 112 Intel Xeon Platinum 8570 2.1 GHz cores, 2 TB of memory, and 25 TB of high-speed NVMe local storage per node, for a total of 672 cores and 12 terabytes of host memory across the servers.
  • The B200 introduces a new FP4 (4-bit floating point) format, enabling the Minerva computing infrastructure to provide nearly an exaflop of FP4 performance for AI inference, as shown in the NVIDIA DGX B200 datasheet.
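As a rough sanity check only (the per-system figure below is an assumption taken from NVIDIA's public DGX B200 datasheet rather than from the award itself), the aggregate FP4 inference number works out roughly as follows:

```python
# Back-of-the-envelope estimate of aggregate FP4 inference capability.
fp4_pflops_per_node = 144   # assumed: ~144 petaflops FP4 inference per DGX B200 (NVIDIA datasheet)
nodes = 6                   # DGX B200 nodes funded by the S10 award
total_pflops = fp4_pflops_per_node * nodes
print(f"{total_pflops} petaflops = {total_pflops / 1000:.2f} exaflops of FP4 inference")
# 864 petaflops = 0.86 exaflops, i.e. "nearly an exaflop"
```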

This award was only possible with the support of the many researchers at Mount Sinai and of Icahn School of Medicine leadership.


 

Top 10 Users

01 November 2025 through 30 November 2025

PI – Department – Total Hours
Raj, Towfique – Neuroscience – 3,526,107
Roussos, Panagiotis – Psychiatry – 976,060
Nadkarni, Girish – Medicine – 852,297
Cho, Judy – Genetics and Genomic Sciences – 786,953
Goate, Alison – Genetics and Genomic Sciences – 631,108
Filizola, Marta – Structural and Chemical Biology – 596,363
Pejaver, Vikas – Institute for Genomic Health – 542,874
Campanella, Gabriele – AI and Human Health – 413,613
Bunyavanich, Supinda – Genetics and Genomic Sciences – 383,983
Mei, Xueyan – BioMedical Engineering and Imaging Institute – 310,817

 

Minerva High Performance Computer

Leverage the compute power of Minerva to advance your science

Technical Specifications

Over 11 petaflops of compute power, 440 TB of random access memory (RAM), 32 petabytes of spinning storage, and over 24,000 cores. See more.

Chimera Partition


  • 4 login nodes – Intel Xeon(R) Platinum 8168 24C, 2.7GHz – 384 GB memory
  • 275 compute nodes* – Intel 8168 24C, 2.7GHz – 192 GB memory
    • 13,152 cores (48 per node, 2 sockets per node)
  • 37 high memory nodes – Intel 8168/8268 24C, 2.7GHz/2.9GHz – 1.5 TB memory
  • 48 V100 GPUs in 12 nodes – Intel 6142 16C, 2.6GHz – 384 GB memory – 4x V100-16 GB GPU
  • 32 A100 GPUs in 8 nodes – Intel 8268 24C, 2.9GHz – 384 GB memory – 4x A100-40 GB GPU
    • 1.92TB SSD (1.8 TB usable) per node
  • 10 gateway nodes
  • New NFS storage (for users’ home directories) – 192 TB raw / 160 TB usable RAID6
  • Mellanox EDR InfiniBand fat tree fabric (100Gb/s)
BODE2 Partition (Decommissioned)


(Note: this partition was decommissioned in 2024.)

$2M S10 BODE2 awarded by NIH (Grant PI: Patricia Kovatch)

  • 3,744 cores of 2.9 GHz Intel Cascade Lake 8268 processors (48 cores per node) in 78 nodes
  • 192 GB of memory per node
  • 240 GB of SSDs per node
  • 15 TB total memory
  • Before decommissioning, this partition was open to all NIH-funded projects
CATS Partition


$2M CATS awarded by NIH (Grant PI: Patricia Kovatch)

  • 3,520 cores of 2.6 GHz Intel IceLake processors (64 cores per node) in 55 nodes
  • 1.5 TB of memory per node
  • 82.5 TB memory (collectively)
  • This partition is open to eligible NIH-funded projects

Account Request

All Minerva users, including external collaborators, must have an account to access the system. See more.

Mount Sinai User

Request a Minerva User Account. You’ll need your Sinai Username, PI name, and Department.

External Collaborators

Request an External Collaborator User Account. PIs can request an account for non-Mount Sinai users.

Group Collaborator

Request a Group Collaboration. Collaboration accounts for group-related activities require PI approval.

Project Allocation

Request a Project Allocation. Request an allocation on Minerva for a new or existing project.

Connect to Minerva

Minerva uses the Secure Shell (SSH) protocol and two factor authentication. Minerva is HIPAA compliant. See more.

Quick Start Guide

Connect to Minerva from on-site or off-site, utilizing Unix or Windows. See more by clicking here.
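As an illustrative sketch only (the hostname and username below are placeholders, not Minerva's documented login details; follow the Quick Start Guide for the actual connection instructions), an interactive SSH session from a Unix-like machine can be scripted along these lines:

```python
import subprocess

# Placeholder values for illustration -- substitute the login host and username
# given in the Minerva Quick Start Guide.
LOGIN_HOST = "login.example.org"   # hypothetical host, not the real Minerva address
USERNAME = "your_sinai_id"         # your Sinai username

# Opens an interactive SSH session in the current terminal; Minerva's two-factor
# authentication prompts for your password and second factor as usual.
subprocess.run(["ssh", f"{USERNAME}@{LOGIN_HOST}"], check=True)
```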

Acceptable Use Policy

When using resources at Icahn School of Medicine at Mount Sinai, all users agree to abide by specified user responsibilities. Click here to see more.

Usage Fee Policy

Please refer to our comprehensive fee schedule based on the resources used. See more.

  • The 2024 charging rate is $119/terabyte/yr, calculated monthly at a rate of $9.92/terabyte/mo.
  • Charges are determined yearly by the Mount Sinai Compliance and Finance Departments and cover all Minerva services, i.e., CPU and GPU utilization, the storage itself, archive storage, etc.

We are HIPAA Compliant

All users are required to read the HIPAA policy and complete the Minerva HIPAA compliance form on an annual basis. Click here to read more about HIPAA compliance.

Research Data

Utilize existing data, or supplement your research with additional data from the Mount Sinai Health System.

Mount Sinai Data Warehouse

The Mount Sinai Data Warehouse (MSDW) collects clinical and operational data for use in clinical and translational research, as well as quality and improvement initiatives. MSDW provides researchers access to data on patients in the Mount Sinai Health System, drawing from over 11 million patients with an encounter in Epic EHR.

More about MSDW

Data Ark: Data Commons

The Data Ark: Mount Sinai Data Commons is located on Minerva. The number, type, and diversity of restricted and unrestricted data sets on the Data Ark are increasing on an ongoing basis. Rapidly access high-quality data to increase your sample size; our diverse patient population is ideal for testing the generalizability of your results.

More about Data Ark

Acknowledge Mount Sinai in Your Work

Use of the S10 BODE and CATS partitions requires acknowledgement of NIH support in your publications. To assist, we have provided the exact wording required by NIH for your use.

Supported by grant UL1TR004419 from the National Center for Advancing Translational Sciences, National Institutes of Health.