High Performance Computing
Partnering with scientists to accelerate scientific discovery
The Minerva supercomputer is maintained by High Performance Computing (HPC). Created in 2012 and upgraded several times since (most recently in November 2024), Minerva delivers over 11 petaflops of computing power. It consists of 24,912 Intel Xeon Platinum computing cores spanning several processor generations (2.3 GHz, 2.6 GHz, and 2.9 GHz; 48, 64, or 96 cores per node, with two sockets per node) and 1.5 terabytes (TB) of memory per node; 356 graphical processing units (GPUs), including 236 NVIDIA H100s, 32 NVIDIA L40S, 40 NVIDIA A100s, and 48 NVIDIA V100s; 440 TB of total memory; and 32 petabytes of spinning storage accessed via IBM’s Spectrum Scale/General Parallel File System (GPFS). Minerva has contributed to over 1,900 peer-reviewed publications since 2012. More details here.

Announcements
IMPORTANT UPDATE: The maintenance on the Open OnDemand Server originally scheduled for Thursday, OCTOBER 9th from 9:00 AM – 12:00 PM has been CANCELED!
Town Hall: New GPUs/AI resources available on Empire AI Computing Center (https://www.empireai.edu/) – Friday October 10 at 12:00 PM (noon)
There will be a Town Hall on the new GPU/AI resources available through the Empire AI Computing Center (https://www.empireai.edu/) on Friday, October 10 at 12:00 PM (noon).
Topics covered:
- What is Empire AI
- Empire AI GPU Hardware Resources
- How to Access
- Discussion & Questions
Register here.
NEW: AIR·MS Fall Training
We will be holding 3 AIR·MS training sessions this Fall. These sessions will introduce you to the AIR·MS environment and AI-related tools through live demonstrations you can follow along with. Each session will be offered in a hybrid format, with our team onsite to provide support and answer questions. Material will be provided 1 week prior to each session for registered users.
Further details including registration links and course content can be found here.
NEW: Minerva High Performance Computing Fall Training
Every Tuesday and Friday starting Sep 16 and ending Oct. 14.
We will be holding 9 training sessions this Fall. These sessions are intended to familiarize you with the Minerva environment and AI-related tools. A basic understanding of the general Unix operating environment and Linux commands is expected.
There is also a training session for Data Ark to get you familiarized with the Data Ark data sets and environment.
All sessions will be offered in person in the following room: Icahn L3-41.
Zoom links for virtual attendees are provided following registration for each session.
RECORDING NOW AVAILABLE (click to view): Session 1: Introduction to Minerva – Tuesday, Sep 16, 2025, 1-2 pm
This session covers:
- Minerva resources
- Account and logging in
- User software environment
- Preview of service on Minerva
RECORDING NOW AVAILABLE (click to view): Session 2: Essential Services on Minerva – Friday, Sep 19, 2025, 1-2 pm
This session will cover the following and focus on a live demonstration:
- Globus file transfers
- Web server usage
- TSM archive service
- Posit Connect server
RECORDING NOW AVAILABLE: Session 3: Load Sharing Facility (LSF) Job Scheduler – Tuesday, Sep 23, 2025, 1-2 pm
This session will cover the following (a minimal job-script sketch follows the topic list below):
- LSF introduction and basic/helpful LSF commands
- Job submission and monitoring
- Parallel jobs (parallel processing and GPUs)
- Job Arrays and Self-scheduler
- DOs and DON’Ts
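As a companion to these LSF topics, here is a minimal sketch of composing and submitting an LSF batch script from Python. The queue name premium, the project/allocation name acc_MyProject, and the resource values are placeholders rather than actual Minerva settings; consult the training materials or the bsub documentation for the correct values for your account.

```python
# Sketch only: builds a simple LSF batch script and submits it with bsub,
# which reads the script from standard input. Queue, project, and resource
# values below are illustrative placeholders.
import subprocess

job_script = """#!/bin/bash
#BSUB -J my_first_job
#BSUB -P acc_MyProject
#BSUB -q premium
#BSUB -n 4
#BSUB -R "rusage[mem=4000]"
#BSUB -W 01:00
#BSUB -o job_%J.out

echo "Running on $(hostname) with $LSB_DJOB_NUMPROC cores"
"""

# Equivalent to running:  bsub < job.lsf
result = subprocess.run(["bsub"], input=job_script, text=True,
                        capture_output=True, check=True)
print(result.stdout)  # e.g. Job <12345> is submitted to queue <premium>.
```

Job arrays, also covered in the session, use the same mechanism; for example, a directive such as #BSUB -J "my_array[1-10]" creates ten array elements.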
RECORDING NOW AVAILABLE: Session 4: Introduction to GPU/AI resources on Minerva – Friday, Sep 26, 2025, 1-2 pm
This session will cover the following (a short GPU-check sketch follows the topic list below):
- What is a GPU
- GPU resources on Minerva
- User GPU/AI Software environment on Minerva
- Running GPU/AI jobs in LSF
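To complement these GPU topics, here is a minimal sketch of verifying the GPU environment from inside an LSF job. It assumes PyTorch is available in your environment (for example via a loaded module or a personal install); the -gpu "num=1" submission option is standard LSF syntax, while any specific queue name would be site-dependent.

```python
# Sketch: confirm that the GPU requested from LSF (e.g. bsub -gpu "num=1" ...)
# is visible to the AI software stack. Assumes PyTorch is installed.
import torch

if torch.cuda.is_available():
    dev = torch.cuda.current_device()
    props = torch.cuda.get_device_properties(dev)
    print(f"GPU visible: {props.name}, {props.total_memory / 1e9:.0f} GB memory")
else:
    print("No GPU visible; check your bsub -gpu request and loaded modules.")
```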
RECORDING NOW AVAILABLE: Session 5: Accelerating Biomedical Data Science with GPUs: Practical Approaches and Tools – Tuesday, Sep 30, 2025, 1-2 pm
This session will cover the following (a brief CuPy sketch follows the topic list below):
- GPU fundamentals
- Ways to accelerate with GPUs
- GPU-Accelerated Numerical Computing with CuPy
- GPU-Accelerated Data Science with RAPIDS
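As a taste of the session's CuPy material, here is a small sketch contrasting NumPy on the CPU with CuPy on a GPU. It assumes a GPU node and a CuPy installation matching the available CUDA toolkit, both of which are assumptions about your environment.

```python
# Sketch: the same computation with NumPy (CPU) and CuPy (GPU).
# CuPy mirrors much of the NumPy API, so porting is often a drop-in change.
import numpy as np
import cupy as cp

n = 4000
a_cpu = np.random.rand(n, n)

# Move the data to the GPU and run the same linear algebra there.
a_gpu = cp.asarray(a_cpu)
b_gpu = a_gpu @ a_gpu.T          # matrix multiply on the GPU
trace_gpu = cp.trace(b_gpu)

# Bring the scalar result back to the host for printing.
print("trace:", float(cp.asnumpy(trace_gpu)))
```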
RECORDING NOW AVAILABLE: Session 6: Access Minerva via web browser Open OnDemand – Friday, Oct. 3, 2025, 1-2 pm
This session will cover the following and focus on a live demonstration:
- Login via Open OnDemand
- File Access via Open OnDemand
- Submit jobs via Open OnDemand
- Access Interactive Apps within Open OnDemand: Desktop, Rstudio, Jupyter, Code Server, Matlab, SAS etc.
RECORDING NOW AVAILABLE: Session 7: Leveraging Large Language Models in Biomedical Research – Tuesday, Oct. 7, 2025, 1-2 pm
This session will cover the following (a minimal LLM inference sketch follows the topic list below):
- Introduction to Large Language Models (LLMs)
- Transformer Architecture
- Key LLM Models
- Training and Fine-Tuning LLMs
- Practical Implementation on GPUs
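To ground these LLM topics, here is a minimal text-generation sketch using the Hugging Face transformers library on a GPU. The model name gpt2 is only a small, publicly available placeholder, not a recommendation for biomedical work, and the library is assumed to be installed in your environment.

```python
# Sketch: load a small pretrained language model and generate text on a GPU.
# "gpt2" is a placeholder model; swap in whichever LLM your project uses.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").to(device)

prompt = "Large language models can help biomedical researchers by"
inputs = tokenizer(prompt, return_tensors="pt").to(device)
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```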
Session 8: Introduction to Data Ark – Mount Sinai Data Commons – Friday, Oct. 10, 2025, 1-1:30 pm
In-Person Attendees: Register here
This session will cover:
- Introduction to Data Ark
- Accessing datasets through Data Ark
- Digital pathology slides access through Data Ark
Session 9: Using Containers on Minerva and Accelerating Genome Analysis with Parabricks – Tuesday, Oct. 14, 2025, 1-2 pm
Zoom Attendees: Register here for Using Containers on Minerva and Accelerating Genome Analysis with Parabricks
In-Person Attendees: Register here
This session will cover the following with a live demonstration (a container-invocation sketch follows the topic list below):
- Singularity/Apptainer access on Minerva
- Capabilities and Performance of Parabricks
- Parabricks for secondary analysis
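As a preview of the session, here is a hedged sketch of invoking Parabricks' fq2bam alignment inside a Singularity/Apptainer container from Python. The container image name, reference genome, and FASTQ file names are placeholders, and the exact image available on Minerva may differ.

```python
# Sketch: run NVIDIA Parabricks alignment (fq2bam) inside a Singularity container.
# All paths below are placeholders; --nv passes the node's GPUs into the container.
import subprocess

cmd = [
    "singularity", "exec", "--nv", "parabricks.sif",   # placeholder image name
    "pbrun", "fq2bam",
    "--ref", "GRCh38.fa",                              # placeholder reference
    "--in-fq", "sample_R1.fastq.gz", "sample_R2.fastq.gz",
    "--out-bam", "sample.bam",
]
subprocess.run(cmd, check=True)
```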
Please register ahead of time for sessions. Please check our website for additional details and updates. Direct any questions to hpchelp@hpc.mssm.edu.
Thank you, and we look forward to seeing you.
Minerva High Performance Computing 2025 Charge Rate
The 2025 charge rate for the Minerva High Performance Computing (HPC) service is now $155/TiB/year, effective on September 1, 2025. The new rate will be reflected in the December quarterly charges. For questions, please reach out to ranjini.kottaiyan@mssm.edu.
July 2025 – National Institutes of Health (NIH) S10 Instrumentation Grant Award
Dean Patricia Kovatch, head of Scientific Computing and Data at the Icahn School of Medicine at Mount Sinai, was awarded $2M from the National Institutes of Health (NIH) as part of an S10 Instrumentation Grant (for advanced instrumentation) to provide state-of-the-art GPU capability and capacity.
These funds will support the deployment of 48 NVIDIA B200 GPUs across 6 DGX compute nodes, with the following specifications:
- 8x NVLinked B200 GPUs per node with 192 GB of memory per GPU, for a total of 48 B200 GPUs and 9 TB of GPU memory.
- 112 Intel Xeon Platinum 8570 2.1 GHz cores, 2 TB of memory, and 25 TB of high-speed NVMe local storage per node, for a total of 672 cores and 12 TB of memory across the servers.
- The B200 introduces the new FP4 (4-bit floating point) format, enabling the Minerva computing infrastructure to provide nearly an exaflop of FP4 performance for AI inference, as shown in the NVIDIA B200 DGX datasheet.
This award was only possible with the support of the many researchers at Mount Sinai, and Icahn School of Medicine leadership.
Top 10 Users
01 September 2025 through 30 September 2025
| PI | Department | Total Hours |
| --- | --- | --- |
| Faith, Jeremiah | Genetics and Genomic Sciences | 971,057 |
| Beck, Erin | Neurology | 810,534 |
| Roussos, Panagiotis | Psychiatry | 705,466 |
| Campanella, Gabriele | AI and Human Health | 581,410 |
| Pejaver, Vikas | Institute for Genomic Health | 523,738 |
| Lippert, Christoph | AI and Human Health | 459,896 |
| Sebra, Robert | Genetics and Genomic Sciences | 452,974 |
| Reva, Boris | Genetics and Genomic Sciences | 441,300 |
| Filizola, Marta | Structural and Chemical Biology | 402,961 |
| Raj, Towfique | Neuroscience | 348,962 |
Minerva High Performance Computer
Leverage the compute power of Minerva to advance your science
Technical Specifications
Over 11 petaflops of compute power, 440 TB of random access memory (RAM), 32 petabytes of spinning storage, and over 24,000 cores. See more.
Chimera Partition
- 4 login nodes – Intel Xeon Platinum 8168 24C, 2.7 GHz – 384 GB memory
- 275 compute nodes* – Intel 8168 24C, 2.7 GHz – 192 GB memory
- 13,152 cores (48 per node, 2 sockets per node)
- 37 high memory nodes – Intel 8168/8268 24C, 2.7 GHz/2.9 GHz – 1.5 TB memory
- 48 V100 GPUs in 12 nodes – Intel 6142 16C, 2.6 GHz – 384 GB memory – 4x V100 (16 GB) GPUs per node
- 32 A100 GPUs in 8 nodes – Intel 8268 24C, 2.9 GHz – 384 GB memory – 4x A100 (40 GB) GPUs per node
- 1.92 TB SSD (1.8 TB usable) per node
- 10 gateway nodes
- New NFS storage (for users’ home directories) – 192 TB raw / 160 TB usable RAID6
- Mellanox EDR InfiniBand fat tree fabric (100Gb/s)
BODE2 Partition (Decommissioned)
(Note: this partition was decommissioned in 2024.)
$2M S10 BODE2 awarded by NIH (Grant PI: Patricia Kovatch)
- 3,744 cores: 78 nodes with 48-core 2.9 GHz Intel Cascade Lake 8268 processors
- 192 GB of memory per node
- 240 GB of SSDs per node
- 15 TB total memory
- Before decommissioning, this partition was open to all NIH-funded projects
CATS Partition
$2M CATS awarded by NIH (Grant PI: Patricia Kovatch)
- 3,520 cores: 55 nodes with 64-core 2.6 GHz Intel Ice Lake processors
- 1.5 TB of memory per node
- 82.5 TB memory (collectively)
- This partition is open to eligible NIH-funded projects

Account Request
All Minerva users, including external collaborators, must have an account to access the system. See more.
Mount Sinai User
Request a Minerva User Account. You’ll need your Sinai Username, PI name, and Department.
External Collaborators
Request an External Collaborator User Account. PIs can request an account for non-Mount Sinai users.
Group Collaborator
Request a Group Collaboration. Collaboration accounts for group-related activities require PI approval.
Project Allocation
Request a Project Allocation. Request an allocation on Minerva for a new or existing project.

Connect to Minerva
Minerva uses the Secure Shell (SSH) protocol and two-factor authentication. Minerva is HIPAA compliant. See more.
Quick Start Guide
Connect to Minerva from on-site or off-site, utilizing Unix or Windows. See more by clicking here.
Acceptable Use Policy
When using resources at Icahn School of Medicine at Mount Sinai, all users agree to abide by specified user responsibilities. Click here to see more.
Usage Fee Policy
Please refer to our comprehensive fee schedule based on the resources used. See more.
- The 2024 charging rate is $119/terabyte/year, calculated monthly at a rate of $9.92/terabyte/month (see the worked example below)
- Charges are determined yearly by the Mount Sinai Compliance and Finance Departments and include all Minerva services, i.e., CPU and GPU utilization, the storage itself, archive storage, etc.
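For clarity, here is a small sketch of how the monthly figure follows from the posted annual rate; the 10 TB footprint is only an illustrative number.

```python
# Sketch: monthly storage charge derived from the posted annual rate.
annual_rate_per_tb = 119.00                      # $/TB/year (2024 rate)
monthly_rate_per_tb = annual_rate_per_tb / 12    # ≈ $9.92/TB/month

storage_tb = 10                                  # example project footprint
print(f"monthly: ${storage_tb * monthly_rate_per_tb:,.2f}")
print(f"annual:  ${storage_tb * annual_rate_per_tb:,.2f}")
```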
We are HIPAA Compliant
All users are required to read the HIPAA policy and complete the Minerva HIPAA compliance form on an annual basis. Click here to read more about HIPAA compliance.
Research Data
Utilize existing data, or supplement your research with additional data from the Mount Sinai Health System.

Mount Sinai Data Warehouse
The Mount Sinai Data Warehouse (MSDW) collects clinical and operational data for use in clinical and translational research, as well as quality and improvement initiatives. MSDW provides researchers access to data on patients in the Mount Sinai Health System, drawing from over 11 million patients with an encounter in Epic EHR.

Data Ark: Data Commons
The Data Ark: Mount Sinai Data Commons is located on Minerva. The number, type, and diversity of restricted and unrestricted data sets on the Data Ark are increasing on an ongoing basis. Rapidly access high-quality data to increase your sample size; our diverse patient population is ideal for testing the generalizability of your results.
Acknowledge Mount Sinai in Your Work
Use of the S10-funded BODE and CATS partitions requires acknowledgement of NIH support in your publications. To assist, we have provided the exact wording of the acknowledgements required by NIH for your use.
Supported by grant UL1TR004419 from the National Center for Advancing Translational Sciences, National Institutes of Health.