High Performance Computing
Partnering with scientists to accelerate scientific discovery
The Minerva supercomputer is maintained by High Performance Computing (HPC). Created in 2012 and upgraded several times since (most recently in November 2024), Minerva delivers over 11 petaflops of compute power. It consists of 24,912 Intel Platinum compute cores spanning several processor generations (2.3 GHz, 2.6 GHz, and 2.9 GHz; 48, 64, or 96 cores per node, with two sockets per node) and up to 1.5 TB of memory per node; 236 Nvidia H100 graphical processing units (GPUs); 32 Nvidia L40S GPUs; 40 Nvidia A100 GPUs; 48 Nvidia V100 GPUs; 440 terabytes of total memory; and 32 petabytes of spinning storage accessed via IBM’s Spectrum Scale/General Parallel File System (GPFS). Minerva has contributed to over 1,800 peer-reviewed publications since 2012. More details here.

Announcements
The Inaugural Mount SinAI Retreat will take place on May 19th, and registration has been extended – register here!
This will be the first annual showcase of Mount Sinai’s transformation into a true learning health system and will include:
- An exciting day of talks and panels in the Stern Auditorium, with a special keynote address by Dean Charney on inventing the future of medicine
- Interactive stations in the Atrium where attendees can test the latest AI tech such as augmented reality goggles and cool robots
- A poster session and cocktail hour in the new James Building for AI at 3 E 101st St
New Training Session – Thursday, April 24, Noon–1 PM: Introduction to the AIR·MS (Artificial Intelligence-Ready Mount Sinai) Database
- What is AIR·MS?
- Data Modalities in AIR·MS
- Request Access to AIR·MS
- Use AIR·MS on Minerva HPC
- Documentation and Support
Minerva Upgrade
November 2024 – Mount Sinai has completed a $2 million upgrade to the Minerva supercomputer, increasing total peak computational power to 11.4 petaflops (including 8.7 petaflops of 64-bit peak performance), supported by 450 TB of total memory and a 50-petabyte parallel file system. Highlights include:
- 146 compute nodes (14,016 cores in total)
- 188 H100 GPUs in 47 nodes
- 32 L40S GPUs in 4 nodes
- Click here for more information.
Top 10 Users
01 January 2023 through 31 December 2024
| PI | Department | Total Hours |
| --- | --- | --- |
| Roussos, Panagiotis | Psychiatry | 27,405,757 |
| Buxbaum, Joseph | Psychiatry | 22,208,338 |
| Raj, Towfique | Neuroscience | 21,025,036 |
| Sharp, Andrew | Genetics and Genomic Sciences | 16,748,020 |
| Pejaver, Vikas | Institute for Genomic Health | 15,351,189 |
| Zhang, Bin | Genetics and Genomic Sciences | 14,664,897 |
| Charney, Alexander | Genetics and Genomic Sciences | 13,596,314 |
| Goate, Alison | Genetics and Genomic Sciences | 12,765,089 |
| Kenny, Eimear | Institute for Genomic Health | 6,853,099 |
| Schlessinger, Avner | Pharmacology | 5,716,077 |
Minerva High Performance Computer
Leverage the compute power of Minerva to advance your science
Technical Specifications
Over 11 petaflops of compute power, 440 TB of random access memory (RAM), 32 petabytes of spinning storage, and over 24,000 cores. See more.
Chimera Partition
- 4 login nodes – Intel Xeon(R) Platinum 8168 24C, 2.7GHz – 384 GB memory
- 275 compute nodes* – Intel 8168 24C, 2.7GHz – 192 GB memory
- 13,152 cores (48 per node, 2 sockets per node)
- 37 high memory nodes – Intel 8168/8268 24C, 2.7GHz/2.9GHz – 1.5 TB memory
- 48 V100 GPUs in 12 nodes – Intel 6142 16C, 2.6GHz – 384 GB memory – 4x V100-16 GB GPU
- 32 A100 GPUs in 8 nodes – Intel 8268 24C, 2.9GHz – 384 GB memory – 4x A100-40 GB GPU
- 1.92TB SSD (1.8 TB usable) per node
- 10 gateway nodes
- New NFS storage (for users’ home directories) – 192 TB raw / 160 TB usable RAID6
- Mellanox EDR InfiniBand fat tree fabric (100Gb/s)
BODE2 Partition (Decommissioned)
(Note: this partition was decommissioned in 2024.)
$2M S10 BODE2 awarded by NIH (Grant PI: Patricia Kovatch)
- 3,744 compute cores (48-core 2.9 GHz Intel Cascade Lake 8268 processors) in 78 nodes
- 192 GB of memory per node
- 240 GB of SSDs per node
- 15 TB total memory
- Before decommissioning, this partition was open to all NIH-funded projects
CATS Partition
$2M CATS awarded by NIH (Grant PI: Patricia Kovatch)
- 3,520 compute cores (64-core 2.6 GHz Intel Ice Lake processors) in 55 nodes
- 1.5 TB of memory per node
- 82.5 TB memory (collectively)
- This partition is open to eligible NIH-funded projects

Account Request
All Minerva users, including external collaborators, must have an account to access the system. See more.
Mount Sinai User
Request a Minerva User Account. You’ll need your Sinai Username, PI name, and Department.
External Collaborators
Request an External Collaborator User Account. PIs can request an account for non-Mount Sinai users.
Group Collaborator
Request a Group Collaboration. Collaboration accounts for group-related activities require PI approval.
Project Allocation
Request a Project Allocation. Request an allocation on Minerva for a new or existing project.

Connect to Minerva
Minerva uses the Secure Shell (SSH) protocol and two-factor authentication. Minerva is HIPAA compliant. See more.
Quick Start Guide
Connect to Minerva from on-site or off-site, using Unix or Windows; a minimal connection sketch follows below. See more by clicking here.
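For illustration, the login flow can be scripted as below. This is a minimal sketch in Python, assuming OpenSSH is installed; the hostname minerva.hpc.mssm.edu and the username placeholder are assumptions to confirm against the Quick Start Guide.

```python
import subprocess

# Minimal sketch: open an interactive SSH session to Minerva.
# The hostname is an assumption -- confirm it in the Quick Start Guide.
HOST = "minerva.hpc.mssm.edu"
USER = "your_sinai_username"  # hypothetical placeholder

# OpenSSH prompts for your password and two-factor token interactively.
subprocess.run(["ssh", f"{USER}@{HOST}"], check=True)
```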
Acceptable Use Policy
When using resources at Icahn School of Medicine at Mount Sinai, all users agree to abide by specified user responsibilities. Click here to see more.
Usage Fee Policy
Please refer to our comprehensive fee schedule based on the resources used. See more.
- The 2024 charging rate is $119/terabyte/year, calculated monthly at a rate of $9.92/terabyte/month (see the worked example after this list)
- Charges are determined yearly by the Mount Sinai Compliance and Finance Departments and include all Minerva services, i.e., CPU and GPU utilization, the storage itself, archive storage, etc.
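To make the storage arithmetic concrete, here is a small sketch based only on the rates quoted above. It is illustrative; actual charges are set by the Compliance and Finance Departments.

```python
# Illustrative storage-charge arithmetic using the published 2024 rates.
ANNUAL_RATE_PER_TB = 119.00                              # $/TB/year
MONTHLY_RATE_PER_TB = round(ANNUAL_RATE_PER_TB / 12, 2)  # -> $9.92/TB/month

def monthly_storage_charge(tb_stored: float) -> float:
    """Estimated monthly charge for a given storage footprint in TB."""
    return round(tb_stored * MONTHLY_RATE_PER_TB, 2)

# Example: a project holding 25 TB for one month.
print(monthly_storage_charge(25.0))  # 248.0
```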
We are HIPAA Compliant
All users are required to read the HIPAA policy and complete the Minerva HIPAA compliance form on an annual basis. Click here to read more about HIPAA compliance.
Research Data
Utilize existing data, or supplement your research with additional data from the Mount Sinai Health System.

Mount Sinai Data Warehouse
The Mount Sinai Data Warehouse (MSDW) collects clinical and operational data for use in clinical and translational research, as well as quality improvement initiatives. MSDW provides researchers access to data on patients across the Mount Sinai Health System, drawing from over 11 million patients with an encounter in the Epic EHR.

Data Ark: Data Commons
The Data Ark: Mount Sinai Data Commons is located on Minerva. The number, type, and diversity of restricted and unrestricted data sets in the Data Ark continue to grow. Rapidly access high-quality data to increase your sample size; our diverse patient population is ideal for testing the generalizability of your results.
Acknowledge Mount Sinai in Your Work
Use of the S10 BODE2 and CATS partitions requires acknowledgement of NIH support in your publications. To assist, we have provided the exact wording required by NIH below.
Supported by grant UL1TR004419 from the National Center for Advancing Translational Sciences, National Institutes of Health.