Hardware and Technical Specs
- The Minerva supercomputer is maintained by Scientific Computing and Data (SCD) at the Icahn School of Medicine at Mount Sinai.
- Minerva was created in 2012 and has been upgraded several times, most recently in Nov. 2024; it now provides over 11 petaflops of compute power.
- It consists of 24,912 compute cores on several generations of Intel Xeon Platinum processors (2.3 GHz, 2.6 GHz, and 2.9 GHz parts; 48, 64, or 96 cores per node across two sockets), with 1.5 TB of memory per node. In addition, Minerva includes 236 Nvidia H100 graphical processing units (GPUs), 32 Nvidia L40S servers, 40 Nvidia A100 GPUs, 48 Nvidia V100 GPUs, 440 terabytes of total memory, and 40 petabytes of spinning storage accessed via IBM's Spectrum Scale/General Parallel File System (GPFS).
- Minerva has contributed to over 1,800 peer-reviewed publications since 2012.
The overall Minerva compute configuration is summarized below.

Compute Nodes
- Chimera partition: added in Nov. 2024.
- Nodes purchased prior to 2024 were integrated into the new NDR network via HDR 100 Gb/s; they make up the remaining partitions:
  - BODE2 partition: $2M S10 BODE2 awarded by NIH (Kovatch, PI); decommissioned on July 17th and Nov. 5th, 2024.
  - CATS partition: $2M CATS awarded by NIH (Kovatch, PI).
  - Private nodes.
Summary
- Total system memory (computes + GPU) = 440 TB
- Total number of cores (computes + GPU) = 24,912 cores
- CPU peak performance of all nodes > 1.8 PFLOPS
- H100 peak performance based on FP64 Tensor Cores = 12.5 PFLOPS
- Max performance from an HPL LINPACK run is 7.9 PFLOPS
- Private nodes are not counted in these calculations.
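For context on where these peak figures come from, here is a back-of-the-envelope sketch that recomputes them from the counts above. The per-core FLOPs/cycle and per-GPU TFLOPS values are assumptions based on typical Xeon Platinum (two AVX-512 FMA units) and H100 specifications, not numbers taken from Minerva's configuration, so treat the output as a rough cross-check only.

```python
# Rough peak-performance cross-check (a sketch; the per-core and per-GPU
# figures below are assumptions, not official Minerva numbers).

cpu_cores = 24_912            # total cores (computes + GPU nodes), from the summary above
clock_ghz = 2.6               # representative clock; nodes mix 2.3, 2.6, and 2.9 GHz parts
fp64_flops_per_cycle = 32     # assumed: two AVX-512 FMA units x 8 FP64 lanes x 2 ops per FMA

# cores x GHz x FLOPs/cycle gives GFLOPS; divide by 1e6 for PFLOPS
cpu_peak_pflops = cpu_cores * clock_ghz * fp64_flops_per_cycle / 1e6
print(f"CPU peak  ~ {cpu_peak_pflops:.1f} PFLOPS")    # ~2.1 PFLOPS, consistent with > 1.8 PFLOPS

n_h100 = 236                  # H100 GPUs, from the hardware list above
h100_fp64_tensor_tflops = 53  # assumed FP64 Tensor-Core peak per GPU (variant-dependent)

gpu_peak_pflops = n_h100 * h100_fp64_tensor_tflops / 1e3
print(f"H100 peak ~ {gpu_peak_pflops:.1f} PFLOPS")    # ~12.5 PFLOPS
```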
File System Storage
Minerva uses IBM’s General Parallel File System (GPFS) because it has advantages that are specifically useful for informatics workflows that involve high speed metadata access, tiered storage, and sub-block allocation. Metadata is the information about the data in the file system, and it is stored in flash memory for fast access. A parallel file system was used for Minerva because NFS and other file systems cannot scale to the number of nodes or provide performance for the large number of files involved in typical genomics workflows.
Currently we have one parallel file system on Minerva, Arion, which users can access at /sc/arion. The Hydra file system was retired at the end of 2020.
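As a toy illustration of the metadata-heavy access pattern described above (many files, each touched once for its attributes rather than its contents), the sketch below walks a directory tree and issues one stat() call per file. The project path is a hypothetical placeholder under /sc/arion, not a real directory.

```python
import os

def summarize_tree(root):
    """Walk a directory tree, touching only file metadata (no contents are read)."""
    n_files = 0
    total_bytes = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            st = os.stat(os.path.join(dirpath, name))  # one metadata operation per file
            n_files += 1
            total_bytes += st.st_size
    return n_files, total_bytes

if __name__ == "__main__":
    # Hypothetical project directory under the Arion file system.
    files, size_bytes = summarize_tree("/sc/arion/projects/my_project")
    print(f"{files:,} files, {size_bytes / 1e12:.2f} TB")
```

On workflows with millions of small files, a scan like this is bound by metadata latency rather than bandwidth, which is why Arion keeps its metadata on flash.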
| GPFS Name | Lifetime | Storage Type | Raw (PB) | Usable (PB) |
| --- | --- | --- | --- | --- |
| Arion | 2019– | Lenovo DSS | 14 | 9.6 |
| Arion | 2019– | Lenovo G201 flash | 0.12 | 0.12 |
| Arion | 2020– | Lenovo DSS | 16 | 11.2 |
| Arion | 2021– | Lenovo DSS | 16 | 11.2 |
| Total | | | 46 | 32 |
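As a quick consistency check on the totals row (values copied from the table above; the listed totals are rounded to whole petabytes):

```python
# Sum the per-building-block capacities from the Arion table above (in PB).
raw_pb = [14, 0.12, 16, 16]
usable_pb = [9.6, 0.12, 11.2, 11.2]

print(f"Raw total:    {sum(raw_pb):.2f} PB   (listed as 46 PB)")
print(f"Usable total: {sum(usable_pb):.2f} PB   (listed as 32 PB)")
print(f"Usable fraction: {sum(usable_pb) / sum(raw_pb):.0%}")  # capacity retained after redundancy/parity overhead
```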
Acknowledging Mount Sinai in Your Work
Please include the following acknowledgement in publications and other work that made use of Minerva:

"This work was supported by grant UL1TR004419 from the National Center for Advancing Translational Sciences, National Institutes of Health."

In addition, using the S10 BODE and CATS Minerva partitions requires acknowledgement of NIH support in your publications. The exact wording required by NIH, and guidance on acknowledging Minerva and NIH support in your publications, are provided in the Minerva documentation.