Hardware and Technical Specs
- The primary asset for Scientific Computing is the supercomputer, Minerva.
- Minerva entered service in 2012 and has been upgraded several times, most recently in Nov. 2024.
- Minerva comprises:
  - 24,912 Intel Xeon Platinum compute cores spanning several generations (8568Y+ at 2.3 GHz, 8358 at 2.6 GHz, and 8268 at 2.9 GHz), with 96, 64, or 48 cores per node and two sockets per node;
  - 1.5 TB of memory per node;
  - 196 H100, 32 L40S, 40 A100, and 48 V100 GPUs;
  - 440 TB of total system memory;
  - 32 PB of spinning storage accessed via IBM Spectrum Scale / General Parallel File System (GPFS);
  - for a combined total of more than 2 petaflops of CPU compute power and roughly 8 petaflops of GPU compute power.
- Minerva has contributed to over 1,700 peer-reviewed publications since 2012.
- The Minerva cluster design (the number of nodes, the amount of memory per node, and the amount of disk space for storage) is driven by the research demands of its users.
The overall Minerva configuration is summarized below.

Compute Nodes
- Chimera Partition: added in Nov. 2024.
- Nodes purchased prior to 2024, integrated into the new NDR network via HDR 100 Gb/s:
  - BODE2 Partition: $2M S10 BODE2 awarded by NIH (Kovatch PI). Decommissioned on July 17th and Nov. 5th, 2024.
  - CATS Partition: $2M CATS awarded by NIH (Kovatch PI).
  - Private Nodes.
Summary
- Total system memory (compute + GPU nodes) = 440 TB
- Total number of cores (compute + GPU nodes) = 24,912 cores
- CPU peak performance of all nodes = > 2 PFLOPS
- H100 peak performance based on FP64 Tensor Cores = 12.5 PFLOPS
- Max performance from HPL LINPACK run = 7.9 PFLOPS
- Private nodes are not counted in these calculations.
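To make the CPU peak figure concrete, here is a rough recomputation from the core count and clock speeds listed above. It is a minimal sketch, not an official calculation: the 32 double-precision FLOPs per core per cycle is an assumption based on AVX-512 with two FMA units (which applies to the 8268, 8358, and 8568Y+ generations), and because the split of the 24,912 cores across the three clock speeds is not given here, the result only brackets the > 2 PFLOPS figure.

```python
# Sketch: bracket Minerva's theoretical CPU peak from the specs above.
# Assumption: 32 double-precision FLOPs/core/cycle
# (8 doubles per AVX-512 vector * 2 FMA units * 2 ops per FMA).
TOTAL_CORES = 24_912
FLOPS_PER_CORE_PER_CYCLE = 32
CLOCKS_GHZ = {"8568Y+": 2.3, "8358": 2.6, "8268": 2.9}

for model, ghz in CLOCKS_GHZ.items():
    peak_pflops = TOTAL_CORES * ghz * 1e9 * FLOPS_PER_CORE_PER_CYCLE / 1e15
    print(f"If all cores were {model} @ {ghz} GHz: {peak_pflops:.2f} PFLOPS")

# Prints roughly 1.8-2.3 PFLOPS depending on the assumed clock, so the actual
# mix of processor generations is consistent with the "> 2 PFLOPS" CPU peak.
```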
Acknowledging Mount Sinai in Your Work
Use of the S10 BODE2 and CATS partitions requires acknowledgement of NIH support in your publications. To assist, we have provided the exact wording of the acknowledgements required by NIH on the acknowledgements page.
File System Storage
For Minerva, we focused on parallel file systems because NFS and other conventional file systems simply cannot scale to the number of nodes, or provide adequate performance for the sheer number of files that genomics workloads entail. Specifically, Minerva uses IBM's General Parallel File System (GPFS), which has advantages that are especially useful for this workload, such as parallel metadata, tiered storage, and sub-block allocation. Metadata is the information about the data in the file system; flash storage is used to hold the metadata and very small files for fast access.
Currently we have one parallel file system on Minerva, Arion, which users can access at /sc/arion. The Hydra file system was retired at the end of 2020.
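For users who want to see what Arion looks like from a login or compute node, the following minimal sketch queries the mount point through the standard POSIX statvfs interface. The /sc/arion path comes from this page; the reported block size and capacity depend on the current state of the file system, so treat the output as illustrative.

```python
# Minimal sketch: inspect the Arion GPFS mount from a Minerva node.
import os

MOUNT = "/sc/arion"  # Arion mount point documented above

st = os.statvfs(MOUNT)  # standard POSIX statvfs; works on GPFS mounts

block_size_kib = st.f_bsize // 1024            # file system block size
total_pb = st.f_frsize * st.f_blocks / 1e15    # total capacity in PB (decimal)
free_pb = st.f_frsize * st.f_bavail / 1e15     # space available to non-root users

print(f"{MOUNT}: block size = {block_size_kib} KiB")
print(f"{MOUNT}: capacity = {total_pb:.1f} PB, free = {free_pb:.1f} PB")
```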
| GPFS Name | Lifetime | Storage Type | Raw PB | Usable PB |
|---|---|---|---|---|
| Arion | 2019 – | Lenovo DSS | 14 | 9.6 |
| Arion | 2019 – | Lenovo G201 flash | 0.12 | 0.12 |
| Arion | 2020 – | Lenovo DSS | 16 | 11.2 |
| Arion | 2021 – | Lenovo DSS | 16 | 11.2 |
| Total | | | 46 | 32 |
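As a quick sanity check, the totals row can be reproduced by summing the per-building-block capacities. The sketch below simply restates the table's numbers; the Total row in the table is rounded to whole petabytes.

```python
# Sketch: reproduce the storage-table totals from the rows above.
arion_blocks = [
    # (storage type, raw PB, usable PB) -- values copied from the table
    ("Lenovo DSS (2019)",        14.00,  9.60),
    ("Lenovo G201 flash (2019)",  0.12,  0.12),
    ("Lenovo DSS (2020)",        16.00, 11.20),
    ("Lenovo DSS (2021)",        16.00, 11.20),
]

raw_total = sum(raw for _name, raw, _usable in arion_blocks)
usable_total = sum(usable for _name, _raw, usable in arion_blocks)

# Prints ~46.1 PB raw and ~32.1 PB usable, matching the rounded 46 / 32 totals.
print(f"Raw total:    {raw_total:.2f} PB")
print(f"Usable total: {usable_total:.2f} PB")
```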
Supported by grant UL1TR004419 from the National Center for Advancing Translational Sciences, National Institutes of Health.