Hardware and Technical Specs

The Minerva cluster design is driven by the research demands of Minerva users, i.e., the number of nodes, the amount of memory per node, and the amount of disk space for storage.

The following diagram shows the overall Minerva configuration.

Compute nodes

Chimera partition

  • 4x login nodes – Intel Skylake 8168 24C, 2.7 GHz – 384 GB memory
  • 274 compute nodes* – Intel 8168 24C, 2.7 GHz – 192 GB memory
    • 13,152 cores total (48 cores per node, 2 sockets per node)
  • 4x high-memory nodes – Intel 8168 24C, 2.7 GHz – 1.5 TB memory
  • 48 V100 GPUs in 12 nodes – Intel 6142 16C, 2.6 GHz – 384 GB memory – 4x V100 GPUs (16 GB each) per node
  • 10x gateway nodes
  • New NFS storage (for users' home directories) – 192 TB raw / 160 TB usable, RAID6
  • Mellanox EDR InfiniBand fat-tree fabric (100 Gb/s)

Total system memory (computes + GPU + high mem) = 61 TB

Total number of cores (computes + GPU + high mem) = 14,016 cores

Peak performance (computes + GPU + high mem, CPU only) = 1.2 PFlop/s
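As a rough check, assuming every core sustains 32 double-precision FLOPs per cycle (AVX-512, two FMA units) at the 2.7 GHz base clock, which slightly overstates the 2.6 GHz CPUs in the GPU nodes:

14,016 cores × 2.7 GHz × 32 FLOPs/cycle ≈ 1.21 PFlop/s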

*Compute node: where you run your applications. Users do not have direct access to these machines; access is managed through the LSF job scheduler.
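As an illustration of scheduler-mediated access, the sketch below builds a minimal LSF batch script and submits it with bsub. The project name, queue, resource requests, and module/script names are placeholders for illustration, not Minerva defaults; consult the scheduler documentation for the values that apply to your account.

```python
# Minimal sketch of submitting a batch job through LSF (the scheduler that
# controls access to the compute nodes). The project, queue, and resource
# values below are illustrative placeholders, not site defaults.
import subprocess

job_script = """#!/bin/bash
#BSUB -P acc_myproject        # allocation/project name (placeholder)
#BSUB -q premium              # queue name (placeholder)
#BSUB -n 4                    # number of cores
#BSUB -R "rusage[mem=4000]"   # memory per core, in MB
#BSUB -W 01:00                # wall-clock limit (HH:MM)
#BSUB -o myjob.%J.out         # job output file (%J = job ID)

module load python
python my_analysis.py
"""

# bsub reads the job script from standard input, i.e. `bsub < job.lsf`.
result = subprocess.run(["bsub"], input=job_script, text=True,
                        capture_output=True)
print(result.stdout or result.stderr)
```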

BODE2 partition

BODE2 was funded by a $2M NIH S10 award (Kovatch, PI).

  • 3,744 cores in 78 nodes (Intel Cascade Lake 8268, 24C, 2.9 GHz; 48 cores per node)
  • 192 GB of memory per node
  • 240 GB of SSD per node
  • 15 TB of memory (collectively)
  • Open to all NIH-funded projects

File system storage

For Minerva, we focused on parallel file systems because NFS and other file systems simply cannot scale to the number of nodes or deliver the performance needed for the sheer number of files that genomics workloads entail. Specifically, Minerva uses IBM's General Parallel File System (GPFS) because it has features that are particularly useful for this workload, such as parallel metadata, tiered storage, and sub-block allocation. Metadata is the information about the data in the file system. The flash storage is used to hold the metadata and small files for fast access.
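Since the flash tier is sized for metadata and small files, it can be useful to know how small-file-heavy a dataset is before staging it. The sketch below is one way to survey that; the 4 KB threshold and the starting path are arbitrary examples, not GPFS parameters.

```python
# Count files under a directory tree and flag how many fall below a
# "small file" threshold. The 4 KB cutoff is an arbitrary example.
import os
import sys

SMALL_FILE_BYTES = 4 * 1024

def survey(root):
    total, small, bytes_total = 0, 0, 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                size = os.stat(path, follow_symlinks=False).st_size
            except OSError:
                continue  # skip files that vanish or are unreadable
            total += 1
            bytes_total += size
            if size <= SMALL_FILE_BYTES:
                small += 1
    return total, small, bytes_total

if __name__ == "__main__":
    root = sys.argv[1] if len(sys.argv) > 1 else "."
    total, small, bytes_total = survey(root)
    print(f"{root}: {total} files, {small} under {SMALL_FILE_BYTES} bytes, "
          f"{bytes_total / 1e9:.1f} GB total")
```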

Currently there are two file systems on Minerva, Arion and Hydra, which users can access at /sc/arion and /sc/hydra. The Hydra file system will be out of warranty by the end of 2020, after which Arion will be the only primary storage file system.

GPFS Name   Lifetime     Storage Type        Raw (PB)   Usable (PB)
Hydra       2017-2020    IBM ESS LE           4          3.5
Arion       2019-        Lenovo DSS          14          9.6
Arion       2019-        Lenovo G201 flash    0.15       0.15
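For a quick look at how full these file systems are from a login node, a minimal sketch follows. Note that shutil.disk_usage reports whole-file-system totals, not per-user or per-project quotas.

```python
# Report size and free space for the GPFS mount points listed above.
# shutil.disk_usage shows whole-file-system numbers, not per-user quotas.
import shutil

for mount in ("/sc/arion", "/sc/hydra"):
    try:
        usage = shutil.disk_usage(mount)
    except FileNotFoundError:
        print(f"{mount}: not mounted on this host")
        continue
    used_pct = 100 * usage.used / usage.total
    print(f"{mount}: {usage.total / 1e15:.2f} PB total, "
          f"{usage.free / 1e15:.2f} PB free ({used_pct:.0f}% used)")
```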