High Performance Computing
Partnering with scientists to accelerate scientific discovery
The High Performance Computing resource Minerva has over 2 petaflops of compute power. Created in 2012 and upgraded several times since, most recently in January 2022, Minerva comprises 24,214 Intel Xeon Platinum compute cores across three processor generations (8358 at 2.6 GHz, 8268 at 2.9 GHz, and 8168 at 2.7 GHz; 48 or 64 cores per node, with two sockets in each node); 92 nodes with 1.5 TB of memory each; 353 nodes with 192 GB of memory each; 48 V100 GPUs; 40 A100 GPUs; 210 TB of total memory; 350 TB of solid-state storage; and 32 PB of spinning storage accessed via IBM Spectrum Scale (formerly General Parallel File System, GPFS). Minerva has contributed to over 1,400 peer-reviewed publications in ten years. More details here.
Announcements
October 9, 2024 – A full Minerva preventative maintenance (PM) is scheduled for MAJOR infrastructure upgrades from 8 AM Tuesday, Nov. 5 to 5 PM Thursday, Nov. 7. This is a site-wide full PM: during this window there will be no access to any part of Minerva, and no jobs will run.
September 2024 – Minerva Fall Training Sessions are now scheduled. Click here to register.
May 2024 – As of July 1, 2024, the Minerva chargeback rate will be $119/TiB. The fee will be reflected in the September 2024 charge. For any questions, please reach out to ranjini.kottaiyan@mssm.edu.
March 2024 – Minerva Spring Training sessions were held. Click here for access to the training materials.
March 4, 2024 – The Spring 2024 Minerva HPC and Data Ark Town Hall was held Tuesday, April 16, 3–4 pm. The slides and presentation recording are now available.
December 21, 2023 – The HPC team will be on holiday schedule from Thursday, December 21, 2023 through the end of the year. During this period, responses to tickets (for both Minerva and Data Ark) will be slower and more limited, as only one HPC staff member will be on duty each day to address urgent tickets.
Thanks for your understanding, and we wish you a good holiday!
August 2, 2023 – Scientific Computing and Data has made the NSF-supported Open OnDemand service portal available for accessing Minerva. This product offers a fully compliant job management and desktop portal requiring minimal knowledge of Linux high-performance computing (HPC) environments, with no end-user installation requirements other than an up-to-date web browser (Chrome or Firefox recommended). The service portal is accessed at https://ondemand.hpc.mssm.edu
Documentation is available here.
April 11, 2023 – The Minerva TSM Archival Storage LTO-5 tape solution will be out of support on 12/31/2023. If you have data archived or backed up to LTO-5 tapes (i.e., prior to 05/10/2022), you will need to retrieve and re-archive any data that you still wish to retain. See the TSM Guide for how to retrieve and archive. For questions, please reach out to hpchelp@hpc.mssm.edu
March 8, 2023 – A computational and data science ecosystem is available for oncology researchers to gain insights that further their research. HPC resources, EHR cohort query tools, data resources, and electronic data capture systems are available here. Expertise is available to support researchers with tools, data access, and consultations.
March 3, 2023 – Thank you to everyone who participated in the Annual User Surveys. To see Minerva HPC’s feedback, click here. To see Data Ark’s feedback, click here.
Top 10 Users
09 June through 15 June 2024
PI | Department | Total Hours |
Raj, Towfique | Neuroscience | 592,330 |
Reva, Boris | Genetics and Genomic Sciences | 172,574 |
Pejaver, Vikas | Institute for Genomic Health | 168,582 |
Charney, Alexander | Genetics and Genomic Sciences | 139,770 |
Faith, Jeremiah | Genetics and Genomic Sciences | 136,461 |
Buxbaum, Joseph | Psychiatry | 136,034 |
Zhang, Bin | Genetics and Genomic Sciences | 96,569 |
Kenny, Eimear | Institute for Genomic Health | 91,229 |
Roussos, Panagiotis | Psychiatry | 61,667 |
Pinto, Dalila | Genetics and Genomic Sciences | 60,747 |
Minerva High Performance Computer
Leverage the compute power of Minerva to advance your science
Technical Specifications
Over 2 petaflops of compute power, 210 TB of total system memory, and over 24,000 cores. See more.
Chimera Partition
- 4 login nodes – Intel Xeon(R) Platinum 8168 24C, 2.7GHz – 384 GB memory
- 275 compute nodes* – Intel 8168 24C, 2.7GHz – 192 GB memory
- 13,152 cores (48 cores per node, 2 sockets per node)
- 37 high memory nodes – Intel 8168/8268 24C, 2.7 GHz/2.9 GHz – 1.5 TB memory
- 48 V100 GPUs in 12 nodes – Intel 6142 16C, 2.6 GHz – 384 GB memory – 4x V100-16 GB GPU
- 32 A100 GPUs in 8 nodes – Intel 8268 24C, 2.9 GHz – 384 GB memory – 4x A100-40 GB GPU
- 1.92 TB SSD (1.8 TB usable) per node
- 10 gateway nodes
- New NFS storage (for users' home directories) – 192 TB raw / 160 TB usable RAID6
- Mellanox EDR InfiniBand fat tree fabric (100Gb/s)
BODE2 Partition
$2M S10 BODE2 awarded by NIH (Kovatch PI)
- 3,744 compute cores: 48-core 2.9 GHz Intel Cascade Lake 8268 processors in 78 nodes
- 192 GB of memory per node
- 240 GB of SSDs per node
- 15 TB memory (collectively)
- Open to all NIH-funded projects
CATS Partition
$2M CATS awarded by NIH (Kovatch PI)
- 3,520 compute cores: 64-core 2.6 GHz Intel Ice Lake processors in 55 nodes
- 1.5 TB of memory per node
- 82.5 TB memory (collectively)
- Under installation; will be open to eligible NIH-funded projects
Account Request
All Minerva users, including external collaborators, must have an account to access the system. See more.
Mount Sinai User
Request a Minerva User Account. You’ll need your Sinai Username, PI name, and Department.
External Collaborator
Request an External Collaborator User Account. PIs can request an account for non-Mount Sinai users.
Group Collaborator
Request a Group Collaboration. Collaboration accounts for group-related activities require PI approval.
Project Allocation
Request a Project Allocation. Request an allocation on Minerva for a new or existing project.
Connect to Minerva
Minerva uses the Secure Shell (SSH) protocol and two-factor authentication. Minerva is HIPAA compliant. See more.
Quick Start Guide
Connect to Minerva from on-site or off-site, using Unix or Windows. See more.
Acceptable Use Policy
When using resources at Icahn School of Medicine at Mount Sinai, all users agree to abide by specified user responsibilities. See more.
Usage Fee Policy
Please refer to our comprehensive fee schedule based on the resources used. See more.
- The current charging rate is $100/TiB/yr, calculated monthly at a rate of $8.33/TiB/mo
- PIs receive a monthly report showing their group's usage alongside the cost
- Charges are processed automatically every two months via internal invoicing through central finance
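As a rough illustration of the monthly calculation described above, the annual rate divided over twelve months gives the monthly rate, which is then multiplied by a group's storage footprint. The function name and the 25 TiB usage figure below are hypothetical; only the rates come from the fee policy.

```python
# Sketch of the published storage chargeback math:
# $100/TiB/year, billed monthly at $8.33/TiB/month.
# (Rates from the Usage Fee Policy; the example usage figure is made up.)

ANNUAL_RATE_PER_TIB = 100.00                               # USD per TiB per year
MONTHLY_RATE_PER_TIB = round(ANNUAL_RATE_PER_TIB / 12, 2)  # 8.33 USD per TiB per month

def monthly_charge(usage_tib: float) -> float:
    """Estimated monthly storage charge (USD) for a group's usage in TiB."""
    return round(usage_tib * MONTHLY_RATE_PER_TIB, 2)

# Example: a hypothetical group storing 25 TiB for one month
print(monthly_charge(25))  # 208.25
```

Note that because billing is invoiced every two months, two monthly charges appear on each invoice.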
We are HIPAA Compliant
All users are required to read the HIPAA policy and complete the Minerva HIPAA Compliance Form on an annual basis. For 2022 users, HIPAA Compliance Forms must be completed and submitted by January 31, 2022, or accounts will be locked. Click here to read more about HIPAA Compliance.
Research Data
Utilize existing data, or supplement your research with additional data from the Mount Sinai Health System.
Mount Sinai Data Warehouse
The Mount Sinai Data Warehouse (MSDW) collects clinical and operational data for use in clinical and translational research, as well as quality and improvement initiatives. MSDW provides researchers access to data on patients in the Mount Sinai Health System, drawing from over 11 million patients with an encounter in Epic EHR.
Data Ark: Data Commons
The Data Ark: Mount Sinai Data Commons is located on Minerva. The number, type, and diversity of restricted and unrestricted data sets on the Data Ark are increasing on an ongoing basis. Rapidly access high-quality data to increase your sample size; our diverse patient population is ideal for testing the generalizability of your results.
HHEAR
The Human Health Exposure Analysis Resource (HHEAR) Data Center was established as a continuation of the CHEAR Data Center, expanding to include health outcomes at all ages. The goal is to provide approved HHEAR investigators their laboratory analysis results and incorporate them in statistical analyses of their study data.
Acknowledge Mount Sinai in Your Work
Utilizing the S10-funded BODE2 and CATS partitions requires acknowledgement of NIH support in your publications. To assist, we have provided the exact acknowledgement wording required by NIH for your use.
Supported by grant UL1TR004419 from the National Center for Advancing Translational Sciences, National Institutes of Health.