High Performance Computing

Partnering with scientists to accelerate scientific discovery

The Minerva supercomputer is maintained by the High Performance Computing (HPC) group. Launched in 2012 and upgraded several times since (most recently in November 2024), Minerva delivers over 11 petaflops of computing power. It comprises 24,912 Intel Xeon Platinum computing cores across several processor generations (2.3 GHz, 2.6 GHz, and 2.9 GHz; 48, 64, or 96 cores per node, with two sockets per node) and up to 1.5 terabytes (TB) of memory per node; 356 graphical processing units (GPUs), including 236 Nvidia H100s, 32 Nvidia L40S, 40 Nvidia A100s, and 48 Nvidia V100s; 440 TB of total memory; and 32 petabytes of spinning storage accessed via IBM’s Spectrum Scale/General Parallel File System (GPFS). Minerva has contributed to over 1,900 peer-reviewed publications since 2012. More details here.

Announcements

The 2025 charge rate for the Minerva High Performance Computing (HPC) service is $155/TiB/year, effective September 1, 2025. The new rate will be reflected in the December quarterly charges. For questions, please reach out to ranjini.kottaiyan@mssm.edu

Scientific Computing and Data (SCD) has received the Notice of Award for the $2M AI Mount Sinai (AIMS) supercomputer, which will provide state-of-the-art GPU capability and capacity!

The final AIMS machine will include 48 NVIDIA B200 GPUs in six DGX compute nodes:

  • 8 NVLinked B200 GPUs per node with 192 GB of memory per GPU, for a total of 48 B200 GPUs and 9 TB of GPU memory
  • 112 Intel Xeon Platinum 8570 2.1 GHz cores, 2 TB of memory, and 25 TB of high-speed NVMe local storage per node, for a total of 672 cores and 12 TB of host memory across the servers
  • The B200 introduces the new FP4 (4-bit floating point) format, enabling AIMS to provide nearly an exaflop of FP4 performance for AI inference, as shown in the NVIDIA DGX B200 datasheet (see the worked estimate after this list)
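
As a rough check of that exaflop figure, here is a minimal sketch; the only assumption is the DGX B200 datasheet value of 144 petaFLOPS of FP4 AI inference per 8-GPU node:

```python
# Back-of-the-envelope check of the "nearly an exaflop" FP4 claim.
# Assumption: 144 petaFLOPS of FP4 inference per 8-GPU DGX B200 node,
# per the NVIDIA DGX B200 datasheet referenced above.
FP4_PFLOPS_PER_NODE = 144
NODES = 6

total_pflops = FP4_PFLOPS_PER_NODE * NODES
print(f"AIMS FP4 peak: {total_pflops} petaFLOPS (~{total_pflops / 1000:.2f} exaFLOPS)")
# -> AIMS FP4 peak: 864 petaFLOPS (~0.86 exaFLOPS)
```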

Timeline

The lead time for the B200 is ~8 weeks, and the aim is to have the machine in production in Q4 2025. Please stay tuned for further updates.

 

Thank you!!

This is the fourth S10 award for computational science and data infrastructure that SCD has received, for a total of $8M. Minerva and the other S10 awards have enabled over 1,900 publications and $100M in research, and this new machine will continue the trend!

All these awards have only been possible with your partnership, with special thanks to:

1) The PIs listed below, who provided their NIH-funded science stories to strengthen this proposal:

Aneel Aggarwal, Goran Bajic, Supinda Bunyavanich, Joseph Buxbaum, Alexander Charney, Judy Cho, John Crary, Ron Do, Gang Fang, Zahi Fayad, Marta Filizola, Ronald Hoffman, Amir Horowitz, Eimear Kenny, Paul Kenny, Seunghee Kim-Schulze, Michael Lazarus, Avi Ma’ayan, Helen Mayberg, Miriam Merad, Girish Nadkarni, Dalila Pinto, Vikas Pejaver, Panagiotis Roussos, Towfique Raj, Avner Schlessinger, Andrew Sharp, Li Shen, Yi Shi, Paul Slesinger, James Sumowski, Harm van Bakel, Daniel Wacker, Martin Walsh, Guo-Cheng Yuan, Peng Yuan, Bin Zhang, and Ming-Ming Zhou.

2) Facilities (Colin Barrett, Frank Grzanov, Carol Ann Gennaro, Svein Amundsen, Ed Vega), who will develop the environment to install, power, and cool the machine.

3) The whole HPC team (Wei Guo, Hyung Min Cho, Shamimul Hasan, Tejas Rao, Eric Rosenberg, Jielin Yu, Yiyuan Liu, Sumit Saluja, Rupan Hossain), who helped with this proposal and will support the operational needs of this machine.

4) Dr. Eric Nestler for his support and guidance.

It is a real pleasure providing the foundation for your computational and data science. SCD looks forward to continuing our partnership and leveraging this new resource!!

The Inaugural Mount SinAI Retreat 

This first annual showcase of Mount Sinai’s Artificial Intelligence (AI) capabilities and applications took place on May 19, 2025. It included an exciting day of talks and panels in the Stern Auditorium, with a special keynote address by Dean Charney on inventing the future of medicine. In addition, interactive stations in the Atrium introduced attendees to the latest AI technology, such as augmented reality goggles and cool robots!

After the talks, a poster session was held in the new James Building for AI at 3 E 101st St, where researchers answered attendees’ questions about their work and engaged in fruitful discussions. More details from this event will be forthcoming.

 

Recap of the inaugural Introduction to AIR·MS (Artificial Intelligence-Ready Mount Sinai) database training session, April 24, 2025

This one-hour training session introduced attendees to AIR·MS, a high-speed database containing Mount Sinai Data Warehouse clinical data, with memory optimizations that make it ideal for artificial intelligence applications. We provided an overview of how to use this exciting new resource for research, covering the following areas:
  • What is AIR·MS?
  • Data modalities in AIR·MS
  • Requesting access to AIR·MS
  • Using AIR·MS on Minerva HPC
  • Documentation and support

 

Minerva Upgrade

November 2024: Mount Sinai has completed a $2 million upgrade to the Minerva supercomputer, increasing total peak computational power to 11.4 petaflops (including 8.7 petaflops of 64-bit peak computational power), supported by 450 TB of total memory and a 50-petabyte parallel file system. Highlights include:

  • 146 compute nodes (14,016 cores in total)
  • 188 H100 GPUs in 47 nodes
  • 32 L40S GPUs in 4 nodes
  • Click here for more information.

 

Top 10 Users

01 January 2023 through 31 December 2024

PI | Department | Total Hours
Roussos, Panagiotis | Psychiatry | 27,405,757
Buxbaum, Joseph | Psychiatry | 22,208,338
Raj, Towfique | Neuroscience | 21,025,036
Sharp, Andrew | Genetics and Genomic Sciences | 16,748,020
Pejaver, Vikas | Institute for Genomic Health | 15,351,189
Zhang, Bin | Genetics and Genomic Sciences | 14,664,897
Charney, Alexander | Genetics and Genomic Sciences | 13,596,314
Goate, Alison | Genetics and Genomic Sciences | 12,765,089
Kenny, Eimear | Institute for Genomic Health | 6,853,099
Schlessinger, Avner | Pharmacology | 5,716,077


Minerva High Performance Computer

Leverage the compute power of Minerva to advance your science

Technical Specifications

Over 11 petaflops of compute power, 440 TB of random access memory (RAM), 32 petabytes of spinning storage, and over 24,000 cores. See more.

Chimera Partition

  • 4 login nodes – Intel Xeon Platinum 8168 24C, 2.7 GHz – 384 GB memory
  • 275 compute nodes* – Intel 8168 24C, 2.7 GHz – 192 GB memory
    • 13,152 cores (48 cores per node, 2 sockets per node)
  • 37 high-memory nodes – Intel 8168/8268 24C, 2.7 GHz/2.9 GHz – 1.5 TB memory
  • 48 V100 GPUs in 12 nodes – Intel 6142 16C, 2.6 GHz – 384 GB memory – 4x V100-16 GB GPUs
  • 32 A100 GPUs in 8 nodes – Intel 8268 24C, 2.9 GHz – 384 GB memory – 4x A100-40 GB GPUs
    • 1.92 TB SSD (1.8 TB usable) per node
  • 10 gateway nodes
  • New NFS storage (for users’ home directories) – 192 TB raw / 160 TB usable, RAID6
  • Mellanox EDR InfiniBand fat-tree fabric (100 Gb/s)

BODE2 Partition (Decommissioned)

(Note: this partition was decommissioned in 2024.)

$2M S10 BODE2 awarded by NIH (Grant PI: Patricia Kovatch)

  • 3,744 cores (48-core 2.9 GHz Intel Cascade Lake 8268 processors) in 78 nodes
  • 192 GB of memory per node
  • 240 GB of SSD storage per node
  • 15 TB total memory
  • Before decommissioning, this partition was open to all NIH-funded projects

CATS Partition

$2M CATS awarded by NIH (Grant PI: Patricia Kovatch)

  • 3,520 cores (64-core 2.6 GHz Intel Ice Lake processors) in 55 nodes
  • 1.5 TB of memory per node
  • 82.5 TB of memory collectively
  • This partition is open to eligible NIH-funded projects (a quick check of the partition totals follows this list)
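
The BODE2 and CATS headline numbers follow directly from the per-node figures; a minimal sketch of the arithmetic:

```python
# Verify the quoted partition totals from the per-node specifications.
partitions = {
    #        (nodes, cores per node, memory per node in TB)
    "BODE2": (78, 48, 0.192),  # 192 GB/node; decommissioned in 2024
    "CATS":  (55, 64, 1.5),    # 1.5 TB/node
}

for name, (nodes, cores, mem_tb) in partitions.items():
    print(f"{name}: {nodes * cores:,} cores, {nodes * mem_tb:.1f} TB total memory")
# BODE2: 3,744 cores, 15.0 TB total memory
# CATS: 3,520 cores, 82.5 TB total memory
```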

Account Request

All Minerva users, including external collaborators, must have an account to access the system. See more.

Mount Sinai User

Request a Minerva User Account. You’ll need your Sinai Username, PI name, and Department.

External Collaborators

Request an External Collaborator User Account. PIs can request an account for non-Mount Sinai users.

Group Collaborator

Request a Group Collaboration. Collaboration accounts for group-related activities require PI approval.

Project Allocation

Request a Project Allocation. Request an allocation on Minerva for a new or existing project.

Connect to Minerva

Minerva uses the Secure Shell (SSH) protocol with two-factor authentication, and is HIPAA compliant. See more.
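
For illustration, a minimal sketch of opening an interactive session from Python; the login hostname shown is an assumption, so use the address given in the Quick Start Guide below:

```python
# Hypothetical example: open an interactive SSH session to Minerva.
# "minerva.hpc.mssm.edu" is an assumed hostname; two-factor prompts
# (password plus token) are handled interactively by ssh itself.
import subprocess

def connect(username: str, host: str = "minerva.hpc.mssm.edu") -> None:
    # Run ssh in the foreground so authentication prompts reach the user.
    subprocess.run(["ssh", f"{username}@{host}"], check=True)

if __name__ == "__main__":
    connect("your_sinai_username")  # replace with your Sinai username
```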

Quick Start Guide

Connect to Minerva from on-site or off-site using Unix or Windows. See more by clicking here.

Acceptable Use Policy

When using resources at Icahn School of Medicine at Mount Sinai, all users agree to abide by specified user responsibilities. Click here to see more.

Usage Fee Policy

Please refer to our comprehensive fee schedule based on the resources used. See more.

  • The 2024 charge rate is $119/terabyte/year, calculated monthly at a rate of $9.92/terabyte/month (see the sketch after this list)
  • Charges are determined yearly by the Mount Sinai Compliance and Finance Departments and cover all Minerva services, i.e., CPU and GPU utilization, the storage itself, archive storage, etc.
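
The monthly figure is simply the annual rate divided by twelve. A minimal sketch of the proration, using the rates quoted in this document (rounding and billing granularity are assumptions; the official fee schedule governs):

```python
# Illustrative storage-charge arithmetic using rates quoted in this document.
# Assumption: the monthly charge is a simple annual-rate / 12 proration.
def monthly_rate(annual_rate_per_tb: float) -> float:
    return annual_rate_per_tb / 12

def monthly_charge(tb_stored: float, annual_rate_per_tb: float) -> float:
    return tb_stored * monthly_rate(annual_rate_per_tb)

print(f"2024 rate: ${monthly_rate(119):.2f}/TB/mo")   # -> $9.92/TB/mo, as quoted
print(f"2025 rate: ${monthly_rate(155):.2f}/TiB/mo")  # -> $12.92/TiB/mo
print(f"10 TB for one month at the 2024 rate: ${monthly_charge(10, 119):.2f}")
```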

We are HIPAA Compliant

All users are required to read the HIPAA policy and complete the Minerva HIPAA compliance form on an annual basis. Click here to read more about HIPAA compliance.

Research Data

Utilize existing data, or supplement your research with additional data from the Mount Sinai Health System.

Mount Sinai Data Warehouse

The Mount Sinai Data Warehouse (MSDW) collects clinical and operational data for use in clinical and translational research, as well as quality and improvement initiatives. MSDW provides researchers access to data on patients in the Mount Sinai Health System, drawing from over 11 million patients with an encounter in the Epic EHR.

More about MSDW

Data Ark: Data Commons

The Data Ark: Mount Sinai Data Commons is hosted on Minerva. Its collection of restricted and unrestricted data sets continues to grow in number, type, and diversity. Rapidly access high-quality data to increase your sample size; our diverse patient population is ideal for testing the generalizability of your results.

More about Data Ark

Acknowledge Mount Sinai in Your Work

Use of the S10-funded BODE and CATS partitions requires acknowledgment of NIH support in your publications. To assist, we have provided the exact wording required by NIH for your use.

Supported by grant UL1TR004419 from the National Center for Advancing Translational Sciences, National Institutes of Health.

Need assistance?