New York State’s Empire AI: Getting Started & FAQ
About Empire AI
Empire AI is a consortium of ten New York State institutions with support from New York State and private philanthropy that oversees a shared computing facility to promote responsible research and development, including the advancement of the ethical and public-interest uses of artificial intelligence (AI) and high-performance computing (HPC) technologies in New York.
Launched in April 2024 by Governor Kathy Hochul, Empire AI is a bold partnership of New York’s leading public and private universities, establishing a state-of-the-art artificial intelligence computing center, housed at SUNY’s University at Buffalo.
Who is managing the Empire AI cluster?
The Empire AI cluster is administered and managed by Empire AI, not the Minerva admin team. The Minerva team will assist Mount Sinai researchers with using the Empire AI cluster and with general questions. Please note that the Empire AI team is operating with limited support capacity during this phase.
Terms and Conditions for Using the Empire AI Cluster
PLEASE REVIEW the full terms and conditions of use of Empire AI resources, which were developed by Empire AI.
Minerva Graphics Processing Units (GPUs) or Empire AI GPUs?
- Empire AI GPUs complement the existing Minerva infrastructure with additional GPU resources for general (non-sensitive) data as needed.
- Minerva remains the only cybersecurity- and Health Insurance Portability and Accountability Act (HIPAA)-approved GPU platform for Mount Sinai Health System (MSHS) data.
- You can always use Minerva GPUs, with all of their extensive services, as needed.
- Minerva currently has a larger number of H100 GPUs (installed in November 2024) than Empire AI; see details here.
- Minerva will also install 48x B200 GPUs, as announced, which will be in production in December 2025:
- 8x NVLinked B200 GPUs per node, 192 GB of memory per GPU, for a total of 48x B200 GPUs and 9 terabytes (TB) of GPU memory
- 112 Intel Xeon Platinum 8570 2.1 GHz cores, 2 TB of memory, and 25 TB of high-speed NVMe local storage per node, for a total of 672 cores and 12 TB of memory across the servers
Frequently Asked Questions (FAQs)
The following FAQs provide essential information on accessing, using, and managing resources on the Empire AI cluster:
1. What are the technical specifications of the Empire AI cluster?
2. Can I process sensitive data on the Empire AI cluster?
3. How do I get access to the Empire AI cluster?
4. How do I connect to the cluster?
5. What type of scheduler does the Empire AI cluster use?
6. What storage is available on the cluster?
7. How do I manage permissions?
8. How do I transfer files to and from the cluster?
9. How should I acknowledge or cite Empire AI when I use it in a paper or presentation?
10. How do I open a support ticket for Empire AI?
11. Where can I find more information or additional resources about Empire AI?
12. Do I need to pay to use Empire AI?
13. Do Mount Sinai AI policies apply to the use of Empire AI?
1. What are the technical specifications of the Empire AI cluster?
Please check here.
2. Can I process sensitive data on the Empire AI cluster?
No. Empire AI is not currently appropriate for sensitive data, including HIPAA-protected data, controlled genomic data, other controlled data, or any data from any patient care process at Mount Sinai Health System (MSHS), such as MSHS clinical data (including de-identified clinical data), EHR, genetics, and transcriptomics. There is currently no cybersecurity/encryption or regulatory framework in place; however, efforts are underway to develop a HIPAA-compliant environment.
3. How do I get access to the Empire AI cluster?
All Mount Sinai PIs and users are required to sign a Data Use Agreement (DUA) first.
- To get started, Mount Sinai PIs must initiate the process by submitting the Empire AI Data Use Agreement Form (a Mount Sinai requirement).
- After the DUA is received, PIs and users will be contacted with instructions for Empire AI project onboarding.
- PIs submit the Empire AI project onboarding form with project details and users listed.
- Listed users submit the Empire AI Data Use Agreement Form and the Empire AI user onboarding form.
4. How do I connect to the cluster?
To log in to Alpha, you need to use an SSH client. The hostname is: alpha.empire-ai.org
Connect using:
ssh [YourUsername]@alpha.empire-ai.org
For more details, see here.
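If you connect frequently, you can add a host alias to the SSH configuration on your local machine. The sketch below is illustrative; the alias name `alpha` and the username are placeholders:

```
# ~/.ssh/config (on your local machine)
Host alpha
    HostName alpha.empire-ai.org
    User your_username
```

With this entry in place, `ssh alpha` is equivalent to the full command above.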
5. What type of scheduler does the Empire AI cluster use?
The Empire AI Alpha cluster uses Slurm (Simple Linux Utility for Resource Management) as its job scheduler. To learn about Slurm, see below.
Mount Sinai users must submit jobs to the mountsinai partition, which is dedicated to Sinai accounts.
- Check your account/partition:
ml slurm
sacctmgr show assoc user=$USER format='User,Account'
This shows the account name you must use with your jobs (for Sinai users, it will be mountsinai).
- In job scripts: Add the following lines in your script:
#SBATCH --partition mountsinai
#SBATCH --account mountsinai
- Requesting GPUs: On the Alpha cluster, you must request the number of GPUs needed by using the Slurm option
--gpus-per-node=X.
- CPU-only node:
Alpha has one compute node without GPUs that all users can access. To use it, set
--partition=cpu.
- Submitting a batch job:
Example:
sbatch my_job.slurm
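As an illustration, a minimal my_job.slurm for the mountsinai partition might look like the following sketch; the GPU count, time limit, and job name are placeholders to adjust for your workload:

```shell
#!/bin/bash
#SBATCH --job-name=example          # placeholder job name
#SBATCH --partition=mountsinai      # Sinai-dedicated partition
#SBATCH --account=mountsinai        # account shown by sacctmgr
#SBATCH --gpus-per-node=1           # request one GPU (adjust as needed)
#SBATCH --ntasks=1
#SBATCH --time=01:00:00             # walltime limit (placeholder)

# Report the GPU(s) allocated to this job
nvidia-smi
```

Submit it with `sbatch my_job.slurm`; Slurm prints the assigned job ID on success.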
- Interactive job: You can start an interactive job on the Alpha cluster like below.
srun --partition=mountsinai --account=mountsinai --gpus=1 --ntasks=1 --time=00:30:00 --nodes=1 --pty /bin/bash
For more details, please see here.
6. What storage is available on the cluster?
- Home (/mnt/home/[username]):
Each user has a personal home directory at /mnt/home/[username]. Use it for job scripts, software installations, and files you need to keep long term and access frequently. It is not meant for large datasets or scratch work.
- Scratch (/mnt/lustre/mountsinai/[username]):
In addition to your home directory, you also have space on the global scratch storage at /mnt/lustre/mountsinai/[username]. This is high-performance storage designed for job data, temporary files, and large datasets. Treat it as scratch space, since policies may be added to remove inactive files in the future.
- Shared Memory (/dev/shm):
Very fast in-memory storage, best for jobs with many small or frequent file operations. It counts toward your job’s memory use, so make sure to request enough memory to cover both your code and your /dev/shm usage.
For more details, see here.
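As a sketch, a job script that stages data through /dev/shm might look like the following; the dataset path and memory request are hypothetical, and the --mem value must cover both your code and the in-memory copy:

```shell
#!/bin/bash
#SBATCH --partition=mountsinai
#SBATCH --account=mountsinai
#SBATCH --gpus-per-node=1
#SBATCH --mem=64G   # must cover your code AND what you place in /dev/shm

# Stage the dataset into fast in-memory storage (hypothetical path)
cp /mnt/lustre/mountsinai/$USER/dataset.tar /dev/shm/
tar -xf /dev/shm/dataset.tar -C /dev/shm/

# ... run your workload against /dev/shm/dataset ...

# Free the memory before the job ends
rm -rf /dev/shm/dataset /dev/shm/dataset.tar
```

Cleaning up /dev/shm at the end matters because files left there continue to consume memory.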
7. How do I manage permissions?
Alpha supports Access Control Lists (ACLs) for more flexible access control than standard POSIX permissions. ACL usage differs between /mnt/home and /mnt/lustre.
- Managing ACLs on /mnt/home
Use nfs4_setfacl / nfs4_getfacl. You must use UID numbers, not usernames.
Example 1: Allow another user to view your home directory
id -u username   # get the user's UID (e.g., 1029)
nfs4_setfacl -a "A::1029:RX" $HOME
nfs4_getfacl $HOME
Example 2: Allow another user to view all contents of your home directory
id -u username   # get the user's UID (e.g., 1029)
nfs4_setfacl -R -a "A::1029:RX" $HOME
# Add entries so that new files and directories inherit RX permissions
nfs4_setfacl -a "A:fdi:1029:RX" $HOME
nfs4_setfacl -a "A:gfdi:GROUP@:RX" $HOME
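To revoke access granted this way, the matching ACL entries can be removed with the -x option of nfs4_setfacl. This is a sketch reusing the same example UID from above:

```shell
# Remove the previously added entries for UID 1029
nfs4_setfacl -x "A::1029:RX" $HOME
nfs4_setfacl -x "A:fdi:1029:RX" $HOME

# Confirm the entries are gone
nfs4_getfacl $HOME
```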
For more details, see here.
8. How do I transfer files to and from the cluster?
You can transfer files to and from the Alpha cluster using standard utilities such as SCP, SFTP, and rsync. Transfers may be initiated either from the command line or through an SSH client with a graphical interface, such as FileZilla or Cyberduck.
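For example, from your local machine you might copy data to your scratch space on Alpha like this; the file and directory names are placeholders:

```shell
# Copy a single file to your scratch space on Alpha
scp results.tar [YourUsername]@alpha.empire-ai.org:/mnt/lustre/mountsinai/[YourUsername]/

# Mirror a local directory to Alpha; -a preserves attributes, -P shows progress and allows resuming
rsync -aP ./mydata/ [YourUsername]@alpha.empire-ai.org:/mnt/lustre/mountsinai/[YourUsername]/mydata/
```

rsync is generally preferable for large or repeated transfers, since it only sends files that have changed.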
For detailed instructions, please see here.
9. How should I acknowledge or cite Empire AI when I use it in a paper or presentation?
Please check here.
10. How do I open a support ticket for Empire AI?
To open a support ticket, simply send an email to support@empireai.edu. Please include details such as your username, the problem you are experiencing, and any job IDs or error messages that will help the support team assist you. Please note that while our team is operating with limited support capacity during this phase, we’ll respond as promptly as possible.
11. Where can I find more information or additional resources about Empire AI?
- Empire AI General Info Page
- Empire AI Consortium – Overview
- Governor Hochul’s Press Release on Empire AI Expansion
12. Do I need to pay to use Empire AI?
There is no charge until December 2025. Cost recovery (for both Alpha and Beta resources) will start in December 2025; the charging rate is to be determined (TBD).
13. Do Mount Sinai AI policies apply to the use of Empire AI?
Yes. All Mount Sinai AI policies (MSHS AI Implementation and Use Policy) apply to use of Empire AI.
TOWN HALL: New GPUs/AI resources available on Empire AI Computing Center
A Town Hall on the new GPUs/AI resources available on Empire AI Computing Center (https://www.empireai.edu/) was held on October 10 at 12:00 PM (noon).
The following topics were covered:
- What is Empire AI
- Empire AI GPU Hardware Resources
- How to Access
- Discussion & Questions
Slides from this town hall are now available here and the video recording here.
