{"id":177,"date":"2019-11-06T14:52:08","date_gmt":"2019-11-06T14:52:08","guid":{"rendered":"http:\/\/labs.icahn.mssm.edu\/minervalab\/?page_id=177"},"modified":"2025-07-07T13:21:33","modified_gmt":"2025-07-07T17:21:33","slug":"hardware-technical-specs","status":"publish","type":"page","link":"https:\/\/labs.icahn.mssm.edu\/minervalab\/hardware-technical-specs\/","title":{"rendered":"Hardware and Technical Specs"},"content":{"rendered":"<p>[et_pb_section fb_built=&#8221;1&#8243; fullwidth=&#8221;on&#8221; _builder_version=&#8221;4.9.0&#8243; _module_preset=&#8221;default&#8221;][et_pb_fullwidth_menu menu_id=&#8221;15&#8243; menu_style=&#8221;centered&#8221; fullwidth_menu=&#8221;on&#8221; active_link_color=&#8221;#d80b8c&#8221; dropdown_menu_bg_color=&#8221;#221f72&#8243; dropdown_menu_line_color=&#8221;#221f72&#8243; dropdown_menu_active_link_color=&#8221;#d80b8c&#8221; _builder_version=&#8221;4.9.0&#8243; _module_preset=&#8221;default&#8221; menu_font=&#8221;|600|||||||&#8221; menu_text_color=&#8221;#FFFFFF&#8221; menu_font_size=&#8221;16px&#8221; background_color=&#8221;#221f72&#8243; background_layout=&#8221;dark&#8221; sticky_position=&#8221;top&#8221;][\/et_pb_fullwidth_menu][\/et_pb_section][et_pb_section fb_built=&#8221;1&#8243; _builder_version=&#8221;4.9.0&#8243; _module_preset=&#8221;default&#8221; custom_padding=&#8221;0px||0px||false|false&#8221;][et_pb_row _builder_version=&#8221;4.9.0&#8243; _module_preset=&#8221;default&#8221; custom_padding=&#8221;||0px||false|false&#8221;][et_pb_column type=&#8221;4_4&#8243; _builder_version=&#8221;4.9.0&#8243; _module_preset=&#8221;default&#8221;][et_pb_text admin_label=&#8221;Text&#8221; _builder_version=&#8221;4.9.0&#8243; _module_preset=&#8221;default&#8221; hover_enabled=&#8221;0&#8243; sticky_enabled=&#8221;0&#8243;]<\/p>\n<p><a href=\"https:\/\/labs.icahn.mssm.edu\/minervalab\/scientific-computing-and-data\/\">Scientific Computing and Data<\/a>\u00a0\/\u00a0<a href=\"https:\/\/labs.icahn.mssm.edu\/minervalab\/\">High Performance Computing<\/a> \/ Hardware and Technical Specs<\/p>\n<p>[\/et_pb_text][\/et_pb_column][\/et_pb_row][\/et_pb_section][et_pb_section fb_built=&#8221;1&#8243; _builder_version=&#8221;3.22&#8243;][et_pb_row _builder_version=&#8221;3.25&#8243; background_size=&#8221;initial&#8221; background_position=&#8221;top_left&#8221; background_repeat=&#8221;repeat&#8221;][et_pb_column type=&#8221;4_4&#8243; _builder_version=&#8221;3.25&#8243; custom_padding=&#8221;|||&#8221; custom_padding__hover=&#8221;|||&#8221;][et_pb_text admin_label=&#8221;Hardware and Specs&#8221; _builder_version=&#8221;4.9.0&#8243; header_font=&#8221;|600|||||||&#8221; header_text_color=&#8221;#221f72&#8243; header_2_text_color=&#8221;#221f72&#8243; header_2_font_size=&#8221;24px&#8221; background_size=&#8221;initial&#8221; background_position=&#8221;top_left&#8221; background_repeat=&#8221;repeat&#8221; hover_enabled=&#8221;0&#8243; sticky_enabled=&#8221;0&#8243;]<\/p>\n<h1>Hardware and Technical Specs<\/h1>\n<p>&nbsp;<\/p>\n<p>The Minerva supercomputer is maintained by High Performance Computing (HPC). Minerva was created in 2012 and has been upgraded several times (most recently in Nov. 2024) and has over 11 petaflops of compute power. 
It comprises 24,912 Intel Xeon Platinum compute cores spanning several processor generations (2.3 GHz, 2.6 GHz, and 2.9 GHz parts, with 48, 64, or 96 cores per node and two sockets per node) and up to 1.5 terabytes (TB) of memory per node; 356 graphics processing units (GPUs), including 236 NVIDIA H100, 32 NVIDIA L40S, 40 NVIDIA A100, and 48 NVIDIA V100 GPUs; 440 TB of total memory; and 32 petabytes (PB) of spinning storage accessed via IBM's Spectrum Scale / General Parallel File System (GPFS). Minerva has contributed to over 1,900 peer-reviewed publications since 2012. The cluster's design is driven by the research demands of Minerva users, i.e., the number of nodes, the amount of memory per node, and the amount of disk space for storage.</p>

<p>The following diagram shows the overall Minerva configuration.</p>

<p><img src="https://labs.icahn.mssm.edu/minervalab/wp-content/uploads/sites/342/2022/06/minerva-diagram-2022.png" alt="Minerva configuration diagram" width="1122" height="860" /></p>

<h2>Compute Nodes</h2>

<h4><strong>Chimera Partition</strong></h4>
<ul>
<li><strong>4 login nodes</strong> – Intel Xeon Platinum 8168, 24C, 2.7 GHz – 384 GB memory</li>
<li><strong>275 compute nodes*</strong> – Intel 8168, 24C, 2.7 GHz – 192 GB memory
<ul>
<li>13,152 cores total (48 cores per node, 2 sockets per node)</li>
</ul>
</li>
<li><strong>37 high-memory nodes</strong> – Intel 8168/8268, 24C, 2.7 GHz/2.9 GHz – 1.5 TB memory</li>
<li><strong>48 V100 GPUs in 12 nodes</strong> – Intel 6142, 16C, 2.6 GHz – 384 GB memory – 4× V100 (16 GB) GPUs per node</li>
<li><strong>32 A100 GPUs in 8 nodes</strong> – Intel 8268, 24C, 2.9 GHz – 384 GB memory – 4× A100 (40 GB) GPUs per node
<ul>
<li>1.92 TB SSD (1.8 TB usable) per node</li>
</ul>
</li>
<li><strong>8 A100 GPUs in 2 nodes</strong> – Intel 8358, 32C, 2.6 GHz – 2 TB memory – 4× A100 (80 GB) GPUs per node
<ul>
<li>7.68 TB NVMe PCIe SSD (7.0 TB usable) per node, delivering sustained read/write speeds of about 3.5 GB/s, compared with the roughly 600 MB/s ceiling of SATA SSDs</li>
<li>The A100 GPUs are interconnected via NVLink</li>
</ul>
</li>
<li>10 gateway nodes</li>
<li><strong>NFS storage</strong> (for user home directories) – 192 TB raw / 160 TB usable, RAID 6</li>
<li>Mellanox <strong>EDR InfiniBand</strong> fat-tree fabric (100 Gb/s)</li>
</ul>
<p>*<em>Compute node</em>: where your applications actually run. Users do not have direct access to these machines; access is managed through the LSF job scheduler.</p>
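<p>Because compute and GPU nodes are reachable only through LSF, work is submitted as a batch job, typically a short script of <code>#BSUB</code> directives handed to <code>bsub</code>. The sketch below is illustrative only: the queue name, allocation account, and software module are placeholders rather than Minerva-specific values, so check the scheduler documentation for the queues and accounts available to you.</p>
<pre><code>#!/bin/bash
#BSUB -J sort_bam                   # job name
#BSUB -P acc_MyProject              # allocation account to charge (placeholder)
#BSUB -q premium                    # queue name (assumed; list real queues with bqueues)
#BSUB -n 8                          # number of cores
#BSUB -R "rusage[mem=4000]"         # memory per core, in MB
#BSUB -W 02:00                      # wall-clock limit, hh:mm
#BSUB -o %J.out                     # stdout file; %J expands to the job ID
#BSUB -e %J.err                     # stderr file

# Hypothetical GPU variant: uncomment to target a GPU node
# (queue name and GPU resource string are assumptions; confirm locally).
##BSUB -q gpu
##BSUB -gpu "num=1:mode=exclusive_process"

module load samtools                # example software module (placeholder)

samtools sort -@ 8 -o sorted.bam input.bam
</code></pre>
<p>Submit the script with <code>bsub &lt; sort_bam.lsf</code> and monitor it with <code>bjobs</code>, both standard LSF commands.</p>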
<h4><strong>BODE2 Partition</strong></h4>
<p>$2M S10 BODE2 award from the NIH (Kovatch, PI).</p>
<ul>
<li>3,744 cores in 78 nodes (48 cores per node; 2.9 GHz Intel Cascade Lake 8268 processors)</li>
<li>192 GB of memory per node</li>
<li>240 GB of SSD per node</li>
<li>15 TB of memory collectively</li>
<li>Open to all NIH-funded projects</li>
</ul>
<h4><strong>CATS Partition</strong></h4>
<p>$2M CATS award from the NIH (Kovatch, PI).</p>
<ul>
<li>3,520 cores in 55 nodes (64 cores per node; 2.6 GHz Intel Ice Lake 8358 processors)</li>
<li><strong>1.5 TB</strong> of memory per node</li>
<li>82.5 TB of memory collectively</li>
<li>Open to eligible NIH-funded projects</li>
</ul>
<h4><strong>Private Nodes</strong></h4>
<p>Purchased by individual research groups and hosted on Minerva.</p>
<p>In summary:</p>
<p><strong>Total system memory</strong> (compute + GPU + high-memory nodes) = <strong>210 TB</strong></p>
<p><strong>Total number of cores</strong> (compute + GPU + high-memory nodes) = <strong>24,214 cores</strong></p>
<p><strong>Peak performance</strong> (compute + GPU + high-memory nodes, CPU only) = <strong>2 PFLOPS</strong></p>
<h2>File System Storage</h2>
<p>For Minerva, we focused on parallel file systems because NFS and other file systems simply cannot scale to the number of nodes or deliver the performance needed for the sheer number of files that genomics workloads entail. Specifically, Minerva uses IBM's General Parallel File System (GPFS) because it offers features particularly well suited to this workload, such as parallel metadata, tiered storage, and sub-block allocation. Metadata is the information about the data in the file system; flash storage holds the metadata and small files for fast access.</p>
<p>Currently there is one parallel file system on Minerva, Arion, which users can access at /sc/arion. The Hydra file system was retired at the end of 2020.</p>
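<p>To see how much space is available or in use on Arion, ordinary POSIX tools work against the GPFS mount. The commands below are a minimal sketch: the project path is hypothetical, and the fileset and device names in the optional GPFS quota command are assumptions, so substitute the values for your own group.</p>
<pre><code># Capacity and free space of the Arion mount
df -h /sc/arion

# Space used by one project directory (path is a placeholder)
du -sh /sc/arion/projects/my_project

# If the GPFS client tools are exposed to users, per-fileset quotas can also be
# listed (fileset and device names here are assumptions):
# mmlsquota -j my_fileset arion
</code></pre>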
<table>
<thead>
<tr><th>GPFS Name</th><th>Lifetime</th><th>Storage Type</th><th>Raw PB</th><th>Usable PB</th></tr>
</thead>
<tbody>
<tr><td>Arion</td><td>2019 –</td><td>Lenovo DSS</td><td>14</td><td>9.6</td></tr>
<tr><td>Arion</td><td>2019 –</td><td>Lenovo G201 flash</td><td>0.12</td><td>0.12</td></tr>
<tr><td>Arion</td><td>2020 –</td><td>Lenovo DSS</td><td>16</td><td>11.2</td></tr>
<tr><td>Arion</td><td>2021 –</td><td>Lenovo DSS</td><td>16</td><td>11.2</td></tr>
<tr><td></td><td></td><td><strong>Total</strong></td><td>46</td><td>32</td></tr>
</tbody>
</table>
<p>Supported by grant UL1TR004419 from the National Center for Advancing Translational Sciences, National Institutes of Health.</p>
<h2>Acknowledging Mount Sinai in Your Work</h2>
<p>Use of the S10-funded BODE2 and CATS partitions requires an acknowledgement of NIH support in your publications. To assist, we have provided the exact acknowledgement wording required by the NIH.
<a href=\"https:\/\/labs.icahn.mssm.edu\/minervalab\/mount-sinai-data-warehouse-msdw\/acknowledge-scientific-computing-at-mount-sinai\/\">Click here for acknowledgements<\/a>.<\/p>\n<p>[\/et_pb_text][\/et_pb_column][\/et_pb_row][\/et_pb_section]<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Scientific Computing and Data\u00a0\/\u00a0High Performance Computing \/ Hardware and Technical SpecsHardware and Technical Specs &nbsp; The Minerva supercomputer is maintained by High Performance Computing (HPC). Minerva was created in 2012 and has been upgraded several times (most recently in Nov. 2024) and has over 11 petaflops of compute power. It consists of 24,912 Intel Platinum [&hellip;]<\/p>\n","protected":false},"author":415,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"_et_pb_use_builder":"on","_et_pb_old_content":"<p>Minerva cluster design is driven by the research demand performed by Minerva users (i.e. the number of nodes, the amount of memory per node, and the amount of disk space for storage).<\/p><p>The following diagram shows the overall Minerva configuration.<\/p><p><img class=\"alignnone size-full wp-image-2161\" src=\"https:\/\/labs.icahn.mssm.edu\/minervalab\/wp-content\/uploads\/sites\/342\/2021\/08\/Minerva-Configuration-08-2021-scaled.gif\" alt=\"\" width=\"2560\" height=\"1804\" \/><\/p><h3>\u00a0<\/h3><h2><span style=\"color: #221f72;\">Compute nodes<\/span><\/h2><h4><span style=\"color: #221f72;\"><strong>Chimera partition<\/strong> <\/span><\/h4><ul><li><b>4 login nodes <\/b>\u2013 Intel Xeon(R) Platinum 8168 24C, 2.7GHz \u2013 384 GB memory<\/li><li><b>275 compute nodes* <\/b>\u2013 Intel 8168 24C, 2.7GHz \u2013 192 GB memory<ul><li>13,152 cores (48 per node (2 sockets\/node))<\/li><\/ul><\/li><li><b>37 high memory nodes<\/b> \u2013 Intel 8168\/8268 24C, 2.7GHz\/2.9GHZ \u2013 1.5 TB memory<\/li><li><b>48 V100 GPUs in 12 nodes<\/b> \u2013 Intel 6142 16C, 2.6GHz \u2013 384 GB memory \u2013 4x V100-16 GB GPU<\/li><li><strong>32 A100 GPUs in 8 nodes <\/strong>\u2013 Intel 8268 24C, 2.9GHz \u2013 384 GB memory \u2013 4x A100-40 GB GPU<ul><li>1.92TB SSD (1.8 TB usable) per node<\/li><\/ul><\/li><li>10 gateway nodes<\/li><li><b>New NFS storage<\/b> (for users home directories) \u2013 192 TB raw \/ 160 TB usable RAID6<\/li><li>Mellanox <b>EDR InfiniBand<\/b> fat tree fabric (100Gb\/s)<\/li><\/ul><p>*<em>Compute Node<\/em>\u00a0\u2014where you run your applications. Users do not have direct access to these machines. 