SYSTEM CONFIGURATION

Rockfish is a community-shared cluster at Johns Hopkins University, housed at the Maryland Advanced Research Computing Center (MARCC) in Baltimore. It follows the “condominium model” with three main units. The first unit is funded by an NSF Major Research Instrumentation (MRI) grant; the second consists mainly of medium-sized condos (for example, DURIP/DoD and Deans’ contribution condos); and the third is a collection of condos belonging to individual research groups. All three units are shared by all users, with no physical separation.

Rockfish has 34,128 cores (711 nodes), a combined theoretical peak performance of 3.3 PFLOPS, and an Rmax of 2.1 PFLOPS. Rockfish has three parallel file systems (IBM GPFS) with a total of ~13PB of usable space. The cluster has Mellanox InfiniBand HDR100 connectivity (1:1.5 topology).
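The quoted theoretical peak can be sanity-checked from the core counts. A rough sketch for the original Cascade Lake compute nodes, assuming a 3 GHz clock and 32 double-precision FLOPs per cycle per core via dual AVX-512 FMA units (both figures are assumptions, not from this document; the published Rpeak uses the vendor's exact AVX frequency, so the result is only approximate):

```shell
# Back-of-the-envelope peak-FLOPS estimate for the 768 Cascade Lake nodes.
# Assumed values (not from the cluster documentation):
cores=36864          # 768 nodes x 48 cores per node
ghz=3                # assumed base clock, GHz
flops_per_cycle=32   # assumed AVX-512 DP FLOPs per cycle per core
echo "$((cores * ghz * flops_per_cycle)) GFLOPS"   # roughly 3.5 PFLOPS
```

The ~3.5 PFLOPS estimate lands near the stated 3.3 PFLOPS theoretical peak; the gap reflects the lower clock frequency CPUs actually sustain under full AVX-512 load.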

The Rockfish cluster was ranked #443 on the top500.org list (November 2023).

Compute Hardware

| Original # Nodes | Current # Nodes | Type | CPU | GPU | RAM | Storage | Total Cores |
| 386 | 768 | Compute | Intel Xeon Gold 6248R (Cascade Lake) | N/A | 192GB DDR4 2933MHz | 1TB NVMe SSD | 36,864 |
| 0 | 47 | Compute | Intel Xeon Gold 6448Y (Sapphire Rapids) | N/A | 256GB DDR5 4800MHz | 2TB NVMe SSD | 6,016 |
| 10 | 28 | Large Memory | Intel Xeon Gold 6248R (Cascade Lake) | N/A | 1.5TB DDR4 2933MHz | 1TB NVMe SSD | 1,344 |
| 10 | 18 | GPU | Intel Xeon Gold 6248R (Cascade Lake) | 4x Nvidia A100 40GB | 192GB DDR4 2933MHz | 1TB NVMe SSD | 864 |
| 0 | 6 | GPU | Intel Xeon Gold 6338 (Ice Lake) | 4x Nvidia A100 80GB | 256GB DDR4 3200MHz | 1.6TB NVMe SSD | 384 |

Totals: 406 original nodes, 867 current nodes, 45,472 cores.

Storage

| Filesystem | System Type | Total Size | Block Size | Default Quota | Files per TB | Backed Up? |
| /home/ | NVMe SSD | 20TB | 128K | 50GB | N/A | Limited |
| /scratch4/ | IBM GPFS | 1.9PB | 4MB | 10TB | 2,000K | No |
| /scratch16/ | IBM GPFS | 1.9PB | 16MB | N/A | 1,000K | No |
| /data/ | IBM GPFS | 5.1PB | 16MB | 1TB | 400K | No |
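Because /home has a 50GB default quota and the GPFS filesystems enforce per-TB file-count limits, it is worth checking usage before jobs fail on a full filesystem. A generic sketch using standard tools (the cluster may also provide a site-specific quota utility, which would be authoritative):

```shell
# Summarize space used under the home directory (compare against the 50GB quota).
du -sh "$HOME"

# Count entries at the top level of a directory -- file counts matter on
# /data and /scratch*, where per-TB file-count limits apply.
find "$HOME" -maxdepth 1 | wc -l
```

On the GPFS filesystems, many small files exhaust the file-count quota long before the space quota; tarring up small-file datasets avoids this.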

Partitions

| Partition | Available Nodes | Default / Max Time (Hours) | Max Cores per Node | Max Memory per Node (MB) |
| parallel | 768 | 1 / 72 | 48 | 192,000 |
| a100 | 17 | 1 / 72 | 48 | 192,000 |
| bigmem | 28 | 1 / 48 | 48 | 1,537,000 |
| v100 | 1 | 1 / 72 | 48 | 193,118 |
| ica100 | 8 | 1 / 72 | 64 | 256,000 |
| express | 5 | 1 / 8 | 128 | 256,000 |
| shared | 41 | 1 / 24 | 64 | 256,000 |
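The partition limits above map directly onto batch-scheduler directives. A minimal Slurm job script sketch, assuming the cluster uses Slurm (the partition name and limits come from the table; the job name, module, and executable are placeholders):

```shell
#!/bin/bash
#SBATCH --job-name=example          # placeholder job name
#SBATCH --partition=shared          # per the table: 64 cores, 256,000 MB max per node
#SBATCH --nodes=1
#SBATCH --ntasks=4
#SBATCH --time=24:00:00             # "shared" allows up to 24 hours
#SBATCH --mem=8000                  # memory in MB, within the partition limit

# The module name and program below are placeholders, not site specifics.
module load gcc
srun ./my_program
```

Requesting time and memory below the partition maxima generally improves queue placement, since the scheduler can backfill smaller jobs into idle gaps.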