Beginning in 2016, the Campus Cluster moved away from an instance-based production model to a continuous deployment model that allows us to add and retire hardware as needed. To see current-generation node pricing, visit Buy Compute.
- Golub Cluster (production date September 2013)
- Taub Cluster (production date May 2011)
Golub Cluster
Production Date: September 3, 2013
Timelapse video of the cluster installation
The infrastructure of the Golub cluster is designed to support up to 512 nodes, with FDR InfiniBand for application communication and data transport and a Gigabit Ethernet control network. The disk system was selected to support expandability and the GPFS file system.
The details for the hardware components are listed below.
Login Nodes
- (4) Dell PowerEdge R720 login nodes each configured with:
- (2) Intel E5-2660 2.2 GHz 8-core processors, 95 W
- 128 GB RAM via (16) 8 GB 1333 MT/s RDIMMs
- (2) 300 GB 6G SAS 10K 2.5″ HDD
- Mellanox ConnectX-3 FDR IB HCA
- (2) NVIDIA TESLA M2090 GPUs
Compute Nodes (Current): Maximum Count
- (44) Dell PowerEdge C8000 4U chassis each with:
- (2) 1400 W power supply units
- (6) 120 mm high-efficiency fans with PWM control
- (312) Compute nodes—Dell C8220 compute sleds each with:
- (2) Intel E5-2670 v2 (Ivy Bridge) 2.50 GHz, 25 MB cache, 10-core, 115 W
- 64/128/256 GB RAM at customer’s choice
- (2) 1 TB, 7200 RPM, SATA, 3 Gbps, 2.5″ HDD
- (4) Intel Ethernet controller i350
- Compute Node Options:
- Mellanox ConnectX-3 FDR IB HCA
- (2) NVIDIA TESLA K40 GPUs
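The GPU option above adds two Tesla K40 accelerators to a compute sled. As an informal illustration only (not part of the cluster's documented tooling), the short CUDA sketch below uses the standard runtime API to list the GPUs visible on a node and report their name, memory, and compute capability, which is one way to confirm the hardware after landing on a GPU sled.

```cuda
// devquery.cu -- illustrative sketch only; assumes a standard CUDA toolkit.
// Build with: nvcc devquery.cu -o devquery
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess) {
        std::printf("cudaGetDeviceCount failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    // A GPU compute sled as configured above should report two devices.
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        std::printf("Device %d: %s, %.1f GB, compute capability %d.%d\n",
                    i, prop.name,
                    prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0),
                    prop.major, prop.minor);
    }
    return 0;
}
```

A Tesla K40, for example, should report roughly 12 GB of global memory and compute capability 3.5.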
Compute Nodes (Original): Maximum Count
- (28) Dell PowerEdge C8000 4U chassis each with:
- (2) 1400 W power supply units
- (6) 120 mm high-efficiency fans with PWM control
- (200) Compute nodes—Dell C8220 compute sleds each with:
- (2) Intel E5-2670 (Sandy Bridge) 2.60 GHz, 20 MB cache, 8-core, 115 W
- 32/64/128 GB RAM at customer’s choice
- (2) 1 TB, 7200 RPM, SATA, 3 Gbps, 2.5″ HDD
- (4) Intel Ethernet Controller i350
- Compute Node Options:
- Mellanox ConnectX-3 FDR IB HCA
- (2) NVIDIA TESLA M2090 GPUs
Network Infrastructure
- High-speed InfiniBand cluster interconnect
- Mellanox MSX6518-NR FDR InfiniBand switch (384-port capable)
- Management and IPMI control networks
- (2) Dell PowerConnect 8064F 48-port 10 GigE switches
- (41) Dell PowerConnect 5548 48-port 1 GigE switches
- (2) Dell PowerConnect 2848 48-port 1 GigE switches
Rack Infrastructure
- (9) NetShelter SX 48U 750 mm wide, 1200 mm deep, model AR3357 racks for compute nodes, including (3) high-density PDUs with IEC-309 60 A plugs
- (1) NetShelter SX 48U 750 mm wide, 1200 mm deep, model AR3357 rack for VM hosting, fast data transfer, and master nodes, including (2) high-density PDUs with IEC-309 60 A plugs
- (1) DDN 50U rack for the storage subsystem, including (6) high-density PDUs with IEC-309 60 A plugs
Support
- Basic hardware services: Business hours (5×10) next business day on-site hardware warranty repair
- Dell hardware limited warranty plus on-site service
- 24×7 pager support and cross-shipment repair replacements for DDN equipment
- Silver technical support for Mellanox IB fabric
- 4-year, next-day support on all hardware
Storage Infrastructure
- (1) DDN SFA12K40D-56IB Couplet with 5 enclosures
- (60) 3 TB 7200 RPM 2.5″ SATA HDD expandable to 600 HDD
- (4) Dell PowerEdge R720 GPFS NSD nodes each configured with:
- (2) Intel E5-2665 2.4 GHz 8-core processors, 115 W
- 256 GB RAM via (16) 16 GB 1333 MT/s RDIMMs
- Mellanox ConnectX-3 dual-port FDR IB HCA
- Intel X520 DP 10 Gbps DA/SFP+ server adapter
- (4) 300 GB 15K RPM 6 Gbps SAS HDD
Taub Cluster (Retired From Service)
Production Date: May 1, 2011
Retirement Date: July 31, 2017
The infrastructure of the Taub cluster, representative of future cluster instances, was designed to support up to 512 nodes, with QDR InfiniBand for application communication and data transport and a Gigabit Ethernet control network. The disk system was selected to support expandability and the GPFS file system.
The OS was Scientific Linux 6.1 (Linux kernel 2.6.32).
Admin Nodes
Compute Nodes: Maximum Count
Network Infrastructure
Rack Infrastructure
Support
Storage Infrastructure