Research Group of Prof. Dr. M. Griebel
Institute for Numerical Simulation

Systems in the INS Grid

Himalaya

Compute Node:               Dell PowerEdge 1850 (Total: 128)
CPUs:                       Intel Xeon EM64T, 3.2 GHz (Total: 256)
Memory:                     4-6 GB per node (Total: 640 GB)
Harddisk:                   36 GB per node
Cluster Interconnect:       Myrinet XP, Myrinet Clos 256 switch
Operating System:           Ubuntu 10.04 LTS, Kernel 2.6.32 (64 bit)
Message Passing Interface:  OpenMPI
Peak Performance:           1638.4 GFlop/s
Linpack Performance:        1269 GFlop/s
Efficiency:                 77 %
Top 500 listing (June 2005): 428
Installation:               April 2005
Operator:                   Sonderforschungsbereich 611, Institut für Numerische Simulation
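The listed peak performance follows from the usual back-of-the-envelope formula peak = number of CPUs × clock rate × FLOPs per cycle. A minimal sketch for Himalaya, assuming 2 double-precision FLOPs per cycle per CPU (the SSE2 rate of these single-core EM64T Xeons), reproduces the table value:

```python
# Theoretical peak for Himalaya: 256 Xeon EM64T CPUs at 3.2 GHz.
# Assumption: 2 double-precision FLOPs per cycle per CPU (SSE2).
cpus = 256
clock_ghz = 3.2
flops_per_cycle = 2

peak_gflops = cpus * clock_ghz * flops_per_cycle
print(f"{peak_gflops} GFlop/s")  # matches the listed 1638.4 GFlop/s
```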

Eifel II

Compute Node:               Dell PowerEdge 1950/2950 (Total: 19)
CPUs:                       Intel Xeon X5355, 2.66 GHz (Total: 38)
Memory:                     12-16 GB per node (Total: 240 GB)
Harddisk:                   72 GB per node
Cluster Interconnect:       Myrinet 2000/XP, Myrinet M3-32E switch
Operating System:           Ubuntu 10.04 LTS, Kernel 2.6.32 (64 bit)
Message Passing Interface:  OpenMPI
Peak Performance:           1617.28 GFlop/s
Linpack Performance:        519.1 GFlop/s
Efficiency:                 32 %
Installation:               December 2007
Operator:                   Sonderforschungsbereich 611, Institut für Numerische Simulation

Siebengebirge

Compute Node:               Dell PowerEdge R910 (Total: 5)
CPUs:                       Intel Xeon X7560, 2.26 GHz (Total: 160)
Memory:                     512 GB per node (Total: 2560 GB)
Harddisk:                   900 GB per node
Cluster Interconnect:       Mellanox ConnectX Infiniband, Mellanox IS5025 switch
Operating System:           Ubuntu 10.04 LTS, Kernel 2.6.35 (64 bit)
Message Passing Interface:  OpenMPI
Peak Performance:           1453 GFlop/s
Linpack Performance:        1349 GFlop/s
Efficiency:                 93 %
Installation:               December 2010
Operator:                   Sonderforschungsbereich 611, Institut für Numerische Simulation