Research Group of Prof. Dr. M. Griebel
Institute for Numerical Simulation

Systems in the INS Grid

Atacama

Compute Node: Dell PowerEdge M620 (Total: 78)
CPUs: Intel(R) Xeon(R) CPU E5-2650 v2 @ 2.60GHz (Total: 156)
Memory: 64 GB per node (Total: 4992 GB)
Hard Disk: 500 GB per node
Cluster Interconnect: InfiniBand 56 Gb/s (4X FDR), SX6025/SX6036 36-port 56 Gb/s switches
Operating System: Ubuntu 12.04, Kernel 3.2.0 (64-bit)
Message Passing Interface: OpenMPI
Peak Performance: 25625.6 GFlop/s
Linpack Performance: 20630 GFlop/s
Efficiency: 80 %
Installation: December 2013
Operator: Sonderforschungsbereich 1060, Institut für Numerische Simulation

Taurus

Compute Node: Supermicro X8DTT (Total: 36)
CPUs: Intel(R) Xeon(R) CPU E5620 @ 2.40GHz (Total: 144)
GPUs: NVIDIA Tesla M2090 (Total: 36)
Memory: 12 GB per node (Total: 432 GB)
Cluster Interconnect: Mellanox MT26428 InfiniBand, SX6025/SX6036 36-port 56 Gb/s switches
Operating System: Ubuntu 12.04, Kernel 3.2.0 (64-bit)
Message Passing Interface: OpenMPI
Installation: October 2011
Operator: Fraunhofer SCAI, Institut für Numerische Simulation
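
As a quick consistency check, the aggregate figures quoted in the tables above can be reproduced from the per-node values (a minimal sketch; the variable names are illustrative, and the 80 % figure is assumed to be the Linpack-to-peak ratio truncated to a whole percent):

```python
# Per-node figures taken directly from the spec tables above.
atacama_nodes = 78
atacama_mem_per_node_gb = 64
atacama_peak_gflops = 25625.6
atacama_linpack_gflops = 20630.0

taurus_nodes = 36
taurus_mem_per_node_gb = 12

# Aggregate memory: nodes x per-node memory.
atacama_mem_total_gb = atacama_nodes * atacama_mem_per_node_gb  # 4992 GB, as listed
taurus_mem_total_gb = taurus_nodes * taurus_mem_per_node_gb     # 432 GB, as listed

# Linpack efficiency: achieved / theoretical peak, truncated to whole percent.
efficiency_pct = int(atacama_linpack_gflops / atacama_peak_gflops * 100)

print(atacama_mem_total_gb, taurus_mem_total_gb, efficiency_pct)  # 4992 432 80
```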