Supernova overview

Helpdesk: prace-support @ wcss.pl

'''Supernova''' - computing cluster installed at [http://www.wcss.pl/english/ WCNS]. The cluster is part of the PRACE Tier-1 infrastructure and the EGI grid infrastructure (via the national PL-Grid initiative).
  
 
Part of the cluster is accessible through PRACE as a Tier-1 machine; the following description refers to this part.

Cluster hardware components:
* access node,
* service nodes,
* 404 computational nodes (wn329 - wn732),
* computational network - [[Infiniband]] DDR fat-tree full-cbb,
* management network - gigabit ethernet.

Computational resources:
* 4848 CPU cores,
* >9.5 TB RAM (2 GB/core; a quick check of these figures follows this list),
* ~247 TB storage (40 TB NFS + 207 TB [[Lustre]]),
* node to node communication: throughput 20 Gbps, latency < 5 µs (a measurement sketch is given at the end of the page).
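
As a rough consistency check (an illustrative calculation, not part of the original description), the core and RAM totals above follow directly from the per-node values listed in the Summary table below:

<syntaxhighlight lang="python">
# Illustrative check of the core and RAM totals quoted above; the per-node
# values are taken from the Summary table below.
nodes = 404              # computational nodes wn329 - wn732
cores_per_node = 12      # 2x six-core Xeon X5650 per computational node
ram_per_node_gb = 24     # GB of RAM per computational node

total_cores = nodes * cores_per_node          # 4848 cores
total_ram_gb = nodes * ram_per_node_gb        # 9696 GB, i.e. roughly 9.7 TB
ram_per_core_gb = total_ram_gb / total_cores  # 2.0 GB/core

print(total_cores, total_ram_gb, ram_per_core_gb)
</syntaxhighlight>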
  
 
===Summary===

{| class="wikitable"
|-
! cluster
| colspan="2" | Supernova
|-
! access node
| colspan="2" | prace.wcss.pl
|-
! disk space
| colspan="2" | home: NFS 40 TB, scratch: Lustre
|-
! operating system
| colspan="2" | Scientific Linux
|-
! peak performance
| colspan="2" | 30 TFLOPS
|-
! nodes
! access node
! computing nodes
|-
! CPU
| Intel Xeon X3220 2.4 GHz ("Kentsfield", 65 nm)
| Intel Xeon X5650 2.67 GHz ("Westmere-EP", 32 nm)
|-
! number of CPUs
| 1x quad-core
| 2x six-core
|-
! cache L2
| 4 MB
| 12 MB
|-
! RAM
| 8 GB
| 24 GB
|}
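
The node to node communication figures quoted above (throughput 20 Gbps, latency < 5 µs) are the kind of numbers a simple MPI ping-pong test reports. Below is a minimal sketch of such a test, assuming mpi4py and NumPy are available on the cluster and that the two ranks are placed on different nodes; it is an illustration, not part of the original page.

<syntaxhighlight lang="python">
# Minimal MPI ping-pong sketch (illustrative). Run with two ranks placed on
# two different computational nodes, e.g.: mpiexec -n 2 python pingpong.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# A few bytes probe latency; a 1 MiB payload probes achievable throughput.
for size in (8, 1 << 20):
    buf = np.zeros(size, dtype=np.uint8)
    reps = 1000
    comm.Barrier()
    t0 = MPI.Wtime()
    for _ in range(reps):
        if rank == 0:
            comm.Send(buf, dest=1, tag=0)
            comm.Recv(buf, source=1, tag=0)
        else:
            comm.Recv(buf, source=0, tag=0)
            comm.Send(buf, dest=0, tag=0)
    one_way = (MPI.Wtime() - t0) / (2 * reps)   # seconds per one-way transfer
    if rank == 0:
        print(f"{size} B: {one_way * 1e6:.2f} us one-way, "
              f"{size * 8 / one_way / 1e9:.2f} Gbps")
</syntaxhighlight>

The small-message time approximates the quoted latency, while the 1 MiB transfer approximates the throughput achievable over the DDR Infiniband fabric.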