Supernova overview

{{uwaga|This page is obsolete, the machine has been decommissioned, please go to [[Bem User Guide]]}}

<small>< [[Supernova User Guide]] < Supernova overview</small>

{{server
|nazwa=Supernova
|zdjęcie=Nova 2011 01.jpg
|admini=prace-support&nbsp;@&nbsp;wcss.pl
}}
'''Supernova''' - a computing cluster installed at [http://www.wcss.pl/english/ WCSS] (Wrocławskie Centrum Sieciowo-Superkomputerowe - Wroclaw Centre for Networking and Supercomputing). The cluster is part of the PRACE Tier-1 infrastructure and the EGI grid infrastructure (via the national PL-Grid initiative).
  
Part of the cluster is accessible through PRACE as a Tier-1 machine. The following description refers to this part.

Cluster hardware components:
* access node,
* service nodes,
* 404 computational nodes (wn329 - wn732),
* computational network - [[Infiniband]] DDR fat-tree full-cbb,
* management network - Gigabit Ethernet.
  
 
Computational resources:
* 4848 CPU cores,
* >9.5 TB RAM (2 GB/core),
* ~247 TB storage (40 TB NFS + 207 TB [[Lustre]]),
* node to node communication: throughput 20 Gbps, latency < 5 µs.
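
The aggregate figures follow directly from the per-node specification in the Summary table below; a minimal cross-check, as a plain Python sketch (illustrative only):

<source lang="python">
# PRACE-accessible part: 404 computational nodes, each with
# 2x six-core Intel Xeon X5650 and 24 GB RAM (see Summary table below).
nodes = 404
cores_per_node = 2 * 6          # two six-core CPUs per node
ram_per_node_gb = 24

print(nodes * cores_per_node)              # 4848 CPU cores
print(nodes * ram_per_node_gb / 1000.0)    # ~9.7 TB RAM total, i.e. ">9.5 TB"
print(ram_per_node_gb // cores_per_node)   # 2 GB per core
</source>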
 
 
  
 
===Summary===
{|style="vertical-align: top;"
|align="right"|cluster : ||'''Supernova'''
|-style="vertical-align: top;"
|align="right"|access node for PRACE users : ||'''prace.wcss.pl''' (from public Internet)<br/>'''prace-int.wcss.pl''' (from PRACE internal network)
|-style="vertical-align: top;"
|align="right"|disk&nbsp;space&nbsp;home/ (backup) : ||NFS 40 TB
|-style="vertical-align: top;"
|align="right"|scratch/ (no backup) : ||[[Lustre]]
|-style="vertical-align: top;"
|align="right"|operating&nbsp;system : ||Scientific Linux
|-style="vertical-align: top;"
|align="right"|peak&nbsp;performance : ||'''30 TFLOPS'''
|-style="vertical-align: top;"
|align="right"|nodes : ||'''access node''' ||'''computing nodes'''
|-style="vertical-align: top;"
|align="right"|CPU : ||Intel Xeon X3220 2.4 GHz<br/>("Kentsfield", 65 nm) ||Intel Xeon X5650 2.67 GHz<br/>("Westmere-EP", 32 nm)
|-style="vertical-align: top;"
|align="right"|number&nbsp;of&nbsp;CPUs : ||1x quad-core ||2x six-core
|-style="vertical-align: top;"
|align="right"|cache&nbsp;L2 : ||4 MB ||12 MB
|-style="vertical-align: top;"
|align="right"|RAM : ||8 GB ||24 GB
|}
  
===Software===
;Scientific applications
[[Abaqus]], [[ABINIT]], [[Accelrys]], [[Amber]], [[ANSYS]], [[ANSYS CFX]], [[ANSYS Fluent]], [[APBS]], [[AutoDock]], [[AutoDock Vina]], [[Cfour]], [[CPMD]], [[CRYSTAL09]], [[Dalton]], [[FDS-SMV]], [[GAMESS]], [[Gaussian]], [[Gromacs]], [[Hmmer]], [[LAMMPS]], [[Materials Studio]], [[Mathematica]], [[Matlab]], [[Meep]], [[MOLCAS]], [[Molden]], [[Molpro]], [[MOPAC]], [[NAMD]], [[NWChem]], [[OpenFOAM]], [[Orca]], [[R]], [[Siesta]], [[TURBOMOLE]], [[Xaim]], custom software.

;Compilers
[[GNU GCC]], [[Intel]], [[PGI]]

;Libraries and tools
* [[MVAPICH2]],
* [[MPIEXEC]],
* [[MKL]] (/opt/intel/mkl/VERSION/lib/em64t/),
* GotoBLAS2 (/usr/local/GotoBLAS2/),
* ATLAS (/usr/local/atlas/),
* HDF,
* Python + SciPy + NumPy,
* ...
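
Several BLAS implementations (MKL, GotoBLAS2, ATLAS) are installed side by side; which one an application uses depends on how it was built. As an illustrative sketch, the installed NumPy can report the BLAS/LAPACK libraries it was linked against:

<source lang="python">
import numpy

# Prints build-time configuration, including the BLAS/LAPACK
# libraries this NumPy build was linked against.
numpy.show_config()

# Small matrix product as a sanity check that the linked BLAS works.
a = numpy.ones((512, 512))
print(numpy.dot(a, a)[0, 0])   # 512.0
</source>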
 
 
;Job scheduling system
[[PBSPro]]
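
Jobs are submitted with <code>qsub</code>; PBSPro reads the <code>#PBS</code> directives at the top of the script and starts it on the first allocated node. A minimal job script sketch, here written in Python (the resource values are illustrative assumptions, not site defaults):

<source lang="python">
#!/usr/bin/env python
#PBS -N hello_supernova
#PBS -l select=1:ncpus=12
#PBS -l walltime=00:10:00

# Illustrative sketch only: select/ncpus/walltime above are assumptions,
# check local queue policies before use. Submit with: qsub job.py
import os
import socket

print("running on %s" % socket.gethostname())
print("job id: %s" % os.environ.get("PBS_JOBID", "interactive"))
</source>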
 
 
===RSA key fingerprint===
d5:85:f7:5a:92:9b:72:7d:d3:74:67:ab:e4:46:28:e9
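
On first connection, ssh shows the host key fingerprint for comparison with the value above. The same MD5-style fingerprint can be recomputed from a saved copy of the host public key; a minimal sketch, where the file name is a hypothetical local copy:

<source lang="python">
import base64
import hashlib

# Reads an OpenSSH public key ("<type> <base64-blob> [comment]") and
# prints the colon-separated MD5 fingerprint of the decoded key blob.
def md5_fingerprint(pubkey_line):
    blob = base64.b64decode(pubkey_line.split()[1])
    digest = hashlib.md5(blob).hexdigest()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

with open("ssh_host_rsa_key.pub") as f:   # hypothetical local copy of the key
    print(md5_fingerprint(f.read()))
</source>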
<small>< [[Supernova User Guide]] < Supernova overview</small>
 
[[Category:User Guide]]
 
