Lunarc history
• 1986 - 1988: IBM 3090 150/VF
• 1988 - 1991: IBM 3090 170 S/VF
• 1991 - 1997: Workstations, IBM RS/6000
• 1994 - 1997: IBM SP 2, 8 processors
• 1998: Origin 2000, 46 processors, R10000
• 1999: Origin 2000, 100 processors, R12000, 300 MHz
• 2000: Origin 2000, 116 processors, R12000, 300 MHz
• 2000: Beowulf cluster with 40 AMD 1.1 GHz CPUs
• 2001: 64 of the Origin 2000 processors were relocated to NSC
• 2002: A 64-processor cluster, AMD Athlon 1900+ (When Im 64)
• 2003: 128 processors added (Toto 7), Intel P4 2.53 GHz
Current hardware
• Husmodern, cluster
  – 32 nodes, 1.1 GHz AMD Athlon, 001201
• When Im 64/Toto 7, clusters
  – 65 nodes, AMD 1900+, 020408
  – 128 nodes, P4 2.53 GHz, 030218
  – File server, login nodes, etc.
• Ask, SGI Origin 2000
  – 48 nodes, R12000, 300 MHz, 12 GB
About Lunarc
• Current staff
  – 1.3 FTE
• Future administration
  – 2.5 FTE (minimum, depending on contract formulations)
Current users
• Core groups
  – Theoretical Chemistry, Physical Chemistry 2, Structural Mechanics
• Other large users
  – Fluid Mechanics, Fire Safety Engineering, Physics
• New groups
  – Inflammation Research, Biophysical Chemistry, Astronomy
Lunarc web
• User registration
• System information
• System usage
• Job submission?
Using clusters
• Log in
  – Use ssh, unix tools, etc.
• mkdir proj
• sftp/scp user@...
• vi/joe submit script
  – Submit script documentation
• Queue management
  – qsub script
• Transfer result files back
  – sftp/scp
For many, this is a straightforward process, but why do we get so many questions?
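The workflow above can be sketched as a shell session. Host names, file names, and the submit-script directives are hypothetical; a PBS-style script is assumed here only because jobs are submitted with qsub, and the exact directives depend on the queue manager actually installed on the cluster:

```shell
# Log in to the cluster front end (hostname is made up for illustration)
ssh user@cluster.example.lu.se

# Create a project directory and copy input files into it
mkdir proj
scp input.dat user@cluster.example.lu.se:proj/

# Write a submit script with vi/joe (PBS-style directives, assumed)
cat > proj/job.sh <<'EOF'
#!/bin/sh
#PBS -l nodes=4            # request 4 nodes
#PBS -l walltime=01:00:00  # one hour wall-clock limit
cd $PBS_O_WORKDIR          # run from the directory the job was submitted in
./my_program input.dat > output.dat
EOF

# Submit the script to the queue
qsub proj/job.sh

# When the job has finished, transfer result files back
scp user@cluster.example.lu.se:proj/output.dat .
```

Even though each step is a single standard command, a new user must learn ssh, scp, an editor, and the queue system's directives before the first job runs, which is where most of the support questions arise.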
Web portal for our clusters
• Good knowledge about local circumstances
• Traditional users -> clusters -> grids
• User interface
• Grid of clusters