
GCC cluster

The GCC has its own 480-core cluster. The main workhorses are 10 servers, each with:

  • 48 cores
  • 256 GB RAM
  • 1 GBit management NIC
  • 10 GBit NIC for a dedicated fast IO connection to the shared storage
  • 2 PB shared GPFS file system for storage

For users

Login to the User Interface server

To submit jobs, check their status, test scripts, etc. you need to log in to the user interface server, a.k.a. cluster.gcc.rug.nl, using SSH. Please note that cluster.gcc.rug.nl is only available from within certain RUG/UMCG subnets. From outside you need a double hop. First log in to the proxy:

$> ssh [your_account]@proxy.gcc.rug.nl

followed by:

$> ssh [your_account]@cluster.gcc.rug.nl

If you are within certain subnets of the RUG/UMCG network, you can skip the login to the proxy step and login to cluster.gcc.rug.nl directly.
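If you need the double hop often, a reasonably recent OpenSSH client (5.4 or newer) can tunnel through the proxy in a single command using a ProxyCommand. This is just a convenience sketch; [your_account] is a placeholder as above:

$> ssh -o ProxyCommand="ssh -W %h:%p [your_account]@proxy.gcc.rug.nl" [your_account]@cluster.gcc.rug.nl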

Available queues

In order to quickly test jobs you are allowed to run them directly on cluster.gcc.rug.nl outside the scheduler. Please think twice before you hit enter though: if you crash cluster.gcc.rug.nl, others can no longer submit or monitor their jobs, which is pretty annoying. On the other hand it is not a disaster, as the scheduler and execution daemons run on physically different servers and hence are not affected by a crash of cluster.gcc.rug.nl.

To test how your jobs perform on an execution node and get an idea of the typical resource requirements for your analysis you should submit a few jobs to the test queues first. The test queues run on a dedicated execution node, so in case your jobs make that server run out of disk space, out of memory or do other nasty things accidentally, it will not affect the production queues and ditto nodes.

Once you've tested your job scripts and are sure they will behave nicely and perform well, you can submit jobs to the production queue named gcc. In case you happen to be part of the gaf group and need to process high priority sequenced samples for the Genome Analysis Facility, you can also use the gaf queue. Example submit commands are shown below the queue table.

Queue       Job type                    Limits
test-short  debugging                   10 minutes max. walltime per job; limited to a single test node / 48 cores
test-long   debugging                   max. 4 jobs running simultaneously per user; limited to half the test node / 24 cores
gcc         production - default prio   none
gaf         production - high prio      only available to users from the gaf group
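For example, to try a script in the test-short queue first and then submit it to the gcc production queue (myScript.sh is just a placeholder for your own job script):

$> qsub -q test-short myScript.sh
$> qsub -q gcc myScript.sh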

Useful commands

Please refer to the Torque manuals for a complete overview. Some examples:

Submitting jobs:

$> qsub -N [nameOfYourJob] -W depend=afterok:[ID of a previously submitted job] myScript.sh
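A job script is a normal shell script whose resource requirements can be declared in #PBS comment lines. The sketch below is only an example: the queue, resource values and the analysis command are placeholders that you should adapt to your own jobs.

#!/bin/bash
#PBS -N myJob
#PBS -q gcc
#PBS -l nodes=1:ppn=4
#PBS -l mem=8gb
#PBS -l walltime=02:00:00
#PBS -o myJob.out
#PBS -e myJob.err

# Torque starts the job in your home directory; change to the directory you submitted from.
cd ${PBS_O_WORKDIR}

# Placeholder for the actual analysis command.
./myAnalysis.sh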

Checking for the status of your jobs:

Default output for all users:

$> qstat

Long job names:

$> wqstat

Limit output to your own jobs:

$> wqstat -u [your_account]

Get "full" a.k.a detailed output for a specific job (you probably don't want that for all jobs....):

$> qstat -f [jobID]

Get other detailed status info for a specific job:

$> checkjob [jobID]

List jobs based on priority, i.e. who is next in the queue:

$> diagnose -p
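If the Maui client tools are in your path, showq gives a similar overview of running, idle and blocked jobs in priority order:

$> showq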

List available nodes:

$> pbsnodes
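To list only nodes that are currently down or offline (a standard pbsnodes option, handy to quickly spot trouble):

$> pbsnodes -l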

For admins

Servers

Function             DNS                 IP               Daemons           Comments
User interface node  cluster.gcc.rug.nl  195.169.22.156   - (clients only)  Login node to submit and inspect jobs. Relatively powerful machine. Users can run code outside the scheduler for debugging purposes.
scheduler VM         scheduler01         195.169.22.214   pbs_server, maui  Dedicated scheduler. No user logins if this one is currently the production scheduler.
scheduler VM         scheduler02         195.169.22.190   pbs_server, maui  Dedicated scheduler. No user logins if this one is currently the production scheduler.
Execution node       targetgcc01         192.168.211.191  pbs_mom           Dedicated test node: only the test-short and test-long queues run on this node. Crashing the test node will not affect production!
Execution node       targetgcc02         192.168.211.192  pbs_mom           Redundant production node: only the default gcc and priority gaf queues run on this node.
Execution node       targetgcc03         192.168.211.193  pbs_mom           Redundant production node: only the default gcc and priority gaf queues run on this node.
Execution node       targetgcc04         192.168.211.194  pbs_mom           Redundant production node: only the default gcc and priority gaf queues run on this node.
Execution node       targetgcc05         192.168.211.195  pbs_mom           Redundant production node: only the default gcc and priority gaf queues run on this node.
Execution node       targetgcc06         192.168.211.196  pbs_mom           Redundant production node: only the default gcc and priority gaf queues run on this node.
Execution node       targetgcc07         192.168.211.197  pbs_mom           Redundant production node: only the default gcc and priority gaf queues run on this node.
Execution node       targetgcc08         192.168.211.198  pbs_mom           Redundant production node: only the default gcc and priority gaf queues run on this node.
Execution node       targetgcc09         192.168.211.199  pbs_mom           Redundant production node: only the default gcc and priority gaf queues run on this node.
Execution node       targetgcc10         192.168.211.200  pbs_mom           Redundant production node: only the default gcc and priority gaf queues run on this node.

PBS software / flavour

The current setup uses the resource manager Torque 2.5.12 combined with the scheduler Maui 3.3.1.

Maui

Maui runs only on the schedulers; its config files are located in

/usr/local/maui/
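To inspect the configuration Maui is actually running with, the Maui admin tools on a scheduler provide showconfig, which dumps the active settings (this assumes you are logged in on the scheduler with the appropriate rights):

$> showconfig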

Torque

Torque clients are available on all servers.
Torque's pbs_server daemon runs only on the schedulers.
Torque's pbs_mom daemon runs only on the execution nodes where the real work is done.
Torque config files are installed in

/var/spool/torque/
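The live pbs_server configuration, including the queue definitions, can be inspected on a scheduler with Torque's qmgr (you may need operator or manager rights):

$> qmgr -c 'print server'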

Dual scheduler setup

Installation details
