High Throughput Computing with Condor at Notre Dame
Douglas Thain
30 April 2009
Today’s Talk
• High Level Introduction (20 min)
  – What is Condor?
  – How does it work?
  – What is it good for?
• Hands-On Tutorial (30 min)
  – Finding Resources
  – Submitting Jobs
  – Managing Jobs
  – Ideas for Scaling Up
The Cooperative Computing Lab
• We create software that enables the reliable sharing of cycles and storage capacity between cooperating people.
• We conduct research on the effectiveness of various systems and strategies for large scale computing.
• We collaborate with others who need to use large scale computing, so as to find the real problems and make an impact on the world.
• We operate systems like Condor that directly support research and collaboration at ND.
http://www.cse.nd.edu/~ccl
What is Condor?
• Condor is software from UW-Madison that harnesses idle cycles from existing machines. (Most workstations are ~90% idle!)
• With the assistance of CSE, OIT, and CRC staff, Condor has been installed on ~700 cores in Engineering and Science since early 2005.
• The Condor pool expands the capabilities of researchers to perform both cycle- and storage-intensive research.
• New users and contributors are welcome to join!
http://condor.cse.nd.edu
[Diagram: The Condor Distributed Batch System (~700 cores) serves batch users via www portals, login nodes, a db server, and a central manager. It spans personal workstations and machine groups (CSE 170, Fitzpatrick 130, CHEG 25, EE 10, Nieu 20, DeBart 10), timeshared collaboration and storage research machines (ccl 8x1, netscale 16x2, cclsun 16x2), network and storage research clusters (netscale 1x32, loco 32x2, cvrl 32x2, sc0 32x2), batch capacity for biometrics and Hadoop (compbio 1x8), and primary interactive users (green house, iss 44x2, MPI). It "flocks" to other Condor pools: Purdue (~10k cores) and Wisconsin (~5k cores).]
http://www.cse.nd.edu/~ccl/viz
The Condor Principle
• Machine Owners Have Absolute Control
  – Set who, what, and when can use the machine.
  – Can kick jobs off at any time manually.
• Default policy that satisfies most people:
  – Start a job if the console has been idle > 15 minutes.
  – Suspend the job if the console is used or the CPU is busy.
  – Kick off the job if it has been suspended > 10 minutes.
  – After that, jobs run in this order: owner, research group, Notre Dame, elsewhere.
For the full technical details, see:
http://www.cse.nd.edu/~ccl/operations/condor/policy.shtml
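A policy like the one above is typically expressed in the machine's Condor configuration. The fragment below is an illustrative sketch of such start/suspend/preempt expressions, not Notre Dame's actual policy file; the exact attribute combinations and thresholds are assumptions.

```
# Start a job only after the console has been idle 15 minutes.
START = KeyboardIdle > 15 * 60

# Suspend the job when the console is touched or the CPU is otherwise busy.
SUSPEND = (KeyboardIdle < 60) || CpuBusy

# Resume once the console has been idle again for a while.
CONTINUE = KeyboardIdle > 5 * 60

# Kick off (preempt) a job that has been suspended more than 10 minutes.
PREEMPT = (Activity == "Suspended") && \
          (CurrentTime - EnteredCurrentActivity > 10 * 60)
```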
What’s the value proposition?
• If you install Condor on your workstations, servers, or clusters, then:
  – You retain immediate, preemptive priority on your machines, both batch and interactive.
  – You gain access to the unused cycles available on other machines.
  – And, of course, other people get to use your machines when you are not using them.
http://condor.cse.nd.edu
Condor Architecture
[Diagram: A startd represents an available machine ("I prefer to run jobs owned by user 'joe'."). A schedd represents a user with jobs to run ("I want an INTEL CPU with > 3 GB RAM."). Both advertise to the matchmaker, which pairs them: "You two should talk to each other." The schedd then runs the job on the startd, transferring files X and Y.]
~700 CPUs at Notre Dame
[Diagram: many schedds and startds across campus, all reporting to a single matchmaker.]
Flocking to Other Sites
[Diagram: 700 CPUs at Notre Dame flocking to 2,000 CPUs at the University of Wisconsin and 20,000 CPUs at Purdue University.]
What is Condor Good For?
• Condor works well on large workflows of sequential jobs, provided that they match the machines available to you.
• Ideal workload:
  – One million jobs that require one hour each.
• Doesn’t work at all:
  – An 8-node MPI job that must run now.
• Many workloads can be converted into the ideal form, with varying degrees of effort.
High Throughput Computing
• Condor is not High Performance Computing
  – HPC: Run one program as fast as possible.
• Condor is High Throughput Computing
  – HTC: Run as many programs as possible before my paper deadline on May 1st.
Intermission and Questions
Getting Started:
If your shell is tcsh:
% setenv PATH /afs/nd.edu/user37/condor/software/bin:$PATH
If your shell is bash:
% export PATH=/afs/nd.edu/user37/condor/software/bin:$PATH
Then, create a temporary working space:
% mkdir /tmp/YOURNAME
% cd /tmp/YOURNAME
Viewing Available Resources
• Condor Status Web Page:
  – http://condor.cse.nd.edu
• Command Line Tool:
  – condor_status -constraint '(Memory>2048)'
  – condor_status -constraint '(Arch=="INTEL")'
  – condor_status -constraint '(OpSys=="LINUX")'
  – condor_status -run
  – condor_status -submitters
  – condor_status -pool boilergrid.rcac.purdue.edu
A Simple Script Job
% vi simple.sh
#!/bin/sh
echo $@
date
uname -a
% chmod 755 simple.sh
% ./simple.sh hello world
A Simple Submit File
% vi simple.submit
universe = vanilla
executable = simple.sh
arguments = hello condor
output = simple.stdout
error = simple.stderr
should_transfer_files = yes
when_to_transfer_output = on_exit
log = simple.logfile
queue
Submitting and Watching a Job
• Submit the job:
  – condor_submit simple.submit
• Look at the job queue:
  – condor_q
• Remove a job:
  – condor_rm <#>
• See where the job went:
  – tail -f simple.logfile
Submitting Lots of Jobs
% vi simple.submit
universe = vanilla
executable = simple.sh
arguments = hello $(PROCESS)
output = simple.stdout.$(PROCESS)
error = simple.stderr.$(PROCESS)
should_transfer_files = yes
when_to_transfer_output = on_exit
log = simple.logfile
queue 50
What Happened to All My Jobs?
• http://condorlog.cse.nd.edu
Setting Requirements
• By default, Condor will only run your job on a machine with the same CPU and OS as the submitter.
• Use requirements to send your job to other kinds of machines:
  – requirements = (Memory>2048)
  – requirements = (Arch=="INTEL" || Arch=="X86_64")
  – requirements = (MachineGroup=="fitzlab")
  – requirements = (UidDomain!="nd.edu")
• (Hint: Try out your requirements expressions using condor_status as above.)
Setting Rank
• By default, Condor will assume any machine that satisfies your requirements is sufficient.
• Use the rank expression to indicate which machines you prefer:
  – rank = (Memory>1024)
  – rank = (MachineGroup=="fitzlab")
  – rank = (Arch=="INTEL")*10 + (Arch=="X86_64")*20
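Putting the two together, a submit file can combine a hard requirements constraint with a soft rank preference. This is an illustrative sketch built from the example expressions above; the MachineGroup attribute and the specific thresholds are taken from those examples, not a prescription.

```
universe    = vanilla
executable  = simple.sh
arguments   = hello condor
# Hard constraint: only 64-bit Linux machines with more than 2 GB RAM.
requirements = (Arch == "X86_64") && (OpSys == "LINUX") && (Memory > 2048)
# Soft preference: among matching machines, strongly prefer the fitzlab
# group, then break ties by available memory.
rank        = (MachineGroup == "fitzlab")*10000 + Memory
output      = simple.stdout
error       = simple.stderr
log         = simple.logfile
should_transfer_files   = yes
when_to_transfer_output = on_exit
queue
```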
File Transfer
• Notes to keep in mind:
  – Condor cannot write to AFS. (no creds)
  – Not all machines in Condor have AFS.
• So, you must specify what files your job needs, and Condor will send them there:
  – transfer_input_files = x.dat, y.calib, z.library
• By default, all files created by your job will be sent home automatically.
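As a sketch, a job that reads the files named above might declare them like this. The executable name analyze.sh is hypothetical; the input file names follow the example on this slide, and any output files the job creates in its sandbox come back automatically.

```
universe    = vanilla
# analyze.sh is a hypothetical script that reads the inputs below.
executable  = analyze.sh
arguments   = x.dat
# Ship these inputs from the submit machine to the execute machine.
transfer_input_files = x.dat, y.calib, z.library
should_transfer_files   = yes
when_to_transfer_output = on_exit
output = analyze.stdout
error  = analyze.stderr
log    = analyze.logfile
queue
```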
In Class Assignment
• Execute 50 jobs, each running on a machine outside Notre Dame with >1 GB RAM.
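One possible solution, combining the pieces from the earlier slides (a sketch, not the only answer): use UidDomain to exclude Notre Dame machines, Memory for the RAM constraint, and queue 50 copies of simple.sh.

```
universe    = vanilla
executable  = simple.sh
arguments   = hello $(PROCESS)
# Not at Notre Dame, and more than 1 GB (1024 MB) of memory.
requirements = (UidDomain != "nd.edu") && (Memory > 1024)
output = simple.stdout.$(PROCESS)
error  = simple.stderr.$(PROCESS)
log    = simple.logfile
should_transfer_files   = yes
when_to_transfer_output = on_exit
queue 50
```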