Lecture 1 – Introduction
CSE 490h – Introduction to Distributed Computing, Winter 2008

Except as otherwise noted, the content of this presentation is licensed under the Creative Commons Attribution 2.5 License.

Staff
• Gayle Laakmann – Instructor
  ◦ Gayle.L@gmail.com (IM or email)
• Slava Chernyak – TA
  ◦ chernyak@cs.washington.edu
  ◦ Office hours – TBD

Outline
• Administrivia
• Scope of Problems
• A Brief History
• Parallel vs. Distributed Computing
• Parallelization and Synchronization
• Problem Discussion
• Prelude to MapReduce

Goals and Expectations
• Weekly lecture
• Weekly reading
  ◦ Short response questions + discussion
• Final project
• Spring 2008: follow-up course

Final Project
• Implement a simulation of MapReduce

Spring 2008
• Capstone course
• Use MapReduce to build a large application

Computer Speedup
Moore's Law: “The density of transistors on a chip doubles every 18 months, for the same cost” (1965)
(Image: Tom's Hardware)

Moore's Law
• Applies to…
  ◦ Processing speed
  ◦ Memory capacity
• What does this mean for computing?

Scope of Problems
• What can you do with 1 computer?
• What can you do with 100 computers?
• What can you do with an entire data center?

Distributed Problems
• Simulating several hundred or thousand characters
(Happy Feet © Kingdom Feature Productions; Lord of the Rings © New Line Cinema)

Distributed Problems
• Indexing the web (Google)

Parallel vs. Distributed
• Parallel computing can mean:
  ◦ Vector processing of data (SIMD)
  ◦ Multiple CPUs in a single computer (MIMD)
• Distributed computing is multiple CPUs across many computers (MIMD)

A Brief History… 1975-85
• Parallel computing was favored in the early years
• Primarily vector-based at first
• Gradually more thread-based parallelism was introduced
(Image: Cray 2 supercomputer, Wikipedia)

A Brief History… 1985-95
• “Massively parallel architectures” start rising in prominence
• Message Passing Interface (MPI) and other libraries developed
• Bandwidth was a big problem

A Brief History… 1995-Today
• Cluster/grid architecture increasingly dominant
• Special node machines eschewed in favor of COTS technologies
• Web-wide cluster software
• Companies like Google take this to the extreme (10,000-node clusters)

Parallelization & Synchronization

Parallelization Idea
• Parallelization is “easy” if processing can be cleanly split into n units:

Parallelization Idea (2)
In a parallel computation, we would like to have as many threads as we have processors; e.g., a four-processor computer would be able to run four threads at the same time.
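
For instance, here is a minimal Java sketch (an illustration, not from the slides) that sizes a thread pool to match the number of available processors; the work items are hypothetical placeholders:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class OneThreadPerProcessor {
        public static void main(String[] args) {
            // Ask the JVM how many processors this machine has.
            int nProcs = Runtime.getRuntime().availableProcessors();

            // Create exactly one worker thread per processor.
            ExecutorService pool = Executors.newFixedThreadPool(nProcs);

            // Hand each worker a (placeholder) unit of work.
            for (int i = 0; i < nProcs; i++) {
                final int unit = i;
                pool.submit(() -> System.out.println("processing unit " + unit));
            }

            pool.shutdown();   // stop accepting tasks; workers finish and exit
        }
    }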

Parallelization Idea (3)

Parallelization Idea (4)

Parallelization Pitfalls
But this model is too simple!
• How do we assign work units to worker threads?
• What if we have more work units than threads?
• How do we aggregate the results at the end?
• How do we know all the workers have finished?
• What if the work cannot be divided into completely separate tasks?
What is the common theme of all of these problems?

Parallelization Pitfalls (2)
• Each of these problems represents a point at which multiple threads must communicate with one another, or access a shared resource.
• Golden rule: Any memory that can be used by multiple threads must have an associated synchronization system!

What is Wrong With This?

Thread 1:
void foo() {
  x++;
  y = x;
}

Thread 2:
void bar() {
  y++;
  x++;
}

If the initial state is y = 0, x = 6, what happens after these threads finish running?
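
To make the unpredictability concrete, here is a runnable Java sketch (not from the slides) of the two threads above; because the updates are unsynchronized, repeated runs can print different final values:

    public class RaceDemo {
        static int x = 6, y = 0;   // the slide's initial state

        public static void main(String[] args) throws InterruptedException {
            Thread t1 = new Thread(() -> { x++; y = x; });   // foo()
            Thread t2 = new Thread(() -> { y++; x++; });     // bar()
            t1.start();
            t2.start();
            t1.join();
            t2.join();
            // The result depends on how the threads interleave: (8, 8),
            // (8, 1), and other combinations are all possible outcomes.
            System.out.println("x = " + x + ", y = " + y);
        }
    }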

Multithreaded = Unpredictability
• Many things that look like “one step” operations actually take several steps under the hood:

Thread 1:
void foo() {
  eax = mem[x];
  inc eax;
  mem[x] = eax;
  ebx = mem[x];
  mem[y] = ebx;
}

Thread 2:
void bar() {
  eax = mem[y];
  inc eax;
  mem[y] = eax;
  eax = mem[x];
  inc eax;
  mem[x] = eax;
}

• When we run a multithreaded program, we don't know what order threads run in, nor do we know when they will interrupt one another.

Multithreaded = Unpredictability
This applies to more than just integers:
• Pulling work units from a queue
• Reporting work back to the master unit
• Telling another thread that it can begin the “next phase” of processing
…All require synchronization!

Synchronization Primitives
• A synchronization primitive is a special shared variable that guarantees that it can only be accessed atomically.
• Hardware support guarantees that operations on synchronization primitives only ever take one step.
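
In Java, for example, the java.util.concurrent.atomic classes expose such primitives; on most hardware, incrementAndGet() below turns into a single atomic instruction (e.g., compare-and-swap). A minimal sketch:

    import java.util.concurrent.atomic.AtomicInteger;

    public class AtomicDemo {
        // The read-modify-write cycle happens as one indivisible step.
        static final AtomicInteger counter = new AtomicInteger(0);

        public static void main(String[] args) throws InterruptedException {
            Thread[] threads = new Thread[4];
            for (int i = 0; i < threads.length; i++) {
                threads[i] = new Thread(() -> {
                    for (int j = 0; j < 100000; j++) counter.incrementAndGet();
                });
                threads[i].start();
            }
            for (Thread t : threads) t.join();
            // Always prints 400000; a plain int counter would lose updates.
            System.out.println(counter.get());
        }
    }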

Semaphores
• A semaphore is a flag that can be raised or lowered in one step
• Semaphores were flags that railroad engineers would use when entering a shared track
• Only one side of the semaphore can ever be red! (Can both be green?)

Semaphores
• set() and reset() can be thought of as lock() and unlock()
• Calls to lock() when the semaphore is already locked cause the thread to block.
• Pitfalls: must “bind” semaphores to particular objects; must remember to unlock correctly

The “corrected” example

Thread 1:
void foo() {
  sem.lock();
  x++;
  y = x;
  sem.unlock();
}

Thread 2:
void bar() {
  sem.lock();
  y++;
  x++;
  sem.unlock();
}

Global var Semaphore sem = new Semaphore(); guards access to x & y
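
As a runnable counterpart (a sketch; the slide's Semaphore is pseudocode), Java's java.util.concurrent.Semaphore with a single permit behaves as a lock, with acquire()/release() in place of lock()/unlock():

    import java.util.concurrent.Semaphore;

    public class CorrectedExample {
        // A binary semaphore (one permit) guards access to x & y.
        static final Semaphore sem = new Semaphore(1);
        static int x = 6, y = 0;

        static void foo() throws InterruptedException {
            sem.acquire();        // blocks if another thread holds the permit
            try {
                x++;
                y = x;
            } finally {
                sem.release();    // always release, even if an exception is thrown
            }
        }

        static void bar() throws InterruptedException {
            sem.acquire();
            try {
                y++;
                x++;
            } finally {
                sem.release();
            }
        }
    }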

Condition Variables
• A condition variable notifies threads that a particular condition has been met
• Inform another thread that a queue now contains elements to pull from (or that it's empty – request more elements!)
• Pitfall: What if nobody's listening?

The final example

Thread 1:
void foo() {
  sem.lock();
  x++;
  y = x;
  fooDone = true;
  sem.unlock();
  fooFinishedCV.notify();
}

Thread 2:
void bar() {
  sem.lock();
  if (!fooDone)
    fooFinishedCV.wait(sem);
  y++;
  x++;
  sem.unlock();
}

Global vars:
Semaphore sem = new Semaphore();
ConditionVar fooFinishedCV = new ConditionVar();
boolean fooDone = false;
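
One way to express this in runnable Java (an assumption; ConditionVar above is pseudocode) is with ReentrantLock and its Condition. Note that real condition variables may wake up spuriously, so production code replaces the slide's if with a while loop:

    import java.util.concurrent.locks.Condition;
    import java.util.concurrent.locks.ReentrantLock;

    public class FinalExample {
        static final ReentrantLock sem = new ReentrantLock();
        static final Condition fooFinishedCV = sem.newCondition();
        static boolean fooDone = false;
        static int x = 6, y = 0;

        static void foo() {
            sem.lock();
            try {
                x++;
                y = x;
                fooDone = true;
                fooFinishedCV.signal();    // must hold the lock to signal
            } finally {
                sem.unlock();
            }
        }

        static void bar() throws InterruptedException {
            sem.lock();
            try {
                while (!fooDone)           // loop guards against spurious wakeups
                    fooFinishedCV.await(); // atomically releases sem and sleeps
                y++;
                x++;
            } finally {
                sem.unlock();
            }
        }
    }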

Barriers
• A barrier knows in advance how many threads it should wait for. Threads “register” with the barrier when they reach it, and fall asleep.
• Barrier wakes up all registered threads when the total count is correct
• Pitfall: What happens if a thread takes a long time?
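
Java provides exactly this as java.util.concurrent.CyclicBarrier; a minimal sketch with three hypothetical worker threads:

    import java.util.concurrent.BrokenBarrierException;
    import java.util.concurrent.CyclicBarrier;

    public class BarrierDemo {
        public static void main(String[] args) {
            final int nThreads = 3;
            // The barrier is told up front how many threads to wait for.
            final CyclicBarrier barrier = new CyclicBarrier(nThreads);

            for (int i = 0; i < nThreads; i++) {
                final int id = i;
                new Thread(() -> {
                    try {
                        System.out.println("thread " + id + " working...");
                        barrier.await();   // "register" and sleep until all arrive
                        System.out.println("thread " + id + " past the barrier");
                    } catch (InterruptedException | BrokenBarrierException e) {
                        Thread.currentThread().interrupt();
                    }
                }).start();
            }
        }
    }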

Too Much Synchronization? Deadlock
Synchronization becomes even more complicated when multiple locks can be used. Can cause the entire system to “get stuck”.

Thread A:
semaphore1.lock();
semaphore2.lock();
/* use data guarded by semaphores */
semaphore1.unlock();
semaphore2.unlock();

Thread B:
semaphore2.lock();
semaphore1.lock();
/* use data guarded by semaphores */
semaphore1.unlock();
semaphore2.unlock();

(Image: RPI CSCI.4210 Operating Systems notes)
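
A standard remedy (not shown on the slide) is to impose a global lock ordering: if every thread acquires semaphore1 before semaphore2, the circular wait above can never form. A sketch using Java's Semaphore:

    import java.util.concurrent.Semaphore;

    public class LockOrdering {
        static final Semaphore semaphore1 = new Semaphore(1);
        static final Semaphore semaphore2 = new Semaphore(1);

        // Both threads follow the same order: semaphore1, then semaphore2.
        static void threadA() throws InterruptedException {
            semaphore1.acquire();
            semaphore2.acquire();
            /* use data guarded by semaphores */
            semaphore2.release();
            semaphore1.release();
        }

        static void threadB() throws InterruptedException {
            semaphore1.acquire();   // the slide's Thread B took semaphore2 first
            semaphore2.acquire();
            /* use data guarded by semaphores */
            semaphore2.release();
            semaphore1.release();
        }
    }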

The Moral: Be Careful!
• Synchronization is hard
  ◦ Need to consider all possible shared state
  ◦ Must keep locks organized and use them consistently and correctly
• Knowing there are bugs may be tricky; fixing them can be even worse!
• Keeping shared state to a minimum reduces total system complexity

Prelude to MapReduce
• We saw earlier that explicit parallelism/synchronization is hard
• Synchronization does not even answer questions specific to distributed computing, like how to move data from one machine to another
• Fortunately, MapReduce handles this for us

Prelude to MapReduce
• MapReduce is a paradigm designed by Google for making a subset (albeit a large one) of distributed problems easier to code
• Automates data distribution & result aggregation
• Restricts the ways data can interact to eliminate locks (no shared state = no locks!)
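
As a single-machine preview (an illustrative sketch, not Google's actual API), the same restricted style can be expressed as a map step that emits words and a reduce step that sums counts per key, with no shared mutable state between them:

    import java.util.Arrays;
    import java.util.Map;
    import java.util.stream.Collectors;

    public class WordCountSketch {
        public static void main(String[] args) {
            String[] docs = { "the cat", "the hat", "cat in the hat" };

            // "Map" phase: split each document into words (one record per word).
            // "Reduce" phase: group identical words and sum their counts.
            Map<String, Long> counts = Arrays.stream(docs)
                    .flatMap(doc -> Arrays.stream(doc.split(" ")))
                    .collect(Collectors.groupingBy(w -> w, Collectors.counting()));

            System.out.println(counts);   // e.g. {the=3, cat=2, hat=2, in=1}
        }
    }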

Next time…
• We will go over MapReduce in detail
• Discuss the functional programming paradigms of fold and map, and how they relate to distributed computation