MapReduce Architecture
Adapted from lectures by Anand Rajaraman (Stanford Univ.) and Dan Weld (Univ. of Washington)
Single-Node Architecture
[Diagram: a single machine with CPU, memory, and disk; workloads such as machine learning, statistics, and "classical" data mining run on one node.]
Commodity Clusters
- Web data sets can be very large: tens to hundreds of terabytes
- Cannot mine them on a single server (why?)
- Standard architecture emerging:
  - Cluster of commodity Linux nodes
  - Gigabit Ethernet interconnect
- How do we organize computations on this architecture?
  - Mask issues such as hardware failure
Cluster Architecture
[Diagram: racks of nodes connected through switches; each rack contains 16-64 nodes (CPU, memory, disk), with 1 Gbps links between any pair of nodes in a rack and a 2-10 Gbps backbone between racks.]
Stable Storage
- First-order problem: if nodes can fail, how can we store data persistently?
- Answer: a distributed file system
  - Provides a global file namespace
  - Google GFS; Hadoop HDFS; Kosmix KFS
- Typical usage pattern
  - Huge files (100s of GB to TB)
  - Data is rarely updated in place
  - Reads and appends are common
Distributed File System
- Chunk servers
  - Each file is split into contiguous chunks
  - Each chunk is typically 16-64 MB
  - Each chunk is replicated (usually 2x or 3x)
  - Replicas are kept in different racks where possible
- Master node
  - a.k.a. the NameNode in HDFS
  - Stores metadata
  - Might be replicated
- Client library for file access
  - Talks to the master to find chunk servers
  - Connects directly to chunk servers to access data
Motivation for MapReduce (Why?)
- Large-scale data processing
  - Want to use 1000s of CPUs
  - But don't want the hassle of managing things
- The MapReduce architecture provides
  - Automatic parallelization & distribution
  - Fault tolerance
  - I/O scheduling
  - Monitoring & status updates
What is Map/Reduce?
- Map/Reduce
  - Programming model from LISP
  - (and other functional languages)
- Many problems can be phrased this way
- Easy to distribute across nodes
- Nice retry/failure semantics
Map in LISP (Scheme)
- (map f list [list2 list3 ...])
  - f is a unary operator
- (map square '(1 2 3 4))
  - (1 4 9 16)
Reduce in LISP (Scheme)
- (reduce f id list)
  - f is a binary operator
- (reduce + 0 '(1 4 9 16))
  - (+ 16 (+ 9 (+ 4 (+ 1 0))))
  - 30
- (reduce + 0 (map square (map - l1 l2)))
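For readers more comfortable with Python, here is a minimal sketch of the same two ideas; the helper square and the lists l1 and l2 are illustrative placeholders, not part of the original lecture code:

from functools import reduce

def square(x):
    return x * x

# map applies a unary operator to each element of a list
print(list(map(square, [1, 2, 3, 4])))                      # [1, 4, 9, 16]

# reduce folds a binary operator over a list, starting from an identity value
print(reduce(lambda acc, x: acc + x, [1, 4, 9, 16], 0))     # 30

# Composing the two, as in the last Scheme line: sum of squared differences
l1, l2 = [5, 7, 9], [1, 2, 3]
diffs = map(lambda a, b: a - b, l1, l2)
print(reduce(lambda acc, x: acc + x, map(square, diffs), 0))  # 16 + 25 + 36 = 77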
Warm-up: Word Count
- We have a large file of words, one word per line
- Count the number of times each distinct word appears in the file
- Sample application: analyze web server logs to find popular URLs
Word Count (2)
- Case 1: the entire file fits in memory
- Case 2: the file is too large for memory, but all <word, count> pairs fit in memory
- Case 3: the file is on disk, and there are too many distinct words to fit in memory
  - sort datafile | uniq -c
Word Count (3)
- To make it slightly harder, suppose we have a large corpus of documents
- Count the number of times each distinct word occurs in the corpus
  - words(docs/*) | sort | uniq -c
  - where words takes a file and outputs the words in it, one per line
- The above captures the essence of MapReduce
  - The great thing is that it is naturally parallelizable (see the sketch below)
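As a rough illustration of why this pipeline parallelizes naturally, here is a minimal Python sketch of the same three stages (tokenize, group, count). The docs/* path and the whitespace tokenizer are assumptions for the example, not part of the lecture:

import glob
from collections import Counter

def words(path):
    # Emit the words in one file, one at a time (the map-like stage).
    with open(path) as f:
        for line in f:
            for w in line.split():
                yield w

# The "sort | uniq -c" stages: group equal words and count them.
# Each file could be tokenized on a different machine; only the
# counting stage needs to see all occurrences of a word together.
counts = Counter()
for path in glob.glob("docs/*"):
    counts.update(words(path))

for word, n in counts.most_common(10):
    print(n, word)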
MapReduce
- Input: a set of key/value pairs
- The user supplies two functions:
  - map(k, v) -> list(k1, v1)
  - reduce(k1, list(v1)) -> v2
- (k1, v1) is an intermediate key/value pair
- Output is the set of (k1, v2) pairs
Word Count using MapReduce

map(key, value):
  // key: document name; value: text of document
  for each word w in value:
    emit(w, 1)

reduce(key, values):
  // key: a word; values: an iterator over counts
  result = 0
  for each count v in values:
    result += v
  emit(key, result)
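A minimal runnable Python version of the same map and reduce functions, with a toy in-memory shuffle so they can be tested locally. The driver is an illustration only, not how the Google or Hadoop runtimes actually execute jobs:

from collections import defaultdict

def map_fn(doc_name, text):
    # key: document name; value: text of document
    for w in text.split():
        yield (w, 1)

def reduce_fn(word, counts):
    # key: a word; values: an iterable of counts
    yield (word, sum(counts))

def run_word_count(docs):
    # Toy shuffle: group all intermediate values by key in memory.
    groups = defaultdict(list)
    for name, text in docs.items():
        for k, v in map_fn(name, text):
            groups[k].append(v)
    results = {}
    for k, vs in groups.items():
        for key, total in reduce_fn(k, vs):
            results[key] = total
    return results

docs = {"d1": "see bob run", "d2": "see spot throw"}
print(run_word_count(docs))
# {'see': 2, 'bob': 1, 'run': 1, 'spot': 1, 'throw': 1}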
Count, Illustrated

map(key=url, val=contents):
  for each word w in contents, emit (w, "1")
reduce(key=word, values=uniq_counts):
  sum all the "1"s in the values list
  emit (word, sum)

Example input: "see bob run" and "see spot throw"
  map output: (see, 1) (bob, 1) (run, 1) (see, 1) (spot, 1) (throw, 1)
  reduce output: (bob, 1) (run, 1) (see, 2) (spot, 1) (throw, 1)
The Model is Widely Applicable
MapReduce programs in the Google source tree. Example uses:
- distributed grep
- distributed sort
- web link-graph reversal
- term-vector per host
- web access log stats
- inverted index construction
- document clustering
- machine learning
- statistical machine translation
- ...
Implementation Overview
Typical cluster:
- 100s/1000s of 2-CPU x86 machines, 2-4 GB of memory
- Limited bisection bandwidth
- Storage is on local IDE disks
- GFS: a distributed file system manages the data (SOSP '03)
- Job scheduling system: jobs are made up of tasks; the scheduler assigns tasks to machines
The implementation is a C++ library linked into user programs.
Distributed Execution Overview
[Diagram: the user program forks a master and worker processes; the master assigns map and reduce tasks to workers. Map workers read input splits (Split 0, Split 1, Split 2, ...) and write intermediate data to local disk; reduce workers perform remote reads, sort by key, and write the output files (Output File 0, Output File 1, ...).]
Data Flow
- Input and final output are stored on a distributed file system
  - The scheduler tries to schedule map tasks "close" to the physical storage location of the input data
- Intermediate results are stored on the local FS of the map and reduce workers
- Output is often the input to another MapReduce task
Coordination
- Master data structures
  - Task status: (idle, in-progress, completed)
  - Idle tasks get scheduled as workers become available
  - When a map task completes, it sends the master the locations and sizes of its R intermediate files, one for each reducer
  - The master pushes this information to the reducers
- The master pings workers periodically to detect failures
Failures
- Map worker failure
  - Map tasks completed or in progress at the worker are reset to idle
  - Reduce workers are notified when a task is rescheduled on another worker
- Reduce worker failure
  - Only in-progress tasks are reset to idle
- Master failure
  - The MapReduce task is aborted and the client is notified
Execution
Parallel Execution
How many Map and Reduce jobs?
- M map tasks, R reduce tasks
- Rule of thumb:
  - Make M and R much larger than the number of nodes in the cluster
  - One DFS chunk per map task is common
  - This improves dynamic load balancing and speeds up recovery from worker failure
- Usually R is smaller than M, because the output is spread across R files
Combiners
- Often a map task will produce many pairs of the form (k, v1), (k, v2), ... for the same key k
  - E.g., popular words in Word Count
- Can save network time by pre-aggregating at the mapper
  - combine(k1, list(v1)) -> v2
  - Usually the same as the reduce function
- Works only if the reduce function is commutative and associative
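A minimal sketch of how a combiner pre-aggregates word counts on the mapper side before anything crosses the network. This is an illustration only; real frameworks decide when and whether the combiner is invoked:

from collections import defaultdict

def map_fn(doc_name, text):
    for w in text.split():
        yield (w, 1)

def combine_fn(word, counts):
    # Same logic as the reducer: sum the partial counts.
    yield (word, sum(counts))

def run_mapper_with_combiner(doc_name, text):
    # Buffer this mapper's output and pre-aggregate per key, so
    # (the, 1) emitted thousands of times becomes a single (the, N).
    buffered = defaultdict(list)
    for k, v in map_fn(doc_name, text):
        buffered[k].append(v)
    for k, vs in buffered.items():
        yield from combine_fn(k, vs)

print(dict(run_mapper_with_combiner("d1", "the cat saw the dog and the bird")))
# {'the': 3, 'cat': 1, 'saw': 1, 'dog': 1, 'and': 1, 'bird': 1}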
Partition Function
- Inputs to map tasks are created by contiguous splits of the input file
- For reduce, we need to ensure that records with the same intermediate key end up at the same worker
- The system uses a default partition function, e.g., hash(key) mod R
- Sometimes it is useful to override it
  - E.g., hash(hostname(URL)) mod R ensures URLs from the same host end up in the same output file
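A sketch of a custom partition function along the lines of the URL example above; the hostname-based partitioner, the use of CRC32 as the hash, and R = 4 are illustrative assumptions (Python's built-in hash() is avoided here because it is randomized across runs for strings):

from urllib.parse import urlparse
from zlib import crc32

R = 4  # number of reduce tasks (illustrative)

def default_partition(key, r=R):
    # Default behavior: hash(key) mod R.
    return crc32(key.encode()) % r

def host_partition(url, r=R):
    # Override: hash(hostname(URL)) mod R, so all URLs from one host
    # land in the same reduce task, and hence the same output file.
    return crc32(urlparse(url).netloc.encode()) % r

print(host_partition("http://example.com/a"), host_partition("http://example.com/b"))
# both URLs map to the same partition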
Execution Summary
How is this distributed?
1. Partition the input key/value pairs into chunks and run map() tasks in parallel
2. After all map()s are complete, consolidate all emitted values for each unique emitted key
3. Now partition the space of output map keys and run reduce() in parallel
If map() or reduce() fails, re-execute!
Exercise 1: Host Size
- Suppose we have a large web corpus
- Let's look at the metadata file
  - Lines of the form (URL, size, date, ...)
- For each host, find the total number of bytes
  - i.e., the sum of the page sizes for all URLs from that host
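One possible map/reduce pair for this exercise, as a hedged Python sketch; the comma-separated (URL, size, date, ...) line format is an assumption about the metadata file:

from urllib.parse import urlparse

def map_fn(_, line):
    # line assumed to look like: "http://example.com/page, 12345, 2004-01-01, ..."
    url, size, *_rest = (field.strip() for field in line.split(","))
    yield (urlparse(url).netloc, int(size))   # key: host, value: page size in bytes

def reduce_fn(host, sizes):
    # Total bytes over all URLs from this host.
    yield (host, sum(sizes))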
Exercise 2: Distributed Grep
- Find all occurrences of a given pattern in a very large set of files
Grep
- Input consists of (url+offset, single line)
- map(key=url+offset, val=line):
  - If the line matches the regexp, emit (line, "1")
- reduce(key=line, values=uniq_counts):
  - Don't do anything; just emit the line
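A minimal Python sketch of this map/reduce pair; the pattern "error" is an illustrative choice, and emitting "1" mirrors the slide even though the value is never used:

import re

PATTERN = re.compile(r"error")   # illustrative pattern

def map_fn(url_offset, line):
    if PATTERN.search(line):
        yield (line, "1")

def reduce_fn(line, _uniq_counts):
    # Identity reduce: the matching line itself is the answer.
    yield line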
Exercise 3: Graph Reversal
- Given a directed graph as an adjacency list:
  src1: dest11, dest12, ...
  src2: dest21, dest22, ...
- Construct the graph in which all the links are reversed
Reverse Web-Link Graph
- Map
  - For each URL source linking to target, ...
  - Output <target, source> pairs
- Reduce
  - Concatenate the list of all source URLs
  - Output: <target, list(source)> pairs
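A hedged Python sketch of this map/reduce, assuming each input record is a source URL together with its list of destinations:

def map_fn(source, destinations):
    # For each link source -> target, emit <target, source>.
    for target in destinations:
        yield (target, source)

def reduce_fn(target, sources):
    # Concatenate all sources pointing at this target.
    yield (target, list(sources))

# Tiny usage example on an adjacency list:
graph = {"a": ["b", "c"], "b": ["c"]}
# After map + shuffle + reduce the reversed graph is {"b": ["a"], "c": ["a", "b"]}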
Exercise 4: Frequent Pairs
- Given a large set of market baskets, find all frequent pairs
  - Recall the definitions from the Association Rules lectures
Hadoop
- An open-source implementation of MapReduce in Java
  - Uses HDFS for stable storage
- Download from: http://lucene.apache.org/hadoop/
Reading
- Jeffrey Dean and Sanjay Ghemawat, "MapReduce: Simplified Data Processing on Large Clusters." http://labs.google.com/papers/mapreduce.html
- Sanjay Ghemawat, Howard Gobioff, and Shun-Tak Leung, "The Google File System." http://labs.google.com/papers/gfs.html
Conclusions
- MapReduce has proven to be a useful abstraction
- It greatly simplifies large-scale computations
- Fun to use:
  - focus on the problem,
  - let the library deal with the messy details