Locality Sensitive Distributed Computing
Exercise Set 2
David Peleg, Weizmann Institute
Basic partition construction algorithm
Simple distributed implementation of Algorithm BasicPart.
Single "thread" of computation (single locus of activity at any given moment)
Basic partition construction algorithm
Components:
• ClusterCons: procedure for constructing a cluster around a chosen center v
• NextCtr: procedure for selecting the next center v around which to grow a cluster
• RepEdge: procedure for selecting a representative intercluster edge between any two adjacent clusters
Cluster construction procedure ClusterCons
Goal: invoked at center v, construct a cluster and a BFS tree (rooted at v) spanning it
Tool: variant of Dijkstra's algorithm
Recall: Dijkstra's BFS algorithm, phase p+1
Main changes to Algorithm DistDijk
1. Ignoring covered vertices: the global BFS algorithm sends exploration msgs to all neighbors save those known to be in the tree; the new variant also ignores vertices known to belong to previously constructed clusters
2. Bounding depth: the BFS tree is grown to limited depth, adding new layers tentatively, based on the halting condition (|Γ(S)| < |S|·n^{1/k})
Distributed Implementation
Before deciding to expand tree T by adding newly discovered layer L, count # vertices in L by a convergecast process:
• Leaf w ∈ T: set Z_w = # new children in L
• Internal vertex: add and upcast counts
Distributed Implementation
• Root: compare final count Z_v to total # vertices in T (known from previous phase)
- If ratio ≥ n^{1/k}: broadcast next Pulse msg (confirm new layer and start next phase)
- Otherwise: broadcast message Reject (reject new layer, complete current cluster)
The final broadcast step has 2 more goals:
- mark the cluster with a unique name (e.g., ID of root)
- inform all vertices of the new cluster name
Distributed Implementation (cont)
This information is used to define cluster borders: once a cluster is complete, each vertex in it informs all its neighbors of its new residence. Hence the nodes of the cluster under construction know which neighbors already belong to existing clusters.
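The layer-by-layer growth rule above can be sketched sequentially. This is a minimal simulation of the halting condition only (not the distributed convergecast/broadcast machinery); all names are illustrative:

```python
def grow_cluster(adj, center, n, k, covered):
    """Grow a cluster around `center`, skipping vertices in `covered`
    (previously clustered). A new BFS layer is confirmed only while the
    cluster keeps expanding by a factor of at least n^{1/k}; the first
    layer failing the test is rejected. Returns (cluster, rejected_layer)."""
    thresh = n ** (1.0 / k)
    cluster = {center}
    frontier = [center]
    while True:
        # Tentative next layer: uncovered neighbors outside the cluster
        layer = set()
        for v in frontier:
            for w in adj[v]:
                if w not in cluster and w not in covered:
                    layer.add(w)
        # Halting condition |Gamma(S)| < |S| * n^{1/k}: reject the layer
        if not layer or len(cluster) + len(layer) < thresh * len(cluster):
            return cluster, layer
        cluster |= layer  # "Pulse": confirm the layer, start next phase
        frontier = sorted(layer)
```

Note that the rejected layer is returned as well, since NextCtr later picks the next cluster center from it.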
Center selection procedure NextCtr
Fact: the algorithm's "center of activity" is always located at the currently constructed cluster C.
Idea: select as center for the next cluster some vertex v adjacent to C (= v from the rejected layer)
Implementation: via a convergecast process
(leaf: pick an arbitrary neighbor from the rejected layer, upcast to parent; internal node: upcast an arbitrary candidate)
Center selection procedure (NextCtr)
Problem: What if the rejected layer is empty? (The entire process may still be incomplete: there may be some yet-unclustered nodes elsewhere in G)
Center selection procedure (NextCtr)
Solution: traverse the graph (using the cluster construction procedure within a global search procedure)
Distributed Implementation
Use a DFS algorithm for traversing the tree of constructed clusters.
• Start at originator vertex r_0, invoke ClusterCons to construct the first cluster
• Whenever the rejected layer is nonempty, choose one rejected vertex as the next cluster center
• Each cluster center marks a parent cluster in the cluster DFS tree, namely, the cluster from which it was selected
Distributed Implementation (cont)
DFS algorithm (cont):
• Once the search cannot progress forward (rejected layer is empty): the DFS backtracks to the previous cluster and looks for a new center among neighboring nodes
• If no neighbors are available, the DFS process continues backtracking on the cluster DFS tree
Inter-cluster edge selection RepEdge
Goal: select one representative inter-cluster edge between every two adjacent clusters C and C'
E(C, C') = edges connecting C and C' (known to endpoints in C, as C vertices know the cluster residence of each neighbor)
Inter-cluster edge selection RepEdge
A representative edge can be selected by a convergecast process on all edges of E(C, C').
Requirement: C and C' must select the same edge
Solution: using a unique ordering of the edges, pick the minimum edge of E(C, C')
Q: How can unique IDs define a unique edge order?
Inter-cluster edge selection (RepEdge)
E.g., define the ID-weight of edge e=(v, w), where ID(v) < ID(w), as the pair ⟨ID(v), ID(w)⟩, and order ID-weights lexicographically; this ensures distinct weights and allows consistent selection of inter-cluster edges
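A minimal sketch of the ID-weight rule (assuming integer vertex IDs; the function names are illustrative):

```python
def id_weight(e):
    """Lexicographic ID-weight of edge e = (v, w): the ordered pair
    (min ID, max ID), so the same edge gets the same weight on both sides."""
    v, w = e
    return (min(v, w), max(v, w))

def rep_edge(edges):
    """Pick the representative edge of E(C, C'): the minimum edge under
    the ID-weight order. C and C' compute the same answer independently."""
    return min(edges, key=id_weight)
```

Since ID-weights are distinct for distinct edges, both endposcenters of the convergecast agree on the minimum regardless of the order in which edges were reported.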
Inter-cluster edge selection (RepEdge)
Problem: cluster C must carry out the selection process for every adjacent cluster C' individually
Solution:
• Inform each C vertex of the identities of all clusters adjacent to C by convergecast + broadcast
• Pipeline the individual selection processes
Analysis
(C_1, C_2, ..., C_p) = clusters constructed by the algorithm
For cluster C_i:
E_i = edges with at least one endpoint in C_i
n_i = |C_i|, m_i = |E_i|, r_i = Rad(C_i)
Analysis (cont)
ClusterCons: the depth-bounded Dijkstra procedure constructs C_i and its BFS tree in O(r_i^2) time and O(n_i·r_i + m_i) messages
Time(ClusterCons) = ∑_i O(r_i^2) ≤ ∑_i O(r_i·k) ≤ k·∑_i O(n_i) = O(kn)
(using r_i ≤ k and r_i ≤ n_i)
Q: Prove an O(n) bound
Analysis (cont)
C_i and its BFS tree cost O(r_i^2) time and O(n_i·r_i + m_i) messages
Comm(ClusterCons) = ∑_i O(n_i·r_i + m_i)
Each edge occurs in ≤ 2 distinct sets E_i, hence
Comm(ClusterCons) = O(nk + |E|)
Analysis (NextCtr)
The DFS process on the cluster tree is more expensive than a plain DFS:
a DFS step visiting cluster C_i and deciding the next step requires O(r_i) time and O(n_i) comm
Analysis (NextCtr)
The DFS visits the clusters of the cluster tree O(p) times
The entire DFS process (not counting ClusterCons invocations) requires:
• Time(NextCtr) = O(pk) = O(nk)
• Comm(NextCtr) = O(pn) = O(n^2)
Analysis (RepEdge)
s_i = # neighboring clusters surrounding C_i
Convergecasting the ID of a neighboring cluster C' in C_i costs O(r_i) time and O(n_i) messages
For all s_i neighboring clusters: O(s_i + r_i) time and O(s_i·n_i) messages (pipelining)
Analysis (RepEdge)
The pipelined inter-cluster edge selection is similar. As s_i ≤ n, we get
Time(RepEdge) = max_i O(s_i + r_i) = O(n)
Comm(RepEdge) = ∑_i O(s_i·n_i) = O(n^2)
Analysis
Thm: Distributed Algorithm BasicPart requires
Time = O(nk)
Comm = O(n^2)
Sparse spanners
Example - the m-dimensional hypercube: H_m = (V_m, E_m), V_m = {0,1}^m, E_m = {(x, y) | x and y differ in exactly one bit}
|V_m| = 2^m, |E_m| = m·2^{m-1}, diameter m
Ex: Prove that for every m ≥ 0, the m-cube has a 3-spanner with # edges ≤ 7·2^m
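The stated counts can be checked for small m by transcribing the definition directly (brute-force construction, for verification only):

```python
from itertools import product

def hypercube(m):
    """Build H_m on vertex set {0,1}^m: vertices are m-bit tuples,
    edges connect tuples differing in exactly one coordinate."""
    vertices = list(product([0, 1], repeat=m))
    edges = [(x, y) for i, x in enumerate(vertices)
             for y in vertices[i + 1:]
             if sum(a != b for a, b in zip(x, y)) == 1]
    return vertices, edges
```

For m = 3 this yields |V_3| = 2^3 = 8 and |E_3| = 3·2^2 = 12, matching the formulas above.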
Regional Matchings Locality sensitive tool for distributed match-making
Distributed match-making
Paradigm for establishing client-server connections in a distributed system (via specified rendezvous locations in the network)
Ads of server v: written in locations Write(v)
Client u: reads in locations Read(u)
Regional Matchings
Requirement: "read" and "write" sets must intersect:
for every v, u ∈ V, Write(v) ∩ Read(u) ≠ ∅
Client u must find an ad of server v
Regional Matchings (cont)
Distance considerations taken into account: client u must find an ad of server v only if they are sufficiently close
l-regional matching: "read" and "write" sets RW = {Read(v), Write(v) | v ∈ V} s.t. for every v, u ∈ V:
dist(u, v) ≤ l ⟹ Write(v) ∩ Read(u) ≠ ∅
Regional Matchings (cont)
Degree parameters:
Dwrite(RW) = max_{v∈V} |Write(v)|
Dread(RW) = max_{v∈V} |Read(v)|
Regional Matchings (cont)
Radius parameters:
Strwrite(RW) = max_{u,v∈V} { dist(u, v) | u ∈ Write(v) } / l
Strread(RW) = max_{u,v∈V} { dist(u, v) | u ∈ Read(v) } / l
Regional matching construction
[Given graph G and k, l ≥ 1, construct regional matching RW_{l,k}]
1. Set S ← {Γ_l(v) | v ∈ V} (the l-neighborhood cover)
Regional matching construction
2. Build a coarsening cover T as in the Max-Deg-Cover Thm
Regional matching construction
3. Select a center vertex r_0(T) in each cluster T ∈ T
Regional matching construction
4. Select for every v a cluster T_v ∈ T s.t. Γ_l(v) ⊆ T_v
Regional matching construction
5. Set Read(v) = {r_0(T) | v ∈ T}
Write(v) = {r_0(T_v)}
(Figure: v lies in T_1, T_2, T_3 with centers r_1, r_2, r_3, and Γ_l(v) ⊆ T_1, so Read(v) = {r_1, r_2, r_3}, Write(v) = {r_1})
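Steps 3-5 can be sketched as follows, assuming a cover coarsening the l-neighborhoods is already given (the dict-based representation and all names are illustrative, not from the source):

```python
def regional_matching(clusters, centers, dist, vertices, ell):
    """Sketch of steps 3-5: `clusters` is a list of vertex sets coarsening
    the ell-neighborhood cover, `centers[i]` is the chosen center r_0 of
    clusters[i], and `dist` is the distance function of G.
    Returns the Read/Write sets of the regional matching."""
    read, write = {}, {}
    for v in vertices:
        ball = {u for u in vertices if dist(v, u) <= ell}  # Gamma_ell(v)
        # Read(v): centers of all clusters containing v
        read[v] = {centers[i] for i, T in enumerate(clusters) if v in T}
        # Write(v): center of some cluster T_v fully containing Gamma_ell(v)
        home = next(i for i, T in enumerate(clusters) if ball <= T)
        write[v] = {centers[home]}
    return read, write
```

On a 4-vertex path with l = 1 and the cover {0,1,2}, {1,2,3} (centers 1 and 2), every pair u, v at distance ≤ 1 satisfies Write(v) ∩ Read(u) ≠ ∅, as the claim below proves in general.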
Analysis
Claim: the resulting RW_{l,k} is an l-regional matching.
Proof: Consider u, v such that dist(u, v) ≤ l
Let T_v be the cluster s.t. Write(v) = {r_0(T_v)}
Analysis (cont)
By definition, u ∈ Γ_l(v). Also Γ_l(v) ⊆ T_v, hence u ∈ T_v, so r_0(T_v) ∈ Read(u), and therefore
Read(u) ∩ Write(v) ≠ ∅
Analysis (cont)
Thm: For every graph G(V, E, w) and l, k ≥ 1, there is an l-regional matching RW_{l,k} with
Dread(RW_{l,k}) ≤ 2k·n^{1/k}
Dwrite(RW_{l,k}) = 1
Strread(RW_{l,k}) ≤ 2k+1
Strwrite(RW_{l,k}) ≤ 2k+1
Analysis (cont)
Taking k = log n we get
Corollary: For every graph G(V, E, w) and l ≥ 1, there is an l-regional matching RW_l with
• Dread(RW_l) = O(log n)
• Dwrite(RW_l) = 1
• Strread(RW_l) = O(log n)
• Strwrite(RW_l) = O(log n)