Polylogarithmic Approximation for Edit Distance and the Asymmetric Query Complexity

Polylogarithmic Approximation for Edit Distance (and the Asymmetric Query Complexity)
Robert Krauthgamer [Weizmann Institute]
Joint with: Alexandr Andoni [Microsoft SVC] and Krzysztof Onak [CMU]

Edit Distance (Levenshtein distance)
Given two strings x, y of length n:
ed(x, y) = minimum number of character operations (insertion / deletion / substitution) that transform x into y.
Example: ed(banana, ananas) = 2 (delete the leading 'b', then append 's').
Applications:
• Computational biology
• Text processing
• Web search

Basic task
• Compute ed(x, y) for input x, y of length n
  - O(n²) time [WF'74]
  - Faster algorithms?
• Dynamic program: D(i, j) = ed(x[1:i], y[1:j])
  D(i, j) = D(i-1, j-1) if x[i] = y[j], and otherwise
  D(i, j) = 1 + min{ D(i-1, j-1), D(i-1, j), D(i, j-1) }
  [Figure: the filled DP table D(i, j) for the banana / ananas example]
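
For concreteness, here is a minimal Python sketch of this classical quadratic-time dynamic program (an illustration added to this transcript, not part of the original slides):

```python
def edit_distance(x: str, y: str) -> int:
    """Compute ed(x, y) with the Wagner-Fischer dynamic program [WF'74]."""
    n, m = len(x), len(y)
    # D[i][j] = ed(x[:i], y[:j]); row 0 / column 0 handle empty prefixes.
    D = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        D[i][0] = i
    for j in range(m + 1):
        D[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if x[i - 1] == y[j - 1]:
                D[i][j] = D[i - 1][j - 1]
            else:
                D[i][j] = 1 + min(D[i - 1][j - 1],  # substitution
                                  D[i - 1][j],      # deletion from x
                                  D[i][j - 1])      # insertion into x
    return D[n][m]

print(edit_distance("banana", "ananas"))  # 2, matching the example above
```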

Faster Algorithms?
• Compute ed(x, y) for given x, y of length n
  - O(n²) time [WF'74]
  - O(n²/log² n) time [MP'80]
• Linear time (or near-linear)?
  - Specific cases (average, smoothed, restricted input) and variants (block edit distance, etc.) [U'83, LV'85, M'86, GG'88, GP'89, UW'90, CL'90, CH'98, LMS'98, U'85, CL'92, N'99, CPSV'00, MS'00, CM'02, AK'08, BF'08, …]
  - 2^O(√log n) approximation [OR'05, AO'09], improving earlier n^c approximation [BEKMRRS'03, BJKK'04, BES'06]
• Same "barrier" of 2^O(√log n) approximation also for related tasks:
  - Nearest neighbor search (text indexing), embedding into normed spaces, sketching [OR'05]

Results I
• Theorem 1: Can approximate ed(x, y) within a (log n)^O(1/ε) factor in time n^(1+ε) (for any ε > 0).
• Exponential improvement over the previous factor of 2^O(√log n).
• Fallout from the study of the asymmetric query model…

Approach: asymmetric query model
• "Compress" one string, x, to n^ε information
  - then use dynamic programming to compute ed(x, y) in n^(1+ε) time
• How to compress?
  - Carefully subsample x…
• Focus on the sample size (number of queried positions) in x, for fixed y
  - Obtain near-tight bounds
  [Figure: y compared against a sparse set of sampled positions of x]

Results II: Asymmetric Query Complexity
• Problem: decide ed(x, y) ≥ n/10 vs. ed(x, y) ≤ n/A
• Complexity = # queries into x (unlimited access to y)

  Approximation A                 # queries (upper bound)    # queries (lower bound)
  (log n)^O(1/ε)                  O(n^ε)                     Ω(n^(ε/log log n))
  [n^(1/(t+1)), n^(1/t-ε)]        O(log^t n)                 Ω(log^t n)

  [Plot: # queries (Θ(log n), Θ(log² n), Θ(log³ n), …) as a function of the approximation factor A, with thresholds at A = n^(1/(t+1)), n^(1/t-ε), …, n^(1/4), n^(1/3), n^(1/2-ε), n^(1/2), …, n^(1-ε)]

Upper bound
• Theorem 2: can distinguish ed(x, y) ≥ n/10 vs. ed(x, y) ≤ n/A for approximation A = (log n)^O(1/ε) with n^ε queries into x (for any ε > 0).
• Proof structure:
  1. Characterize edit distance by a "tree distance" T_xy
     - parameter b ≥ 2 (degree)
     - T_xy ≈ ed(x, y) up to a 6b·log_b n factor
  2. Prune the tree to subsample x
  [Figure: a degree-b tree over x = x_1 x_2 … x_n, with the sampled positions of x at its leaves]

Step 1: Tree distance
• Partition x into b blocks, recursively, for h = log_b n levels
  [Figure, drawn for b = 3: the root x[1:n] splits into x[1:n/3], x[n/3:2n/3], x[2n/3:n], and so on down to single characters x[1], x[2], x[3], …; a block x[u:u+n/3] is compared against a like-sized block y[u:u+n/3] of y[1:n]]
• T_i(s, u) = T-distance between x[s:s+ℓ_i] and y[u:u+ℓ_i], where ℓ_i is the block length at level i

Tree distance: recursive definition
• Recall: T_i(s, u) = distance between x[s:s+ℓ_i] and y[u:u+ℓ_i]
• Base case: T_h(s, u) = Hamming(x[s], y[u])
• Output: T_xy = T_0(s=1, u=1)
  [Figure: the block x[s:s+ℓ_i] inside x aligned against y[u:u+ℓ_i] inside y, with a child block matched at some shift r from its home position]
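
The recursive case of T_i is only drawn on the slide. The Python sketch below assumes one natural reading, consistent with the additive recurrence A(n) = A(n/b) + b on the next slide: the T-distance of a block of x is the sum, over its b child blocks, of the cheapest "child T-distance at shift r, plus |r|" over all shifts r. Treat it as an illustration, not the paper's exact definition.

```python
from functools import lru_cache

def tree_distance(x: str, y: str, b: int) -> int:
    """Exact T-distance between x and y (both of length n, n a power of b)."""
    n = len(x)

    @lru_cache(maxsize=None)
    def T(s: int, u: int, length: int) -> int:
        # Base case (level h): Hamming distance on single characters;
        # out-of-range positions in y count as mismatches.
        if length == 1:
            return 0 if 0 <= u < n and x[s] == y[u] else 1
        child_len = length // b          # assume b divides the block length
        total = 0
        for j in range(b):
            cs = s + j * child_len       # j-th child block of x
            cu = u + j * child_len       # its "home" position inside y's block
            # Charge this child the best shift r: child distance plus |r|.
            total += min(T(cs, cu + r, child_len) + abs(r)
                         for r in range(-child_len, child_len + 1))
        return total

    return T(0, 0, n)

# Toy usage (n = 9, b = 3):
print(tree_distance("abcdefghi", "abcdefghi", 3))  # 0
print(tree_distance("abcdefghi", "abcdefghX", 3))  # 1, only the last character differs
```

With memoization over (s, u, block length), this is roughly the quadratic-time exact computation referred to two slides below.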

T-distance approximates edit distance
• Lemma: T_xy ≈ ed(x, y) up to a 6b·log_b n factor.
• Hierarchical decomposition inspired by earlier approaches [BEKMRRS'03, OR'05]
  - All had an approximation recurrence of the type A(n) = c·A(n/b) + b for c ≥ 2
  - Solves to a factor A(n) ≥ 2^(√log n) for every choice of b
• Our characterization has no multiplicative loss (c = 1):
  - A(n) = A(n/b) + b
  - Analysis inspired by algorithms for smoothed edit distance [AK'08]
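
The following short derivation (added here; the slide only states the recurrences and their solutions) spells out why any c ≥ 2 forces a 2^Ω(√log n) factor while c = 1 stays polylogarithmic:

```latex
% Unrolling both recurrences over the h = log_b n levels
% (arithmetic the slide leaves implicit).
\begin{align*}
\text{Earlier approaches } (c \ge 2):\quad
A(n) &= c\,A(n/b) + b \;\ge\; c^{h} = c^{\log_b n} = 2^{\frac{\log c\,\log n}{\log b}},
\qquad h = \log_b n,\\
A(n) &\ge \max\!\Big(b,\; 2^{\frac{\log c\,\log n}{\log b}}\Big)
\;\ge\; 2^{\Omega(\sqrt{\log n})}
\quad\text{for every choice of } b
\quad(\text{balanced at } \log b \approx \sqrt{\log c\cdot\log n}).\\[6pt]
\text{This work } (c = 1):\quad
A(n) &= A(n/b) + b \;=\; b\cdot\log_b n
\;\overset{b=(\log n)^{1/\varepsilon}}{=}\;
(\log n)^{1/\varepsilon}\cdot\frac{\varepsilon\log n}{\log\log n}
\;=\; (\log n)^{O(1/\varepsilon)}.
\end{align*}
```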

Step 2: Compute the tree distance
• For b = 2, the T-distance gives an O(log n) approximation!
  - BUT we only know how to compute the T-distance in O(n²) time
• Instead, for b = (log n)^(1/ε), can prune the tree to n^O(ε) nodes and get a (1+ε) approximation
• Pruning: subsample (log n)^O(1) children out of each node
  - Works only when ed(x, y) ≥ Ω(n)
  - In general, must subsample the tree non-uniformly, using the Precision Sampling Lemma
  [Figure: a degree-b tree with the sampled positions in x at its leaves]

Key tool: non-uniform sampling
• Goal:
  - For unknown a_1, a_2, …, a_n ∈ [0, 1]
  - Estimate their sum, up to an additive constant error
  - Using only "weak" estimates ã_1, ã_2, …, ã_n
• Game between a Sum Estimator and an Adversary:
  0. Estimator: fix a distribution U
  1. Adversary: fix a_1, a_2, …, a_n (unknown to the estimator)
  2. Estimator: pick "precisions" u_i (our algorithm: u_i ~ U, i.i.d.)
  3. Adversary: provide ã_1, ã_2, …, ã_n s.t. |a_i − ã_i| < 1/u_i
  4. Estimator: report S̃ = S̃(ã_1, …, u_1, …) with |S̃ − ∑ a_i| ≤ O(1)

Precision Sampling
• Goal: estimate ∑ a_i from {ã_i} s.t. |a_i − ã_i| < 1/u_i.
• Precision Sampling Lemma: can achieve, WHP,
  - additive error 1 and multiplicative error 1.5,
  - with expected precision E_{u_i~U}[u_i] = O(log n).
• Inspired by a technique from [IW'05] for streaming (F_k moments)
  - In fact, PSL gives simple & improved algorithms for F_k moments, cascaded (mixed) norms, and ℓ_p-sampling problems [AKO'10]
• Also a distant relative of Priority Sampling [DLT'07]

Precision Sampling for Edit Distance
• Apply Precision Sampling to the tree from the characterization, recursively at each node
• If a node has very weak precision, the entire sub-tree below it can be trimmed

Lower Bound Theorem
• Theorem 3: achieving approximation A = O(log^7 n) for edit distance requires asymmetric query complexity n^Ω(1/log log n).
  - I.e., distinguishing ed(x, y) > n/10 vs. ed(x, y) < n/(10A)
Implications:
• First lower bound to expose the hardness coming from repetitiveness in edit distance
• Contrast with edit distance on non-repetitive strings (Ulam's distance):
  - Empirically easier (better algorithms are known for it)
  - Yet all previous lower bounds are essentially equivalent for the two variants [BEKMRRS'03, AN'10, KN'05, KR'06, AK'07, AJP'10]
• But the asymmetric query complexity separates them:
  - Ulam: 2-approximation with O(log n) queries [ACCL'04, SS'10]
  - Edit: requires n^Ω(1/log log n) queries

Lower Bound Techniques
• Core gadget: σ(·) = cyclic shift operation
  - Observation: ed(x, σ^j(x)) ≤ 2j
• Lower bound outline:
  - Exhibit a lower bound via shifts
  - Amplification by "composing" the hard instance recursively
• We will see here:
  - Theorem 4: the asymmetric query complexity of approximating edit distance within n^(1/2) is Ω(log² n)

The Shift Gadget
• Lemma: Ω(log n) query lower bound for approximation A = n^0.5.
• Hard distribution (x, y):
  - Fix specific z_1, z_2 ∈ {0, 1}^n (random-looking)
  - Set y = z_1 (pictured as 00101), and either
      x = σ^j(z_1)  ⇒  ed(x, y) ≤ 2n^0.5   [close], or
      x = σ^j(z_2)  ⇒  ed(x, y) ≥ n/10     [far]  (z_2 pictured as 01101)
  - Formally: y = z_1 and x = σ^j(z_1 or z_2), for a random shift j ∈ [n^0.5]
• An algorithm is a set of queried positions Q ⊆ [n]; suppose |Q| << log n
  - It "reads" (z_1 or z_2) at positions Q + j
• Claim: both z_1|_(Q+j) and z_2|_(Q+j) are close to the uniform distribution on {0, 1}^|Q|
  - up to ~2^|Q|/n^0.5 statistical distance
• Hence |Q| ≥ Ω(log n), even for approximation A = n^0.99
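
A small Python simulation of the gadget (my illustration, not from the deck): a cyclic shift by j costs at most 2j edit operations, while a shifted independent random string stays at distance Ω(n) with high probability.

```python
import random

def edit_distance(x: str, y: str) -> int:
    # Wagner-Fischer DP (space-optimized variant of the earlier sketch).
    n, m = len(x), len(y)
    D = list(range(m + 1))
    for i in range(1, n + 1):
        prev, D[0] = D[0], i
        for j in range(1, m + 1):
            cur = D[j]
            D[j] = prev if x[i-1] == y[j-1] else 1 + min(prev, D[j-1], D[j])
            prev = cur
    return D[m]

def shift(s: str, j: int) -> str:
    return s[j:] + s[:j]                  # cyclic shift by j positions

random.seed(0)
n = 400
z1 = ''.join(random.choice('01') for _ in range(n))   # "random-looking" z_1
z2 = ''.join(random.choice('01') for _ in range(n))   # independent z_2
j = random.randrange(1, int(n ** 0.5) + 1)            # random shift j in [sqrt(n)]
y = z1

print(edit_distance(shift(z1, j), y), "<= 2j =", 2 * j)        # close case
print(edit_distance(shift(z2, j), y), ">= n/10 =", n // 10)    # far case (whp for random z_2)
```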

Amplification via Substitution Product
• Ω(log² n) lower bound by amplification: "compose" two shift instances
• Hard distribution (x, y):
  - Fix z_1, z_2 ∈ {0, 1}^√n and w_0, w_1 ∈ {0, 1}^√n, and set y = z_1 ⊗ (w_0, w_1) (the substitution product: each bit z_1[i] is replaced by the block w_{z_1[i]})
  - Choose either z = z_1 (close) or z = z_2 (far)
  - x = z ⊗ (w_0, w_1), but with random shifts j ∈ [n^(1/3)] inside each block and between blocks
• Intuition: must distinguish z = z_1 from z = z_2
  - Must "learn" Ω(log n) positions i of z, and each requires reading Ω(log n) further positions in the corresponding block w_{z[i]}
  [Example from the slide: z_1 = 00101, w_0 = 11011, w_1 = 00111, so x is built from the blocks 11011 11011 00111 11011 00111]
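
A minimal Python sketch of the substitution product itself (the ⊗ notation above and this code are my reconstruction; the slide shows the operation only pictorially). The random shifts inside and between blocks, which drive the lower bound, are omitted here for clarity.

```python
def substitution_product(z: str, w0: str, w1: str) -> str:
    """Replace each bit z[i] of the outer string by the inner block w_{z[i]}."""
    return ''.join(w1 if bit == '1' else w0 for bit in z)

z1, w0, w1 = "00101", "11011", "00111"
y = substitution_product(z1, w0, w1)
print(y)  # 1101111011001111101100111  (blocks: 11011 11011 00111 11011 00111)
```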

Towards the Full Theorem
• For the full theorem: recursive composition
• Proof overview:
  1. Define α-similarity of k distributions
  2. α-similarity ⇒ query lower bound of 1/α (α ≈ information per query; holds for adaptive algorithms)
  3. The initial "shift metric" has high α-similarity (induction basis)
  4. α-similarity is amplified under the substitution product (inductive step)
  5. Prove that edit distance concentrates well (requires a large alphabet)
  6. Can reduce the large alphabet to binary (lossy, but done only once)

Conclusion
• We compute ed(x, y) up to a (log n)^O(1/ε) approximation in n^(1+ε) time
  - Via asymmetric query complexity (a new model)
Open questions:
• Go faster / find limitations:
  - E.g., O(log² n) approximation in n^(1+o(1)) time?
• Use these insights for related problems:
  - Nearest neighbor search?
  - Sublinear-time algorithms (symmetric queries)?
  - Embeddings? Communication complexity?
Further thoughts:
• Practical ramifications?
• The asymmetric queries model itself?
• A paradigm for "fast dynamic programming"?