Chapter 11 Approximation Algorithms
Slides by Kevin Wayne. Copyright © 2005 Pearson-Addison Wesley. All rights reserved.
Approximation Algorithms

Q. Suppose I need to solve an NP-hard problem. What should I do?
A. Theory says you're unlikely to find a poly-time algorithm. You must sacrifice one of three desired features:
- Solve the problem to optimality.
- Solve the problem in poly-time.
- Solve arbitrary instances of the problem.

ρ-approximation algorithm:
- Guaranteed to run in poly-time.
- Guaranteed to solve arbitrary instances of the problem.
- Guaranteed to find a solution within a ratio ρ of the true optimum.

Challenge. Need to prove a solution's value is close to the optimum, without even knowing what the optimum value is!
11.1 Load Balancing
Load Balancing

Input. m identical machines; n jobs, job j has processing time t_j.
- Job j must run contiguously on one machine.
- A machine can process at most one job at a time.

Def. Let J(i) be the subset of jobs assigned to machine i. The load of machine i is L_i = Σ_{j ∈ J(i)} t_j.
Def. The makespan is the maximum load on any machine: L = max_i L_i.

Load balancing. Assign each job to a machine to minimize the makespan.
Load Balancing: List Scheduling

List-scheduling algorithm. Consider the n jobs in some fixed order. Assign job j to the machine whose load is smallest so far.

List-Scheduling(m, n, t_1, t_2, …, t_n) {
    for i = 1 to m {
        L_i ← 0                  // load on machine i
        J(i) ← ∅                 // jobs assigned to machine i
    }
    for j = 1 to n {
        i = argmin_k L_k         // machine i has smallest load
        J(i) ← J(i) ∪ {j}        // assign job j to machine i
        L_i ← L_i + t_j          // update load of machine i
    }
}

Implementation. O(n log n) using a priority queue.
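The list-scheduling pseudocode above translates directly into Python using a heap as the priority queue. This is a minimal sketch (the function name and return shape are ours, not from the slides):

```python
import heapq

def list_schedule(m, times):
    """Greedy list scheduling: assign each job, in the given order,
    to the machine whose current load is smallest."""
    heap = [(0, i) for i in range(m)]   # (load, machine index)
    heapq.heapify(heap)
    assignment = [[] for _ in range(m)]
    for j, t in enumerate(times):
        load, i = heapq.heappop(heap)   # machine with smallest load
        assignment[i].append(j)
        heapq.heappush(heap, (load + t, i))
    makespan = max(load for load, _ in heap)
    return makespan, assignment
```

On the tight instance discussed a few slides below (m = 10, ninety jobs of length 1 followed by one job of length 10), this returns makespan 19 while the optimum is 10.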
Load Balancing: List Scheduling Analysis

Theorem. [Graham, 1966] Greedy algorithm is a 2-approximation.
- First worst-case analysis of an approximation algorithm.
- Need to compare the resulting solution with the optimal makespan L*.

Lemma 1. The optimal makespan L* ≥ max_j t_j.
Pf. Some machine must process the most time-consuming job. ▪

Lemma 2. The optimal makespan L* ≥ (1/m) Σ_j t_j.
Pf. The total processing time is Σ_j t_j. One of the m machines must do at least a 1/m fraction of the total work. ▪
Load Balancing: List Scheduling Analysis

Theorem. Greedy algorithm is a 2-approximation.
Pf. Consider the load L_i of the bottleneck machine i.
- Let j be the last job scheduled on machine i.
- When job j was assigned to machine i, machine i had the smallest load. Its load before the assignment was L_i - t_j, so L_i - t_j ≤ L_k for all 1 ≤ k ≤ m.

[Figure: machine i's schedule; the jobs scheduled before j bring its load to L_i - t_j, and job j brings it to L = L_i.]

- Sum these inequalities over all k and divide by m:
      L_i - t_j ≤ (1/m) Σ_k L_k = (1/m) Σ_j t_j ≤ L*    (by Lemma 2)
- Now L_i = (L_i - t_j) + t_j ≤ L* + L* = 2 L*, since t_j ≤ L* by Lemma 1. ▪
Load Balancing: List Scheduling Analysis

Q. Is our analysis tight?
A. Essentially yes.

Ex: m machines, m(m-1) jobs of length 1, plus one job of length m.

[Figure: m = 10. List scheduling spreads the unit jobs evenly (9 per machine), then places the length-10 job on machine 1; machines 2 through 10 are idle at the end. List-scheduling makespan = 19.]
Load Balancing: List Scheduling Analysis

Q. Is our analysis tight?
A. Essentially yes.

Ex: m machines, m(m-1) jobs of length 1, plus one job of length m.

[Figure: m = 10. Optimal schedule: the length-10 job on its own machine, and 10 unit jobs on each of the other 9 machines. Optimal makespan = 10.]
Load Balancing: LPT Rule

Longest processing time (LPT). Sort the n jobs in descending order of processing time, then run the list-scheduling algorithm.

LPT-List-Scheduling(m, n, t_1, t_2, …, t_n) {
    Sort jobs so that t_1 ≥ t_2 ≥ … ≥ t_n
    for i = 1 to m {
        L_i ← 0                  // load on machine i
        J(i) ← ∅                 // jobs assigned to machine i
    }
    for j = 1 to n {
        i = argmin_k L_k         // machine i has smallest load
        J(i) ← J(i) ∪ {j}        // assign job j to machine i
        L_i ← L_i + t_j          // update load of machine i
    }
}
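The LPT rule is a one-line change on top of list scheduling. A minimal sketch (helper names are ours); note that sorting first fixes the bad instance for plain list scheduling:

```python
import heapq

def list_schedule(m, times):
    """List scheduling; returns only the makespan."""
    heap = [(0, i) for i in range(m)]   # (load, machine index)
    heapq.heapify(heap)
    for t in times:
        load, i = heapq.heappop(heap)
        heapq.heappush(heap, (load + t, i))
    return max(load for load, _ in heap)

def lpt_schedule(m, times):
    """LPT rule: sort jobs in descending order, then list-schedule."""
    return list_schedule(m, sorted(times, reverse=True))
```

On the tight instance for list scheduling (m = 10, ninety unit jobs plus one job of length 10), list scheduling yields makespan 19, while LPT places the long job first and achieves the optimal makespan 10.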
Load Balancing: LPT Rule

Observation. If there are at most m jobs, then list scheduling is optimal.
Pf. Each job is put on its own machine. ▪

Lemma 3. If there are more than m jobs, L* ≥ 2 t_{m+1}.
Pf. Consider the first m+1 jobs t_1, …, t_{m+1}.
- Since the t_i's are in descending order, each takes at least t_{m+1} time.
- There are m+1 jobs and m machines, so by the pigeonhole principle, at least one machine gets two of them. ▪

Theorem. LPT rule is a 3/2-approximation algorithm.
Pf. Same basic approach as for list scheduling; by the observation, we can assume the number of jobs exceeds m. The last job j on the bottleneck machine satisfies j ≥ m+1, so t_j ≤ t_{m+1} ≤ ½ L* by Lemma 3, and L_i = (L_i - t_j) + t_j ≤ L* + ½ L* = (3/2) L*. ▪
Load Balancing: LPT Rule

Q. Is our 3/2 analysis tight?
A. No.

Theorem. [Graham, 1969] LPT rule is a 4/3-approximation.
Pf. More sophisticated analysis of the same algorithm.

Q. Is Graham's 4/3 analysis tight?
A. Essentially yes.

Ex: m machines, n = 2m+1 jobs: two jobs each of length m+1, m+2, …, 2m-1, and three jobs of length m.
11.2 Center Selection
Center Selection Problem

Input. Set of n sites s_1, …, s_n.
Center selection problem. Select k centers C so that the maximum distance from a site to its nearest center is minimized.

[Figure: k = 4 centers covering the sites within radius r(C).]
Center Selection Problem

Input. Set of n sites s_1, …, s_n.
Center selection problem. Select k centers C so that the maximum distance from a site to its nearest center is minimized.

Notation.
- dist(x, y) = distance between x and y.
- dist(s_i, C) = min_{c ∈ C} dist(s_i, c) = distance from s_i to the closest center.
- r(C) = max_i dist(s_i, C) = smallest covering radius.

Goal. Find a set of centers C that minimizes r(C), subject to |C| = k.

Distance function properties.
- dist(x, x) = 0                          (identity)
- dist(x, y) = dist(y, x)                 (symmetry)
- dist(x, y) ≤ dist(x, z) + dist(z, y)    (triangle inequality)
Center Selection Example

Ex: each site is a point in the plane, a center can be any point in the plane, and dist(x, y) = Euclidean distance.

Remark: the search space can be infinite!

[Figure: sites in the plane with centers and covering radius r(C).]
Greedy Algorithm: A False Start

Greedy algorithm. Put the first center at the best possible location for a single center, then keep adding centers so as to reduce the covering radius as much as possible each time.

Remark: arbitrarily bad!

[Figure: k = 2 centers; the greedy algorithm places its first center midway between two clusters of sites, which can be arbitrarily bad.]
Center Selection: Greedy Algorithm

Greedy algorithm. Repeatedly choose the next center to be the site farthest from any existing center.

Greedy-Center-Selection(k, n, s_1, s_2, …, s_n) {
    Select any site s and let C = {s}
    repeat k-1 times {
        Select a site s_i with maximum dist(s_i, C)   // site farthest from any center
        Add s_i to C
    }
    return C
}

Observation. Upon termination, all centers in C are pairwise at least r(C) apart.
Pf. By construction of the algorithm.
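The farthest-point greedy algorithm above is a few lines of Python. A minimal sketch (function names are ours; `dist` is any metric passed in by the caller):

```python
def greedy_centers(k, sites, dist):
    """Farthest-point greedy: start from an arbitrary site, then
    repeatedly add the site farthest from the chosen centers."""
    centers = [sites[0]]
    for _ in range(k - 1):
        # site with maximum distance to its nearest chosen center
        farthest = max(sites, key=lambda s: min(dist(s, c) for c in centers))
        centers.append(farthest)
    return centers

def covering_radius(sites, centers, dist):
    """r(C) = max over sites of the distance to the nearest center."""
    return max(min(dist(s, c) for c in centers) for s in sites)
```

For example, on the sites 0, 1, 10, 11 of the real line with k = 2, the algorithm picks 0 and then 11, giving covering radius 1.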
Center Selection: Analysis of Greedy Algorithm

Theorem. Let C* be an optimal set of centers. Then r(C) ≤ 2 r(C*).
Pf. (by contradiction) Assume r(C*) < ½ r(C).
- For each site c_i in C, consider the ball of radius ½ r(C) around it.
- Exactly one optimal center c_i* lies in each ball; let c_i be the site paired with c_i*.
- Consider any site s and its closest center c_i* in C*.
- dist(s, C) ≤ dist(s, c_i) ≤ dist(s, c_i*) + dist(c_i*, c_i) ≤ 2 r(C*),
  using the triangle inequality, dist(s, c_i*) ≤ r(C*) since c_i* is s's closest optimal center, and dist(c_i*, c_i) ≤ r(C*).
- Thus r(C) ≤ 2 r(C*) < r(C), a contradiction. ▪

[Figure: disjoint balls of radius ½ r(C) around the greedy centers c_i, each containing exactly one optimal center c_i*.]
Center Selection

Theorem. Let C* be an optimal set of centers. Then r(C) ≤ 2 r(C*).
Theorem. The greedy algorithm is a 2-approximation for the center selection problem.

Remark. The greedy algorithm always places centers at sites, but is still within a factor of 2 of the best solution that is allowed to place centers anywhere (e.g., at arbitrary points in the plane).

Question. Is there hope of a 3/2-approximation? 4/3?
Theorem. Unless P = NP, there is no ρ-approximation for the center selection problem for any ρ < 2.
11.6 LP Rounding: Vertex Cover
Weighted Vertex Cover

Weighted vertex cover. Given an undirected graph G = (V, E) with vertex weights w_i ≥ 0, find a minimum-weight subset of nodes S such that every edge is incident to at least one vertex in S.

[Figure: example graph on vertices A through J with vertex weights; the highlighted cover has total weight 55.]
Weighted Vertex Cover: IP Formulation

Weighted vertex cover. Given an undirected graph G = (V, E) with vertex weights w_i ≥ 0, find a minimum-weight subset of nodes S such that every edge is incident to at least one vertex in S.

Integer programming formulation.
- Model inclusion of each vertex i using a 0/1 variable x_i. Vertex covers are in 1-1 correspondence with 0/1 assignments: S = {i ∈ V : x_i = 1}.
- Objective function: minimize Σ_i w_i x_i.
- Must take either i or j: x_i + x_j ≥ 1.
Weighted Vertex Cover: IP Formulation

Weighted vertex cover. Integer programming formulation:

(ILP)  min  Σ_{i ∈ V} w_i x_i
       s.t. x_i + x_j ≥ 1     for each (i, j) ∈ E
            x_i ∈ {0, 1}      for each i ∈ V

Observation. If x* is an optimal solution to (ILP), then S = {i ∈ V : x*_i = 1} is a minimum-weight vertex cover.
Integer Programming

INTEGER-PROGRAMMING. Given integers a_ij and b_i, find integers x_j that satisfy:

    Σ_j a_ij x_j ≥ b_i          for 1 ≤ i ≤ m
    x_j ≥ 0, x_j integral       for 1 ≤ j ≤ n

Observation. The vertex cover formulation proves that integer programming is an NP-hard search problem, even if all coefficients are 0/1 and at most two variables appear per inequality.
Linear Programming

Linear programming. Max/min a linear objective function subject to linear inequalities.
- Input: integers c_j, b_i, a_ij.
- Output: real numbers x_j.

Linear. No x², xy, arccos(x), x(1-x), etc.

Simplex algorithm. [Dantzig 1947] Can solve LP in practice.
Ellipsoid algorithm. [Khachian 1979] Can solve LP in poly-time.
LP Feasible Region

LP geometry in 2D.

[Figure: feasible region bounded by x_1 = 0, x_2 = 0, 2x_1 + x_2 = 6, and x_1 + 2x_2 = 6.]
Weighted Vertex Cover: LP Relaxation

Weighted vertex cover. Linear programming formulation:

(LP)  min  Σ_{i ∈ V} w_i x_i
      s.t. x_i + x_j ≥ 1     for each (i, j) ∈ E
           x_i ≥ 0           for each i ∈ V

Observation. The optimal value of (LP) is ≤ the optimal value of (ILP).
Pf. The LP has fewer constraints.

Note. The LP is not equivalent to vertex cover.
[Figure: a triangle with x_i = ½ at each vertex is LP-feasible with value 3/2, yet any integral cover must take two vertices.]

Q. How can solving the LP help us find a small vertex cover?
A. Solve the LP and round the fractional values.
Weighted Vertex Cover

Theorem. If x* is an optimal solution to (LP), then S = {i ∈ V : x*_i ≥ ½} is a vertex cover whose weight is at most twice the minimum possible weight.

Pf. [S is a vertex cover]
- Consider an edge (i, j) ∈ E.
- Since x*_i + x*_j ≥ 1, either x*_i ≥ ½ or x*_j ≥ ½, so edge (i, j) is covered.

Pf. [S has the desired cost]
- Let S* be an optimal vertex cover. Then
      Σ_{i ∈ S*} w_i ≥ Σ_{i ∈ V} w_i x*_i ≥ ½ Σ_{i ∈ S} w_i,
  where the first inequality holds because the LP is a relaxation and the second because x*_i ≥ ½ for all i ∈ S. ▪
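The rounding step itself is trivial once an LP solution is in hand. A minimal sketch (function name is ours; `x_star` is any feasible LP solution, e.g. from an off-the-shelf LP solver):

```python
def round_vertex_cover(edges, weights, x_star):
    """LP rounding for weighted vertex cover: keep every vertex whose
    fractional value is at least 1/2."""
    S = {i for i, x in enumerate(x_star) if x >= 0.5}
    # every edge is covered: x_i + x_j >= 1 forces at least one endpoint >= 1/2
    assert all(i in S or j in S for i, j in edges)
    return S, sum(weights[i] for i in S)
```

On the triangle with the all-½ LP solution, rounding keeps all three vertices (weight 3), which is exactly twice the LP value of 3/2, matching the factor-2 guarantee.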
Weighted Vertex Cover and Bipartite Vertex Cover

Weighted vertex cover. Linear programming formulation.

Observation. Replace each variable x_i by two variables via x_i ← (x_i^L + x_i^R)/2, and replace each constraint x_i + x_j ≥ 1 by
    x_i^L + x_j^R ≥ 1
    x_i^R + x_j^L ≥ 1

The new LP represents vertex cover in a bipartite graph (which can be solved integrally using network flow algorithms). The LP solution therefore gives a half-integral solution: x_i ∈ {0, ½, 1}.
Weighted Vertex Cover

Theorem. The LP-rounding algorithm is a 2-approximation for weighted vertex cover.

Theorem. [Dinur-Safra 2001] If P ≠ NP, then there is no ρ-approximation for ρ < 10√5 - 21 ≈ 1.3607, even with unit weights.

Open research problem. Close the gap.
11.8 Knapsack Problem
Polynomial Time Approximation Scheme

PTAS. (1 + ε)-approximation algorithm for any constant ε > 0.
- Load balancing. [Hochbaum-Shmoys 1987]
- Euclidean TSP. [Arora 1996]

Consequence. A PTAS produces an arbitrarily high-quality solution, but trades off accuracy for time.

This section. PTAS for the knapsack problem via rounding and scaling.
Knapsack Problem

Knapsack problem. Given n objects and a "knapsack":
- Item i has value v_i > 0 and weighs w_i > 0.  (We'll assume w_i ≤ W.)
- The knapsack can carry weight up to W.
- Goal: fill the knapsack so as to maximize the total value.

Ex: { 3, 4 } has value 40.

W = 11
Item   Value   Weight
 1       1       1
 2       6       2
 3      18       5
 4      22       6
 5      28       7
Knapsack is NP-Complete

KNAPSACK. Given a finite set X, nonnegative weights w_i, nonnegative values v_i, a weight limit W, and a target value V, is there a subset S ⊆ X such that
    Σ_{i ∈ S} w_i ≤ W  and  Σ_{i ∈ S} v_i ≥ V ?

SUBSET-SUM. Given a finite set X, nonnegative values u_i, and an integer U, is there a subset S ⊆ X whose elements sum to exactly U?

Claim. SUBSET-SUM ≤_P KNAPSACK.
Pf. Given an instance (u_1, …, u_n, U) of SUBSET-SUM, create a KNAPSACK instance with v_i = w_i = u_i and V = W = U; then Σ_{i ∈ S} u_i ≤ U and Σ_{i ∈ S} u_i ≥ U together force the sum to equal U exactly.
Knapsack Problem: Dynamic Programming I

Def. OPT(i, w) = max value of a subset of items 1, …, i with weight limit w.
- Case 1: OPT does not select item i.
  - OPT selects the best of 1, …, i-1 using up to weight limit w.
- Case 2: OPT selects item i.
  - New weight limit = w - w_i.
  - OPT selects the best of 1, …, i-1 using up to weight limit w - w_i.

Running time. O(n W), where W = weight limit. Not polynomial in input size!
Knapsack Problem: Dynamic Programming II

Def. OPT(i, v) = min weight of a subset of items 1, …, i that yields value exactly v.
- Case 1: OPT does not select item i.
  - OPT selects the best of 1, …, i-1 that achieves exactly value v.
- Case 2: OPT selects item i.
  - Consumes weight w_i; new value needed = v - v_i.
  - OPT selects the best of 1, …, i-1 that achieves exactly value v - v_i.

Running time. O(n V*) = O(n² v_max), where V* = optimal value = maximum v such that OPT(n, v) ≤ W, and V* ≤ n v_max. Not polynomial in input size!
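Dynamic program II can be sketched in Python with a one-dimensional table indexed by value. This is a minimal sketch (the function name and the crude value bound V = Σ v_i are ours):

```python
def knapsack_best_value(values, weights, W):
    """Dynamic programming II: opt[v] = minimum weight of a subset of
    the items seen so far whose total value is exactly v.  Returns the
    best achievable total value within weight limit W."""
    INF = float('inf')
    V = sum(values)                     # upper bound on achievable value
    opt = [INF] * (V + 1)
    opt[0] = 0
    for v_i, w_i in zip(values, weights):
        # iterate values downward so each item is used at most once
        for v in range(V, v_i - 1, -1):
            if opt[v - v_i] + w_i < opt[v]:
                opt[v] = opt[v - v_i] + w_i
    return max(v for v in range(V + 1) if opt[v] <= W)
```

On the example instance from the earlier slide (values 1, 6, 18, 22, 28; weights 1, 2, 5, 6, 7; W = 11), this returns 40, achieved by items { 3, 4 }.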
Knapsack: FPTAS

Intuition for approximation algorithm.
- Round all values up to lie in a smaller range.
- Run the dynamic programming algorithm on the rounded instance.
- Return the optimal items of the rounded instance.

Original instance (W = 11):
Item     Value       Weight
 1         134,221      1
 2         656,342      2
 3       1,810,013      5
 4      22,217,800      6
 5      28,343,199      7

Rounded instance (W = 11):
Item     Value    Weight
 1          2        1
 2          7        2
 3         19        5
 4         23        6
 5         29        7
Knapsack: FPTAS

Knapsack FPTAS. Round up all values:
    v̄_i = ⌈v_i / θ⌉ · θ,    v̂_i = ⌈v_i / θ⌉
- v_max = largest value in the original instance
- ε = precision parameter
- θ = scaling factor = ε v_max / n

Observation. Optimal solutions to the problems with values v̄ or v̂ are equivalent.

Intuition. v̄ is close to v, so an optimal solution using v̄ is nearly optimal; v̂ is small and integral, so the dynamic programming algorithm is fast.

Running time. O(n³ / ε). The running time of dynamic program II is O(n² v̂_max), where v̂_max = ⌈v_max / θ⌉ = ⌈n / ε⌉.
Knapsack: FPTAS

Knapsack FPTAS. Round up all values: v̄_i = ⌈v_i / θ⌉ · θ.

Theorem. If S is the solution found by our algorithm and S* is any other feasible solution, then (1 + ε) Σ_{i ∈ S} v_i ≥ Σ_{i ∈ S*} v_i.

Pf. Let S* be any feasible solution satisfying the weight constraint:
    Σ_{i ∈ S*} v_i ≤ Σ_{i ∈ S*} v̄_i       (always round up)
                   ≤ Σ_{i ∈ S} v̄_i        (S is optimal for the rounded instance)
                   ≤ Σ_{i ∈ S} (v_i + θ)   (never round up by more than θ)
                   ≤ Σ_{i ∈ S} v_i + nθ    (|S| ≤ n)
                   ≤ (1 + ε) Σ_{i ∈ S} v_i
The last step uses nθ = ε v_max and the fact that the DP algorithm can always take the single item of value v_max, so Σ_{i ∈ S} v_i ≥ v_max. ▪
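Putting rounding and the exact DP together gives the FPTAS. A hedged sketch (function names and the traceback-based item recovery are ours; a real implementation would also guard against floating-point edge cases in the scaling):

```python
import math

def knapsack_items(values, weights, W):
    """Exact solve via DP II (min weight for each exact value), with a
    traceback over a 2-D table to recover the chosen item indices."""
    n, V = len(values), sum(values)
    INF = float('inf')
    # opt[i][v] = min weight using items 0..i-1 achieving value exactly v
    opt = [[INF] * (V + 1) for _ in range(n + 1)]
    opt[0][0] = 0
    for i in range(1, n + 1):
        v_i, w_i = values[i - 1], weights[i - 1]
        for v in range(V + 1):
            opt[i][v] = opt[i - 1][v]
            if v >= v_i and opt[i - 1][v - v_i] + w_i < opt[i][v]:
                opt[i][v] = opt[i - 1][v - v_i] + w_i
    best = max(v for v in range(V + 1) if opt[n][v] <= W)
    items, v = [], best
    for i in range(n, 0, -1):           # trace back the chosen items
        if opt[i][v] != opt[i - 1][v]:
            items.append(i - 1)
            v -= values[i - 1]
    return items

def knapsack_fptas(values, weights, W, eps):
    """FPTAS: round values up with theta = eps * vmax / n, solve the
    small rounded instance exactly, return its (feasible) item set."""
    theta = eps * max(values) / len(values)
    rounded = [math.ceil(v / theta) for v in values]
    return knapsack_items(rounded, weights, W)
```

The returned item set is feasible for the original instance (weights are untouched), and by the theorem above its original value is within a factor (1 + ε) of optimal.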
The Hamiltonian Cycle Problem: Given an undirected graph, find a cycle visiting every vertex exactly once.

Eulerian Tour Problem: Given an undirected graph, find a closed walk visiting every edge exactly once. Notice that in a walk some vertices may be visited more than once.

The Eulerian tour problem is polynomial-time solvable: a connected graph has an Eulerian tour if and only if every vertex has even degree. The Hamiltonian cycle problem is NP-complete.
The Hamiltonian Cycle Problem

Application: seating assignment. Given n persons and a big round table with n seats, each pair of persons may like each other or hate each other. Can we find a seating assignment so that everyone likes both of his/her neighbors?

Construct a graph as follows. For each person, create a vertex. For each pair of vertices, add an edge if and only if the two persons like each other. Then the seating assignment problem reduces to the Hamiltonian cycle problem (the round table makes the arrangement a cycle).
The Traveling Salesman Problem (TSP): Given a complete graph with nonnegative edge costs, find a minimum-cost cycle visiting every vertex exactly once.

In other words: given a number of cities and the costs of traveling from any city to any other city, what is the cheapest round-trip route that visits each city exactly once and then returns to the starting city?

One of the most well-studied problems in combinatorial optimization.
NP-completeness of the Traveling Salesman Problem

TSP: Given a complete graph with nonnegative edge costs, find a minimum-cost cycle visiting every vertex exactly once.
Hamiltonian Cycle Problem: Given an undirected graph, find a cycle visiting every vertex exactly once.

The Hamiltonian cycle problem reduces to the Traveling Salesman Problem:
- For each edge of the original graph, add an edge of cost 1.
- For each non-edge, add an edge of cost 2.
- If there is a Hamiltonian cycle, then there is a tour of cost n.
- If there is no Hamiltonian cycle, then every tour has cost at least n+1.
Inapproximability of the Traveling Salesman Problem

Theorem: There is no constant-factor approximation algorithm for TSP, unless P = NP.

Idea: use the Hamiltonian cycle problem.
- For each edge, add an edge of cost 1.
- For each non-edge, add an edge of cost 2nk.
- If there is a Hamiltonian cycle, then there is a tour of cost n.
- If there is no Hamiltonian cycle, then every tour has cost greater than nk.

So, given a k-approximation algorithm for TSP, one just needs to check whether the returned solution has cost at most nk.
- If yes, then the original graph has a Hamiltonian cycle.
- Otherwise, the original graph has no Hamiltonian cycle.
Inapproximability of the Traveling Salesman Problem

Theorem: There is no constant-factor approximation algorithm for TSP, unless P = NP.

This type of theorem is called a "hardness result" in the literature. Just as the name suggests, they are usually very hard to obtain.

The strategy is usually like this: create a gap between yes and no instances.
- If there is a Hamiltonian cycle, then there is a tour of cost n.
- If there is no Hamiltonian cycle, then every tour has cost greater than nk.
The bigger the gap, the harder the problem is to approximate.
Approximation Algorithm for TSP

There is no constant-factor approximation algorithm for TSP; what can we do then?

Observation: in real-world situations, the costs satisfy the triangle inequality: a + b ≥ c.

[Figure: triangle with sides a, b, c.]

For example, think of the cost of an edge as the distance between two points.
Approximation Algorithm for Metric TSP

Metric Traveling Salesman Problem (metric TSP): Given a complete graph with edge costs satisfying the triangle inequality, find a minimum-cost cycle visiting every vertex exactly once.

First check the bad example: edges of cost 1 next to a non-edge of cost nk violate the triangle inequality! How could triangle inequalities help in finding an approximation algorithm?
Lower Bounds for TSP

What can be a good lower bound on the cost of an optimal TSP tour OPT?
- A tour contains a matching. Since a tour (on an even number of vertices) decomposes into two perfect matchings, the cost of a minimum-weight perfect matching is at most OPT/2.
- A tour contains a spanning tree (delete any one tour edge). So the cost of a minimum spanning tree is at most OPT.
Spanning Tree and TSP

Let the thick edges have cost 1, and all other edges have cost greater than 1, so the thick edges form a minimum spanning tree. But it doesn't look like a Hamiltonian cycle at all!

Consider a Hamiltonian cycle. Mightn't the edges not in the minimum spanning tree have very high costs? Not really: each such edge has cost at most 2, because of the triangle inequality.

[Figure: a star-shaped MST with unit-cost spokes; any edge between two leaves has cost at most 2.]
Spanning Tree and TSP

Strategy: construct the TSP tour from a minimum spanning tree, reusing the edges of the minimum spanning tree as much as possible.

[Figure: a star-shaped MST with spoke costs a, b, c, d, e; the tour walks out and back along the spokes, skipping past the center vertex once it already has degree 2.]

By the triangle inequality, the tour costs at most 2a + 2b + 2c + 2d + 2e = 2(a + b + c + d + e) = 2 MST, where MST is the cost of a minimum spanning tree.

Since MST ≤ OPT and SOL ≤ 2 MST, we get SOL ≤ 2 OPT.
Spanning Tree and TSP

How do we formalize the idea of "following" a minimum spanning tree?
Spanning Tree and TSP

How do we formalize the idea of "following" a minimum spanning tree?

Key idea: double all the edges and find an Eulerian tour. The doubled graph has cost 2 MST, and every vertex has even degree, so an Eulerian tour exists.
Spanning Tree and TSP

Strategy: shortcut this Eulerian tour.

[Figure: the Eulerian tour on the doubled MST, with repeated vertices bypassed by shortcut edges.]
Spanning Tree and TSP

By the triangle inequality, the shortcut tour is no longer than the Eulerian tour, and each vertex is visited exactly once in the shortcut tour.
A 2-Approximation Algorithm for Metric TSP

(Metric TSP, factor 2)
1. Find an MST, T, of G.
2. Double every edge of T to obtain an Eulerian graph.
3. Find an Eulerian tour, T*, on this graph.
4. Output the tour C that visits the vertices of G in the order of their first appearance in T*. (That is, shortcut T*.)

Analysis.
1. cost(T) ≤ OPT      (the MST is a lower bound on TSP)
2. cost(T*) = 2 cost(T)   (every edge appears twice)
3. cost(C) ≤ cost(T*)     (triangle inequality, shortcutting)
4. So cost(C) ≤ 2 OPT.
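The four steps above can be sketched in Python for points in the plane. A hedged sketch (names are ours): visiting vertices in DFS preorder of the MST is equivalent to shortcutting the Eulerian tour of the doubled tree, so we skip the explicit doubling. (`math.dist` requires Python 3.8+.)

```python
import math

def tsp_2_approx(points):
    """2-approximation for metric TSP on points in the plane:
    build an MST with Prim's algorithm, then shortcut by visiting
    vertices in DFS preorder of the tree."""
    n = len(points)
    d = lambda i, j: math.dist(points[i], points[j])
    in_tree = [False] * n
    in_tree[0] = True
    parent = [0] * n
    best = [d(0, j) for j in range(n)]   # cheapest link into the tree
    children = [[] for _ in range(n)]
    for _ in range(n - 1):
        u = min((j for j in range(n) if not in_tree[j]), key=lambda j: best[j])
        in_tree[u] = True
        children[parent[u]].append(u)
        for j in range(n):
            if not in_tree[j] and d(u, j) < best[j]:
                best[j], parent[j] = d(u, j), u
    tour, stack = [], [0]                # DFS preorder = shortcut tour
    while stack:
        u = stack.pop()
        tour.append(u)
        stack.extend(reversed(children[u]))
    return tour

def tour_cost(points, tour):
    return sum(math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))
```

On the four corners of the unit square the shortcut tour happens to be optimal (cost 4); in general the analysis only guarantees a factor of 2.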
Better Approximation?

There is a 1.5-approximation algorithm for metric TSP (Christofides' algorithm). Hint: add a minimum-weight perfect matching on the odd-degree vertices of a minimum spanning tree (instead of doubling the whole tree).

Major open problem: improve this to 4/3?

An aside: a hardness result is not an excuse to stop working, but a guide to identifying interesting cases.

Just for fun: can we design an approximation algorithm for the seating assignment problem?
Extra Slides
Load Balancing on 2 Machines

Claim. Load balancing is hard even if there are only 2 machines.
Pf. NUMBER-PARTITIONING ≤_P LOAD-BALANCE. (NUMBER-PARTITIONING is NP-complete by Exercise 8.26.)

[Figure: jobs a through g split between machine 1 and machine 2 so that both machines finish at the same time L; a yes-instance of number partitioning corresponds exactly to a schedule with makespan L.]
Center Selection: Hardness of Approximation

Theorem. Unless P = NP, there is no ρ-approximation algorithm for the metric k-center problem for any ρ < 2.
Pf. We show we could use a (2 - ε)-approximation algorithm for k-center to solve DOMINATING-SET in poly-time. (DOMINATING-SET is NP-complete; see Exercise 8.29.)
- Let G = (V, E), k be an instance of DOMINATING-SET.
- Construct an instance G' of k-center with sites V and distances
  - d(u, v) = 1 if (u, v) ∈ E
  - d(u, v) = 2 if (u, v) ∉ E
- Note that G' satisfies the triangle inequality.
- Claim: G has a dominating set of size k iff there exist k centers C* with r(C*) = 1.
- Thus, if G has a dominating set of size k, a (2 - ε)-approximation algorithm on G' must find a solution C* with r(C*) = 1, since it cannot use any edge of distance 2. ▪