Analysis of Algorithms Chapter 06 Greedy Graph Algorithms


This Chapter Contains the following Topics:
1. Introduction
   i. Graph Categorization
   ii. Graph Terminology
   iii. Graph Representation
2. Searching Graphs
   i. Depth-First Search
   ii. Breadth-First Search
3. Greedy Methods
   i. Fractional Knapsack Problem
   ii. A Task-Scheduling Problem
4. Minimum Cost Spanning Trees
   i. Spanning Trees
   ii. Kruskal’s Algorithm
   iii. Prim’s Algorithm
5. Shortest Path Problem
   i. Dijkstra’s Algorithm

Introduction

What is a Graph?
Ø A Graph is a data structure consisting of a set of vertices and a set of edges that connect (some of) them.
Ø That is, G = (V, E), where V is the set of vertices and E the set of edges.
Ø Example: V = {1, 2, 3, 4, 5}, E = {(1, 2), (1, 3), (1, 4), (2, 3), (3, 5), (4, 5)}
Ø [Figure: a 5-vertex graph; the circles are vertices (nodes), the lines are edges]

Applications
Ø Computer Networks (vertices: computers)
Ø Electrical Circuits (vertices: resistors, inductors, …)
Ø Road Maps (vertices: cities)

Graph Categorization
Ø A Directed Graph or Digraph is a graph where each edge has a direction.
Ø The edges in a digraph are called Arcs or Directed Edges.
Ø Example: G = (V, E), where V = {1, 2, 3, 4, 5, 6} and E = {(1, 4), (2, 1), (2, 3), (3, 2), (4, 3), (4, 5), (4, 6), (5, 3), (6, 1), (6, 5)}
Ø (1, 4) = 1 → 4, where 1 is the tail and 4 is the head

Graph Categorization (Contd.)
Ø An Undirected Graph is a graph where the edges have no directions.
Ø The edges in an undirected graph are called Undirected Edges.
Ø Example: G = (V, E), where V = {1, 2, 3, 4, 5} and E = {(1, 2), (1, 3), (1, 4), (2, 3), (3, 5), (4, 5)}

Graph Categorization (Contd.)
Ø A Weighted Graph is a graph where all the edges are assigned weights.
Ø [Figure: a weighted graph on vertices 1–5 with edge weights 10, 20, 40, 50, 60, 70]
Ø If the same pair of vertices has more than one edge between them, the graph is called a Multigraph.

Graph Terminology
Ø Adjacent vertices: If (i, j) is an edge of the graph, then the vertices i and j are adjacent.
Ø An edge (i, j) is Incident to vertices i and j.
Ø In the example graph, vertices 2 and 5 are not adjacent.
Ø Loop or self edge: An edge (i, i) is called a self edge or a loop, e.g. (1, 1) and (4, 4).
Ø In graphs, loops are not permitted.

Graph Terminology (Contd.)
Ø Path: A sequence of edges in the graph.
Ø There can be more than one path between two vertices.
Ø Vertex A is reachable from vertex B if there is a path from B to A.
Ø Example: the paths from B to D include B, A, D and B, C, D.
Ø Simple Path: A path where all the vertices are distinct.
Ø 1, 4, 5, 3 is a simple path, but 1, 4, 5, 4 is not.

Graph Terminology (Contd.)
Ø Length: The sum of the lengths of the edges on the path.
Ø The length of the path 1, 4, 5, 3 is 3.
Ø Circuit: A path whose first and last vertices are the same.
Ø The path 3, 2, 1, 4, 5, 3 is a circuit.
Ø Cycle: A circuit where all the vertices are distinct except for the first (and the last) vertex.
Ø 1, 4, 5, 3, 1 is a cycle, but 1, 4, 5, 4, 1 is not a cycle.
Ø Hamiltonian Cycle: A cycle that contains all the vertices of the graph.
Ø 1, 4, 5, 3, 2, 1 is a Hamiltonian Cycle.

Graph Terminology (Contd.)
Ø Degree of a Vertex: In an undirected graph, the number of edges incident to the vertex.
Ø In-degree: The number of edges entering the vertex in a digraph.
Ø Out-degree: The number of edges leaving the vertex in a digraph.
Ø In the example digraph, the in-degree of vertex 1 is 3 and its out-degree is 1.
Ø A Subgraph of a graph G = (V, E) is a graph H = (U, F) such that U ⊆ V and F ⊆ E.

Graph Terminology (Contd.)
Ø A graph is said to be Connected if there is at least one path from every vertex to every other vertex in the graph; otherwise it is Unconnected.
Ø Tree: A connected undirected graph that contains no cycles.
Ø Forest: A graph that does not contain a cycle; its connected components are trees.

Graph Terminology (Contd.)
Ø A Spanning Tree of a graph G is a subgraph of G that is a tree and contains all the vertices of G.
Ø [Figure: the example 5-vertex graph and one of its spanning trees]

Representation of Graphs
Ø Adjacency Matrix (A)
Ø The Adjacency Matrix A = (aij) of a graph G = (V, E) with n vertices is an n × n matrix.
Ø Each element of A is either 0 or 1, depending on the adjacency of the vertices:
Ø aij = 1 if (i, j) Є E, and aij = 0 otherwise.
Ø Example: for the undirected graph with V = {1, 2, 3, 4, 5} and E = {(1, 2), (1, 3), (1, 4), (2, 3), (3, 5), (4, 5)}:

    0 1 1 1 0
    1 0 1 0 0
    1 1 0 0 1
    1 0 0 0 1
    0 0 1 1 0
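The construction above can be sketched in Python (a minimal sketch; the 5-vertex example graph used throughout this chapter is assumed):

```python
# Build the adjacency matrix of an undirected graph from an edge list.
# Vertices are numbered 1..n, as in the slides.

def adjacency_matrix(n, edges):
    A = [[0] * n for _ in range(n)]
    for i, j in edges:
        A[i - 1][j - 1] = 1
        A[j - 1][i - 1] = 1  # undirected: the matrix is symmetric
    return A

edges = [(1, 2), (1, 3), (1, 4), (2, 3), (3, 5), (4, 5)]
A = adjacency_matrix(5, edges)
for row in A:
    print(row)
```

Note that for an undirected graph the matrix is always symmetric, so roughly half of it is redundant storage.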

Representation of Graphs (Contd.)
Ø Adjacency Matrix of a Weighted Graph
Ø The weight of the edge is stored in the matrix when the vertices are adjacent.
Ø A nil value (0 or ∞, depending on the problem) is used when they are not adjacent.
Ø Example: when finding minimum distances between nodes, non-adjacent pairs are marked with ∞.

Representation of Graphs (Contd.)
Ø Adjacency List
Ø An Adjacency List is an array of lists, each list showing the vertices a given vertex is adjacent to.
Ø Example: for the undirected graph with E = {(1, 2), (1, 3), (1, 4), (2, 3), (3, 5), (4, 5)}:

    1: 2, 3, 4
    2: 1, 3
    3: 1, 2, 5
    4: 1, 5
    5: 3, 4

Ø Adjacency List of a Weighted Graph
Ø The weight is included in the list alongside each adjacent vertex.
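A sketch of both variants in Python (the same example graph is assumed):

```python
from collections import defaultdict

# Build the adjacency list of an undirected graph from an edge list.
def adjacency_list(edges):
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)  # undirected: record the edge in both directions
    return adj

# For a weighted graph, store (neighbour, weight) pairs instead.
def weighted_adjacency_list(edges):
    adj = defaultdict(list)
    for u, v, w in edges:
        adj[u].append((v, w))
        adj[v].append((u, w))
    return adj

adj = adjacency_list([(1, 2), (1, 3), (1, 4), (2, 3), (3, 5), (4, 5)])
print(adj[1])  # neighbours of vertex 1
```

An adjacency list uses O(V + E) space, versus O(V²) for the matrix, which is why it is preferred for sparse graphs.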

Searching Graphs

Depth-First Search
Ø Why do we need to search graphs?
Ø To find paths
Ø To check for connectivity
Ø Depth-First Search (DFS):
Ø Start from an arbitrary node.
Ø Visit (explore) an unvisited adjacent node.
Ø If the node visited is a dead end, go back to the previous node (Backtrack).
Ø Stop when no unvisited nodes are found and no backtracking can be done.
Ø Implemented using a Stack (or recursion).
Ø Explore if possible, backtrack otherwise.

DFS Algorithm

// white – unvisited, gray – discovered, black – finished

Algorithm DFS(G)
{
    for each vertex u Є V[G] do
    {
        Color[u] := white;
        Parent[u] := nil;
    }
    for each vertex u Є V[G] do
        if (Color[u] = white) then
            DFS_Visit(u);
}

Algorithm DFS_Visit(u)
{
    Color[u] := gray;
    for each vertex v Є Adj[u] do
        if (Color[v] = white) then
        {
            Parent[v] := u;
            DFS_Visit(v);
        }
    Color[u] := black;
}
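The pseudocode above translates directly into Python (a sketch; the graph is an adjacency-list dict built from this chapter's example):

```python
# Recursive DFS with the white/gray/black colouring from the pseudocode.

def dfs(graph):
    color = {u: "white" for u in graph}
    parent = {u: None for u in graph}

    def dfs_visit(u):
        color[u] = "gray"            # discovered
        for v in graph[u]:
            if color[v] == "white":  # unvisited neighbour: explore it
                parent[v] = u
                dfs_visit(v)
        color[u] = "black"           # finished

    for u in graph:                  # covers every connected component
        if color[u] == "white":
            dfs_visit(u)
    return parent

graph = {1: [2, 3, 4], 2: [1, 3], 3: [1, 2, 5], 4: [1, 5], 5: [3, 4]}
parent = dfs(graph)
```

The returned parent map encodes the DFS forest: following parents from any vertex leads back to the root of its component.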

Example and Analysis
Ø [Figure: DFS on a graph with vertices A–H; one possible visit order starting at A is A, B, F, C, G, E]
Ø The two for loops of DFS take Θ(V) time, excluding the time to execute the calls to DFS_Visit().
Ø The procedure DFS_Visit() is called exactly once for each vertex of the graph, since DFS_Visit() is invoked only on white vertices and the first thing it does is paint the vertex gray.
Ø During an execution of DFS_Visit(v), the second for loop is executed |Adj[v]| times.
Ø Since Σ|Adj[v]| = Θ(E), the total cost of executing the second for loop is Θ(E).
Ø So, the total running time of DFS is Θ(V+E).

Breadth-First Search (BFS)
Ø Start from an arbitrary node.
Ø Visit all the adjacent nodes (distance = 1).
Ø Then visit the nodes adjacent to the visited nodes (distance = 2, 3, etc.).
Ø Stop when all reachable nodes have been visited.
Ø Implemented using a Queue.
Ø Explore all nodes at distance d before moving to distance d+1.

BFS Algorithm

// white – unvisited, gray – discovered, black – finished
// s – source vertex, Q – FIFO queue

Algorithm BFS(G, s)
{
    for each vertex u Є V[G] – {s} do
    {
        Color[u] := white;
        Distance[u] := ∞;
        Parent[u] := nil;
    }
    Color[s] := gray;
    Distance[s] := 0;
    Parent[s] := nil;
    Q := Ø;
    Enqueue(Q, s);
    while (Q ≠ Ø) do
    {
        u := Dequeue(Q);
        for each v Є Adj[u] do
            if (Color[v] = white) then
            {
                Color[v] := gray;
                Distance[v] := Distance[u] + 1;
                Parent[v] := u;
                Enqueue(Q, v);
            }
        Color[u] := black;
    }
}
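A Python sketch of the same algorithm; membership in the distance dict plays the role of the white/gray colouring:

```python
from collections import deque

# BFS from source s. Returns the distance (number of edges) from s to each
# reachable vertex, plus the BFS-tree parent map.

def bfs(graph, s):
    distance = {s: 0}
    parent = {s: None}
    q = deque([s])                 # FIFO queue of discovered vertices
    while q:
        u = q.popleft()
        for v in graph[u]:
            if v not in distance:  # "white": not yet discovered
                distance[v] = distance[u] + 1
                parent[v] = u
                q.append(v)
    return distance, parent

graph = {1: [2, 3, 4], 2: [1, 3], 3: [1, 2, 5], 4: [1, 5], 5: [3, 4]}
distance, parent = bfs(graph, 1)
```

Because vertices are dequeued in non-decreasing distance order, `distance[v]` is the length of a shortest (fewest-edges) path from the source to v.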

Example and Analysis
Ø [Figure: BFS on a graph with vertices A–H; one possible visit order starting at A is A, B, C, E, F, G]
Ø After initialization, no vertex is ever whitened.
Ø Thus each vertex is enqueued at most once, and hence dequeued at most once.
Ø The operations of enqueueing and dequeueing take O(1) time, so the total time devoted to queue operations is O(V).
Ø Because the adjacency list of each vertex is scanned only when the vertex is dequeued, each adjacency list is scanned at most once.
Ø Since the sum of the lengths of all the adjacency lists is Θ(E), the total time spent scanning adjacency lists is O(E).
Ø So, the total running time of BFS is O(V+E).

Greedy Methods

The Greedy Method
Ø A greedy algorithm attempts to obtain an optimal solution to a problem by making a sequence of choices.
Ø At each decision point in the algorithm, the choice that seems best at the moment is taken.
Ø It makes the Locally Optimal Choice hoping that this will lead to a Globally Optimal Solution.
Ø But it does not always produce an optimal solution.
Ø Still, it is powerful and works for many problems.
Ø Example: Find the shortest path from A to E using the greedy method.
Ø [Figure: a weighted graph on vertices A–E; the greedy and optimal routes are compared on the next slide]

Solutions
Ø Greedy solution:
Ø Order of visit is A → C → D → B → E
Ø Path cost is 2+2+1+5 = 10
Ø Optimal solution – the best we can achieve:
Ø A → D → E
Ø Path cost is 3+3 = 6

The Fractional Knapsack Problem
Ø Given: A set S of n items, with each item i having
Ø bi – a positive benefit
Ø wi – a positive weight
Ø Goal: Choose items with maximum total benefit but with total weight at most W.
Ø If we are allowed to take fractional amounts, then this is the fractional knapsack problem.
Ø In this case, we let xi denote the amount we take of item i, with 0 ≤ xi ≤ wi.
Ø Objective: maximize Σi bi (xi / wi)
Ø Constraint: Σi xi ≤ W
Ø Example (knapsack capacity 10 ml):

    Item:              1      2      3      4      5
    Weight:            4 ml   8 ml   2 ml   6 ml   1 ml
    Benefit:           $12    $32    $40    $30    $50
    Value ($ per ml):  3      4      20     5      50

Ø Solution: 1 ml of item 5, 2 ml of item 3, 6 ml of item 4, 1 ml of item 2.

The Algorithm
Ø Greedy choice: Keep taking the item with the highest value (benefit-to-weight ratio).
Ø Input: set S of items with benefit bi and weight wi; maximum total weight W
Ø Output: amount xi of each item i to maximize benefit with total weight at most W

Algorithm FractionalKnapsack(S, W)
{
    for each item i Є S do
    {
        xi := 0;
        vi := bi / wi;               // value (benefit per unit of weight)
    }
    w := 0;                          // total weight taken so far
    while (w < W) do
    {
        // select the remaining item i with the highest value vi
        xi := min{wi, W – w};
        w := w + xi;
    }
}
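A runnable sketch of the greedy choice in Python, applied to the 10 ml example from the previous slide:

```python
# Greedy fractional knapsack: repeatedly take as much as possible of the
# item with the highest benefit-to-weight ratio.

def fractional_knapsack(items, W):
    """items: list of (benefit, weight) pairs; W: knapsack capacity."""
    total_benefit = 0.0
    remaining = W
    # Consider items by value (benefit per unit of weight), highest first.
    for benefit, weight in sorted(items, key=lambda bw: bw[0] / bw[1], reverse=True):
        if remaining <= 0:
            break
        amount = min(weight, remaining)        # take a fraction if needed
        total_benefit += (benefit / weight) * amount
        remaining -= amount
    return total_benefit

# The 5-item example: (benefit, weight) with weights in ml.
items = [(12, 4), (32, 8), (40, 2), (30, 6), (50, 1)]
print(fractional_knapsack(items, 10))  # 50 + 40 + 30 + 4 = 124.0
```

Sorting up front makes the whole algorithm O(n lg n), and for this problem the greedy choice is provably optimal.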

A Task-Scheduling Problem
Ø Given: a set T of n tasks, each having:
Ø a start time si
Ø a finish time fi (where si < fi)
Ø Goal: Perform all the tasks using a minimum number of “machines.”
Ø For example: [1, 4], [1, 3], [2, 5], [3, 7], [4, 7], [6, 9], [7, 8] (ordered by start time) can be scheduled on 3 machines.
Ø Greedy choice: consider tasks by their start time and use as few machines as possible with this order.

The Algorithm
Ø Input: set T of tasks with start time si and finish time fi
Ø Output: a non-conflicting schedule using a minimum number of machines

Algorithm TaskSchedule(T)
{
    m := 0;                          // number of machines
    while (T is not empty) do
    {
        remove the task i with smallest si;
        if (there is a machine j with no task conflicting with i) then
            schedule i on machine j;
        else
        {
            m := m + 1;
            schedule i on machine m;
        }
    }
}
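The algorithm can be sketched in Python with a min-heap tracking when each machine becomes free, so the "is there a free machine" test is O(lg m) instead of a scan (this assumes half-open intervals: a task starting exactly when another finishes does not conflict):

```python
import heapq

# Greedy task scheduling (interval partitioning): consider tasks by start
# time; reuse a machine if one is free, otherwise allocate a new one.

def min_machines(tasks):
    finish_times = []                           # heap: finish time of the task on each machine
    for s, f in sorted(tasks):                  # tasks by increasing start time
        if finish_times and finish_times[0] <= s:
            heapq.heapreplace(finish_times, f)  # reuse the machine that freed up earliest
        else:
            heapq.heappush(finish_times, f)     # allocate a new machine
    return len(finish_times)

# The example from the slide: 7 tasks needing 3 machines.
tasks = [(1, 4), (1, 3), (2, 5), (3, 7), (4, 7), (6, 9), (7, 8)]
print(min_machines(tasks))  # 3
```

The answer equals the maximum number of tasks that overlap at any instant, which is why this greedy strategy is optimal.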

Minimum Cost Spanning Trees

Spanning Tree
Ø A Tree is a connected undirected graph that contains no cycles.
Ø A Spanning Tree of a graph G is a subgraph of G that is a tree and contains all the vertices of G.
Ø [Figure: an undirected graph on vertices A–E and some of its spanning trees]
Ø Properties:
Ø A spanning tree of an n-vertex undirected graph has exactly n–1 edges.
Ø It connects all the vertices in the graph.
Ø A spanning tree has no cycles.

Minimum Cost Spanning Tree (MCST)
Ø The tree, among all the spanning trees of a weighted graph, with the lowest total cost.
Ø [Figure: a weighted undirected graph on vertices A–E and its MCST]
Ø Applications:
Ø Computer Networks – to find how to connect a set of computers using the minimum amount of wire
Ø Shipping/Airplane Lines – to find the fastest way between locations

Constructing a MCST
Ø We shall examine two algorithms for solving the MCST problem: Kruskal’s algorithm and Prim’s algorithm.
Ø Each can easily be made to run in time O(E lg V) using ordinary binary heaps.
Ø By using Fibonacci heaps, Prim’s algorithm can be sped up to run in time O(E + V lg V), which is an improvement if |V| is much smaller than |E|.
Ø Both algorithms use a greedy approach to the problem.
Ø This greedy strategy is captured by the following generic algorithm, which grows the minimum spanning tree one edge at a time.

Algorithm MCST(G, w)
{
    T := Ø;
    while (T does not form a MCST) do
    {
        find an edge (u, v) that is safe for T;
        T := T ∪ {(u, v)};
    }
    return T;
}

Kruskal’s Algorithm
Ø Mark each vertex as being in a set; initially each vertex is in a set of its own.
Ø Sort the edges in increasing order of weight.
Ø Take the edges in the sorted order (smallest first):
Ø If it is safe to add the edge, add it to the tree; don’t worry about the overall structure.
Ø It is safe to connect two vertices from different sets – no cycles will be formed.
Ø Our implementation of Kruskal’s algorithm uses a disjoint-set data structure to maintain several sets of elements.
Ø Each set contains the vertices in a tree of the current forest.
Ø The operation Find-Set(u) returns a representative element from the set that contains u.
Ø Thus, we can determine whether two vertices u and v belong to the same tree by testing whether Find-Set(u) = Find-Set(v).
Ø The combining of trees is accomplished by the Union() procedure.

Kruskal’s Algorithm (Contd.)
Ø To implement a disjoint-set forest with the union-by-rank heuristic, we must keep track of ranks.
Ø With each node x, we maintain the integer value rank[x], which is an upper bound on the height of x.
Ø When a singleton set is created by Make-Set(), the initial rank of the single node in the corresponding tree is 0.
Ø Each Find-Set() operation leaves all ranks unchanged.
Ø When applying Union() to two trees, there are two cases, depending on whether the roots have equal rank:
Ø If the ranks are unequal, we make the root of higher rank the parent of the root of lower rank; the ranks remain unchanged.
Ø If the ranks are equal, we arbitrarily choose one of the roots as the parent and increment its rank.
Ø Let us put this method into pseudocode.

Kruskal’s Algorithm (Contd.)

Algorithm Make-Set(x)
{
    Π[x] := x;
    rank[x] := 0;
}

Algorithm Union(x, y)
{
    Link(Find-Set(x), Find-Set(y));
}

Algorithm Link(x, y)
{
    if (rank[x] > rank[y]) then
        Π[y] := x;
    else
    {
        Π[x] := y;
        if (rank[x] = rank[y]) then
            rank[y] := rank[y] + 1;
    }
}

Algorithm Find-Set(x)
{
    if (x ≠ Π[x]) then
        Π[x] := Find-Set(Π[x]);    // path compression
    return Π[x];
}
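The four operations above translate into a small Python class (a sketch; Π becomes the `parent` dict):

```python
# Disjoint-set forest with union by rank and path compression,
# following the Make-Set / Union / Link / Find-Set pseudocode.

class DisjointSet:
    def __init__(self):
        self.parent = {}   # Π[x]
        self.rank = {}

    def make_set(self, x):
        self.parent[x] = x
        self.rank[x] = 0

    def find_set(self, x):
        if self.parent[x] != x:
            self.parent[x] = self.find_set(self.parent[x])  # path compression
        return self.parent[x]

    def union(self, x, y):            # Union() with Link() inlined
        x, y = self.find_set(x), self.find_set(y)
        if self.rank[x] > self.rank[y]:
            self.parent[y] = x
        else:
            self.parent[x] = y
            if self.rank[x] == self.rank[y]:
                self.rank[y] += 1

ds = DisjointSet()
for v in "abcd":
    ds.make_set(v)
ds.union("a", "b")
ds.union("c", "d")
print(ds.find_set("a") == ds.find_set("b"))  # True
print(ds.find_set("a") == ds.find_set("c"))  # False
```

With both heuristics, a sequence of m operations on n elements runs in O(m α(n)) time, where α is the very slowly growing inverse Ackermann function.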

Kruskal’s Algorithm (Contd.)

Algorithm MCST-Kruskal(G, w)
{
    T := Ø;
    for (each vertex v Є V[G]) do
        Make-Set(v);                         // make a separate set for each vertex
    sort the edges of E by increasing weight w;
    for (each edge (u, v) Є E, in sorted order) do
        if (Find-Set(u) ≠ Find-Set(v)) then  // no cycle is formed
        {
            T := T ∪ {(u, v)};               // add edge to the tree
            Union(u, v);                     // combine the sets
        }
    return T;
}

Ø The MCST resulting from this algorithm is optimal.
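A compact Python sketch of the same algorithm (the edge weights are those read from the illustration that follows):

```python
# Kruskal's algorithm with a minimal inline union-find.

def kruskal(vertices, edges):
    parent = {v: v for v in vertices}

    def find(x):                       # find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tree = []
    for w, u, v in sorted(edges):      # edges in increasing order of weight
        ru, rv = find(u), find(v)
        if ru != rv:                   # different sets: safe, no cycle
            tree.append((u, v, w))
            parent[ru] = rv            # combine the two sets
    return tree

# The 6-vertex example graph from the illustration, as (weight, u, v).
edges = [(4, "a", "b"), (4, "b", "c"), (2, "b", "e"), (2, "c", "d"),
         (6, "e", "d"), (1, "f", "d"), (8, "a", "f")]
T = kruskal("abcdef", edges)
print(sum(w for _, _, w in T))  # total cost 13
```

The resulting tree has |V|–1 = 5 edges, matching the step-by-step trace on the next slides.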

Illustration
Ø Initially T = Ø; Sets: {a}, {b}, {c}, {d}, {e}, {f}.
Ø E (sorted in ascending order of weight): (f, d), (b, e), (c, d), (a, b), (b, c), (e, d), (a, f)
Ø [Figure: the example graph with edge weights (a, b) = 4, (b, c) = 4, (b, e) = 2, (c, d) = 2, (e, d) = 6, (f, d) = 1, (a, f) = 8]
Ø Step 1: Take (f, d); Set(f) ≠ Set(d) => add (f, d) to T, combine Set(f) & Set(d).
Ø T = {(f, d)}. Sets: {a}, {b}, {c}, {e}, {f, d}.

Illustration (Contd.)
Ø Step 2: Take (b, e); Set(b) ≠ Set(e) => add (b, e) to T, combine Set(b) & Set(e).
Ø T = {(f, d), (b, e)}. Sets: {a}, {b, e}, {c}, {f, d}.

Illustration (Contd.)
Ø Step 3: Take (c, d); Set(c) ≠ Set(d) => add (c, d) to T, combine Set(c) & Set(d).
Ø T = {(f, d), (b, e), (c, d)}. Sets: {a}, {b, e}, {f, d, c}.

Illustration (Contd.)
Ø Step 4: Take (a, b); Set(a) ≠ Set(b) => add (a, b) to T, combine Set(a) & Set(b).
Ø T = {(f, d), (b, e), (c, d), (a, b)}. Sets: {b, e, a}, {f, d, c}.

Illustration (Contd.)
Ø Step 5: Take (b, c); Set(b) ≠ Set(c) => add (b, c) to T, combine Set(b) & Set(c).
Ø T = {(f, d), (b, e), (c, d), (a, b), (b, c)}. Sets: {b, e, a, f, d, c}.

Illustration (Contd.)
Ø Step 6: Take (e, d); Set(e) = Set(d) => ignore.
Ø T = {(f, d), (b, e), (c, d), (a, b), (b, c)}. Sets: {b, e, a, f, d, c}.

Illustration (Contd.)
Ø Step 7: Take (a, f); Set(a) = Set(f) => ignore.
Ø T = {(f, d), (b, e), (c, d), (a, b), (b, c)}. Sets: {b, e, a, f, d, c}.

Analysis of Kruskal’s Algorithm
Ø The running time of Kruskal’s algorithm for a graph G = (V, E):
Ø Initializing the set T takes O(1) time.
Ø The |V| Make-Set() operations take O(V) time.
Ø The time to sort the edges is O(E lg E).
Ø The for loop performs O(E) Find-Set() and Union() operations on the disjoint-set forest.
Ø Along with the |V| Make-Set() operations, these take a total of O((V+E) α(V)) time, where α is a very slowly growing function.
Ø Since G is assumed to be connected, we have |E| ≥ |V| – 1, so the disjoint-set operations take O(E α(V)) time.
Ø Moreover, α(|V|) = O(lg V) = O(lg E).
Ø So the total running time of Kruskal’s algorithm is O(E lg E).
Ø Observing that |E| ≤ |V|², we have lg |E| = O(lg V).
Ø So, we can restate the running time of Kruskal’s algorithm as O(E lg V).

Prim’s Algorithm
Ø Pick any vertex v.
Ø Choose the shortest edge from v to any other vertex w.
Ø Add the edge (v, w) to the MCST.
Ø At every step, continue to add the shortest edge from a vertex in the MCST to a vertex outside it, without worrying about the overall structure.
Ø Stop when all the vertices are in the MCST.
Ø [Figure: the example graph on vertices a–f and the resulting MCST]

Prim’s Algorithm (Contd.)
Ø Q – priority queue, r – starting vertex
Ø Key[v] – key of vertex v, π[v] – parent of vertex v
Ø Adj[v] – adjacency list of v

Algorithm MST-Prim(G, w, r)
{
    Q := V[G];                       // initially Q holds all vertices
    for (each u Є Q) do
    {
        Key[u] := ∞;                 // initialize all keys to ∞
        π[u] := Nil;
    }
    Key[r] := 0;
    while (Q ≠ Ø) do
    {
        u := Extract_Min(Q);         // get the node with minimum key
        for (each v Є Adj[u]) do
            if (v Є Q and w(u, v) < Key[v]) then
            {
                π[v] := u;
                Key[v] := w(u, v);
            }
    }
}

Ø The MCST resulting from this algorithm is optimal.
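A Python sketch using `heapq` as the priority queue; since `heapq` has no decrease-key operation, stale entries are simply skipped on extraction (the edge weights are those read from the illustration that follows):

```python
import heapq

# Prim's algorithm. The heap of (key, vertex) pairs plays the role of Q.

def prim(adj, r):
    key = {v: float("inf") for v in adj}
    parent = {v: None for v in adj}    # π
    key[r] = 0
    in_tree = set()                    # vertices already extracted (V – Q)
    heap = [(0, r)]
    while heap:
        k, u = heapq.heappop(heap)     # Extract_Min(Q)
        if u in in_tree:
            continue                   # stale heap entry: skip it
        in_tree.add(u)
        for v, w in adj[u]:
            if v not in in_tree and w < key[v]:
                parent[v], key[v] = u, w   # decrease key via re-push
                heapq.heappush(heap, (w, v))
    return parent, key

# Adjacency list of the 6-vertex example graph.
adj = {"a": [("b", 4), ("f", 8)], "b": [("a", 4), ("c", 4), ("e", 2)],
       "c": [("b", 4), ("d", 2)], "d": [("c", 2), ("e", 6), ("f", 1)],
       "e": [("b", 2), ("d", 6)], "f": [("a", 8), ("d", 1)]}
parent, key = prim(adj, "a")
print(sum(key.values()))  # MCST cost 13
```

The final key of each vertex is the weight of the tree edge that connects it, so summing the keys gives the MCST cost.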

Illustration
Ø Initially:

    Node:  a    b    c    d    e    f
    Key:   0    ∞    ∞    ∞    ∞    ∞
    π:     Nil  Nil  Nil  Nil  Nil  Nil
    Q:     a    b    c    d    e    f

Ø Step 1: Extract_Min(Q) => a; Q = {b, c, d, e, f}; Adj[a] = {b, f}
Ø b Є Q and Key[b] > w(a, b) => π[b] = a, Key[b] = 4
Ø f Є Q and Key[f] > w(a, f) => π[f] = a, Key[f] = 8

Illustration (Contd.)
Ø After Step 1:

    Node:  a    b    c    d    e    f
    Key:   0    4    ∞    ∞    ∞    8
    π:     Nil  a    Nil  Nil  Nil  a
    Q:     b    c    d    e    f

Ø Step 2: Extract_Min(Q) => b; Q = {c, d, e, f}; Adj[b] = {a, c, e}
Ø a ∉ Q => ignore
Ø c Є Q and Key[c] > w(b, c) => π[c] = b, Key[c] = 4
Ø e Є Q and Key[e] > w(b, e) => π[e] = b, Key[e] = 2

Illustration (Contd.)
Ø After Step 2:

    Node:  a    b    c    d    e    f
    Key:   0    4    4    ∞    2    8
    π:     Nil  a    b    Nil  b    a
    Q:     c    d    e    f

Ø Step 3: Extract_Min(Q) => e; Q = {c, d, f}; Adj[e] = {b, d}
Ø b ∉ Q => ignore
Ø d Є Q and Key[d] > w(e, d) => π[d] = e, Key[d] = 6

Illustration (Contd.)
Ø After Step 3:

    Node:  a    b    c    d    e    f
    Key:   0    4    4    6    2    8
    π:     Nil  a    b    e    b    a
    Q:     c    d    f

Ø Step 4: Extract_Min(Q) => c; Q = {d, f}; Adj[c] = {b, d}
Ø b ∉ Q => ignore
Ø d Є Q and Key[d] > w(c, d) => π[d] = c, Key[d] = 2

Illustration (Contd.)
Ø After Step 4:

    Node:  a    b    c    d    e    f
    Key:   0    4    4    2    2    8
    π:     Nil  a    b    c    b    a
    Q:     d    f

Ø Step 5: Extract_Min(Q) => d; Q = {f}; Adj[d] = {c, e, f}
Ø c ∉ Q => ignore
Ø e ∉ Q => ignore
Ø f Є Q and Key[f] > w(d, f) => π[f] = d, Key[f] = 1

Illustration (Contd.)
Ø After Step 5:

    Node:  a    b    c    d    e    f
    Key:   0    4    4    2    2    1
    π:     Nil  a    b    c    b    d
    Q:     f

Ø Step 6: Extract_Min(Q) => f; Q = Ø; Adj[f] = {a, d}
Ø a ∉ Q => ignore
Ø d ∉ Q => ignore

Illustration (Contd.)
Ø After Step 6:

    Node:  a    b    c    d    e    f
    Key:   0    4    4    2    2    1
    π:     Nil  a    b    c    b    d
    Q:     Ø

Ø The MCST edges are (a, b), (b, c), (c, d), (b, e), (d, f).
Ø Value of MCST = 4 + 4 + 2 + 2 + 1 = 13.

Analysis of Prim’s Algorithm
Ø The running time of Prim’s algorithm for a graph G = (V, E):
Ø Initializing the keys, parents, and queue takes O(V) time.
Ø The body of the while loop is executed |V| times.
Ø Each Extract-Min() operation takes O(lg V) time.
Ø So, the total time for all calls to Extract-Min() is O(V lg V).
Ø The for loop is executed O(E) times in total.
Ø Inside the for loop, assigning the parent and decreasing the key value can be implemented in O(lg V) time.
Ø Thus, the total running time of Prim’s algorithm is O(V lg V + E lg V) = O(E lg V).
Ø This is asymptotically the same as the running time of Kruskal’s algorithm.

Shortest Path Problem
Ø Given a weighted, directed graph G = (V, E), with weight function w : E → R mapping edges to real-valued weights.
Ø The weight of a path p = (v0, v1, …, vk) is the sum of the weights of all the edges on the path.
Ø A shortest path from vertex u to vertex v is then defined as any path from u to v with minimum weight.
Ø Single Source Shortest Path Problem:
Ø Find the shortest path from one vertex to every other vertex in the graph.
Ø Single-source SP algorithms can be used to solve the other shortest-path problems too:
Ø Single Destination SP: take the destination as the source, reverse all edge directions, and apply a single-source SP algorithm.
Ø All Pairs SP: apply a single-source SP algorithm once from every vertex.
Ø Single Pair SP: found automatically when the single-source algorithm is run from the source vertex.

Dijkstra’s Algorithm
Ø Dijkstra’s Algorithm uses the greedy method.
Ø It is a very efficient algorithm for calculating shortest paths when all edge weights are non-negative.
Ø The Algorithm:
Ø Start from the source vertex s.
Ø The algorithm maintains a set S of vertices whose final shortest-path weights from the source s have already been determined.
Ø Take the adjacent vertices and update the current shortest distances.
Ø The algorithm repeatedly selects the vertex u Є V–S with the minimum shortest-distance estimate.
Ø Update the current shortest distance of the adjacent vertices where necessary, i.e. when the new distance is less than the existing value.
Ø Stop when all the vertices have been processed.

Dijkstra’s Algorithm (Contd.)
Ø s – source vertex
Ø d[v] – current shortest distance from s to v
Ø S – set of vertices whose shortest distance is known

Algorithm Dijkstra(G, w, s)
{
    for (each vertex v Є V[G]) do
    {
        d[v] := ∞;
        π[v] := Nil;
    }
    d[s] := 0;
    S := Ø;
    Q := V[G];
    while (Q ≠ Ø) do
    {
        u := Extract_Min(Q);
        S := S ∪ {u};
        for each vertex v Є Adj[u] do
            if (d[v] > d[u] + w(u, v)) then
            {
                d[v] := d[u] + w(u, v);
                π[v] := u;
            }
    }
}
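A Python sketch using a binary heap as the min-priority queue (the O(E lg V) variant from the analysis). The example digraph's weights are assumptions chosen for this sketch, since the original figure did not survive extraction:

```python
import heapq

# Dijkstra's algorithm with a heap-based priority queue. Assumes all edge
# weights are non-negative. Returns distances d and predecessors π.

def dijkstra(adj, s):
    d = {v: float("inf") for v in adj}
    parent = {v: None for v in adj}
    d[s] = 0
    heap = [(0, s)]
    done = set()                              # the set S of finished vertices
    while heap:
        du, u = heapq.heappop(heap)           # Extract_Min(Q)
        if u in done:
            continue                          # stale entry: skip
        done.add(u)
        for v, w in adj[u]:
            if d[v] > du + w:                 # relax edge (u, v)
                d[v] = du + w
                parent[v] = u
                heapq.heappush(heap, (d[v], v))
    return d, parent

# A small weighted digraph on vertices s, u, v, x, y (assumed weights).
adj = {"s": [("u", 10), ("x", 5)], "u": [("v", 1), ("x", 2)],
       "v": [("y", 4)], "x": [("u", 3), ("v", 9), ("y", 2)],
       "y": [("s", 7), ("v", 6)]}
d, parent = dijkstra(adj, "s")
print(d)
```

Following the `parent` pointers back from any vertex reconstructs an actual shortest path, not just its weight.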

Illustration
Ø [Figure: a run of Dijkstra's algorithm from source s on a 5-vertex weighted digraph with vertices s, u, v, x, y; at each step the vertex with the minimum distance estimate is added to S and the distance estimates of its neighbours are relaxed]

Analysis of Dijkstra’s Algorithm
Ø The running time of Dijkstra’s algorithm depends on how the min-priority queue is implemented.
Ø If the queue is implemented as a simple array:
Ø The for loop in the initialization step takes O(V) time.
Ø The while loop executes |V| times.
Ø Each Extract-Min() operation takes O(V) time.
Ø Each edge in the adjacency list Adj[v] is examined in the for loop inside the while loop exactly once during the course of the algorithm.
Ø Since the total number of edges in the adjacency lists is |E|, there are a total of |E| iterations of this for loop.
Ø So, the total running time is O(V² + E) = O(V²).
Ø If the graph is sufficiently sparse, it is practical to implement the min-priority queue with a binary min-heap.
Ø Then the total running time is O(E lg V).

End of Chapter 06