
Lecture 17: Path Algebra

- Matrix multiplication of adjacency matrices of directed graphs gives important information about the graphs. Manipulating these matrices to study graphs is called path algebra.
- With path algebra we can solve the following problems:
  - Compute the total number of paths between all pairs of vertices in a directed acyclic graph;
  - Solve the all-pairs-shortest-paths problem in a weighted directed graph with no negative cycles;
  - Compute the transitive closure of a directed graph.
- Think: what is the meaning of M^2, where M is the adjacency matrix of a graph G?

All paths of length r

- Claim. M^r describes paths of length r in G: entry (i, j) is the number of distinct length-r paths from i to j, where M is the adjacency matrix of G.
- Proof. The base case is r = 0: M^0 = I, and the only length-0 path from i to j is the empty path, which exists iff i = j. For the induction step, assume the claim holds for all r' < r; we prove it for r:

  M^r_ij = (M · M^(r-1))_ij = Σ_{1 ≤ k ≤ n} M_ik · M^(r-1)_kj

  Any r-step path from i to j must start with a step to some intermediate vertex k. M_ik is 1 if the edge (i, k) exists and 0 otherwise, so the term M_ik · M^(r-1)_kj counts the r-step paths whose first step goes to k, and summing over all k gives the total number of r-step paths.
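As a sanity check of the claim, the following sketch compares the entries of M^r against brute-force path enumeration on a small example graph (the graph and all function names are illustrative assumptions, not from the lecture):

```python
import itertools

# A small example DAG on 4 vertices (an assumed example, not from the slides):
# edges 0->1, 0->2, 1->2, 2->3.
M = [[0, 1, 1, 0],
     [0, 0, 1, 0],
     [0, 0, 0, 1],
     [0, 0, 0, 0]]
n = len(M)

def mat_mult(A, B):
    """Plain O(n^3) integer matrix multiplication."""
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_power(A, r):
    """Compute A^r; A^0 is the identity matrix I."""
    P = [[int(i == j) for j in range(n)] for i in range(n)]
    for _ in range(r):
        P = mat_mult(P, A)
    return P

def count_paths(i, j, r):
    """Brute force: count length-r paths i -> j by trying every
    sequence of r - 1 intermediate vertices."""
    if r == 0:
        return int(i == j)
    total = 0
    for mids in itertools.product(range(n), repeat=r - 1):
        seq = (i,) + mids + (j,)
        if all(M[seq[t]][seq[t + 1]] for t in range(r)):
            total += 1
    return total

# Entry (i, j) of M^r equals the number of distinct length-r paths i -> j.
for r in range(4):
    P = mat_power(M, r)
    assert all(P[i][j] == count_paths(i, j, r)
               for i in range(n) for j in range(n))
```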

Computing the matrix powers

- Suppose G is acyclic (so no path is longer than n - 1). Consider I + M + M^2 + ... + M^(n-1): its (i, j) entry gives the total number of distinct paths from i to j.
- Therefore, in O(n^(ω+1)) steps we can compute the total number of distinct paths between all pairs of vertices. Here ω denotes the best-known exponent for matrix multiplication; currently ω = 2.376.
- We can do even better. Let 2^k be the least power of 2 that is ≥ n. First calculate M^2, M^4, ..., M^(2^(k-1)), and then calculate (I + M)(I + M^2)(I + M^4)...(I + M^(2^(k-1))) = I + M + M^2 + ... + M^(2^k - 1). Since G is acyclic, the extra powers M^n, ..., M^(2^k - 1) are all zero, so the sum is unchanged. This gives an algorithm that runs in O(n^ω log n) time, where n is the number of vertices.
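A sketch of the doubling trick described above (the helper names and the example matrix are assumptions; in practice the O(n^3) multiplication here would be replaced by a fast O(n^ω) routine):

```python
def mat_mult(A, B):
    """Plain O(n^3) integer matrix multiplication (stand-in for a fast
    O(n^w) routine)."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def plus_identity(A):
    """Return I + A."""
    n = len(A)
    return [[A[i][j] + int(i == j) for j in range(n)] for i in range(n)]

def total_paths(M):
    """For an acyclic G, return I + M + M^2 + ... (all path counts).
    Doubling trick: multiply the factors (I + M^(2^t)); since G is
    acyclic, powers beyond M^(n-1) are all zero, so overshooting to
    2^k - 1 >= n - 1 is harmless.  Only O(log n) multiplications."""
    n = len(M)
    result = plus_identity(M)    # I + M: paths of lengths 0 and 1
    power = M                    # current factor is M^(2^t)
    covered = 2                  # result sums M^0 .. M^(covered - 1)
    while covered < n:
        power = mat_mult(power, power)                # M^(2^(t+1))
        result = mat_mult(result, plus_identity(power))
        covered *= 2
    return result

# Example DAG (an assumed example): edges 0->1, 0->2, 1->2, 2->3.
M = [[0, 1, 1, 0],
     [0, 0, 1, 0],
     [0, 0, 0, 1],
     [0, 0, 0, 0]]
# Two distinct paths 0 -> 3: 0-1-2-3 and 0-2-3.
assert total_paths(M)[0][3] == 2
```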

Reachability graph

- Given an unweighted directed graph G = (V, E), we want to form the graph G' that has an edge from u to v if and only if there exists a path (of any length) in G from u to v.
- Let's first see how to solve this using what we know from, say, CS 240. There we explored depth-first and breadth-first search; these algorithms find all vertices reachable from a given vertex in O(|V| + |E|) time. So if we run depth-first search from every vertex, the total time is O(|V|(|V| + |E|)), which can be as bad as O(n^3) on a dense graph, where n = |V|. Can we do better than O(n^3)?
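The CS 240-style baseline can be sketched as one DFS per source vertex; the adjacency-list input format and the function name are assumptions for this sketch:

```python
def reachability_by_dfs(adj):
    """One iterative DFS per source vertex: O(|V| * (|V| + |E|)) total.
    `adj` is an adjacency list; reach[u][v] is True iff there is a path
    (possibly of length 0) from u to v."""
    n = len(adj)
    reach = []
    for s in range(n):
        seen = [False] * n
        seen[s] = True
        stack = [s]
        while stack:
            u = stack.pop()
            for v in adj[u]:
                if not seen[v]:
                    seen[v] = True
                    stack.append(v)
        reach.append(seen)
    return reach

# Example (assumed): edges 0->1, 0->2, 1->2, 2->3.
adj = [[1, 2], [2], [3], []]
assert reachability_by_dfs(adj)[0][3] and not reachability_by_dfs(adj)[3][0]
```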

Transitive closure

- Back to path algebra. Now our matrix M consists of 1's and 0's. How can we find the matrix that has a 1 in row i and column j iff there is a length-2 path connecting vertices i and j? I claim the entry in row i and column j should be

  OR_{1 ≤ k ≤ n} (M_ik AND M_kj).

- That is: we just need to use Boolean multiplication and addition in our matrix computations.
- So the transitive closure of G is given by M' = I + M + M^2 + ... + M^(n-1), where + and * are the corresponding Boolean operations (OR and AND). It tells us whether there is a path between any two nodes.
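A direct sketch of this Boolean path algebra, summing the powers term by term (function names and the example graph are assumptions; this naive version does n - 2 Boolean multiplications, matching the O(n^(ω+1)) bound before the doubling trick):

```python
def bool_mult(A, B):
    """Boolean matrix product: (A * B)[i][j] = OR_k (A[i][k] AND B[k][j])."""
    n = len(A)
    return [[any(A[i][k] and B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transitive_closure_naive(M):
    """M' = I + M + M^2 + ... + M^(n-1), with Boolean + (OR) and * (AND)."""
    n = len(M)
    closure = [[bool(M[i][j]) or i == j for j in range(n)]  # I + M
               for i in range(n)]
    power = M
    for _ in range(n - 2):                  # OR in M^2 .. M^(n-1)
        power = bool_mult(power, M)
        closure = [[closure[i][j] or power[i][j] for j in range(n)]
                   for i in range(n)]
    return closure

# Example (assumed): a directed path 0 -> 1 -> 2 -> 3.
M = [[0, 1, 0, 0],
     [0, 0, 1, 0],
     [0, 0, 0, 1],
     [0, 0, 0, 0]]
assert transitive_closure_naive(M)[0][3] and not transitive_closure_naive(M)[3][0]
```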

Transitive closure, continued

- How fast can we compute I + M + M^2 + ... + M^(n-1)?
- We can multiply two Boolean matrices in O(n^ω) steps.
- Using the "doubling trick", we only need log n Boolean matrix multiplications, giving a total cost of O(n^ω log n) to solve the transitive closure problem. This is indeed better than the O(n^3) of running breadth-first or depth-first search from each vertex.
- Think: what if n is not a power of 2?
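One possible answer to the think question, sketched below as an assumption rather than the lecture's intended answer: since Boolean OR is idempotent, we can simply square B = I OR M repeatedly. After t squarings B records all paths of length ≤ 2^t, and overshooting past n - 1 re-adds only paths already present, so no power-of-2 restriction on n is needed:

```python
def bool_mult(A, B):
    """Boolean matrix product: (A * B)[i][j] = OR_k (A[i][k] AND B[k][j])."""
    n = len(A)
    return [[any(A[i][k] and B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transitive_closure_doubling(M):
    """Square B = I OR M until all paths of length <= n - 1 are covered:
    ceil(log2(n - 1)) Boolean multiplications, so O(n^w log n) overall."""
    n = len(M)
    B = [[bool(M[i][j]) or i == j for j in range(n)] for i in range(n)]
    covered = 1                  # B covers all paths of length <= covered
    while covered < n - 1:
        B = bool_mult(B, B)      # now covers paths of length <= 2 * covered
        covered *= 2
    return B

# Example (assumed): a 5-vertex path 0->1->2->3->4; n = 5 is not a power of 2.
M = [[0, 1, 0, 0, 0],
     [0, 0, 1, 0, 0],
     [0, 0, 0, 1, 0],
     [0, 0, 0, 0, 1],
     [0, 0, 0, 0, 0]]
assert transitive_closure_doubling(M)[0][4] and not transitive_closure_doubling(M)[4][0]
```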