Chapter 9: Text Processing, Pattern Matching, Data Compression


Outline and Reading
- Strings (§9.1.1)
- Pattern matching algorithms
  - Brute-force algorithm (§9.1.2)
  - Knuth-Morris-Pratt algorithm (§9.1.4)
- Regular Expressions and Finite Automata
- Data Compression
  - Huffman Coding
  - Lempel-Ziv Compression

Motivation: Bioinformatics
- The application of computer science techniques to genetic data
- See Gene-Finding notes
- Many interesting algorithm problems
- Many interesting ethical issues!

Strings
- A string is a sequence of characters
- Examples of strings:
  - Java program
  - HTML document
  - DNA sequence
  - Digitized image
- An alphabet Σ is the set of possible characters for a family of strings
- Examples of alphabets:
  - ASCII
  - Unicode
  - {0, 1}
  - {A, C, G, T}
- Let P be a string of size m
  - A substring P[i..j] of P is the subsequence of P consisting of the characters with ranks between i and j
  - A prefix of P is a substring of the type P[0..i]
  - A suffix of P is a substring of the type P[i..m-1]
- Given strings T (text) and P (pattern), the pattern matching problem consists of finding a substring of T equal to P
- Applications:
  - Regular expressions
  - Programming languages
  - Search engines
  - Biological research

Pattern matching
- Suppose you want to find repeated ATs followed by a G in GAGATATCATATG.
  - How do you express the pattern to find?
  - How can you find it efficiently?
  - What if the strings were billions of characters long?
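One way to express that pattern is the regular expression (AT)+G. A minimal sketch in Python (the slides use Perl for their examples; Python's `re` is an assumption here, chosen so the snippet is self-contained):

```python
import re

# "Repeated ATs followed by a G": one or more AT pairs, then G.
text = "GAGATATCATATG"
pattern = re.compile(r"(?:AT)+G")

matches = [(m.start(), m.group()) for m in pattern.finditer(text)]
print(matches)  # → [(8, 'ATATG')]
```

Note that the ATAT starting at index 3 is not reported: it is followed by a C, not a G, so the match fails there and succeeds only at index 8.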

Finite Automata and Regular Expressions
- How do I match Perl-like regular expressions to text?
- Important topic: regular expressions and finite automata.
  - Theoretician: regular expressions are grammars that define regular languages
  - Programmer: compact patterns for matching and replacing

Regular Expressions
- A regular expression is one of:
  - a literal character
  - a (regular expression) in parentheses
  - a concatenation of two REs
  - the alternation ("or") of two REs, denoted + in formal notation
  - the closure of an RE, denoted * (i.e. 0 or more occurrences)
- Possibly additional syntactic sugar
- Examples:
  - abracadabra(cadabra)* = {abracadabra, abracadabracadabra, …}
  - (a*b + ac)d
  - (a(a+b)b*)*
  - t(w+o)?o  [? means 0 or 1 occurrence in Perl]
  - aa+rdvark  [+ means 1 or more occurrences in Perl]

Finite Automata
- Regular language: any language defined by an RE
- Finite automata: machines that recognize regular languages
- Deterministic Finite Automaton (DFA):
  - a set of states, including a start state and one or more accepting states
  - a transition function: given the current state and input letter, what's the new state?
- Non-deterministic Finite Automaton (NFA): like a DFA, but there may be
  - more than one transition out of a state on the same letter (pick the right one non-deterministically, i.e. via lucky guess!)
  - epsilon-transitions, i.e. optional transitions on no input letter
REs in common use
- Syntactic sugar:
  - [-a-cx-z]: match one of -, a, b, c, x, y, z
  - [^abc]: match a character that is not an a, b, or c
  - . : match any character
  - ? : match 0 or 1 instances of what preceded
  - \s: match a whitespace character
  - ^, $: match the beginning or end of string
  - ([pattern]): make [pattern] available in substitutions as $1, $2, etc.
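The sugar above can be exercised directly; a sketch using Python's `re` (an assumption — the slides use Perl, but these constructs behave the same in both):

```python
import re

assert re.match(r"[-a-cx-z]", "y")            # one of -, a, b, c, x, y, z
assert not re.match(r"[^abc]", "a")           # negated class rejects a
assert re.match(r"colou?r", "color")          # ? = 0 or 1 of what preceded
assert re.search(r"\sworld$", "hello world")  # \s whitespace, $ end of string

# Capture groups, referenced as \2 \1 here (Perl would write $2 $1):
swapped = re.sub(r"(\w+) (\w+)", r"\2 \1", "pattern matching")
print(swapped)  # → matching pattern
```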
Examples
- Perl examples (and other languages):

  $input =~ s/t[wo]?o/2/;
  $input =~ s/<link[^>]*>\s*//gs;
  $input =~ s/\s*mso-[^>"]*"/"/gis;
  $input =~ s/([^ ]+) +([^ ]+)/$2 $1/;
  $input =~ m/^[0-9]+\.?[0-9]*|\.[0-9]+$/;
  ($word1, $word2, $rest) = ($foo =~ m/^ *([^ ]+) +([^ ]+) +(.*)$/);
Multiples of 3?
- /^([0369]|[258][0369]*[147]|[147]([0369]|[147][0369]*[258])*[258]|[258][0369]*[258]([0369]|[147][0369]*[258])*[258]|[147]([0369]|[147][0369]*[258])*[147][0369]*[147]|[258][0369]*[258]([0369]|[147][0369]*[258])*[147][0369]*[147])*$/

DFA for AT(AT)*C
- Note that a DFA can be represented as a 2-D array: DFA[state][inputLetter] = newstate
- DFA:

  state  letter   newstate
  0      A        1
  0      T,C,G    0
  1      T        2
  1      A,C,G    0
  2      C        4 [accept]
  2      G,T      0
  2      A        3
  3      T        2
  3      A,G,C    0
  4      A,G,C,T  0
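The 2-D-array idea can be run directly; a minimal sketch in Python (an assumed language — the table is exactly the one above, with 4 as the lone accepting state):

```python
# DFA[state][letter] -> new state, for the language AT(AT)*C.
DFA = {
    0: {"A": 1, "T": 0, "C": 0, "G": 0},
    1: {"T": 2, "A": 0, "C": 0, "G": 0},
    2: {"C": 4, "A": 3, "G": 0, "T": 0},
    3: {"T": 2, "A": 0, "G": 0, "C": 0},
    4: {"A": 0, "G": 0, "C": 0, "T": 0},
}

def accepts(s, accepting=frozenset({4})):
    state = 0
    for ch in s:                  # O(n): one table lookup per input letter
        state = DFA[state][ch]
    return state in accepting

print(accepts("ATATC"), accepts("ATC"), accepts("ATG"))  # → True True False
```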

RE → NFA
- Given a regular expression, how can I build an NFA? Work bottom up:
  - Letter
  - Concatenation
  - Or
  - Closure
- (The construction for each case was shown as a diagram on the slide.)

RE → NFA Example
- Construct an NFA for the RE (A*B + AC)D, building up: A, A*, A*B + AC, (A*B + AC)D
- (The intermediate NFAs were shown as diagrams on the slide.)

NFA → DFA
- Keep track of the set of states you are in.
- On each new input letter, compute the new set of states you could be in.
- The set of states for the DFA is the power set of the NFA states.
  - I.e. up to 2^n states, where there were n in the NFA.
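The "set of states" idea can be sketched by simulating an NFA directly; a Python sketch on a small hypothetical NFA for A + AB (the states, transitions, and language here are illustrative assumptions, not from the slides):

```python
# NFA transitions as (state, letter) -> set of possible next states.
# State 0 is the start; on A it can go to 1 (accept A) or 2 (hoping for B).
NFA = {
    (0, "A"): {1, 2},
    (2, "B"): {3},
}
ACCEPT = {1, 3}

def nfa_accepts(s):
    states = {0}                                 # set of states we could be in
    for ch in s:                                 # advance every state at once
        states = set().union(*(NFA.get((q, ch), set()) for q in states))
    return bool(states & ACCEPT)

print(nfa_accepts("A"), nfa_accepts("AB"), nfa_accepts("B"))  # → True True False
```

The subset construction turns each distinct set reached this way into one DFA state, which is why the DFA can have up to 2^n states.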

Recognizing Regular Languages
- Suppose your language is given by a DFA. How to recognize it?
- Build a table: one row for every (state, input letter) pair, giving the resulting state.
- For each letter of the input string, compute the new state.
- When done, check whether the last state is an accepting state.
- Runtime? O(n), where n is the number of input letters.
- Another approach: use a C program to simulate the NFA with backtracking. Less space, more time.

Data Compression: Intro
- Suppose you have a text, abracadabra, and want to compress it.
- How many bits are required? At 3 bits per letter, 33 bits.
- Can we do better? How about variable-length codes?
- To be able to decode the file again, we need a prefix code: no code is a prefix of another.
- How do we make a prefix code that compresses the text?

Huffman Coding
- Note: put the letters at the leaves of a binary tree; left = 0, right = 1. Voilà! A prefix code.
- Huffman coding: an optimal prefix code
- Algorithm: use a priority queue.
  - Insert all letters according to frequency.
  - If there is only one tree left, done.
  - Else: a = deleteMin(); b = deleteMin(); make tree t out of a and b with weight a.weight() + b.weight(); insert(t).
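The algorithm above, sketched in Python with `heapq` as the priority queue (an assumption; the slides name no implementation). Ties in the queue are broken arbitrarily, so the exact bit patterns may differ from the next slide's, but the total code length is optimal either way:

```python
import heapq
from collections import Counter

def huffman_codes(text):
    # Priority queue of (weight, tiebreak, tree); a tree is a char or a pair.
    heap = [(f, i, ch) for i, (ch, f) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        fa, _, a = heapq.heappop(heap)          # a = deleteMin()
        fb, _, b = heapq.heappop(heap)          # b = deleteMin()
        heapq.heappush(heap, (fa + fb, count, (a, b)))  # insert merged tree
        count += 1
    codes = {}
    def walk(node, prefix):
        if isinstance(node, tuple):             # internal node: left=0, right=1
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:
            codes[node] = prefix                # leaf: a letter
    walk(heap[0][2], "")
    return codes

codes = huffman_codes("abracadabra")
print(sum(len(codes[c]) for c in "abracadabra"))  # → 23 bits, as on the next slide
```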

Huffman coding example
- abracadabra frequencies: a: 5, b: 2, c: 1, d: 1, r: 2
- Huffman code: a: 0, b: 100, c: 1010, d: 1011, r: 11
  - bits: 5*1 + 2*3 + 1*4 + 1*4 + 2*2 = 23
- Time to decode: follow the tree – Θ(n)
- Time to encode?
  - Compute frequencies – O(n)
  - Build heap – O(1), assuming the alphabet has constant size
  - Encode – O(n)

Huffman coding summary
- Huffman coding is very frequently used (you use it every time you watch HDTV or listen to an mp3, for example)
- Text files often compress to 60% of original size (depending on entropy)
- In real life, Huffman coding is usually used in conjunction with a modeling algorithm…

Data compression overview
- Two stages: modeling and entropy coding
- Modeling: break up input into tokens or chunks (the bigger, the better)
- Entropy coding: use shorter bit strings to represent more frequent tokens
  - If P is the probability of a code element, the optimal number of bits is –lg(P)
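Summing –lg(P) over the text gives the entropy lower bound on any entropy coder; a sketch in Python (an assumed language), applied to abracadabra for comparison with the Huffman example's 23 bits:

```python
import math
from collections import Counter

# For each letter with probability p = f/n, the optimal length is -lg(p) bits;
# the expected total is n * sum(-p * lg(p)) over the alphabet.
text = "abracadabra"
n = len(text)
entropy = sum(-(f / n) * math.log2(f / n) for f in Counter(text).values())
print(round(entropy * n, 1))  # → 22.4, just under Huffman's 23 whole bits
```

Huffman must use a whole number of bits per letter, which is why it lands slightly above this bound; arithmetic coding can get closer.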

Lempel-Ziv Modeling
- Consider compressing text
- Certain byte strings are more frequent than others: the, and, tion, es, etc.
- Model these with single tokens
- Build a dictionary of the byte strings you see; the second time you see a byte string, use the dictionary entry

Lempel-Ziv Compression
- Start with a dictionary of 256 entries for the first 256 characters
- At each step:
  - Output the code of the longest dictionary match and delete those characters from the input
  - Add the previous token plus the last letter as a new dictionary entry, with code 256, 257, 258, …
- Note that code lengths grow by one bit as the dictionary reaches size 512, 1024, 2048, etc.

Lempel-Ziv Example
- ABRACADABRA

  Output   Add to Dictionary
  1 (A)
  2 (B)    AB
  5 (R)    BR
  1 (A)    RA
  3 (C)    AC
  1 (A)    CA
  4 (D)    AD
  6 (AB)   DA
  8 (RA)   ABR

- Dictionary:
  1: A, 2: B, 3: C, 4: D, 5: R, 6: AB, 7: BR, 8: RA, 9: AC, 10: CA, 11: AD, 12: DA, 13: ABR
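The walk-through can be reproduced in code; a Python sketch (an assumed language) using the example's five-letter starting dictionary {A:1, …, R:5} instead of the full 256-entry byte table:

```python
def lzw_compress(text, alphabet="ABCDR"):
    dictionary = {ch: i + 1 for i, ch in enumerate(alphabet)}
    next_code = len(dictionary) + 1
    output, current = [], ""
    for ch in text:
        if current + ch in dictionary:
            current += ch                         # extend the longest match
        else:
            output.append(dictionary[current])    # emit code of longest match
            dictionary[current + ch] = next_code  # previous match + next letter
            next_code += 1
            current = ch
    output.append(dictionary[current])            # flush the final match
    return output

print(lzw_compress("ABRACADABRA"))  # → [1, 2, 5, 1, 3, 1, 4, 6, 8]
```

The output matches the table above: the second AB and RA are emitted as single codes 6 and 8.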

Lempel-Ziv Variations
- All compression programs like zip and gzip use variations on Lempel-Ziv
- Possible variations:
  - Fixed-length vs. variable-length codes; adaptive Huffman or arithmetic coding
  - Don't add duplicate entries to the dictionary
  - Limit the number of codes, or switch to larger ones as needed
  - Delete less frequent dictionary entries, or give frequent entries shorter codes

How about this approach:
- Repeat:
  - For each letter pair occurring in the text, try:
    - Replace the pair with a single new token
    - Measure the total entropy (Huffman-compressed size) of the file
    - If that letter pair resulted in the greatest reduction in entropy so far, remember it
  - Permanently substitute a new token for the pair that caused the greatest reduction in entropy
- Until no more reductions in entropy are possible
- Results: compression to about 25% for big books; better than gzip, zip. [But not as good as bzip!]
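A simplified Python sketch of the substitution loop. As a simplifying assumption it replaces the most frequent pair each round rather than trial-compressing with Huffman to measure entropy as the slide does; this greedy variant is the idea now known as byte-pair encoding:

```python
from collections import Counter

def pair_substitute(tokens, rounds=2):
    tokens = list(tokens)
    for _ in range(rounds):
        pairs = Counter(zip(tokens, tokens[1:]))  # count adjacent pairs
        if not pairs:
            break
        (a, b), _count = pairs.most_common(1)[0]  # greedy: most frequent pair
        merged, i = [], 0
        while i < len(tokens):
            if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == (a, b):
                merged.append(a + b)              # single new token for the pair
                i += 2
            else:
                merged.append(tokens[i])
                i += 1
        tokens = merged
    return tokens

print(pair_substitute("abracadabra", rounds=1))
# → ['ab', 'r', 'a', 'c', 'a', 'd', 'ab', 'r', 'a']
```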

Compressing other data
- Modeling for audio?
- Modeling for images?

Modeling for Images?
- (Image shown on the slide; credit: Wikipedia)

JPEG, etc.
- Modeling: convert to the frequency domain with the DCT
- Throw away some high-frequency components
- Throw away imperceptible components
- Quantize coefficients
- Encode the remaining coefficients with Huffman coding
- Results: up to 20:1 compression with good results, 100:1 with recognizable results
- How the DCT changed the world…

Data compression results
- The best algorithms compress text to 25% of original size, but humans can compress to 10%
- Humans have far better modeling algorithms because they have better pattern recognition and higher-level patterns to recognize
- Intelligence ≈ pattern recognition ≈ data compression?
- Going further: Data-Compression.com

Ethical issues on algorithms
- Back to an issue from the start of class: can algorithms be unethical?