Spatial and Temporal Data Mining: Data Compression
V. Megalooikonomou

General Overview
- Data Compression
  - Run Length Coding
  - Huffman Coding
  - PCM, DPCM, ADPCM
- Quantization
  - Scalar quantization
  - Vector quantization
- Image Compression
- Video Compression

Data Compression
- Why data compression? Storing or transmitting multimedia data requires large space or bandwidth.
- The size of one hour of 44 K samples/sec, 16-bit, stereo (two-channel) audio is 3600 x 44000 x 2 x 2 = 633.6 MB, which can be recorded on one CD (650 MB). MP3 compression can reduce this by a factor of 10.
- The size of a 500 x 500 color image is 750 KB without compression (JPEG can reduce this by a factor of 10 to 20).
- The size of a one-minute real-time, full-size, color video clip is 60 x 30 x 640 x 480 x 3 = 1.659 GB, so a two-hour movie requires about 200 GB. MPEG-2 compression can bring this down to 4.7 GB (DVD).
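
These figures are just multiplication; a quick sanity check of the slide's arithmetic in Python (byte counts, decimal MB/GB):

```python
# Raw (uncompressed) sizes from the slide, in bytes.
audio = 3600 * 44_000 * 2 * 2      # 1 hour, 44 K samples/s, 16 bits, 2 channels
image = 500 * 500 * 3              # 500 x 500 pixels, 3 bytes (24-bit color) each
video = 60 * 30 * 640 * 480 * 3    # 1 minute, 30 frames/s, 640 x 480, 24-bit color

print(audio / 1e6)   # 633.6 (MB)
print(image / 1e3)   # 750.0 (KB)
print(video / 1e9)   # 1.65888 (GB); about 199 GB for a two-hour movie
```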

Data Compression

Run length coding
- Example: a scanline of a binary image is
  00000 00000 00000 00000 00010 00000 00000 01000 00000 00000
- Total of 50 bits.
- However, runs of consecutive 0's or 1's can be represented more efficiently: 0(23) 1(1) 0(12) 1(1) 0(13).
- If the counts can be represented using 5 bits, then we can reduce the amount of data to 5 + 5 x 5 = 30 bits, a 40% reduction (30/50 = 60% of the original size).
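
A minimal run-length encoder for this example; itertools.groupby collects the runs, and the 5-bit packing is left as a comment:

```python
from itertools import groupby

def rle_encode(bits):
    """Run-length encode a bit string into (symbol, run-length) pairs."""
    return [(sym, len(list(run))) for sym, run in groupby(bits)]

scanline = "0" * 23 + "1" + "0" * 12 + "1" + "0" * 13   # the 50-bit scanline above
print(rle_encode(scanline))
# [('0', 23), ('1', 1), ('0', 12), ('1', 1), ('0', 13)]
# Five runs at 5 bits per count = 25 bits, plus a few bits of header
# (e.g., the value of the first run), versus 50 bits uncompressed.
```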

Huffman coding
- The basic idea behind the Huffman coding algorithm is to assign shorter codewords to more frequently used symbols.
- Example: let there be 4 letters in the language: "A", "B", "S", "Z".
- To uniquely encode each letter with a fixed-length code, we need two bits: A-00, B-01, S-10, Z-11. The message "AAABSAAAAZ" is encoded with 20 bits.
- Now how about assigning A-0, B-100, S-101, Z-11? The same message can be encoded using 15 bits.

Huffman coding
- Given a set of N symbols S = {si, i=1,…,N} with probabilities of occurrence Pi, i=1,…,N, find the optimal encoding of the symbols that achieves the minimum transmission rate (bits/symbol).
- Algorithm:
  - Each symbol is a leaf node in a tree.
  - Combine the two symbols or composite symbols with the least probabilities to form a new parent composite symbol, which has the combined probability. Assign bits 0 and 1 to the two links.
  - Continue this process until all symbols are merged into one root node (bottom up).
  - For each symbol, the sequence of 0s and 1s on the path from the root node to the symbol is its codeword.
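
A compact sketch of this bottom-up merge, using Python's heapq as the priority queue. The 0/1 assignment to the two merged branches is arbitrary, so the exact codewords may differ from the example above while the encoded length stays 15 bits:

```python
import heapq

def huffman_code(freqs):
    """Build Huffman codewords from {symbol: probability or count}."""
    # Heap entries: (probability, tiebreaker, {symbol: codeword-so-far}).
    heap = [(p, i, {sym: ""}) for i, (sym, p) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        # Merge the two least probable (composite) symbols ...
        p0, _, c0 = heapq.heappop(heap)
        p1, _, c1 = heapq.heappop(heap)
        # ... assigning bit 0 to one branch and bit 1 to the other.
        merged = {s: "0" + w for s, w in c0.items()}
        merged.update({s: "1" + w for s, w in c1.items()})
        heapq.heappush(heap, (p0 + p1, tie, merged))
        tie += 1
    return heap[0][2]      # root node holds the full codeword table

msg = "AAABSAAAAZ"
code = huffman_code({s: msg.count(s) for s in "ABSZ"})
print(code, sum(len(code[s]) for s in msg), "bits")   # 15 bits
```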

Pulse code modulation
- The process of digitizing an audio signal is called pulse code modulation (PCM):
  - Sample the analog waveform at a minimum rate.
  - Quantize each sample using a fixed number of bits.
- To reduce the amount of data, we can:
  - Reduce the sampling rate.
  - Reduce the number of bits per sample.

Differential Pulse Code Modulation (DPCM)
- Encode the changes between consecutive samples.
- Example: the values of the differences between samples are much smaller than those of the original samples, so fewer bits are used to encode the signal (e.g., 7 bits instead of 8 bits).
- For decoding, the difference is added to the previous sample to obtain the value of the current sample. Lossless coding is achieved.
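
A minimal sketch of the encode/decode round trip, assuming the first sample is differenced against 0 (i.e., transmitted as-is):

```python
def dpcm_encode(samples):
    """Replace each sample by its difference from the previous sample."""
    diffs, prev = [], 0
    for s in samples:
        diffs.append(s - prev)
        prev = s
    return diffs

def dpcm_decode(diffs):
    """Add each difference to the previously reconstructed sample."""
    out, prev = [], 0
    for d in diffs:
        prev += d
        out.append(prev)
    return out

samples = [100, 102, 105, 104, 101]
diffs = dpcm_encode(samples)            # [100, 2, 3, -1, -3]
assert dpcm_decode(diffs) == samples    # lossless, as stated above
```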

Adaptive Differential Pulse Code Modulation (ADPCM)
- One observation is that a small difference between samples occurs more often than a large change.
- An entropy coding method such as Huffman coding can be used to encode the differences for additional efficiency:
  - The probabilities of occurrence of the different difference values are first obtained using a large database.
  - Huffman coding is used to determine the codeword for each difference value.
  - The codeword table is fixed and made available to decoders.
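
A sketch of that table-building step, reusing the dpcm_encode and huffman_code helpers from the snippets above; the training list is made-up stand-in data for the slide's "large database":

```python
from collections import Counter

# Made-up training signals standing in for a large database.
training = [[100, 102, 105, 104, 101], [50, 51, 51, 53, 52]]

# 1. Empirical distribution of the differences (first raw sample skipped).
diff_counts = Counter(d for sig in training for d in dpcm_encode(sig)[1:])

# 2. One fixed Huffman codeword per difference value; frequent small
#    differences receive the short codewords. The resulting table is then
#    shipped to (or hard-wired into) the decoders.
table = huffman_code(diff_counts)
print(table)
```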

Linear Predictive Coding (LPC)
- In DPCM, the value of the current sample is predicted from the previous sample. Can a better prediction be made?
- Yes! For example, we can use the previous two samples to predict the current one.
- LPC is more general than DPCM: it exploits the correlation between multiple consecutive samples.
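
A sketch of a two-tap predictor. The coefficients a1=2, a2=-1 simply extrapolate a straight line through the last two samples; a real LPC coder instead fits the coefficients to the signal's statistics:

```python
def lpc_residual(samples, a1=2.0, a2=-1.0):
    """Residual of the prediction s_hat[n] = a1*s[n-1] + a2*s[n-2].
    The first two samples are transmitted uncoded."""
    head = list(samples[:2])
    res = [s - (a1 * samples[n - 1] + a2 * samples[n - 2])
           for n, s in enumerate(samples) if n >= 2]
    return head, res

print(lpc_residual([100, 102, 104, 106, 109]))
# ([100, 102], [0.0, 0.0, 1.0]) -- near-zero residual on a near-linear signal
```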

General Overview
- Data Compression
  - Run Length Coding
  - Huffman Coding
  - PCM, DPCM, ADPCM
- Quantization
  - Scalar quantization
  - Vector quantization
- Image Compression
- Video Compression

Quantization
(Figure: a one-dimensional quantizer, with interval endpoints and levels indicated on a horizontal line.)
- Quantization is the discretization of a continuous-alphabet source (signal).
- X: original value, X': codeword, Q: quantizer.
- Distortion:
  - d(X, X'): a measure of the overall quality degradation due to Q.
  - Mean Squared Error (MSE): E[(X - X')^2].

Scalar Quantization
(Figure: a one-dimensional quantizer, with interval endpoints and levels indicated on a horizontal line.)
- Approximates a source symbol by its closest representative from a codebook.
- An N-point scalar quantizer Q is a mapping Q: R -> C, where R is the real line and C = {y1, y2, …, yN} ⊂ R is the codebook of size N.
- Q(x) = D(E(x)), where E: R -> I is the encoder, D: I -> C is the decoder, and I = {1, 2, …, N}.
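
A minimal uniform scalar quantizer illustrating the E/D decomposition; this sketch assumes uniform intervals on [lo, hi] with midpoint codewords, whereas an optimal quantizer would place the levels non-uniformly:

```python
import numpy as np

def uniform_sq(x, n_levels=8, lo=-1.0, hi=1.0):
    """N-point uniform scalar quantizer: returns (index, codeword)."""
    step = (hi - lo) / n_levels
    # Encoder E: real value -> interval index in I = {0, ..., N-1}.
    idx = np.clip(np.floor((np.asarray(x) - lo) / step), 0, n_levels - 1).astype(int)
    # Decoder D: index -> codeword (the interval midpoint); codebook C.
    codebook = lo + (np.arange(n_levels) + 0.5) * step
    return idx, codebook[idx]               # Q(x) = D(E(x))

x = np.random.uniform(-1, 1, 10_000)
_, xq = uniform_sq(x)
print("MSE:", ((x - xq) ** 2).mean())   # ~ step^2 / 12 ~ 0.0052 for uniform input
```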

Vector Quantization
- VQ: a generalization of scalar quantization to the quantization of a vector.
- VQ is superior to scalar quantization. Why?

Vector Quantization
- VQ: a generalization of scalar quantization to the quantization of a vector.
- VQ is superior to scalar quantization. Why?
  - It exploits linear and non-linear dependence that exists among the components of a vector.
  - VQ is superior even when the components of the random vector are statistically independent of each other. How?
- A vector quantizer Q of dimension k and size N is a mapping from a vector (a "point" in R^k) into a finite set C = {y1, y2, …, yN}, yi ∈ R^k, the codebook of size N:
  Q: R^k -> C
- It partitions R^k into N regions or cells Ri, for i ∈ J = {1, 2, …, N}:
  Ri = {x ∈ R^k : Q(x) = yi}
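
The mapping written out in numpy for squared-error distance, with a small hypothetical 2-D codebook; each input row is snapped to its nearest codeword, i.e., its cell's representative:

```python
import numpy as np

def vq_encode(x, codebook):
    """Encoder: index of the nearest codeword for each row of x."""
    # Squared Euclidean distances, shape (n points, N codewords).
    d2 = ((x[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1)

def vq_decode(idx, codebook):
    """Decoder: replace each index by its codeword y_i."""
    return codebook[idx]

codebook = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])   # k=2, N=4
x = np.random.rand(5, 2)
print(vq_decode(vq_encode(x, codebook), codebook))
```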

Motivation for Vector Quantization
- Plot two successive values as a single vector (x(1), x(2)).
(Figure panels: scalar quantization; vector quantization.)

Scalar vs Vector Quantization

Vector Quantization - Design
- The goal is to find:
  - A codebook (decoder): the representation levels.
  - A partition rule (encoder): the decision levels.
- So as to maximize an overall measure of performance.

VQ Design – Optimality Conditions: Nearest Neighbor Condition
- For a given codebook C, the optimal regions {Ri : i=1,…,N} satisfy the condition:
  Ri ⊆ {x : d(x, yi) ≤ d(x, yj) for all j}
- That is, Q(x) = yi only if d(x, yi) ≤ d(x, yj) for all j.

VQ Design – Optimality Conditions: Centroid Condition
- For given partition regions {Ri : i=1,…,N}, the optimal codewords satisfy the condition:
  yi = cent(Ri)
- For the squared-error (SE) measure, the centroid of a set R = {xi : i = 1, …, |R|} is the arithmetic average:
  cent(R) = (1/|R|) Σi xi

VQ Design – The Generalized Lloyd Algorithm (GLA)
- It produces a locally optimal codebook from a training sequence T.
- It starts with an initial codebook and iteratively improves it.
- Lloyd Iteration: given a codebook Cm = {yi}, generate an improved codebook Cm+1 as follows:
  - Partition T into cells Ri using the Nearest Neighbor Condition:
    Ri = {x : d(x, yi) ≤ d(x, yj) for all j ≠ i}
  - Using the Centroid Condition, compute the centroids of the cells just found to obtain the new codebook Cm+1 = {cent(Ri)}.
  - Compute the average distortion Dm+1 for Cm+1. If the fractional drop (Dm - Dm+1) / Dm is below a certain threshold, stop; else continue with m ← m+1.
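
One possible numpy implementation of the iteration for squared-error distortion; this is a sketch assuming the initial codebook is supplied by the caller (see the split mechanism on the next slides), here naively seeded with random training vectors:

```python
import numpy as np

def gla(train, codebook, tol=1e-4, max_iter=100):
    """Generalized Lloyd Algorithm for squared error.
    train: (n, k) training vectors; codebook: (N, k) initial codewords."""
    prev_d = np.inf
    for _ in range(max_iter):
        # Nearest Neighbor Condition: partition T into cells R_i.
        d2 = ((train[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        assign = d2.argmin(axis=1)
        d = d2[np.arange(len(train)), assign].mean()   # average distortion
        # Stop once the fractional drop in distortion falls below tol.
        if np.isfinite(prev_d) and (prev_d - d) / prev_d < tol:
            break
        prev_d = d
        # Centroid Condition: move each codeword to its cell's mean.
        for i in range(len(codebook)):
            cell = train[assign == i]
            if len(cell):                   # leave empty cells unchanged
                codebook[i] = cell.mean(axis=0)
    return codebook

rng = np.random.default_rng(0)
train = rng.normal(size=(2000, 2))
cb0 = train[rng.choice(len(train), 8, replace=False)].copy()   # naive init
print(gla(train, cb0))
```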

VQ Design – The Generalized Lloyd Algorithm (GLA)
- To solve the initial codebook generation problem, a partition split mechanism is used. How?

VQ Design – The Generalized Lloyd Algorithm (GLA)
- To solve the initial codebook generation problem, a partition split mechanism is used. How?
  - Start with a codebook containing only one codeword (which one?).
  - In each repetition, before applying the Lloyd iteration, double the number of codewords from the previous iteration.
- See: http://www.data-compression.com/vq.html#animation
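
A sketch of the splitting loop on top of the gla function and train data from the previous snippet. It assumes the single starting codeword is the centroid of the whole training set (the optimal 1-point codebook for squared error) and that eps is a small perturbation:

```python
def split_design(train, target_size, eps=0.01):
    """Grow a codebook 1 -> 2 -> 4 -> ... by splitting, refining with GLA."""
    codebook = train.mean(axis=0, keepdims=True)   # optimal 1-point codebook
    while len(codebook) < target_size:
        # Double the codebook: perturb every codeword into two nearby ones.
        codebook = np.vstack([codebook * (1 + eps), codebook * (1 - eps)])
        codebook = gla(train, codebook)            # Lloyd iterations
    return codebook

print(split_design(train, 8))
```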

General Overview
- Data Compression
  - Run Length Coding
  - Huffman Coding
  - PCM, DPCM, ADPCM
- Quantization
  - Scalar quantization
  - Vector quantization
- Image Compression
- Video Compression

Image Compression
- From the 1D case, we observe that data compression can be achieved by exploiting the correlation between samples.
- This idea is applicable to 2D signals as well. Instead of predicting sample values, we can use the so-called transformation method to obtain a more compact representation.
- Discrete Cosine Transform (DCT): essentially the real (cosine) part of the 2D Fourier transform.

Discrete Cosine Transform (DCT)
- DCT (for an N x N image f):
  F(u,v) = C(u) C(v) Σ_{x=0}^{N-1} Σ_{y=0}^{N-1} f(x,y) cos[(2x+1)uπ / 2N] cos[(2y+1)vπ / 2N]
- Inverse DCT:
  f(x,y) = Σ_{u=0}^{N-1} Σ_{v=0}^{N-1} C(u) C(v) F(u,v) cos[(2x+1)uπ / 2N] cos[(2y+1)vπ / 2N]
  where C(0) = √(1/N) and C(k) = √(2/N) for k > 0.

DCT transform of 2D Images
- DCT example (figure).
- The DCT of an image can also be considered as the projection of the original image onto the DCT basis functions. Each basis function is of the form
  cos[(2x+1)uπ / 2N] cos[(2y+1)vπ / 2N]

DCT transform of 2D Images
- The basis functions for an 8 x 8 DCT (figure).

DCT compression of 2D Images
- After the DCT, only a few DCT coefficients have large values.
- We need to:
  - Quantize the DCT coefficients.
  - Encode the positions of the large coefficients.
  - Compress the values of the coefficients.
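
A minimal numpy sketch of the whole idea on one 8 x 8 block: build a separable orthonormal DCT-II, zero out the small coefficients, and invert. (Real JPEG instead quantizes with a perceptual table and entropy-codes the zig-zag-scanned coefficient positions and values, e.g., with run-length and Huffman coding.)

```python
import numpy as np

def dct_matrix(n=8):
    """Rows are the orthonormal 1-D DCT-II basis vectors."""
    k, x = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    m = np.sqrt(2.0 / n) * np.cos((2 * x + 1) * k * np.pi / (2 * n))
    m[0] /= np.sqrt(2.0)            # C(0) scaling from the formula above
    return m

def dct2(block):
    d = dct_matrix(block.shape[0])
    return d @ block @ d.T          # separable 2-D DCT

def idct2(coeffs):
    d = dct_matrix(coeffs.shape[0])
    return d.T @ coeffs @ d         # orthonormal, so the transpose inverts

block = 10.0 + np.add.outer(np.arange(8.0), np.arange(8.0))   # smooth test block
c = dct2(block)
c[np.abs(c) < 0.01 * np.abs(c).max()] = 0    # keep only the large coefficients
print(np.count_nonzero(c), "of 64 coefficients kept")
print(np.abs(idct2(c) - block).max())        # tiny reconstruction error
```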