Single Image Super-Resolution Using Sparse Representation, by Michael Elad

Slides: 27
Single Image Super-Resolution Using Sparse Representation*
Michael Elad, The Computer Science Department, The Technion – Israel Institute of Technology, Haifa 32000, Israel.
* Joint work with Roman Zeyde and Matan Protter.
MS 45: Recent Advances in Sparse and Non-local Image Regularization – Part III of III, Wednesday, April 14, 2010.

The Super-Resolution Problem
[Diagram: the high-res image passes through blur (H) and decimation (S), and white Gaussian noise (WGN) v is added, yielding the low-res image.]
Our task: reverse the process, i.e., recover the high-resolution image from the low-resolution one.
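The degradation chain above can be simulated in a few lines. The sketch below is illustrative only, not the authors' code: it assumes a separable Gaussian blur for H, decimation by simple subsampling for S, and i.i.d. Gaussian noise for v; the function name and all parameter values are hypothetical.

```python
import numpy as np

def degrade(y_h, scale=2, blur_sigma=1.0, noise_sigma=2.0, rng=None):
    """Simulate the forward model: blur (H), decimation (S), then WGN (v)."""
    rng = np.random.default_rng(0) if rng is None else rng
    # Separable Gaussian blur (H), applied row-wise and column-wise.
    r = int(3 * blur_sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x ** 2 / (2 * blur_sigma ** 2))
    k /= k.sum()
    blurred = np.apply_along_axis(np.convolve, 0, y_h.astype(float), k, mode="same")
    blurred = np.apply_along_axis(np.convolve, 1, blurred, k, mode="same")
    # Decimation (S): keep every `scale`-th pixel in each direction.
    decimated = blurred[::scale, ::scale]
    # Additive white Gaussian noise (v).
    return decimated + noise_sigma * rng.standard_normal(decimated.shape)
```

Recovering y_h from the output of such a chain is exactly the task the talk addresses.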

Single Image Super-Resolution: Recovery Algorithm
The reconstruction process should rely on:
• the given low-resolution image,
• the knowledge of S, H, and the statistical properties of v, and
• image behavior (a prior).
In our work:
• we use a patch-based sparse and redundant representation prior, and
• we follow the work by Yang, Wright, Huang, and Ma [CVPR 2008, IEEE-TIP, to appear], proposing an improved algorithm.

Core Idea (1) - Work on Patches
We interpolate the low-res image in order to align the coordinate systems. Every patch from the interpolated image should go through a process of resolution enhancement; the improved patches are then merged together into the final image (by averaging).
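The extract-enhance-merge scheme on this slide can be sketched as follows. This is a minimal illustration, not the authors' code; the function names are hypothetical, and the averaging is the standard overlap-averaging the slide mentions.

```python
import numpy as np

def extract_patches(img, psize=9, step=1):
    """Extract all overlapping psize x psize patches and their positions."""
    positions = [(i, j)
                 for i in range(0, img.shape[0] - psize + 1, step)
                 for j in range(0, img.shape[1] - psize + 1, step)]
    return [img[i:i + psize, j:j + psize].copy() for i, j in positions], positions

def merge_patches(patches, positions, shape, psize=9):
    """Merge (enhanced) patches back into one image, averaging overlaps."""
    acc = np.zeros(shape)
    cnt = np.zeros(shape)
    for p, (i, j) in zip(patches, positions):
        acc[i:i + psize, j:j + psize] += p
        cnt[i:i + psize, j:j + psize] += 1
    return acc / np.maximum(cnt, 1)
```

With an identity "enhancement", extract followed by merge reproduces the input image, which is a convenient sanity check for the bookkeeping.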

Core Idea (2) – Learning [Yang et al. '08]
We shall perform a sparse decomposition of the low-res patch with respect to a learned dictionary Aℓ, and then construct the high-res patch using the same sparse representation, imposed on a second dictionary Ah. And now, let's go into the details.
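The two-step idea, i.e., code once on the low-res side, synthesize on the high-res side with the same coefficients, fits in a few lines. This is a schematic sketch, not the authors' code; `sparse_code` stands in for a pursuit algorithm (the talk later uses OMP), and the dictionary pair is assumed given.

```python
import numpy as np

def enhance_patch(p_low, A_low, A_high, sparse_code):
    """Sparse-code the low-res patch over A_low, then synthesize the
    high-res patch with the SAME coefficients over A_high."""
    q = sparse_code(p_low, A_low)  # sparse representation of the low-res patch
    return A_high @ q              # high-res patch from the same representation
```

The key assumption is that the two dictionaries are coupled so that one coefficient vector describes both patches; any consistent coder can be plugged in as `sparse_code`.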

The Sparse-Land Prior [Aharon & Elad '06]
Extraction of a patch from yh at location k is performed by a linear patch-extraction operator. Model assumption: every such patch can be represented sparsely over the dictionary Ah.

Low Versus High-Res. Patches
L = blur + decimation + interpolation. This operator relates the low-res patch at position k to the high-res one.

Low Versus High-Res. Patches
We interpolate the low-res image in order to align the coordinate systems. As a consequence, qk is ALSO the sparse representation of the low-resolution patch, with respect to the dictionary LAh.
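The chain of equalities behind this observation can be written out explicitly (the notation below transcribes the slides' symbols: $p_h^k$ and $p_\ell^k$ are the high-res and low-res patches at position $k$):

```latex
p_\ell^k \;=\; L\, p_h^k \;=\; L\,\bigl(A_h\, q_k\bigr) \;=\; \bigl(L A_h\bigr)\, q_k
```

so with the effective low-res dictionary $A_\ell = L A_h$, the very same sparse vector $q_k$ represents the low-res patch over $A_\ell$ that represents the high-res patch over $A_h$.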

Training the Dictionaries – General
[Diagram: from one or more training pairs, the low-res image is interpolated and both the low-res and high-res images are pre-processed.] We obtain a set of matching patch-pairs; these will be used for the dictionary training.

Alternative: Bootstrapping
Take the given image to be scaled-up and simulate the degradation process on it (blur H, decimation S, additive WGN v). Use this pair of images to generate the patches for the training.
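The bootstrapping trick, i.e., training on the input image and an even-lower-res version of itself, can be sketched as below. This is an assumption-laden illustration: a simple box blur stands in for H (the real operator should match the degradation assumed for the input image), and the function name is hypothetical.

```python
import numpy as np

def bootstrap_pair(y_given, scale=2):
    """Treat the given image as the 'high-res' side and simulate the
    degradation on it to obtain a matching lower-res training partner."""
    h, w = y_given.shape
    # scale x scale box blur as a stand-in for H.
    blurred = np.zeros((h, w))
    for di in range(scale):
        for dj in range(scale):
            blurred += np.roll(np.roll(y_given.astype(float), -di, axis=0),
                               -dj, axis=1) / scale ** 2
    y_lower = blurred[::scale, ::scale]  # decimation S
    return y_given, y_lower
```

Patch-pairs extracted from such a pair let the dictionaries be trained on-line, from the very image being scaled up.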

Pre-Processing High-Res. Patches
[Diagram: the interpolated low-res image is subtracted from the high-res image, and the training patches are taken from the resulting difference image.] Patch size: 9×9.

Pre-Processing Low-Res. Patches
The interpolated low-res image is filtered with the four kernels [0 1 -1], [0 1 -1]T, [-1 2 -1], and [-1 2 -1]T. We extract patches of size 9×9 from each of these filtered images and concatenate them to form one vector of length 324. Dimensionality reduction by PCA then brings the patch representation down to length ~30.
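The filter-stack-and-project pipeline above can be sketched as follows. This is an illustrative implementation under stated assumptions, not the authors' code: the PCA is computed from the patches themselves via an SVD, and all function names are hypothetical.

```python
import numpy as np

# The four derivative kernels from the slide: two first-order, two second-order.
FILTERS = [np.array([[0, 1, -1]]), np.array([[0], [1], [-1]]),
           np.array([[-1, 2, -1]]), np.array([[-1], [2], [-1]])]

def filter2d(img, f):
    """'Same'-size 2-D correlation with a small kernel, zero-padded."""
    out = np.zeros_like(img, dtype=float)
    fh, fw = f.shape
    padded = np.pad(img.astype(float), ((fh // 2, fh - 1 - fh // 2),
                                        (fw // 2, fw - 1 - fw // 2)))
    for i in range(fh):
        for j in range(fw):
            out += f[i, j] * padded[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def low_res_features(img, psize=9, n_components=30):
    """Filter with the 4 kernels, stack psize x psize patches from each
    filtered image into 4*psize^2 vectors, then reduce by PCA."""
    maps = [filter2d(img, f) for f in FILTERS]
    H, W = img.shape
    feats = [np.concatenate([m[i:i + psize, j:j + psize].ravel() for m in maps])
             for i in range(H - psize + 1) for j in range(W - psize + 1)]
    X = np.array(feats)                    # (#patches, 324) for psize=9
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    B = Vt[:n_components]                  # PCA projection matrix
    return Xc @ B.T, B
```

The projection matrix B computed here is the "multiply by B to reduce dimension" step reused at test time.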

Training the Dictionaries: K-SVD
The dictionary is trained with the K-SVD (40 iterations, L = 3). For an image of size 1000×1000 pixels, there are ~12,000 examples to train on (remember: m = 1000, N = 30). Given a low-res patch to be scaled-up, we start the resolution enhancement by sparse coding it, to find qk.
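The sparse-coding step with target sparsity L can be sketched with a minimal Orthogonal Matching Pursuit. This is a textbook OMP written for clarity, not the authors' implementation; atoms of A are assumed to have comparable norms.

```python
import numpy as np

def omp(A, p, L=3):
    """Minimal Orthogonal Matching Pursuit: greedily pick up to L atoms of A
    and re-solve least squares on the chosen support at each step."""
    residual = p.copy()
    support = []
    q = np.zeros(A.shape[1])
    for _ in range(L):
        # Atom most correlated with the current residual.
        idx = int(np.argmax(np.abs(A.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Orthogonal projection of p onto the span of the selected atoms.
        coef, *_ = np.linalg.lstsq(A[:, support], p, rcond=None)
        residual = p - A[:, support] @ coef
    q[support] = coef
    return q
```

With L = 3, each low-res feature vector is approximated by at most three dictionary atoms, and the resulting qk drives the high-res reconstruction.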

Training the Dictionaries:
And then, the high-res patch is obtained by applying Ah to qk. Thus, Ah should be designed such that this reconstruction matches the true high-res patch.

Training the Dictionaries:
However, this approach disregards the fact that the resulting high-res patches are not used directly as the final result, but are averaged due to the overlaps between them. A better method (leading to a better final scaled-up image) would be to find Ah such that the error between the constructed (merged) image and the true high-res image is minimized.
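As a rough sketch of the direct fit: the simpler, patch-wise variant that motivates it minimizes the Frobenius error between the target high-res patches and their reconstructions, which has a closed-form solution via the pseudo-inverse. Note this sketch deliberately omits the overlap-averaging that the slide's fuller formulation folds in; the function name is hypothetical.

```python
import numpy as np

def train_high_dict(P_high, Q):
    """Given target high-res patches P_high (dim x N) and the sparse codes
    Q (m x N) found on the low-res side, solve
        min_Ah || P_high - Ah Q ||_F^2
    in closed form: Ah = P_high Q^+ (Moore-Penrose pseudo-inverse)."""
    return P_high @ np.linalg.pinv(Q)
```

When Q has full row rank, this recovers the dictionary that generated the patches exactly; in practice it simply gives the least-squares-optimal Ah for the fixed codes.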

Overall Block-Diagram
[Diagram: training the low-res dictionary and then the high-res dictionary (on-line or off-line), feeding into the recovery algorithm (next slide).]

The Super-Resolution Algorithm
Interpolate the low-res image; filter it with the kernels [0 1 -1], [0 1 -1]T, [-1 2 -1], and [-1 2 -1]T; extract patches and multiply by B to reduce dimension; sparse-code each result using Aℓ to compute qk; multiply by Ah and combine the resulting patches by averaging the overlaps.

Relation to the Work by Yang et al.
• We use a direct approach to the training of Ah.
• We interpolate to avoid coordinate ambiguity.
• We use OMP for sparse coding.
• We train using the K-SVD.
• We can train on the given image or a training set.
• We reduce dimension prior to training.
• We train on a difference-image.
• We avoid post-processing.
Bottom line: the proposed algorithm is much simpler, much faster, and, as we show next, it also leads to better results.

Results (1) – Off-Line Training
The training image: 717×717 pixels, providing a set of 54,289 training patch-pairs.

Results (1) – Off-Line Training
[Images: the ideal image, the given image, bicubic interpolation (PSNR = 14.68 dB), and the SR result (PSNR = 16.95 dB).]

Results (2) – On-Line Training
The given image, scaled up (factor 2:1) using the proposed algorithm: PSNR = 29.32 dB (a 3.32 dB improvement over bicubic).

Results (2) – On-Line Training
[Images: the original, bicubic interpolation, and the SR result.]

Results (2) – On-Line Training
[Images: the original, bicubic interpolation, and the SR result, for a second example.]

Comparative Results – Off-Line

             Bicubic        Yang et al.    Our alg.
             PSNR   SSIM    PSNR   SSIM    PSNR   SSIM
Barbara      26.24  0.75    26.39  0.76    26.77  0.78
Coastguard   26.55  0.61    27.02  0.64    27.12  0.66
Face         32.82  0.80    33.11  0.80    33.52  0.82
Foreman      31.18  0.91    32.04  0.91    33.19  0.93
Lenna        31.68  0.86    32.64  0.86    33.00  0.88
Man          27.00  0.75    27.76  0.77    27.91  0.79
Monarch      29.43  0.92    30.71  0.93    31.12  0.94
Pepper       32.39  0.87    33.33  0.87    34.05  0.89
PPT3         23.71  0.87    24.98  0.89    25.22  0.91
Zebra        26.63  0.79    27.95  0.83    28.52  0.84
Average      28.76  0.81    29.59  0.83    30.04  0.85

Comparative Results - Example
[Images: the result of Yang et al. versus ours.]

Summary
• Single-image scale-up: an important problem, with many attempts to solve it in the past decade.
• The rules of the game: use the given image, the known degradation, and a sophisticated image prior.
• Yang et al. [2008]: a very elegant way to incorporate a sparse & redundant representation prior.
• We introduced modifications to Yang's work, leading to a simple, more efficient algorithm with better results.
• More work is required to improve further the results obtained.

A New Book
"Sparse and Redundant Representations: From Theory to Applications in Signal and Image Processing", by Michael Elad.
Thank You Very Much!