Chapter I: Digital Imaging Fundamentals, Lesson III: Processing
- Slides: 46
Chapter I, Digital Imaging Fundamentals, Lesson III: Processing. http://www.kodak.com/country/US/en/digital/dlc/book3/chapter1/digFundProcess1.shtml
In this lesson, we will see how the quality of a digital image can be analyzed to identify problems with contrast and dynamic range. The most common analysis operation is the histogram, a bar graph showing the number of pixels at each gray level.
An image with good contrast and good dynamic range generates a histogram with pixels distributed across the full brightness range from 0 to 255.
Pixels in this type of image span white, black, and hundreds of shades of gray.
In a low contrast image, pixels are distributed over a short dynamic range; in this example, from about 130 to 180 on the gray scale. Pixels in a low contrast image occupy only a few shades of gray.
A high contrast image generates a histogram with high pixel count at the white and black extremes of the range.
Pixels in this type of image are mostly white and black, with few intermediate shades of gray.
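The histogram analysis described above can be sketched in a few lines of Python. This is a minimal illustration assuming 8-bit gray values flattened into a list; the function name and the toy image values are ours, not from the course.

```python
def histogram(pixels, levels=256):
    """Count how many pixels fall at each gray level (0..levels-1)."""
    counts = [0] * levels
    for p in pixels:
        counts[p] += 1
    return counts

# A toy 3x3 "image" flattened into a list of 8-bit gray values.
image = [0, 0, 130, 180, 180, 180, 255, 255, 255]
hist = histogram(image)
```

Plotting `hist` as a bar graph gives exactly the kind of histogram shown on these slides: a low contrast image would have all its nonzero bars bunched into a narrow band of gray levels.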
Image enhancement processes are based on fundamental ways in which image data can be changed mathematically. First, let's look at three ways in which histogram information can be manipulated. Slide mapping changes brightness by adding or subtracting a constant value. For example, adding a constant of 50 to every pixel in this image slides the histogram to the right by 50 gray levels.
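A slide mapping can be sketched in one line of Python. The offset of 50 mirrors the example above; clamping to the 0-255 range is an assumption for 8-bit pixels.

```python
def slide_map(pixels, offset):
    """Brighten or darken by adding a constant, clamping to 0-255."""
    return [min(255, max(0, p + offset)) for p in pixels]

# Adding 50 slides every value (and thus the histogram) to the right.
brightened = slide_map([0, 100, 230], 50)
```

Note that values near the top of the range saturate at 255, which is why heavy slide mapping can "pile up" pixels at the white end of the histogram.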
Stretch Mapping improves poor contrast by multiplying or dividing each pixel by a constant. Multiplying "spreads" the pixel values out so that a greater range of gray is used.
Complement mapping changes the digital value of each pixel to reverse the image. Black pixels become white. White pixels become black. And gray pixels become their complement.
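For 8-bit pixels, the complement of a value is simply 255 minus that value, so the whole mapping is one expression:

```python
def complement(pixels):
    """Reverse the image: each 8-bit value maps to 255 minus itself."""
    return [255 - p for p in pixels]

# Black (0) becomes white (255), white becomes black,
# and grays become their complements.
reversed_image = complement([0, 100, 255])
```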
To make color corrections to 24-bit color images, mapping operations can be applied to the red, green, and blue color planes. Reducing the red color plane by 50 levels moves the color balance towards cyan.
Reducing the green color plane by 50 levels moves the color balance towards magenta.
Reducing the blue color plane by 50 levels moves the color balance towards yellow.
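The three color-balance corrections above are just a slide mapping applied to a single plane. A minimal sketch, with pixels represented as (R, G, B) tuples (our representation, chosen for illustration):

```python
def shift_plane(pixels, plane, offset):
    """Apply a slide mapping to one color plane of an RGB image.
    plane: 0 = red, 1 = green, 2 = blue."""
    out = []
    for rgb in pixels:
        rgb = list(rgb)
        rgb[plane] = min(255, max(0, rgb[plane] + offset))
        out.append(tuple(rgb))
    return out

# Reducing red by 50 levels moves the balance toward cyan;
# plane 1 with -50 would move it toward magenta, plane 2 toward yellow.
toward_cyan = shift_plane([(200, 120, 80)], 0, -50)
```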
The mapping functions we have just considered are examples of Pixel Point image processing. Two other types are Pixel Group Processes, and Frame Processes. As we have seen, in pixel point processing a mathematical function "maps" the input value of each pixel to a new output value. This lightens or darkens the image, or changes contrast.
In pixel group processing, a mathematical process called a convolution changes a pixel's value based on the brightness of the pixel and its neighbors. Examples of group processing include noise filtering, sharpening, and blurring.
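A 3x3 convolution can be sketched as follows. The kernel shown is a commonly used sharpening kernel (boost the center, subtract the four neighbors), offered here as an illustration rather than the specific filter used in the course; for simplicity this sketch leaves border pixels unchanged.

```python
def convolve3x3(image, kernel):
    """Replace each interior pixel with the weighted sum of its 3x3
    neighborhood, clamped to 0-255. Border pixels are left unchanged."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            acc = 0
            for ky in range(3):
                for kx in range(3):
                    acc += kernel[ky][kx] * image[y + ky - 1][x + kx - 1]
            out[y][x] = min(255, max(0, int(acc)))
    return out

# A common sharpening kernel; its weights sum to 1, so flat regions
# are unchanged while edges are exaggerated.
sharpen = [[ 0, -1,  0],
           [-1,  5, -1],
           [ 0, -1,  0]]
```

A blur would use a kernel of small positive weights (e.g. all 1/9), averaging each pixel with its neighbors instead of exaggerating differences.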
Example 1: Noise Filtering. This example of noise filtering changes all black pixels surrounded by white pixels to white. This eliminates the noisy dots but leaves the rest of the image unchanged.
Example 2: Sharpening an Image. Group processes can be used to sharpen an image...
Example 3: Blurring an Image. ...or to blur a portion of the image, such as a busy background.
In frame processing, the image is manipulated by changing the locations of pixels in the entire image or a portion of the image. Examples of frame processing include image rotation and scaling.
Example 1: Image Rotation. Rotation turns the image by a specified amount, such as 90 or 180 degrees.
Example 2: Image Scaling. Scaling reduces the size of the image through a process called decimation, or enlarges it by replication or interpolation.
Decimation removes pixels to reduce the size of an image. To reduce it by half, every other row and column of pixels is removed.
Replication enlarges images by duplicating pixels.
Interpolation enlarges images by averaging the values of neighboring pixels to calculate values for the added pixels. This produces a higher quality enlargement than replication.
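The three scaling processes above can be sketched compactly in Python. These are minimal illustrations: decimation and replication work on a full 2-D image, while the interpolation sketch handles a single row (averaging adjacent pairs) to keep the idea visible.

```python
def decimate(image):
    """Halve an image by dropping every other row and column."""
    return [row[::2] for row in image[::2]]

def replicate(image):
    """Double an image by duplicating each pixel in both directions."""
    out = []
    for row in image:
        wide = [p for p in row for _ in (0, 1)]
        out.append(wide)
        out.append(wide[:])
    return out

def interpolate_row(row):
    """Widen a row by inserting the average of each neighboring pair."""
    out = []
    for a, b in zip(row, row[1:]):
        out.extend([a, (a + b) // 2])
    out.append(row[-1])
    return out
```

Replication produces visible blockiness because duplicated pixels carry no new information; interpolation's averaged in-between values are why it yields the smoother enlargement described above.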
Transforms are frame processes which place image data into another space, or domain, so that it can be more readily manipulated.
For example, the Photo YCC conversion used in Photo CD transforms red, green, and blue data into luminance and chrominance values. As we will see in the next unit, this makes the data easier to compress.
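The idea of the transform can be sketched as follows. Note the hedge: Kodak's actual Photo YCC transform uses its own defined coefficients and encoding; this sketch uses the common Rec. 601 luma weights purely to illustrate splitting RGB into one luminance value and two chrominance differences.

```python
def rgb_to_ycc(r, g, b):
    """Convert one RGB pixel into a luma value plus two chroma
    differences. Uses Rec. 601 luma weights as an illustration, not
    the exact Photo YCC coefficients."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    c1 = b - y          # blue-difference chroma
    c2 = r - y          # red-difference chroma
    return y, c1, c2
```

A neutral gray pixel has zero chrominance in this representation, which hints at why the chroma channels compress so well: they carry only the color difference, not the detail.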
Transforms can also provide precise filtering by separating an image into its spatial frequency components, then manipulating specific frequencies. For example, edges can be enhanced by increasing the high spatial frequencies.
In this unit, we will see how image compression reduces the data needed to store and transmit digital images. As we have seen, photographic digital images generate a lot of data. For example, one 35 mm negative scanned for Photo CD creates an 18 megabyte file. If that file were text it would fill over 6000 pages.
Image compression reduces image data by identifying patterns in the bit strings describing pixel values, then replacing them with a short code. For example, a scan line beginning with 9 black pixels followed by 5 white pixels could be encoded as "9B, 5W."
In much the same way, a color image can be compressed by grouping the data for similar pixels. For example, a group of 20 pixels can be encoded with one pixel address and color value.
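Both examples above are forms of run-length encoding: replace a run of identical values with one value and a count. A minimal sketch (the function name and "B"/"W" tokens are ours):

```python
def run_length_encode(line):
    """Encode a scan line as (value, run-length) pairs."""
    runs = []
    for p in line:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1
        else:
            runs.append([p, 1])
    return [(v, n) for v, n in runs]

# Nine black pixels then five white pixels -> "9B, 5W" as pairs.
encoded = run_length_encode(["B"] * 9 + ["W"] * 5)
```

The 14 input values shrink to two pairs. The same scheme applied to color values encodes a run of 20 identical pixels as a single value-and-count entry.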
There are two basic types of data compression: lossless compression and lossy compression. Lossless compression achieves only about a 2:1 compression ratio, but the reconstructed image is mathematically and visually identical to the original.
Lossy compression provides much higher compression rates, but the reconstructed image shows some loss of data compared to the original image. This loss may be visible to the eye, or the compression may be visually lossless.
Visually lossless compression is based on knowledge about color images and human perception. Visually lossless compression algorithms sort image data into "important data" and "unimportant data," then discard the unimportant.
A type of visually lossless compression is used in Photo CD. As we saw in the previous unit, Photo YCC converts RGB scanner data to a luminance signal and two chrominance signals.
The luminance signal represents most of the image detail and is the signal to which the human eye is most sensitive.
Chroma decimation discards chrominance information for every other row and column. This reduces file size without loss of visual information.
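Chroma decimation is the same row-and-column dropping shown earlier for image scaling, applied only to the chrominance planes. A minimal sketch, with a chroma plane as a 2-D list (the toy values are ours):

```python
def chroma_decimate(plane):
    """Keep chroma samples for every other row and column,
    a 2:1 reduction in each direction (4:1 overall)."""
    return [row[::2] for row in plane[::2]]

full = [[c] * 4 for c in range(4)]   # 16 chroma samples
kept = chroma_decimate(full)         # 4 samples remain
```

The luminance plane, which carries the detail the eye is most sensitive to, is left at full resolution; only the chroma data is quartered.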
Photo CD further reduces image file size through a combination of lossy and lossless compression called hierarchical encoding. This type of encoding creates several files at different resolutions for different applications. The resolutions stored on a Photo CD include Base x 16, Base x 4, and Base.
Example 1: Base x 16 Subsampling. The Base x 16 image, used for high resolution output, is subsampled to generate the Base x 4 image and a file of residual data. This residual data is later used to reconstruct the high resolution image.
Example 2: Base x 4 Subsampling. The Base x 4 image, used for high definition television, is further subsampled to generate the Base image and a file of residual data.
Example 3: Base Subsampling. The Base image, used for display on a regular television, is then subsampled to create the two lowest resolutions: Base/4 for thumbnails and Base/16 for low resolution previews.
The Base image and the two low resolution images are saved without data compression. The two higher resolution images are compressed further using a lossless technique.
Lesson Review: Let's review what we've learned so far. Select 1, 2, or 3 to indicate which of the histograms represents a low contrast image.
Answer: 2. In a low contrast image, pixels are distributed over a short dynamic range; in this example, from about 130 to 180 on the gray scale. Pixels in a low contrast image occupy only a few shades of gray.
Lesson Review Let's review what we've just learned about the three basic image scaling processes. Select 1, 2 or 3 on the image to indicate the process shown.
Lesson Review Let's review what we've just learned. Select 1 or 2 on the image to indicate the type of compression which is illustrated.