Last Time
• Course introduction
• Assignment 1 (not graded, but necessary)
  – View is part of Project 1
• Image and film basics

1/24/02 © University of Wisconsin, CS 559 Spring 2002

Today
• More on Digital Images
• Introduction to color
• Homework 1

Digital Images
• Computers work with discrete pieces of information
• How do we digitize a continuous image?
  – Break the continuous space into small areas: pixels ("picture elements")
  – Use a single value for each pixel - the pixel value (no color, yet)
  – No longer continuous in space or intensity
• This process is fraught with danger, as we shall see
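As a rough illustration of the two discretization steps above (sampling in space, then quantizing in intensity), here is a minimal sketch in Python/NumPy. The `digitize` helper and the `ideal` image function are made up for illustration and are not part of any course code.

```python
import numpy as np

def digitize(continuous_image, width, height, levels=256):
    """Sample a continuous image f(x, y) -> intensity in [0, 1]
    onto a width x height pixel grid, then quantize each sample."""
    ys, xs = np.meshgrid(np.linspace(0, 1, height),
                         np.linspace(0, 1, width), indexing="ij")
    samples = continuous_image(xs, ys)           # now discrete in space
    pixels = np.round(samples * (levels - 1))    # now discrete in intensity
    return pixels.astype(np.uint8)

# A made-up "ideal" image: a smooth diagonal ramp with a soft bump in the middle.
ideal = lambda x, y: 0.25 * (x + y) + 0.5 * np.exp(-((x - 0.5)**2 + (y - 0.5)**2) / 0.02)

img = digitize(ideal, width=64, height=64)
print(img.shape, img.dtype, img.min(), img.max())
```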

Digital Cameras
• A CCD stores a charge each time a photon hits it
  – "Bins" have discrete area, one per pixel
  – Spatially discrete
• The camera "reads" the charges out of the bins at some frequency
• Convert the charges to discrete values
  – Discrete in intensity
• Store the values in memory - the image
• Still have issues of motion blur, depth of field, dynamic range, etc.

[Figure: light enters the lens and falls on the CCD]
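A toy sketch of the readout step, assuming a hypothetical full-well capacity and an 8-bit converter; all numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sensor: 4x4 bins, full-well capacity of 50,000 electrons.
FULL_WELL = 50_000
photons = rng.poisson(lam=20_000, size=(4, 4))   # photons collected per bin
charge = np.minimum(photons, FULL_WELL)          # bins saturate at full well

# 8-bit "ADC": map charge linearly onto 0..255 discrete pixel values.
pixel_values = np.round(charge / FULL_WELL * 255).astype(np.uint8)
print(pixel_values)
```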

Alternative Imaging Methods
• We obviously don't have to use a digital camera to generate a digital image
• You can write the pixels directly, or use a paint program, or describe a 3D scene and have a computer render it into an image, or …
• This course is all about these other methods
• However, it still helps to think of any digital image as a sample of some ideal image

Discretization Issues
• Can only store a finite number of pixels
  – Resolution: pixels per inch, or dpi (dots per inch, from printers)
  – Storage space goes up with the square of the resolution
    • 600 dpi has 4× as many pixels as 300 dpi (quick calculation below)
• Can only store a finite range of intensity values
  – Typically referred to as depth - the number of bits per pixel
    • Directly related to the number of colors available
  – Also concerned with the minimum and maximum intensity - the dynamic range
  – Both film and digital cameras have highly limited dynamic range
• The big question is: what is enough resolution and enough depth?
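A back-of-the-envelope check of the squared-resolution claim, assuming an 8×10 inch print and 8 bits (1 byte) per pixel; the print size is just an example.

```python
def pixel_count(width_in, height_in, dpi):
    """Number of pixels needed to cover a print at a given resolution."""
    return (width_in * dpi) * (height_in * dpi)

low = pixel_count(8, 10, 300)    # 8x10 inch print at 300 dpi
high = pixel_count(8, 10, 600)   # same print at 600 dpi

print(low, high, high / low)                      # 7,200,000  28,800,000  4.0
print(f"{high / 2**20:.1f} MiB at 1 byte per pixel")  # ~27.5 MiB
```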

Perceptual Issues
• Humans can discriminate about ½ a minute of arc
  – At the fovea, so only in the center of view, with 20/20 vision
  – At 1 m, about 0.2 mm (the "dot pitch" of monitors)
  – Limits the required number of pixels
• Humans can discriminate about 8 bits of intensity
  – "Just Noticeable Difference" experiments
  – Limits the required depth for typical dynamic ranges
  – Actually, it's 9 bits, but 8 is far more convenient
  [Figure: gray patches with pixel values 129, 128, 125]
• BUT, while perception can guide resolution requirements for display, much higher resolution may be required when manipulating images
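A sanity check of the 0.2 mm figure: the size subtended by half an arc minute at a 1 m viewing distance comes out on the same order (the exact value depends on rounding and viewing assumptions).

```python
import math

view_distance_m = 1.0
half_arcmin_rad = math.radians(0.5 / 60)          # 1/2 minute of arc in radians

spot_size_mm = view_distance_m * math.tan(half_arcmin_rad) * 1000
print(f"{spot_size_mm:.3f} mm")   # about 0.145 mm, i.e. roughly the 0.2 mm quoted above
```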

Intensity Perception
• Humans are actually tuned to the ratio of intensities, not their absolute difference
  – So going from a 50 to a 100 Watt light bulb looks the same as going from 100 to 200
  – So, if we only have 4 intensities between 0 and 1, we should choose to use 0, 0.25, 0.5 and 1
• Most computer graphics ignores this, giving poorer perceptible intensity resolution at low light levels, and better resolution at high light levels
  – It would use 0, 0.33, 0.66, and 1
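A small sketch contrasting the two choices of levels described above: evenly spaced levels versus levels chosen so that each non-zero step has the same ratio (doubling), with 0 prepended for black. The helper names are invented.

```python
import numpy as np

def linear_levels(n):
    """Evenly spaced intensities in [0, 1] - what most graphics uses."""
    return np.linspace(0.0, 1.0, n)

def ratio_levels(n):
    """Levels with equal *ratios*: each non-zero step doubles the intensity,
    with 0 prepended for black."""
    return np.concatenate(([0.0], 1.0 / 2 ** np.arange(n - 2, -1, -1)))

print(linear_levels(4))   # [0.    0.333 0.667 1.   ]
print(ratio_levels(4))    # [0.    0.25  0.5   1.   ]
```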

Dynamic Range
• Image depth refers to the number of bits available, but not how those bits map onto intensities
• We can use those bits to represent a large range at low resolution, or a small range at high resolution
• Common display devices can only show a limited dynamic range, so typically we fix the range at that of the display device and choose high resolution

[Figure: the full set of possible intensities, with a "high range, low res" mapping compared to a "low range, high res" mapping]
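A toy quantizer illustrating the trade-off: with a fixed bit budget, covering a wider intensity range means coarser steps. The scene intensities and ranges are invented for illustration.

```python
import numpy as np

def quantize(intensity, max_value, bits):
    """Map physical intensities in [0, max_value] onto 2**bits codes.
    A larger max_value covers more range but with coarser steps."""
    levels = 2 ** bits
    codes = np.clip(np.round(intensity / max_value * (levels - 1)), 0, levels - 1)
    return codes.astype(int), max_value / (levels - 1)   # codes and step size

scene = np.array([0.001, 0.5, 1.0, 80.0, 100.0])   # made-up scene intensities

# Same 8-bit budget, two different trade-offs:
_, coarse_step = quantize(scene, max_value=100.0, bits=8)  # high range, low res
_, fine_step = quantize(scene, max_value=1.0, bits=8)      # low range, high res
print(coarse_step, fine_step)   # ~0.392 vs ~0.0039 intensity units per step
```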

More Dynamic Range
• Real scenes have very high and very low intensities
• Humans can see contrast at very low and very high light levels
  – Can't see all levels all the time; we use adaptation to adjust
  – Still, high range even at one adaptation level
• Film has a low dynamic range, ~100:1
• Monitors are even worse
• Many ways to deal with the problem, but no great solution
  – Way beyond the scope of this course

Display on a Monitor
• When images are created, a linear mapping between pixels and intensity is assumed
  – For example, if you double the pixel value, the displayed intensity should double
• Monitors, however, do not work that way
  – For analog monitors, the pixel value is converted to a voltage
  – The voltage is used to control the intensity of the monitor pixels
  – But the voltage-to-displayed-intensity mapping is not linear
  – Same problem with digital monitors; they just do the pixel-to-intensity conversion differently
• The outcome: a linear intensity scale in memory does not look linear on a monitor!

Gamma Control
• The mapping from voltage to displayed intensity is approximately a power function:
    I = V^γ
• To correct the problem, we pass the pixel values through a gamma function, p' = p^(1/γ), before converting them to voltages for the monitor
• This process is called gamma correction
• The parameter, γ, is controlled by the user
  – It should be matched to a particular monitor
  – Typical values are between 2.2 and 2.5
• The mapping can be done in hardware or software
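A minimal sketch of gamma correction, assuming the simple I = V^γ display model above; real monitors and color spaces such as sRGB add offsets and other details not modeled here.

```python
import numpy as np

def gamma_correct(pixels, gamma=2.2):
    """Pre-distort linear pixel values in [0, 1] so that a display with
    intensity ~ voltage**gamma shows them linearly."""
    return np.clip(pixels, 0.0, 1.0) ** (1.0 / gamma)

def display_response(voltage, gamma=2.2):
    """Toy model of the monitor: displayed intensity ~ voltage**gamma."""
    return voltage ** gamma

linear = np.linspace(0.0, 1.0, 5)                 # intended intensities
shown_uncorrected = display_response(linear)
shown_corrected = display_response(gamma_correct(linear))

print(shown_uncorrected)   # mid-tones come out noticeably darker than intended
print(shown_corrected)     # approximately the intended linear values
```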

Some Facts About Color
• So far we have only discussed intensities - so-called achromatic light (black and white)
• Accurate color reproduction is commercially valuable - e.g. Kodak yellow, painting a house
• On the order of 10 color names are widely recognized by English speakers - other languages have fewer/more, but not much more
• Color reproduction problems have been increased by the prevalence of digital imaging - e.g. digital libraries of art
• Color consistency is also important in user interfaces - e.g. what you see on the monitor should match the printed version

Light and Color
• The frequency of light determines its "color"
  – Frequency, wavelength, and energy are all related
• Describe incoming light by a spectrum
  – Intensity of light at each frequency
  – Just like a graph of intensity vs. frequency
• We care about wavelengths in the visible spectrum: between the infra-red (700 nm) and the ultra-violet (400 nm)

White
[Figure: spectra (# photons vs. wavelength, 400–700 nm) labeled “White” and “Less intense white (grey)”]
• Note that color and intensity are technically two different things
• However, in common usage we use color to refer to both
  – For example, dark red vs. light red
• You will have to use context to extract the meaning

Helium Neon Laser
[Figure: spectrum (# photons vs. wavelength, 400–700 nm) of a helium-neon laser - a single narrow peak]
• Lasers emit light at a single wavelength, hence they appear colored in a very "pure" way

Normal Daylight
[Figure: spectrum (# photons vs. wavelength, 400–700 nm) of normal daylight]
• The sky is blue, so what should this look like?

Normal Daylight
[Figure: spectrum (# photons vs. wavelength, 400–700 nm) of normal daylight]
• Note the hump at short wavelengths - the sky is blue
• Other bumps come from solar emission spectra and atmospheric absorption

Tungsten Lightbulb
[Figure: spectrum (# photons vs. wavelength, 400–700 nm) of a tungsten lightbulb]
• Most light sources are not anywhere near white
• It is a major research effort to develop light sources with particular properties

Red Paint
[Figure: spectrum (# photons vs. wavelength, 400–700 nm) reflected by red paint]
• Red paint absorbs green and blue wavelengths and reflects red wavelengths, so its appearance is red

Frequency Response
• Any sensor is defined by its response to a frequency distribution
• Expressed as a graph of sensitivity vs. wavelength, σ(λ)
  – For each unit of energy at the given wavelength, how much voltage/impulses/whatever the sensor provides
• To compute the response, take the integral
    R = ∫ σ(λ) E(λ) dλ
  – E(λ) is the incoming energy at the particular wavelength
  – The integral multiplies the amount of energy at each wavelength by the sensitivity at that wavelength, and sums them all up
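A numerical version of that integral, with a made-up sensitivity curve and spectrum sampled every 10 nm; a crude Riemann sum is enough to show the structure.

```python
import numpy as np

# Wavelength samples across the visible range (nm).
wavelengths = np.arange(400, 701, 10)

# Hypothetical "red" sensor: sensitivity ramps up toward long wavelengths.
sensitivity = np.clip((wavelengths - 550) / 150, 0, 1)

# Hypothetical incoming spectrum E(lambda): flat "white" light.
energy = np.ones_like(wavelengths, dtype=float)

# Response R = integral of sigma(lambda) * E(lambda) d(lambda),
# approximated as a sum over the 10 nm bins.
d_lambda = wavelengths[1] - wavelengths[0]
response = np.sum(sensitivity * energy) * d_lambda
print(response)
```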

A “Red” Sensor
[Figure: sensitivity vs. wavelength (400–700 nm) for a “red” sensor]
• This sensor will respond to red light, but not to blue light, and a little to green light

The “Red” Sensor Response
[Figure: the “red” sensor's sensitivity applied to two input spectra (# photons, E, over 400–700 nm): a red spectrum gives a high response, a blue spectrum gives a low response]

Changing Response
• How can you take a "white" sensor and change it into a "red" sensor?
  – Hint: think filters
• Can you change a "red" sensor into a "white" sensor?
• Assume for the moment that your eye is a "white" sensor. How is it that you can see a "black light" (UV) shining on a surface?
  – Such surfaces are fluorescent
  – Your eye isn't really a white sensor - it just approximates one

Seeing in Color
• The eye contains rods and cones
  – Rods work at low light levels and do not see color
    • That is, their response depends only on how many photons arrive, not on their wavelength
  – Cones come in three types (experimentally and genetically proven); each responds in a different way to frequency distributions

Color Receptors
• Each cone type has a different sensitivity curve
  – Experimentally determined in a variety of ways
  – For instance, the L-cone responds most strongly to red light
• "Response" in your eye means nerve cell firings
• How you interpret those firings is not so simple …

Color Perception
• How your brain interprets nerve impulses from your cones is an open area of study, and deeply mysterious
• Colors may be perceived differently:
  – Affected by other nearby colors
  – Affected by adaptation to previous views
  – Affected by "state of mind"
• Experiment:
  – Subject views a colored surface through a hole in a sheet, so that the color looks like a film in space
  – Investigator controls for nearby colors, and state of mind

The Same Color?
[Image slide]

The Same Color?
[Image slide]

Color Deficiency
• Some people are missing one type of receptor
  – Most common is red-green color blindness in men
  – Red and green receptor genes are carried on the X chromosome; most red-green color blind men have two red genes or two green genes
• Other color deficiencies
  – Anomalous trichromacy, achromatopsia, macular degeneration
  – Deficiency can be caused by the central nervous system, by optical problems in the eye, by injury, or by absent receptors

Trichromacy
• Experiment:
  – Show a target color beside a user-controlled color
  – The user has knobs that add primary sources to their color
  – Ask the user to match the colors
• By experience, it is possible to match almost all colors using only three primary sources - the principle of trichromacy
• Sometimes, you have to add light to the target

The Math of Trichromacy
• Write the primaries as A, B and C
• Many colors can be represented as a mixture of A, B, C:
    M = aA + bB + cC   (additive matching)
• This gives a color description system - two people who agree on A, B, C need only supply (a, b, c) to describe a color
• Some colors can't be matched like this; instead, write:
    M + aA = bB + cC   (subtractive matching)
  – Interpret this as (-a, b, c)
  – This is a problem for reproducing colors
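Viewed as linear algebra, additive matching is just solving a small linear system for (a, b, c). The primaries and target below are invented numbers (each described by the response it produces in three sensors); only the structure matters.

```python
import numpy as np

# Hypothetical primaries A, B, C and a target color M, all described by
# three sensor responses. Numbers are invented for illustration.
A = np.array([1.0, 0.2, 0.0])
B = np.array([0.1, 1.0, 0.1])
C = np.array([0.0, 0.3, 1.0])
M = np.array([0.5, 0.6, 0.4])

# Additive matching: solve M = a*A + b*B + c*C for the weights (a, b, c).
weights = np.linalg.solve(np.column_stack([A, B, C]), M)
print(weights)   # a negative weight corresponds to subtractive matching:
                 # that primary has to be added to the target instead
```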

Color Matching Functions
• Choose primaries, say A, B, C
• Given an energy function, E(λ), what amounts of the primaries will match it?
• For each wavelength, determine how much of A, of B, and of C is needed to match light of that wavelength alone. This gives a(λ), b(λ) and c(λ)
• The match is:
    a = ∫ a(λ) E(λ) dλ,   b = ∫ b(λ) E(λ) dλ,   c = ∫ c(λ) E(λ) dλ
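A numerical sketch of those three integrals, using invented matching functions (simple bumps) and an invented input spectrum sampled every 10 nm; real matching functions are measured, not Gaussian.

```python
import numpy as np

wavelengths = np.arange(400, 701, 10)          # visible range, 10 nm steps

def bump(center, width=40.0):
    """Toy matching function: a smooth bump centered on part of the spectrum."""
    return np.exp(-((wavelengths - center) / width) ** 2)

# Invented matching functions a(lambda), b(lambda), c(lambda) and spectrum E(lambda).
a_bar, b_bar, c_bar = bump(610), bump(540), bump(450)
E = 1.0 + 0.5 * np.sin(wavelengths / 50.0)     # arbitrary smooth spectrum

d_lambda = wavelengths[1] - wavelengths[0]
a = np.sum(a_bar * E) * d_lambda               # a = integral a(lambda) E(lambda) d(lambda)
b = np.sum(b_bar * E) * d_lambda
c = np.sum(c_bar * E) * d_lambda
print(a, b, c)   # the amounts (a, b, c) of the primaries that match E
```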