EAT 445 REMOTE SENSING DIGITAL IMAGE INTERPRETATION AND ANALYSIS


CO 2: Ability to convert and analyze environmental data using digital image processing software.

INTRODUCTION
• Digital image interpretation and analysis:
– Involves the manipulation and interpretation of digital images
– Is carried out with the aid of a computer
– Often involves procedures that can be mathematically complex

CENTRAL IDEA BEHIND DIGITAL IMAGE PROCESSING
• The digital image is fed into a computer one pixel at a time.
• The computer is programmed to insert the data into an equation, or series of equations.
• The results of the computation for each pixel are stored.
• The results form a new digital image that may be displayed or recorded in pictorial format, or may be further manipulated by additional programs.
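The idea above can be sketched in a few lines. This is an illustrative example only: the image and the gain/offset equation are made up, but the structure (feed pixels in, apply an equation, store a new image) is the one described.

```python
import numpy as np

# A toy 3x3 single-band digital image of DNs (hypothetical values).
image = np.array([[10, 20, 30],
                  [40, 50, 60],
                  [70, 80, 90]], dtype=float)

# Hypothetical equation applied to every pixel: new_DN = gain * DN + offset.
gain, offset = 2.0, 5.0

# Conceptually the computer visits one pixel at a time; with numpy the
# same equation is applied to every pixel of the array at once.
result = gain * image + offset

print(result[0, 0])  # 2*10 + 5 = 25.0
```

The `result` array is itself a new digital image that could be displayed, written to file, or passed to further processing steps.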

PROCEDURES IN DIGITAL IMAGE INTERPRETATION
1. Image rectification and restoration
2. Image enhancement
3. Image classification
4. Data merging and GIS integration
5. Hyperspectral image analysis
6. Biophysical modeling
7. Image transmission and compression

IMAGE RECTIFICATION AND RESTORATION
• Aims to correct distorted or degraded image data.
• Creates a more faithful representation of the original scene.
• Involves the initial processing of raw image data:
– to correct geometric distortions
– to correct the data radiometrically
– to eliminate noise present in the data

Cont…
• Highly dependent upon the characteristics of the sensor used to acquire the image.
• Often termed preprocessing operations.
• Normally precede further manipulation and analysis of the image data to extract specific information.

Geometric Correction
• Raw digital images usually contain geometric distortion.
• They cannot be used directly as a map base without subsequent processing.
• Sources of distortion:
– Variations in the altitude, attitude and velocity of the sensor platform
– Panoramic distortion
– Earth curvature
– Atmospheric refraction
– Relief displacement
– Nonlinearities in the sweep of the sensor's IFOV

Systematic distortion
• Predictable.
• Easily corrected by applying formulas derived by modeling the sources of distortion mathematically.
• E.g.: a highly systematic source of distortion in multispectral scanning from satellite altitude is the eastward rotation of the earth beneath the satellite during imaging.

Cont…
• This causes each optical sweep of the scanner to cover an area slightly to the west of the previous sweep.
• This is known as skew distortion.
• Deskewing the resulting imagery involves offsetting each successive scan line slightly to the west.
• This results in the skewed-parallelogram appearance of satellite multispectral scanner data.
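The deskewing step can be sketched as follows. The image, the one-pixel-per-line offset, and the shift direction are all assumed for illustration; a real system derives the offset from the satellite's orbit and the earth's rotation rate.

```python
import numpy as np

# Hypothetical 4-line, 6-sample raw image (DNs 1..24).
lines, samples = 4, 6
image = np.arange(1, lines * samples + 1, dtype=float).reshape(lines, samples)

shift_per_line = 1  # assumed westward offset, in pixels, per successive scan line

# The deskewed output is wider; zeros mark fill pixels outside the swath,
# which is what produces the skewed-parallelogram appearance.
deskewed = np.zeros((lines, samples + shift_per_line * (lines - 1)))
for i in range(lines):
    offset = shift_per_line * i
    deskewed[i, offset:offset + samples] = image[i]
```

Each successive line starts one pixel further along, so the valid data forms a parallelogram inside the rectangular output array.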

Random or unpredictable distortion
• Corrected by analyzing well-distributed ground control points (GCPs) occurring in an image.
• As with aerial photographs, GCPs are features of known ground location that can be accurately located on the digital imagery.
• Good control points:
– Highway intersections
– Distinct shoreline features

Cont…
• In the correction process, numerous GCPs are located in terms of:
– their two image coordinates (column and row numbers) on the distorted image
– their ground coordinates (measured from a map, or with GPS in the field, in terms of UTM coordinates or latitude and longitude)
• These values are then submitted to a least squares regression analysis to determine the coefficients for two coordinate transformation equations.

Cont…
• These equations can be used to interrelate the geometrically correct coordinates and the distorted image coordinates.
• Once determined, the distorted-image coordinates for any map position can be precisely estimated.
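A minimal sketch of this least-squares step, using a first-order (affine) transformation and made-up GCP coordinates. Real corrections often use higher-order polynomials and many more control points.

```python
import numpy as np

# Hypothetical GCPs: ground coordinates (e.g. UTM easting/northing) and the
# matching (column, row) positions located on the distorted image.
ground   = np.array([[100.0, 200.0], [500.0, 200.0],
                     [100.0, 600.0], [500.0, 600.0]])
image_xy = np.array([[10.0, 20.0], [90.0, 22.0],
                     [12.0, 100.0], [92.0, 102.0]])

# First-order transformation: col = a0 + a1*E + a2*N, row = b0 + b1*E + b2*N.
# Least squares regression determines the six coefficients at once.
A = np.column_stack([np.ones(len(ground)), ground])   # design matrix
coef, *_ = np.linalg.lstsq(A, image_xy, rcond=None)   # 3x2 coefficient matrix

# Estimate the distorted-image position of an arbitrary map coordinate.
est = np.array([1.0, 300.0, 400.0]) @ coef            # → approx. (51.0, 61.0)
```

With the coefficients in hand, any geometrically correct map position can be mapped into the distorted image, which is exactly what a resampling step needs.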

Radiometric Correction
• Varies widely among sensors.
• The radiance measured by any given system over a given object is influenced by:
– Changes in scene illumination
– Atmospheric conditions
– Viewing geometry
– Instrument response characteristics
• Some effects, e.g. viewing geometry variations, are greater in airborne data collection than in satellite image acquisition.

Cont…
• The need to correct for any of these influences depends directly on the particular application.
• For satellite sensing in the visible and near-infrared portion of the spectrum, it is often desirable:
– To generate mosaics of images taken at different times
– To study changes in the reflectance of ground features at different times or locations

Cont…
• It is then necessary to apply a sun elevation correction and an earth-sun distance correction.
• The sun elevation correction accounts for the seasonal position of the sun relative to the earth.
• The earth-sun distance correction is applied to normalize for the seasonal changes in the distance between the earth and the sun.
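One common form of these two corrections can be sketched as below. The exact equations and calibration constants vary by sensor, so treat this as a schematic, not a sensor-specific recipe; the DN and the 30° sun elevation are made-up inputs.

```python
import math

def normalize_dn(dn, sun_elev_deg, d_au):
    """Apply a sun elevation correction (normalize to an overhead sun by
    dividing by sin of the solar elevation angle) and an earth-sun distance
    correction (d_au is the earth-sun distance in astronomical units)."""
    return dn * d_au**2 / math.sin(math.radians(sun_elev_deg))

# A pixel imaged under a low winter sun appears dimmer than the same target
# under a high summer sun; the correction brings the two into line.
print(normalize_dn(50.0, 30.0, 1.0))  # 50 / sin(30°) = 100.0
```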

Noise Removal
• Image noise is any unwanted disturbance in image data due to limitations in:
– Sensing
– Signal digitization
– The data recording process
• Potential sources of noise:
– Periodic drift or malfunction of a detector
– Electronic interference between sensor components
– Intermittent hiccups in the data transmission and recording sequence

Cont…
• Noise can degrade or totally mask the true radiometric information content of a digital image.
• Noise removal usually precedes any subsequent enhancement or classification of the image data.
• The objective is to restore an image to as close an approximation of the original scene as possible.
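One simple noise-removal technique, sketched here with a hand-rolled 3×3 median filter on a made-up image: an isolated "hiccup" pixel is replaced by the median of its neighborhood while the surrounding scene is untouched. Operational software offers many other filters for other noise types (e.g. destriping for periodic detector drift).

```python
import numpy as np

def median_filter3(img):
    """3x3 median filter; edge pixels are left unfiltered for brevity."""
    out = img.copy()
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            out[i, j] = np.median(img[i-1:i+2, j-1:j+2])
    return out

# A flat scene with one noise spike from a recording glitch.
img = np.full((5, 5), 40.0)
img[2, 2] = 255.0            # noise spike
clean = median_filter3(img)
print(clean[2, 2])           # 40.0 — spike replaced by the neighborhood median
```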

IMAGE ENHANCEMENT
• Applied to image data to more effectively display or record the data for subsequent visual interpretation.
• Involves techniques for increasing the visual distinction between features in a scene.
• The objective is to create new images from the original image data in order to increase the amount of information that can be visually interpreted from the data.

Cont…
• Enhanced images can be displayed interactively on a monitor or recorded in hardcopy format, in either black and white or color.
• There are no simple rules for producing the single best image for a particular application.
• Often several enhancements made from the same raw image are necessary.

Cont…
• Enhancement operations may be categorized as:
– Point operations – modify the brightness value of each pixel in an image data set independently.
– Local operations – modify the value of each pixel based on neighboring brightness values.
• Either can be performed on:
– Single-band (monochrome) images
– Individual components of multi-image composites

Cont…
• The most commonly applied digital enhancement techniques:
– Contrast manipulation: grey level thresholding, level slicing and contrast stretching
– Spatial feature manipulation: spatial filtering, edge enhancement and Fourier analysis
– Multi-image manipulation: multispectral band ratioing and differencing, principal components, canonical components, vegetation components, intensity-hue-saturation (IHS) color space transformations and decorrelation stretching
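As a concrete example of contrast manipulation, here is a minimal linear contrast stretch on a made-up low-contrast band. It remaps the band's own minimum..maximum onto the full 0..255 display range so that subtle DN differences become visible.

```python
import numpy as np

def linear_stretch(img, out_min=0.0, out_max=255.0):
    """Linear contrast stretch: map the image's min..max onto out_min..out_max."""
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo) * (out_max - out_min) + out_min

# Low-contrast band: DNs occupy only 60..100 of the 0..255 range.
band = np.array([[60.0, 70.0],
                 [90.0, 100.0]])
stretched = linear_stretch(band)
print(stretched)  # 60→0, 70→63.75, 90→191.25, 100→255
```

Thresholding and level slicing are point operations of the same flavor; only the mapping function changes.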

Multi-image manipulation – spectral ratioing
• Ratio images – enhancements resulting from the division of the DN values in one spectral band by the corresponding values in another band.
• Advantage:
– Ratio images convey the spectral or color characteristics of image features regardless of variations in scene illumination conditions.

Cont…
• E.g.: consider two different land cover types (deciduous and coniferous trees) occurring on both the sunlit and shadowed sides of an area.
• The DNs for each cover type are lower in the shadowed area than in the sunlit area.
• However, the ratio values for each cover type are nearly identical, irrespective of the illumination condition.
• A ratioed image thus effectively compensates for the brightness variation caused by varying topography and emphasizes the color content of the data.

Reduction of scene illumination effects through spectral ratioing

Land cover / illumination   Band A DN   Band B DN   Ratio (Band A / Band B)
Deciduous, sunlit               48          50          0.96
Deciduous, shadow               18          19          0.95
Coniferous, sunlit              31          45          0.69
Coniferous, shadow              11          16          0.69
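The table's point can be verified directly from the DNs (note that 18/19 rounds to 0.95, very close to the sunlit deciduous ratio of 0.96):

```python
# (Band A, Band B) DNs for each cover type under each illumination condition.
deciduous  = {"sunlit": (48, 50), "shadow": (18, 19)}
coniferous = {"sunlit": (31, 45), "shadow": (11, 16)}

for name, cover in (("deciduous", deciduous), ("coniferous", coniferous)):
    # Raw DNs differ sharply between sun and shadow, but the band ratio
    # is (nearly) constant for each cover type.
    ratios = [round(a / b, 2) for a, b in cover.values()]
    print(name, ratios)  # deciduous [0.96, 0.95]; coniferous [0.69, 0.69]
```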

Cont…
• Ratioing is useful for discriminating subtle spectral variations in a scene that are masked by the brightness variations in images from individual spectral bands or in standard color composites.
• Ratioed images portray the variations in the slopes of the spectral reflectance curves between the two bands involved, regardless of the absolute reflectance values observed in the bands.

Cont…
• These slopes are quite different for various material types in certain bands of sensing.
• E.g.: the near infrared to red ratio for healthy vegetation is very high, while for stressed vegetation it is lower.
• Thus a ratioed image can be used to differentiate between areas of stressed and nonstressed vegetation.
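A tiny numerical illustration with hypothetical DNs (healthy vegetation reflects strongly in the near infrared and absorbs red; stress lowers NIR reflectance):

```python
# Hypothetical (NIR, red) DNs for two vegetation conditions.
healthy_nir, healthy_red = 120, 20
stressed_nir, stressed_red = 60, 25

print(healthy_nir / healthy_red)    # 6.0  — high NIR/red ratio
print(stressed_nir / stressed_red)  # 2.4  — noticeably lower under stress
```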

Cont…
• Ratios are also used to generate false color composites by combining three monochromatic ratio data sets.
• These have the advantage of combining data from more than two bands and presenting the data in color.
• Choosing which ratios to include in a color composite, and selecting the colors in which to portray them, is difficult.
• Some trial and error is often necessary in selecting ratio combinations.

IMAGE CLASSIFICATION
• Objectives:
– To replace visual analysis of the image data with quantitative techniques for automating the identification of features in a scene.
– To automatically categorize all pixels in an image into land cover classes or themes.
• Involves the analysis of multispectral image data and the application of statistically based decision rules for determining the land cover identity of each pixel.

Cont…
• Normally multispectral data are used to perform the classification.
• The spectral pattern within the data for each pixel is used as the numerical basis for categorization.
• Different feature types manifest different combinations of DNs based on their inherent spectral reflectance and emittance properties.

Cont…
• A spectral pattern is not geometric in character.
• The term pattern refers to the set of radiance measurements obtained in various wavelength bands for each pixel.
• Spectral pattern recognition – the family of classification procedures that utilizes pixel-by-pixel spectral information as the basis for automated land cover classification.

Cont…
• Spatial pattern recognition:
– Involves the categorization of image pixels on the basis of their spatial relationship with the pixels surrounding them.
– Considers aspects such as image texture, pixel proximity, feature size, shape, directionality, repetition and context.
– Attempts to replicate the kind of spatial synthesis done by a human analyst during the visual interpretation process.

Cont…
• Temporal pattern recognition:
– Uses time as an aid in feature identification.
– E.g.: in agricultural crop surveys, distinct spectral and spatial changes during the growing season can permit discrimination on multidate imagery that would be impossible given any single date.
– A field of winter wheat might be indistinguishable from bare soil when freshly seeded in the fall, and spectrally similar to an alfalfa field in spring.
– Interpretation of imagery from either date alone would be unsuccessful, regardless of the number of spectral bands.
– If data from both dates were analyzed, the winter wheat could be identified.

Image classification – supervised classification
• The image analyst supervises the pixel categorization process by specifying numerical descriptors of the various land cover types present in a scene.
• Representative sample sites of known cover type, called training areas, are used to compile a numerical interpretation key that describes the spectral attributes for each feature type of interest.

Cont…
• Each pixel in the data set is then compared numerically to each category in the interpretation key and labeled with the name of the category it most resembles.
• There are three basic steps involved in a typical supervised classification procedure:
– The training stage
– The classification stage
– The output stage

Basic steps in supervised classification


Supervised classification – the classification stage
• The most important part of the supervised classification process.
• The spectral patterns in the image data set are evaluated in the computer using predefined decision rules to determine the identity of each pixel.
• When implemented numerically, the decision rules may be applied to any number of channels of data.

Cont…
• Assume a sample of pixel observations from a two-channel digital image data set.
• The two-dimensional digital values, or measurement vectors, attributed to each pixel may be expressed graphically by plotting them on a scatter diagram.
• Assume also that the pixel observations are from areas of known cover type (i.e. from selected training sites).

Cont…
• Each pixel value has been plotted on the scatter diagram with a letter indicating its category.
• Pixels within each class do not have a single, repeated spectral value.
• They illustrate the natural centralizing tendency of the spectral properties found within each cover class.
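One simple decision rule built on this centralizing tendency is minimum distance to means: summarize each training class by its mean vector, then label each unknown pixel with the nearest class. The two-channel training pixels below are made up, and operational classifiers more often use maximum likelihood, but the structure is the same.

```python
import numpy as np

# Hypothetical two-channel training pixels for two cover types.
water  = np.array([[10, 12], [11, 13], [9, 11]], dtype=float)
forest = np.array([[40, 60], [42, 58], [39, 61]], dtype=float)

# Training stage: summarize each class by its mean measurement vector.
means = {"water": water.mean(axis=0), "forest": forest.mean(axis=0)}

def classify(pixel):
    """Minimum-distance-to-means decision rule for one pixel."""
    return min(means, key=lambda c: np.linalg.norm(pixel - means[c]))

print(classify(np.array([12.0, 14.0])))  # water
print(classify(np.array([38.0, 59.0])))  # forest
```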

Pixel observations from selected training sites plotted on scatter diagram


Supervised classification – the training stage
• The actual classification of multispectral image data is a highly automated process.
• The training effort requires:
– Close interaction between the image analyst and the image data
– Substantial reference data
– A thorough knowledge of the geographic area to which the data apply
• The quality of the training process determines the success of the classification stage.

Cont…
• The overall objective is to assemble a set of statistics that describe the spectral response pattern for each land cover type to be classified in an image.
• During the training stage, the location, size, shape and orientation of the training sites for each land cover class are determined.
• To yield acceptable classification results, training data must be both representative and complete.

Supervised classification – the output stage
• The utility of any image classification is ultimately dependent on the production of output products that effectively convey the interpreted information to the end user.
• Here the boundaries between remote sensing, computer graphics, digital cartography and GIS management become blurred.
• A virtually unlimited selection of output products may be generated.

Cont…
• The three general forms that are commonly used include:
– Hardcopy graphic products
– Tables of area statistics
– Digital data files

Image classification – unsupervised classification
• Does not utilize training data as the basis for classification.
• Involves algorithms that examine the unknown pixels in an image and aggregate them into a number of classes based on the natural groupings, or clusters, present in the image values.
• The values within a given cover type should be close together in the measurement space.
• Data in different classes should be comparatively well separated.

Cont…
• The classes that result from unsupervised classification are spectral classes.
• Since they are based solely on the natural groupings in the image values, the identity of the spectral classes is not initially known.
• The analyst must compare the classified data with some form of reference data to determine the identity and informational value of the spectral classes.

Cont…
• In the supervised approach, we define useful information categories and then examine their spectral separability.
• In the unsupervised approach, we determine spectrally separable classes and then define their informational utility.
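A tiny k-means clustering run illustrates the unsupervised idea on made-up two-channel pixels: no training data, just natural groupings in the measurement space. Operational algorithms such as ISODATA add cluster splitting and merging rules on top of this basic loop.

```python
import numpy as np

def kmeans(pixels, k=2, iters=10, seed=0):
    """Minimal k-means: aggregate pixels into k spectral classes."""
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), k, replace=False)]
    for _ in range(iters):
        # Assign each pixel to its nearest cluster center.
        d = np.linalg.norm(pixels[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        # Recompute each center as the mean of its member pixels.
        centers = np.array([pixels[labels == i].mean(axis=0) for i in range(k)])
    return labels

# Two natural clusters: low DNs (e.g. water) and high DNs (e.g. forest).
pixels = np.array([[10., 12.], [11., 13.], [40., 60.], [42., 58.]])
labels = kmeans(pixels)
# The two low-DN pixels share one label, the two high-DN pixels the other;
# what those spectral classes *mean* must still be determined by the analyst.
```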

DATA MERGING AND GIS INTEGRATION
• Used to combine image data for a given geographic area with other geographically referenced data sets for the same area.
• These other data sets might consist of image data generated on other dates by the same sensor, or by other remote sensing systems.
• The intent is to combine remotely sensed data with other sources of information.
• E.g.: image data are often combined with soil, topography, ownership and zoning information.

Data merging – multitemporal data merging
• Can take on many different forms.
• One example is combining images of the same area taken on more than one date to create a product useful for visual interpretation.
• E.g.: merging various combinations of bands from different dates to create color composites can aid the interpreter in discriminating various crop types.

Cont…
• In some cases the use of multitemporal data is required to obtain satisfactory cover type discrimination.
• Multitemporal merging can improve classification accuracy and/or categorical detail.

Data merging – change detection procedures
• Involve the use of multitemporal data sets to discriminate areas of land cover change between dates of imaging.
• The types of changes range from short-term to long-term phenomena.
• Change detection should involve data acquired by the same sensor and recorded using the same spatial resolution, viewing geometry, spectral bands, radiometric resolution and time of day.

Cont…
• Accurate spatial registration of the various dates of imagery is required for effective change detection.
• Registration to within ¼ to ½ pixel is generally required.
• When misregistration is greater than one pixel, numerous errors will result when comparing the images.
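Once the dates are co-registered, one common change detection technique is simple image differencing: subtract the two dates and flag pixels whose absolute difference exceeds a threshold. The images and the threshold below are made up; choosing the threshold is itself an analyst decision.

```python
import numpy as np

# Hypothetical co-registered single-band images from two dates.
date1 = np.array([[50.0, 52.0],
                  [48.0, 100.0]])
date2 = np.array([[51.0, 50.0],
                  [49.0, 30.0]])

# Image differencing: a large |difference| flags a candidate change pixel.
diff = date2 - date1
changed = np.abs(diff) > 20.0   # threshold chosen by the analyst

print(changed)  # only the lower-right pixel (100 → 30) is flagged
```

With misregistration greater than a pixel, the subtraction compares different ground locations, which is exactly how the spurious "changes" mentioned above arise.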

Data merging – multisensor image merging
• The combination of image data from more than one type of sensor.
• Often results in a composite image product that offers greater interpretability than an image from a single sensor.
• Examples:
– Merging digital photographic and multispectral scanner data
– Combining multispectral scanner and radar image data

HYPERSPECTRAL IMAGE ANALYSIS
• Requires more attention to issues of atmospheric correction.
• Relies more heavily on physical and biophysical models than on purely statistical techniques.

Hyperspectral image analysis – atmospheric correction of hyperspectral images
• Atmospheric constituents such as gases and aerosols have two types of effects on the radiance observed by a hyperspectral sensor.
• The atmosphere absorbs light at particular wavelengths, decreasing the radiance that can be measured.
• At the same time, the atmosphere scatters light into the sensor's field of view, adding an extraneous source of radiance that is unrelated to the surface being imaged.

Cont…
• The magnitude of absorption and scattering varies from place to place and from time to time.
• It depends on the concentrations and particle sizes of the various atmospheric constituents.
• Before comparisons are made, an atmospheric correction process must be used to compensate for the transient effects of atmospheric absorption and scattering.

Cont…
• An advantage: the contiguous, high resolution spectra produced contain a substantial amount of information about atmospheric characteristics at the time of image acquisition.
• Atmospheric models can be used with the image data to compute quantities such as the total atmospheric column water vapor content and other atmospheric correction parameters.

Cont…
• Ground measurements of atmospheric transmittance or optical depth, obtained by instruments such as sunphotometers, may be incorporated into the atmospheric correction models.

BIOPHYSICAL MODELING
• Objective – to relate quantitatively the digital data recorded by a remote sensing system to biophysical features and phenomena measured on the ground.
• Three basic approaches can be employed to relate digital remote sensing data to biophysical variables:
– Physical modeling
– Empirical modeling
– A combination of physical and empirical modeling

Cont…
• Physical modeling – the data analyst attempts to account mathematically for all known parameters affecting the radiometric characteristics of the remote sensing data.
• Empirical modeling – the quantitative relationship between the remote sensing data and ground-based data is calibrated by interrelating known points of coincident observation of the two.
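Empirical modeling can be sketched as a simple least-squares calibration. All numbers below are invented for illustration: an image-derived spectral ratio at a few field plots, and the biomass measured on the ground at those same plots.

```python
import numpy as np

# Hypothetical coincident observations: an image-derived spectral ratio and
# biomass measured on the ground at the same points (units made up).
ratio   = np.array([1.0, 2.0, 3.0, 4.0])
biomass = np.array([10.0, 21.0, 29.0, 41.0])

# Empirical model: calibrate biomass ≈ a*ratio + b by least squares.
a, b = np.polyfit(ratio, biomass, 1)

# Apply the calibrated model to a new image-derived ratio value.
predicted = a * 2.5 + b
```

Once calibrated, the relationship can be applied across the whole image to map the biophysical variable, with the usual caveat that it is only valid for conditions similar to the calibration data.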

IMAGE TRANSMISSION AND COMPRESSION
• The goal of distributing useful digital image data over the internet has sparked research into methods for:
– Decomposing imagery into multiple components
– Compressing the components individually for transmission
– Recreating an approximate or exact version of the original image upon reception

Cont…
• Traditional image transmission methods generally transfer the image data on a line-by-line basis, starting at the top of the image and working toward the bottom.
• If the image is large and the medium over which it is being transmitted is slow, the user will have to wait a considerable time to get the information.
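One alternative is progressive (coarse-to-fine) transmission: send a heavily downsampled component first so the user sees an approximate image immediately, then refine it. The block-averaging decomposition below is a crude stand-in for illustration; practical schemes use wavelet decompositions (e.g. JPEG 2000).

```python
import numpy as np

# Hypothetical 8x8 image to be transmitted.
image = np.arange(64, dtype=float).reshape(8, 8)

def downsample(img, factor):
    """Average non-overlapping factor x factor blocks of the image."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

preview = downsample(image, 4)   # tiny 2x2 component arrives first
refined = downsample(image, 2)   # 4x4 refinement follows
# Finally the full 8x8 image (or just the residual detail) completes the
# transfer, recreating an exact version of the original upon reception.
```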
