Friday, October 31, 2014

Digital Change Detection

- Introduction -

Digital change detection allows for analysis of biophysical, environmental, cultural, and socioeconomic change across the Earth's surface. By examining and measuring change in LULC over time, humans gain a more complete understanding of how Earth systems and processes function and interact. This knowledge can lead to better land planning and management and more effective environmental monitoring. Important considerations for change detection include an appropriate time period; the temporal, spatial, spectral, and radiometric resolution of each image; and the environmental factors present in the imagery. In this lab exercise, qualitative change detection will be performed on Landsat 7 (ETM+) imagery of western Wisconsin from the years 1991 and 2011, and quantitative digital change detection will be performed using National Land Cover Datasets of the Milwaukee metropolitan statistical area from the years 2001 and 2006.

- Methods -

Qualitative change detection was performed using the Write Function Memory Insertion method in ERDAS IMAGINE. The red band from 2011, NIR band from 1991, and a copy of the 1991 NIR band from the Landsat 7 (ETM+) imagery of western Wisconsin were stacked. By setting the red band to the red color gun and the NIR bands to the blue and green color guns, areas that showed change over the time period were displayed in red (Results - Figure 2). Qualitative visual analysis of LULC change could then be accomplished.
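The band stacking behind this technique can be sketched in a few lines of Python with NumPy; the arrays here are random stand-ins for the actual Landsat bands, not the lab data.

```python
import numpy as np

# Hypothetical single-band arrays (rows x cols) standing in for the
# two dates of Landsat imagery.
red_2011 = np.random.randint(0, 256, (100, 100), dtype=np.uint8)
nir_1991 = np.random.randint(0, 256, (100, 100), dtype=np.uint8)

# Write Function Memory Insertion: red gun <- 2011 red band,
# green and blue guns <- the duplicated 1991 NIR band.
composite = np.dstack([red_2011, nir_1991, nir_1991])

# Pixels that changed between dates diverge from the gray axis
# (R = G = B) and therefore render with a red or cyan tint.
print(composite.shape)  # (100, 100, 3)
```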


National Land Cover Datasets of the Milwaukee metropolitan statistical area from the Multi-Resolution Land Characteristics Consortium (MRLC) were used to quantify change for each LULC class and then map five specific LULC to-from changes. To quantify LULC change, each dataset was opened in ERDAS IMAGINE and the histogram values for each class were copied from their attribute tables into a Microsoft Excel spreadsheet. A series of calculations was then done to convert the histogram pixel counts into area values in hectares (ha), making the data more user-friendly. The percent change for each LULC class was then calculated and can be seen in Table 1.
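The spreadsheet math is simple enough to sketch in Python. The pixel counts below are invented for illustration, but the 0.09 ha per pixel factor follows directly from the 30 m resolution of the NLCD rasters.

```python
# Each 30 m NLCD pixel covers 900 m^2 = 0.09 ha.
PIXEL_AREA_HA = 30 * 30 / 10_000

# Made-up histogram pixel counts for two example classes.
hist_2001 = {"urban": 120_000, "agriculture": 450_000}
hist_2006 = {"urban": 135_000, "agriculture": 430_000}

for lulc in hist_2001:
    ha_2001 = hist_2001[lulc] * PIXEL_AREA_HA
    ha_2006 = hist_2006[lulc] * PIXEL_AREA_HA
    pct_change = (ha_2006 - ha_2001) / ha_2001 * 100
    print(f"{lulc}: {ha_2001:.0f} ha -> {ha_2006:.0f} ha ({pct_change:+.1f}%)")
```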


Figure 1: The model used to create the to-from LULC changes
To map the specific LULC to-from changes, a model was made in ERDAS IMAGINE to create five images, each showing a different to-from LULC change (Figure 1). The model uses the Wilson-Lula algorithm and begins with both the 2001 and 2006 National Land Cover Dataset rasters. These rasters are connected to five Either-If-Or functions that mask all LULC classes except one desired class. Each pair of functions containing the desired masked values for the two dates of imagery is then connected to a temporary raster file, which in turn connects to a binary masking function that masks the values that do not overlap between the two LULC classes. The resulting raster file contains the areas that overlapped between the two LULC classes, or in other words, the area that changed from one class to another. The five raster files were then opened in ArcMap and symbolized appropriately.
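The masking logic of the model can be illustrated with NumPy using standard NLCD class codes (81/82 for agriculture, 21-24 for developed land); the tiny rasters here are made up for the example.

```python
import numpy as np

# Toy 2001 and 2006 class rasters using NLCD codes
# (82 cultivated crops, 41 deciduous forest, 11 open water, 23/24 developed).
lc_2001 = np.array([[82, 82, 41], [82, 11, 82]])
lc_2006 = np.array([[23, 82, 41], [24, 11, 82]])

ag_2001 = np.isin(lc_2001, [81, 82])             # agriculture in 2001
urban_2006 = np.isin(lc_2006, [21, 22, 23, 24])  # developed in 2006

# Binary mask of agriculture-to-urban change: 1 only where both hold.
ag_to_urban = (ag_2001 & urban_2006).astype(np.uint8)
print(ag_to_urban)
# [[1 0 0]
#  [1 0 0]]
```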

- Results -


Figure 2: The result of the Write
Function Memory Insertion.

Map 1: The combined result of the desired LULC to-from changes
produced through the model.

 - Discussion -

Urban features are easily distinguishable as showing change when examining the image created through the Write Function Memory Insertion method (Figure 2). The area between the cities of Eau Claire and Chippewa Falls shows exceptional change compared to the rest of the image. Major road networks show up bright red in the image, which is likely due to new paving or re-surfacing. Some agricultural fields and areas of bare soil show change while others do not. This is likely due to spectral differences created by farmers engaging in various stages of crop rotation and ley farming. Water features showed change throughout the image due to the inevitable variability in how water is distributed on the Earth's surface over time. The Write Function Memory Insertion method allows for a quick qualitative assessment of change between two or more dates of imagery; however, it provides no quantitative information.

The five LULC to-from changes in Map 1 were chosen based on a hypothetical situation in which the Wisconsin DNR wished to know about LULC changes in the Milwaukee MSA. The to-from changes were: agriculture to urban, wetlands to urban, forest to urban, wetland to agriculture, and agriculture to bare soil. Milwaukee County experienced the least amount of these changes. This is because Milwaukee County has more urban and less vegetated land cover in relation to its size than the other counties. Because such an overwhelming majority of land in Milwaukee County is already urban, little change was depicted. However, in the southern third of the county, below the city of Milwaukee, there are significant sections of agriculture to urban and forest to urban change. Overall, agriculture to urban is the most prevalent change throughout the study area.

- Conclusion -

For a quick and simple qualitative change detection assessment of multiple dates of imagery, the Write Function Memory Insertion method is a viable option. If quantitative information is desired, the histogram values for classified LULC images can be compared and by using the Wilson-Lula algorithm, specific to-from LULC changes can be analyzed. Image differencing, not included in this lab exercise, can also be used by comparing pixel values between bands of multi-date imagery. Identifying changes in LULC through these techniques is a preliminary step in further understanding the relation between the Earth and its processes, and human activities.

- Sources -

Earth Resources Observation and Science Center, USGS. Landsat 7 (ETM+).

Multi-resolution Land Characteristics Consortium (MRLC). National Land Cover Datasets (2001, 2006).





Thursday, October 30, 2014

Classification Accuracy Assessment

- Introduction -

In the previous two lab exercises, unsupervised and supervised classification were performed on the same Landsat 7 (ETM+) image of Eau Claire and Chippewa Counties captured on June 9, 2000. Qualitative confidence-building was performed on the classified LULC images and discussed in previous blog posts. Now, statistical confidence-building will be performed on each classified LULC image through use of an error matrix. The error matrix will provide an overall accuracy, producer's accuracy, and user's accuracy. Kappa statistics will also be used.

To generate an error matrix, ground reference points need to be collected. These points can be collected prior to image classification through GPS point collection or surveying, or generated after image classification by using high resolution imagery or aerial photography as a reference. The pixel corresponding to each ground reference point is labeled with the appropriate LULC class, and this value is then compared to the class the pixel was assigned during classification. This comparison is then summarized in an error matrix.
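The accuracy measures derived from an error matrix can be sketched in Python; the 3-class matrix below is a toy example, not the lab's actual counts.

```python
import numpy as np

# Toy error matrix: rows = classified image, columns = reference data.
matrix = np.array([[50,  2,  3],
                   [ 4, 40,  6],
                   [ 1,  3, 16]])

total = matrix.sum()
overall = np.trace(matrix) / total               # correct pixels / total
users = np.diag(matrix) / matrix.sum(axis=1)     # per classified row
producers = np.diag(matrix) / matrix.sum(axis=0) # per reference column

# Kappa compares observed agreement with chance agreement
# expected from the row and column totals.
chance = (matrix.sum(axis=1) * matrix.sum(axis=0)).sum() / total**2
kappa = (overall - chance) / (1 - chance)

print(f"overall = {overall:.3f}, kappa = {kappa:.3f}")
```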

- Methods -

Figure 1: Example of how the ground reference
points were interpreted.
Accuracy assessment was performed using ERDAS IMAGINE by navigating to Raster > Supervised > Accuracy Assessment. The Landsat 7 (ETM+) imagery was opened in the Accuracy Assessment window and a high resolution aerial photograph from the National Agriculture Imagery Program (NAIP) of the United States Department of Agriculture was selected as the reference image. Ground reference points were added by navigating to Edit > Create/Add Random Points. The number of points was changed to 125 and stratified random was chosen as the distribution type to allow for an even distribution of ground reference points throughout the different LULC classes.

After the ground reference points were generated, they appeared in the Accuracy Assessment window. Each point was examined and labeled with the appropriate LULC class based on visual interpretation of the reference image. Once all the ground reference points were interpreted, accuracy assessment was performed by navigating to Report > Accuracy Assessment. Values from the resulting text file were copied into a Microsoft Excel spreadsheet for clearer formatting.

- Results -

Table 1: Error matrix and accuracy measures for the unsupervised classification.

Table 2: Error matrix and accuracy measures for the supervised classification.

- Discussion -

In terms of overall accuracy, the statistical confidence-building assessment confirms the conclusions of the qualitative confidence-building assessments. The unsupervised method produced a better classified LULC image because of the quality of urban/built-up training samples collected during the supervised method. As seen in Table 2, the user's accuracy for urban/built-up is only 11%. Of the 36 ground reference points that were placed within the urban/built-up class, only 4 were interpreted as urban/built-up. Forest, bare soil, and agriculture were often confused for urban/built-up by the supervised classifier. The user's accuracy for each LULC class, except for urban/built-up, is higher for the supervised method. By modifying the training samples used for the supervised classification, the user's accuracy for urban/built-up, agriculture, and bare soil could be further increased, along with the classification's overall accuracy.

A threshold of 85% has been established as the minimum overall accuracy needed for a "good" classified image. The overall accuracy for both classified LULC images fell below this threshold indicating that neither should be used for further analysis. A revised supervised classification could potentially reach the 85% threshold or an advanced classifier could be used.

- Conclusion -

In this lab exercise, the accuracy of the classified LULC images was assessed using four different measures of accuracy (overall, producer's, user's, and kappa) obtained from interpreting error matrices. Each method used, unsupervised and supervised, has its advantages and disadvantages though neither was able to reach an appropriate level of accuracy to be used in further analysis.  Advanced classifiers like expert system/decision tree, neural networks, and object-based classifiers were developed for just this reason. In subsequent lab exercises and blog posts, these advanced classifiers will be examined.

- Sources -

Earth Resources Observation and Science Center, United States Geological Survey. Landsat 7 (ETM+).

United States Department of Agriculture (USDA) National Agriculture Imagery Program. High resolution aerial imagery.


Friday, October 24, 2014

Pixel-Based Supervised Classification

- Introduction and Background -

To perform supervised classification, an analyst will collect sample areas of known land cover, commonly referred to as training samples, from the image, which will be used to train a classifier. The classifier will then classify the entire image based on the information gathered from the collected training samples. Preferably, training samples would be collected based on prior knowledge of the area using the most accurate means available (GPS, topographic survey, etc.). However, for some applications this is impractical, and high resolution imagery can be used to designate areas for training samples.

Important factors to consider when collecting training samples include: the number of training samples, the number of pixels, shape, location, and uniformity. In general, a minimum of 50 training samples per informational class is required to produce an accurate classified LULC image. The specific number of training samples for individual informational classes may vary depending on the nature of the image (spectral diversity) and project (special emphasis or resource availability). Another factor that may fluctuate is the number of pixels. Generally, 10n pixels, where n is equal to the number of bands in the image, are required to provide enough spectral information for the training sample's informational class to be properly identified. The shape of training samples will normally be some variation of a polygon. Training samples should be distributed throughout the entire image to account for spectral variability in the informational classes and be located within a uniform and homogeneous land cover. An important concept to keep in mind is the geographic signature extension problem, where differences in spectral characteristics of the same informational class result from a variety of factors such as soil moisture and type, water turbidity, and crop species. To reduce the errors that result from the geographic signature extension problem, training samples should cover all possible variations of the desired informational classes (e.g. for vegetation, collect forest and riparian vegetation).

Training samples need to be evaluated before they are used to train a classifier. The histograms of a training sample should not be primarily multimodal. Training samples with more Gaussian histograms will produce a more accurate classified LULC image. If a training sample exhibits mostly multimodal histograms, it should be deleted and collected again. Spectral separability will indicate the best bands to use for an analysis based on the separation of the spectral signatures of the training samples in each band. The larger the separation between spectral signatures for a particular band, the better that band will be for classifying different land covers. Spectral separability can be calculated using software like ERDAS IMAGINE, which produces a list of the best bands and an average score. The maximum value of the average score is 2000, indicating excellent separation between classes. Values above 1900 indicate good separation and values below 1700 indicate poor separation. If the average score of the training samples is below 1700, an analyst should examine the training samples' spectral profiles and histograms to look for abnormalities and possibly collect more training samples. If the average score is satisfactory, the training samples can be used to train the classifying algorithm (classifier).
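The transformed divergence measure behind that 0-2000 score can be sketched in Python for a pair of class signatures; the 4-band means and covariances below are invented for illustration, not taken from the lab's training samples.

```python
import numpy as np

def transformed_divergence(m1, c1, m2, c2):
    """Transformed divergence between two Gaussian class signatures."""
    c1i, c2i = np.linalg.inv(c1), np.linalg.inv(c2)
    dm = (m1 - m2).reshape(-1, 1)
    # Divergence: a covariance term plus a mean-separation term.
    d = (0.5 * np.trace((c1 - c2) @ (c2i - c1i))
         + 0.5 * np.trace((c1i + c2i) @ (dm @ dm.T)))
    # Scale to 0-2000, where 2000 indicates excellent separation.
    return 2000 * (1 - np.exp(-d / 8))

# Invented 4-band mean vectors and identical diagonal covariances.
m_water = np.array([20.0, 15.0, 10.0, 5.0])
m_forest = np.array([30.0, 25.0, 60.0, 40.0])
cov = np.eye(4) * 4.0

td = transformed_divergence(m_water, cov, m_forest, cov)
print(f"TD = {td:.1f}")  # ~2000 indicates excellent separation
```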

Advantages of supervised classification over unsupervised classification are the control the analyst has over the informational classes produced and not having to interpret the spectral clusters generated by unsupervised methods. Also, by analyzing the quality of training samples, the classification can be improved before it is actually performed. However, collection of proper training data can be time-consuming, expensive, and may not fully represent the desired informational classes, leading to errors in classification.

Pixel-based supervised classification using a maximum likelihood classifier will be performed on the same Landsat 7 ETM+ image of Eau Claire and Chippewa Counties used in the previous lab exercise, where the unsupervised ISODATA method was performed. For this introduction to supervised classification, the size of training samples must be at least 10 pixels, the number of training samples for each informational class will be 15, and the training samples will be polygons collected from the entire image, taking the geographic signature extension problem and uniformity into consideration. The reference for determining training samples will be Google Earth historical imagery near the image collection date of June 9, 2000.

- Methods -

Figure 1: Example of how training samples were collected
and recorded.
The Landsat 7 ETM+ imagery was opened in a viewer in ERDAS IMAGINE 2013 and Google Earth was synced to the viewer. To collect training samples, polygons were drawn on the Landsat imagery that corresponded to homogeneous areas of land cover interpreted from the Google Earth historical imagery. Polygons were drawn by navigating to Home > Drawing > Polygon (in the insert geometry section of the drawing toolbar). Once a polygon had been drawn, its spectral characteristics were recorded by navigating to Raster > Supervised > Signature Editor and, with the polygon selected, clicking the Create new signature(s) from AOI icon. After the new signature was added, its name and color were changed to match the appropriate interpreted LULC class (Figure 1). After 15 training samples had been collected for an informational class, their spectral profiles were examined. If a spectral profile was noticeably different from the rest, its histograms were analyzed for multimodality. If the training sample had more than 4 multimodal histograms, it was deleted and re-collected.


Figure 2: The optimum bands for analysis are circled in red.
The best average separability is circled in blue.
Training samples were collected for 5 LULC informational classes: water, forest, agriculture, urban/built-up, and bare soil. Once all 75 training samples had been collected and assessed, their spectral separability was analyzed by navigating to Evaluate > Separability in the signature editor window. The layers per combination value was changed to 4 and the Transformed Divergence radio button was checked for the distance measurement. The value for best average separability was 1974, which was acceptable, and the training samples were kept (Figure 2). The 15 training samples of each informational class were merged into one signature. Once all 5 informational class signatures were created, the 75 training sample signatures were deleted. The 5 merged signatures left were then saved and used to train the maximum likelihood classifier (Figure 3). The image was classified by navigating to Raster > Supervised > Supervised Classification. The input and output files were specified, the 5 informational class signature file was selected as the classified file, the non-parametric rule was set to none, and the parametric rule was set to maximum likelihood.
Figure 3: The final 5 informational class signatures that were used
to train the classifier.
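The decision rule behind the maximum likelihood classifier can be sketched with NumPy, assuming each merged signature is summarized by a mean vector and covariance matrix; the two-band, two-class data here are invented for the example.

```python
import numpy as np

def ml_classify(pixels, means, covs):
    """Assign each pixel to the class with the highest Gaussian likelihood."""
    scores = []
    for m, c in zip(means, covs):
        inv = np.linalg.inv(c)
        _, logdet = np.linalg.slogdet(c)
        d = pixels - m
        # Squared Mahalanobis distance per pixel: d^T C^-1 d.
        mahal = np.einsum('ij,jk,ik->i', d, inv, d)
        # Gaussian log-likelihood (constant term dropped).
        scores.append(-0.5 * (logdet + mahal))
    return np.argmax(scores, axis=0)  # index of most likely class

# Invented two-band signatures for two classes.
means = [np.array([10.0, 10.0]), np.array([50.0, 60.0])]
covs = [np.eye(2) * 5.0, np.eye(2) * 5.0]
pixels = np.array([[11.0, 9.0], [52.0, 58.0]])

print(ml_classify(pixels, means, covs))  # [0 1]
```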

- Results -

Map 1: The classified LULC map created through pixel-based
supervised classification using a maximum likelihood classifier.

- Discussion -

Just like the last lab exercise, where unsupervised classification was performed with the ISODATA method, qualitative confidence-building assessment was performed on the LULC map generated through pixel-based supervised classification. When compared to the classified LULC map generated through the unsupervised method, the supervised method resulted in a worse classification. The supervised classification method greatly overestimated urban/built-up land, erroneously classifying urban/built-up in areas of bare soil and sparse vegetation. The error in the classification was most likely due to the low number of training samples taken for each informational class and the quality of training samples derived from urban features. As stated in the introduction, a minimum of 50 training samples should be collected for every informational class. For this lab, however, only 15 training samples were collected for each informational class. Almost every training sample collected for urban features displayed multimodal histograms, no matter how many times these training samples were re-collected. No confidence is given to this map, and the training samples would need to be modified if the classified LULC map were to be used for any subsequent analysis. Because urban/built-up is so grossly overestimated, determining the accuracy of the other informational classes is difficult by visual analysis. Once better urban/built-up training samples, and possibly more agriculture and bare soil training samples, were collected, the accuracy of forest and water could be better determined.

- Conclusion -

The supervised classification method did not produce a better classified LULC map compared to the output of the unsupervised ISODATA method like I had hoped. This was because of the low number of training samples taken for each informational class and the poor quality of the urban/built-up training samples overall. Even though the supervised method resulted in a poorer LULC map, the results can be improved by modifying the current training samples and adding more training samples for informational classes that caused extensive errors. With a proper number of quality training samples, the pixel-based supervised classification method could produce a better quality LULC map. Statistical confidence-building assessments will be performed on both classified LULC maps generated through the unsupervised and supervised methods in the next lab exercise to quantify the difference in accuracy of the two methods.

- Sources -

Earth Resources Observation and Science Center, United States Geological Survey. (2000) Landsat ETM+




Tuesday, October 14, 2014

Unsupervised Classification - ISODATA

- Introduction -

Image 1: Screenshot of the ETM+ image subset of
Eau Claire and Chippewa counties in False Color IR
Extracting land use/land cover (LULC) information from remotely sensed imagery can be performed through multiple methods including: parametric and nonparametric statistics, supervised or unsupervised classification logic, hard or soft set classification logic, per-pixel or object-oriented classification logic, or a hybrid of the aforementioned methods. Unsupervised classification, using the Iterative Self-Organizing Data Analysis Technique (ISODATA) clustering algorithm, will be performed on a Landsat 7 ETM+ image of Eau Claire and Chippewa counties in Wisconsin captured on June 9, 2000 (Image 1). Minimal user input is required to perform unsupervised classification, but extensive user interpretation is needed to convert the generated spectral clusters into meaningful informational classes. The conceptual framework of the ISODATA algorithm, along with its available user inputs, and LULC class interpretation will be discussed in this lab exercise.

- Background -

ISODATA is a modification of the k-means clustering algorithm in that it has rules for merging clusters, based on a user defined threshold, and splitting single clusters into two.
ISODATA is considered self-organizing because it requires little user input. The required input includes: a maximum number of clusters to be generated, a maximum number of iterations, a convergence threshold (to determine a percentage of pixel values that will remain unchanged between iterations), a maximum standard deviation (to determine cluster splitting), a minimum percentage of clusters (to determine cluster deletion and reassignment), a split separation value (used in cluster splitting), and a minimum distance between cluster means (to determine cluster merging). The algorithm begins by placing arbitrary cluster means evenly throughout a 2D parallelepiped based on the mean and standard deviation of each band used in the analysis. These cluster means are recalculated and shifted in feature space based on a minimum distance from mean classification rule through each iteration. Once the user defined convergence threshold has been reached, iterations cease and the resulting spectral clusters can then be interpreted.
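The core assign-and-update loop described above (without the split and merge rules) can be sketched in Python; the one-band data, seed placement, and thresholds here are simplified stand-ins, not the lab's inputs.

```python
import numpy as np

# Toy one-band image with two obvious spectral clusters near 20 and 80.
rng = np.random.default_rng(0)
pixels = np.concatenate([rng.normal(20, 2, 500), rng.normal(80, 2, 500)])

# Seed cluster means evenly across the data range, as ISODATA does.
means = np.linspace(pixels.min(), pixels.max(), 2)

convergence, max_iter = 0.95, 250
labels = np.zeros(len(pixels), dtype=int)
for _ in range(max_iter):
    # Minimum-distance-to-mean assignment.
    new_labels = np.argmin(np.abs(pixels[:, None] - means), axis=1)
    means = np.array([pixels[new_labels == k].mean() for k in range(2)])
    unchanged = np.mean(new_labels == labels)
    labels = new_labels
    if unchanged >= convergence:  # enough pixels kept their label
        break

print(np.sort(np.round(means)))  # the means settle near the two modes
```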

An advantage of unsupervised classification is that no extensive knowledge of the study area is required. Little user input is needed to perform unsupervised classification, which minimizes the likelihood of human error. However, the analyst has little control over the classes generated, and often these clusters contain multiple land covers, making interpretation difficult.

- Methods -

ISODATA was performed in ERDAS IMAGINE 2013 by navigating to Raster > Unsupervised > Unsupervised Classification. In the Unsupervised Classification window, the input raster and output cluster layer were assigned, and the Isodata radio button was selected to activate the user input options. ISODATA was performed twice on the image: once with a class range of 10 to 10 and again with a class range of 20 to 20. The maximum number of iterations was changed to 250 and all other inputs were kept at the default values, with the exception of a 0.92 convergence threshold for the ISODATA run with 20 classes. Also, the Approximate True Color radio button was selected in the Color Scheme Options. A value of 250 was chosen for the maximum iterations to ensure the algorithm would run enough times to reach the convergence threshold; however, both ISODATA runs only had to cycle through seven iterations before this was accomplished.



Image 2: Comparison of the original image (left) and the
ISODATA classified image before recoding (right)
The resulting classified image (Image 2) was opened in a viewer and the generated clusters were recoded into thematic informational classes by navigating to Table > Show Attributes. With the image attributes open, each cluster was selected one by one and its color was changed to gold, making it easy to distinguish compared to the other approximate true colors generated by the algorithm. The classified image was synced with Google Earth historical imagery to determine which land cover was most associated with each cluster. Once a decision was made, the color was changed to either green for forest, blue for water, red for urban/built up, pink for agriculture, or sienna for bare soil, and the cluster was given the appropriate name (Image 3). The columns in the attribute window can be modified to allow for easier interpretation by navigating to File > View > View Raster Attributes and selecting the Column Properties icon. After all the clusters had been recoded, the attribute window was closed and, in the pop-up window, Yes was selected to save the changes. The recoded image was saved by navigating to File > Save As > Top Layer As.


Image 3: Screenshot of the process of determining LULC classes from the ISODATA generated
clusters. Google Earth historical imagery on one monitor was synced to the ERDAS viewer on another monitor.

In order to make a map of the LULC classified image, the image classes needed to be recoded from 10/20 classes down to the 5 desired classes. This was done by navigating to Thematic > Recode. In the Recode window, the New Value field was modified for each record depending on the class name defined by the user earlier in the attribute window. Water was given a value of 1, forest - 2, agriculture - 3, urban/built up - 4, and bare soil - 5. The new values were then saved by selecting Apply in the Recode window and the image was saved by navigating to File > Save As > Top Layer As. Now, when the attribute window was opened, only 5 classes appear, rather than 10 or 20, representing the entire image. The images were then opened in ArcMap 10.2.2, symbolized appropriately, and represented as a map comparing the two ISODATA recoded images.
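The recode step amounts to a lookup table applied per pixel, which NumPy expresses directly; the cluster-to-class assignments below are invented for the example.

```python
import numpy as np

# Lookup table mapping the 10 generated cluster IDs (0-9) to the 5
# interpreted LULC codes: 1 water, 2 forest, 3 agriculture,
# 4 urban/built up, 5 bare soil. These assignments are hypothetical.
lookup = np.array([1, 2, 2, 3, 3, 4, 4, 3, 5, 5])

# Toy cluster raster produced by the classifier.
clusters = np.array([[0, 5, 8],
                     [3, 1, 2]])

recoded = lookup[clusters]  # fancy indexing applies the table per pixel
print(recoded)
# [[1 4 5]
#  [3 2 2]]
```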

- Results -
Map 1: Comparison of the two ISODATA classifications

- Discussion -

Qualitative confidence-building assessment was performed by visual comparison between the classified LULC images and Google Earth historical imagery. Statistical confidence-building assessment will be performed in a subsequent lab. The ISODATA classification which generated 20 clusters (ISO[20]) was more accurate than the 10 cluster ISODATA (ISO[10]). Urban/built-up area is greatly overestimated in ISO[10], which is corrected in ISO[20]. However, agriculture is overestimated more in ISO[20]. The confusion between urban/built-up, agriculture, and bare soil is caused by the spectral similarities of these features. Agricultural land ranged from healthy crops to fallow fields. Healthy crops had spectral similarities to forest, while fallow fields had spectral similarities to bare soil. When determining informational classes from the ISODATA generated clusters, fallow fields were considered agriculture instead of bare soil. This distinction likely caused agriculture to be overestimated in ISO[20]. Some regions of water, mostly smaller rivers, were incorrectly classified as agriculture or forest. This is most likely caused by overlapping vegetation masking the reflectance of the water.


- Conclusion -

Overall, interpretation was difficult because there was extensive overlap between the 5 desired informational classes and the 10/20 generated clusters. Classes with overlap included: forest and healthy agriculture, bare soil and urban area, and fallow fields/sparse vegetation and bare soil. ISO[20] was more accurate because more clusters were generated which allowed for more specific spectral characteristics to be singled out and classified accordingly. Although ISO[20] was more accurate, it still overestimated agriculture lands and had erroneous classifications for urban/built-up and forested land. To increase the accuracy of the LULC map generated, supervised classification could be used and will be used in the next lab.


- Sources -

Earth Resources Observation and Science Center, United States Geological Survey. (2000) Landsat ETM+




Wednesday, October 8, 2014

Radiometric and Atmospheric Correction

- Introduction -

Remotely sensed images often contain radiometric errors caused by atmospheric attenuation, atmospheric scattering and absorption, and path radiance, which is determined by atmospheric attenuation and the topography of the landscape. Removal of this error, or noise, from remotely sensed imagery is referred to as atmospheric correction. There are four factors that determine if this noise needs to be removed: (1) the nature of the project, (2) the type of remote sensing data, (3) the amount of in situ data available, and (4) the amount of accuracy needed from biophysical information. In general, single-date land use or land cover characterization and multi-date land use or land cover change detection do not require atmospheric correction. If the objective of the study is focused on water properties, vegetation characteristics, or soil properties, or entails image mosaicking, band ratio techniques, or multi-sensor data integration, atmospheric correction is required.

Atmospheric correction is divided into two methods: absolute and relative. Absolute atmospheric correction models the atmospheric conditions at the time the image was captured to reduce noise through the use of Radiative Transfer Codes (RTC) and large amounts of in situ data. The need for extensive in situ data often limits the use of RTC and absolute atmospheric correction, although spectral libraries can be used to simulate the in situ data needed. Relative atmospheric correction uses information within the image to reduce noise by either normalizing the pixel brightness values between the different bands for single date imagery or normalizing the pixel brightness values between similar bands in multi-date imagery. In this lab, both absolute (ELC and DOS) and relative (multi-date image normalization) atmospheric correction will be performed on the same study area and the results will be compared.

- Methods -

Absolute Atmospheric Correction - Empirical Line Calibration (ELC)


Figure 1: Screenshot of the Landsat TM imagery used for
atmospheric correction by ELC
Landsat TM imagery of Eau Claire, WI and its surrounding area captured on 10/3/2014 at 10:41am CST (Figure 1) will be atmospherically corrected using the ELC method with spectral library data to replace nonexistent in situ data.

ELC is performed through this equation:

Reflectance_k = (M_k × DN_k) + L_k

Where,
the subscript k indicates band number
Reflectance = the corrected surface reflectance
DN = the image band to be corrected
M = gain (obtained through regression equation)
L = offset (obtained through regression equation)
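For a single band, the gain and offset can be estimated with an ordinary least-squares fit between the sampled image DNs and their library reflectances; the sample values in this Python sketch are invented, not taken from the ASTER library.

```python
import numpy as np

# Paired samples: image DNs at the selected pixels and the matching
# library reflectance values (both invented for illustration).
dn_samples = np.array([12.0, 45.0, 90.0, 150.0, 210.0])
ref_reflect = np.array([0.02, 0.08, 0.17, 0.30, 0.43])

# Least-squares fit: reflectance = M * DN + L.
M, L = np.polyfit(dn_samples, ref_reflect, 1)

# Apply the gain and offset to every pixel of a (toy) band.
band = np.array([[10.0, 100.0], [200.0, 50.0]])
corrected = M * band + L

print(f"gain = {M:.5f}, offset = {L:.4f}")
```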


Figure 2: The Spectral Analysis Workstation
In ERDAS IMAGINE 2013, the Spectral Analysis Workstation (Figure 2) was opened by navigating to Raster > Hyperspectral > Spectral Analysis Workstation. The desired imagery was loaded into the workstation by navigating to File > Open Analysis Image in the workstation. Landsat 5 TM - 6 bands was chosen in the popup window and the color composite was changed by navigating to View > Preset RGB Combinations - TM False Color.






Figure 3: The Atmospheric Adjustment Tool.
A spectral sample of water has been taken and is then compared
to tap water, the only fresh water reference sample
available in the ASTER library.
Next, the Atmospheric Adjustment Tool (Figure 3) was opened by selecting the Edit Atmospheric Correction icon. The method was then changed to Empirical Line via dropdown menu located at the top of the Atmospheric Adjustment Tool window. Spectral samples of features in the imagery were collected in three steps: (1) visually interpreting features, (2) zooming in until individual pixels can be distinguished, and (3) using the Create a Point Selector tool. After a spectral sample had been collected, an appropriate reference sample from the ASTER spectral library was dragged onto the Spectral Plot. The spectral plot now has two spectral signatures (Figure 3). This process was then repeated for a total of 5 spectral plots: water (ref. Tap Water), forested vegetation (ref. Pine Wood), agricultural land (ref. Grass), metal rooftop (ref. Alunite AL706 NA), and asphalt (ref. Asphaltic Concrete). The regression coefficients needed for ELC have been automatically calculated by the Atmospheric Adjustment Tool. The regression information was saved and the model was run by navigating to View > Preprocess > Atmospheric Adjustment in the Spectral Analysis Workstation. The resulting image then needed to be saved by navigating to File > Save preprocessed image. This image was then compared to the original.

Absolute Atmospheric Correction - Enhanced image based Dark Object Subtraction (DOS)

The same imagery of Eau Claire, WI and its surrounding area that was used for the ELC method was used for the DOS method. DOS is performed in two steps: (1) conversion of the raw satellite image to at-satellite spectral radiance and (2) conversion of the at-satellite spectral radiance to true surface reflectance.

Step one is carried out through this equation:

L(lambda) = ((LMAX - LMIN) / (QCALMAX - QCALMIN)) * (Qcal - QCALMIN) + LMIN

Where,
L(lambda) = at-satellite spectral radiance
Qcal = the band to be corrected
All other variables (LMIN, LMAX, QCALMIN, QCALMAX) are found in the image metadata.



Figure 4: The 6 models to create the at-satellite
spectral radiance images.

To apply the equation, six models (Figure 4) were made in ERDAS IMAGINE 2013 by navigating to Toolbox > Model Maker. Each model applies the above equation to a different reflective band from the original image and outputs an at-satellite spectral radiance image. Each at-satellite spectral radiance image's histogram was opened and path radiance was estimated by visually identifying the distance between the start of the x-axis and the beginning of the histogram. This path radiance number is used in step two.
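As a sketch of what each of the six models computes, the DN-to-radiance conversion can be written directly in Python; the calibration constants below are illustrative stand-ins, not values from the actual scene's metadata:

```python
import numpy as np

# Band-specific calibration constants, normally read from the image metadata.
LMIN, LMAX = -1.52, 193.0
QCALMIN, QCALMAX = 1.0, 255.0

def dn_to_radiance(qcal):
    """Step one: L(lambda) = ((LMAX - LMIN)/(QCALMAX - QCALMIN)) * (Qcal - QCALMIN) + LMIN"""
    return (LMAX - LMIN) / (QCALMAX - QCALMIN) * (qcal - QCALMIN) + LMIN

# Toy band of raw DNs.
band = np.array([[1.0, 60.0], [128.0, 255.0]])
radiance = dn_to_radiance(band)

# Path radiance (the "dark object") can then be estimated as the lowest
# occupied value of the radiance histogram.
l_haze = radiance.min()
```

Estimating path radiance programmatically as the histogram minimum mirrors the visual inspection described above.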




Step two is carried out through this equation:

R = (pi * (L(lambda) - L(lambda haze)) * D^2) / (TAU(v) * Esun(lambda) * cos(Theta(s)) * TAU(z))

Where,
R = true surface reflectance
D = distance between earth and sun (found in an ancillary table)
L(lambda) = the at-satellite spectral radiance image created in step one
L(lambda haze) = path radiance (estimated from the at-satellite spectral radiance image's histogram)
TAU(v) = atmospheric transmittance from ground to sensor (normally obtained with a sun photometer; these data were not available, so a value of 1 was used)
Esun(lambda) = mean atmospheric spectral irradiance (found in an ancillary table)
Theta(s) = sun zenith angle (calculated by [90 - sun elevation angle], sun elevation angle found in image metadata)
TAU(z) = atmospheric transmittance from sun to ground (found in an ancillary table)

The true surface reflectance images were created by a similar process to the at-satellite spectral radiance images. Six models were made in Model Maker in which the above equation was applied to the at-satellite spectral radiance images, generating true surface reflectance images. These true surface reflectance images were then stacked to create a color composite image that was compared to the original image and to the image produced through the ELC method.
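A sketch of the step-two conversion in Python, with illustrative stand-ins for the values that would come from the metadata and ancillary tables:

```python
import numpy as np

# Illustrative constants for one band (not values from the actual scene).
D = 1.0098            # earth-sun distance, astronomical units (ancillary table)
ESUN = 1957.0         # mean atmospheric spectral irradiance (ancillary table)
sun_elevation = 56.2  # degrees, from the image metadata
theta_s = np.radians(90.0 - sun_elevation)  # sun zenith angle
TAU_v = 1.0           # ground-to-sensor transmittance (no photometer data)
TAU_z = 0.70          # sun-to-ground transmittance (ancillary table)
L_haze = 2.4          # path radiance estimated from the radiance histogram

def radiance_to_reflectance(L):
    """Step two: convert at-satellite radiance to surface reflectance."""
    return (np.pi * (L - L_haze) * D**2) / (TAU_v * ESUN * np.cos(theta_s) * TAU_z)

radiance = np.array([[2.4, 40.0], [80.0, 150.0]])
reflectance = radiance_to_reflectance(radiance)
```

A pixel at exactly the path-radiance value maps to zero reflectance, which is the point of the dark-object subtraction.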


 Relative Atmospheric Correction - Multi-date Image Normalization

Figure 6: A screenshot of the two images used for multi-date
 image normalization. The base image (2000) is on the left
and the image to be corrected (2009) is on the right.

Multi-date image normalization was performed on Landsat TM imagery of Chicago and the surrounding area captured on May 3rd, 2000 and May 20th, 2009 (Figure 6). The image from 2000 was chosen as the base image. Fifteen pseudo-invariant features (PIFs) were selected from the base image and corresponding PIFs were collected from the 2009 image. The mean values of these PIF pairs were then used to build linear regression equations to calculate the gain and bias needed for atmospheric correction.


Figure 7: Screenshot of each image, their associated PIFs, and
spectral profiles
To collect PIFs, the images were opened in ERDAS IMAGINE 2013 and a spectral profile window was opened for each image by navigating to Multispectral > Spectral Profile. PIFs were collected in equal proportions from Lake Michigan, urban features, and rivers/lakes. Once all 15 PIFs were collected (Figure 7), the mean pixel values for each band were found by navigating to View > Tabular data... in both spectral profile windows. The mean pixel values for each band were paired between the images and plotted in Microsoft Excel with the 2000 values on the Y axis and the 2009 values on the X axis. A linear trend line was added; the equation of this line contained the gain (slope) and bias (y-intercept) values needed for atmospheric correction.
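The trend-line fit Excel performs is an ordinary least-squares regression, which can be sketched in Python; the PIF means below are hypothetical values for a single band:

```python
import numpy as np

# Hypothetical mean PIF values for one band: the 2009 means (X) paired with
# the 2000 base-image means (Y) for the same pseudo-invariant features.
pif_2009 = np.array([22.0, 45.0, 61.0, 88.0, 120.0])  # X axis
pif_2000 = np.array([18.0, 40.0, 55.0, 80.0, 110.0])  # Y axis

# Linear regression Y = gain * X + bias, the same line Excel fits.
gain, bias = np.polyfit(pif_2009, pif_2000, 1)
```

One such gain/bias pair would be fit per band (six in total for the imagery used here).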

Six models were arranged in Model Maker, the same as in Figure 4. This time, the equation

L(lambda sensor) = gain * DN + bias

is applied to each band of the 2009 image to normalize its radiometric properties to those of the 2000 image. This equation is the same as the linear regression equations developed in Excel, with L(lambda sensor) replacing Y and DN replacing X. After the models had run, the output bands were stacked into a color composite image and compared to the 2000 image to assess the quality of the correction.
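The per-band application the six models perform can be sketched with array broadcasting; the gain and bias values below are hypothetical placeholders for the six regression equations:

```python
import numpy as np

# Hypothetical per-band gains and biases from the six regression equations.
gains = np.array([0.94, 0.91, 0.97, 1.02, 0.89, 0.95])
biases = np.array([2.1, 1.4, -0.6, 0.3, 1.8, 0.9])

# Toy 6-band image standing in for the 2009 scene (bands, rows, cols).
image_2009 = np.random.default_rng(0).integers(0, 256, size=(6, 4, 4)).astype(float)

# Broadcasting applies each band's own gain and bias in one expression:
# normalized DN = gain * DN + bias, band by band.
normalized = gains[:, None, None] * image_2009 + biases[:, None, None]
```

In the actual workflow each band was run through its own model and the outputs were stacked afterward; the broadcasting form is just a compact equivalent.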


- Results -

Absolute Atmospheric Correction

Figure 8(a): Comparison of the spectral profiles for water between the original image (left) and
the ELC corrected image (right). Compare to Figure 8(b) below.

Figure 8(b): Comparison of the spectral profiles for water between the original image (left) and
the DOS corrected image (right).


Relative Atmospheric Correction



Figure 9(a): Comparison of the spectral profiles for the same feature between the original 2000 image (left) and the original 2009 image (right). Compare to Figure 9(b) below.

Figure 9(b): Comparison of the spectral profiles for the same feature between the original 2000 image (left) and the normalized 2009 image (right)


- Discussion -

In evaluating the effectiveness of the ELC and DOS methods, the spectral profiles of five different land surface features were compared to the original image (water is given as an example in Figures 8(a) and 8(b)). Overall, the ELC method corrected errors well for forested areas and agricultural land but poorly for water. The DOS method corrected errors for all features better than the ELC method did. Spectral libraries were used in the ELC method because of a lack of in situ data, which limited the correction to how closely the image samples matched the available reference samples. The DOS method also accounted for more variables than the ELC method, such as earth-sun distance, solar zenith angle, and path radiance.

Just like in the evaluation of the ELC and DOS methods, the spectral profiles of five land surface features were compared between the original and normalized images to determine the effectiveness of the multi-date image normalization (an urban rooftop is given as an example in Figures 9(a) and 9(b)). This method produced noticeable correction for all features in bands 2, 3, and 4, while little change took place in bands 1, 5, and 6. Overall, the normalization worked as intended: the spectral profiles of the normalized 2009 image more closely matched the spectral profiles of the original 2000 image.


- Conclusion -

In most cases, atmospheric interference will be present in remotely sensed images and will need to be removed by performing atmospheric correction. Rigorous absolute atmospheric correction relies on large amounts of in situ data and radiative transfer codes (RTCs); because of this, it is highly effective but its applications are limited. Relative atmospheric correction uses data present in the image metadata and in ancillary tables, making it more practical for use on most current and historical images, although it is not as effective as absolute correction. The DOS method corrected errors better than the ELC method because it took more variables into account. The multi-date image normalization did what was expected and transformed the radiometric properties of an image captured in 2009 to those of an image captured in 2000, though the spectral profiles of features were only similar, not identical.


- Sources -

Earth Resources Observation and Science Center, United States Geological Survey. (2000) (2009)(2011). Landsat TM