Thursday, December 11, 2014

Hyperspectral Remote Sensing

- Introduction -

The last topic of the semester is an introduction to hyperspectral remotely sensed imagery. Over the course of the semester, multispectral imagery (e.g., Landsat TM) has been used for different types of analysis. Hyperspectral imagery (e.g., AVIRIS) differs from multispectral in the number of bands and the range of the electromagnetic spectrum covered. Typically, multispectral imagery has fewer than 15 or so bands that cover broad ranges of the electromagnetic spectrum across multiple spectral regions (e.g., VIS, NIR, SWIR, and MWIR). Hyperspectral imagery, however, can have hundreds of narrow, contiguous bands, allowing for much finer distinction of specific land surface features. In this lab, bad band removal, anomaly detection, and target detection will be explored.

- Methods -

Image 1: The largest viewer shows both the anomaly mask
and the original image with a swipe function for comparison.

Both anomaly detection and target detection were performed on AVIRIS imagery in Erdas Imagine 2010, first with all bands and again with bad bands excluded, for comparison. Anomaly detection was performed by navigating to Raster > Hyperspectral > Anomaly Detection and following the steps of the Anomaly Detection Wizard. After the anomaly mask had been created, it was opened in the Spectral Analysis Workstation and compared to the original image (Image 1). This first mask was created using all bands. The process was then repeated with bad bands excluded through an additional step in the Anomaly Detection Wizard.
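
Erdas Imagine does not expose the algorithm behind its Anomaly Detection Wizard, but a classic approach to hyperspectral anomaly detection is the RX (Reed-Xiaoli) detector, which flags pixels whose spectra are statistically far from the scene's background. A minimal numpy sketch, assuming the image is loaded as a (rows, cols, bands) array and the threshold is chosen by the analyst:

    import numpy as np

    def rx_anomaly_mask(cube, threshold):
        """cube: (rows, cols, bands) float array; returns a boolean anomaly mask."""
        rows, cols, bands = cube.shape
        pixels = cube.reshape(-1, bands).astype(float)   # one spectrum per row
        mu = pixels.mean(axis=0)                         # background mean spectrum
        cov = np.cov(pixels, rowvar=False)               # background covariance
        cov_inv = np.linalg.pinv(cov)                    # pseudo-inverse for stability
        diff = pixels - mu
        # Squared Mahalanobis distance of each pixel spectrum from the background
        scores = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)
        return (scores > threshold).reshape(rows, cols)

In practice the threshold is often set at a high percentile of the scores (e.g., np.percentile(scores, 99.9)) so that only the most statistically extreme pixels are masked.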

In the Bad Band Specification window of the Anomaly Detection Wizard, the Bad Band Selection tool (Image 2) was opened. In the Bad Band Selection tool, all 224 bands of the AVIRIS image could be cycled through and inspected via their individual histograms and the mean plot window. Multimodal histograms and visible departures in the mean plot window signified bands with a low signal-to-noise ratio that should be excluded from analysis. These bands were singled out and saved as a bad band list file that was then used in both the anomaly detection and the target detection later on.

Image 2: Tool used to select bad bands. Areas of red in the
mean plot window indicate selected bad bands.
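
The lab selected bad bands visually, from multimodal histograms and departures in the mean plot. As a hedged illustration of the same idea, the sketch below flags bands whose crude signal-to-noise proxy (band mean divided by band standard deviation) falls below a cutoff; both the proxy and the cutoff value are assumptions, not part of the Erdas workflow:

    import numpy as np

    def flag_bad_bands(cube, snr_cutoff=2.0):
        """cube: (rows, cols, bands); returns 0-based indices of suspect bands."""
        bands = cube.reshape(-1, cube.shape[2]).astype(float)
        means = bands.mean(axis=0)
        stds = bands.std(axis=0)
        snr = means / np.maximum(stds, 1e-12)   # crude per-band SNR proxy
        return [i for i, s in enumerate(snr) if s < snr_cutoff]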

Target detection was performed by navigating to Raster > Hyperspectral > Target Detection and following the steps of the Target Detection Wizard. The first target detection used a custom-derived Buddingtonite spectral library file and all 224 bands. The second target detection used the USGS Buddingtonite_NHB2301 spectral library file and excluded the bad bands designated earlier. The resulting target masks were then compared (Image 3).


Image 3: The largest (main) viewer shows the target detection mask
and the original image for comparison.
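
The Target Detection Wizard's matching algorithm is not documented in this lab, but a common way to compare pixel spectra against a library spectrum such as Buddingtonite is the Spectral Angle Mapper (SAM). A minimal sketch, assuming a (rows, cols, bands) array and an analyst-chosen angle threshold in radians:

    import numpy as np

    def sam_target_mask(cube, target_spectrum, max_angle=0.10):
        """Mask pixels whose spectral angle to the target falls below max_angle."""
        rows, cols, bands = cube.shape
        pixels = cube.reshape(-1, bands).astype(float)
        t = target_spectrum / np.linalg.norm(target_spectrum)
        norms = np.maximum(np.linalg.norm(pixels, axis=1), 1e-12)
        cos = pixels @ t / norms
        angles = np.arccos(np.clip(cos, -1.0, 1.0))   # spectral angle in radians
        return (angles < max_angle).reshape(rows, cols)

Because the spectral angle ignores overall brightness, SAM is relatively insensitive to illumination differences, which is one reason it is popular for library matching.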

- Discussion -

In both the anomaly detection and the target detection, the masked area increased after the bad bands were excluded: existing masked areas typically grew and new masked areas appeared. This suggests that including bands with a low signal-to-noise ratio degraded the results. The anomaly detection results could be further improved by using only a subset of the image instead of the whole image, because anomalies are distinguished against a background spectrum estimated from the area used. By using a smaller area, more anomalies could potentially be identified. The detection of the specific mineral Buddingtonite would not be possible with multispectral imagery, which illustrates the advantage of hyperspectral data for identifying specific land cover types.

- Conclusion -

The selection of bad bands is typically not essential for analysis of multispectral imagery but, as this lab has demonstrated, is essential for hyperspectral imagery. Bad bands in hyperspectral imagery can result from atmospheric effects or sensor malfunction. Band histograms can be used to identify the bad bands with a low signal-to-noise ratio that should be excluded from analysis. Hyperspectral data is useful for examining specific wavelengths, which helps analysts determine land cover types and calculate band ratios with far more specificity.

- Sources -

Erdas Imagine, 2010. Modified Spectral Analysis Workstation Tour Guide.


Wednesday, December 10, 2014

Lidar Remote Sensing

- Introduction -

Lidar is an active remote sensing technology that uses the round-trip travel time of backscattered, self-generated laser pulses to accurately model the earth's surface. Lidar stands for light detection and ranging, and topographic systems typically use NIR radiation around 1.064 micrometers to detect land surface features. Multiple products can be generated from Lidar data due to the volume and nature of the light produced by the system. For example, the light can penetrate vegetation cover, resulting in multiple returns from the top of the tree canopy, branches, lower vegetation, and the ground. These returns can be used to extract different information and create different surfaces. In this lab exercise, Lidar data will be visualized in 2D and 3D and multiple derivative products will be created.
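
The ranging itself reduces to a time-of-flight calculation: the sensor-to-target distance is half the round-trip travel time multiplied by the speed of light. A small illustration (the pulse timing value is made up):

    C = 299_792_458.0  # speed of light, m/s

    def range_from_return(round_trip_seconds):
        # Divide by two because the pulse travels to the target and back
        return C * round_trip_seconds / 2.0

    print(range_from_return(6.67e-6))  # ~1000 m for a 6.67 microsecond round trip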

- Methods -

Using ArcMap 10.2.2, a LAS dataset was created and Lidar data for the City of Eau Claire, WI was imported. In the LAS Dataset Properties window, general information on the whole LAS dataset can be viewed and modified, including point count, point spacing, and Z minimum and maximum for individual LAS files, as well as statistics and the XY and Z coordinate systems. Using the LAS toolbar, the different returns can be viewed as points (Images 1-4) or as triangulated irregular network (TIN) surfaces representing elevation (Image 5), slope (Image 6), or aspect (Image 7). Contours can also be created and visualized (Image 8), and features can be examined in both 2D and 3D (Image 9).

Depending on which return is used, digital surface models (DSM) or digital terrain models (DTM) can be created. Using first returns generates a DSM, which represents the surface of the landscape including features like trees and buildings. Using ground returns generates a DTM, which represents the actual elevation of the landscape without any surface features. With the LAS Dataset to Raster tool in ArcMap and the proper returns, both a DSM and a DTM were created for the City of Eau Claire. Hillshades of both the DSM and DTM were then created for visual comparison (Image 10).

The last derivative product created was an intensity image (Image 11). Intensity is stored in the first returns, and the resulting image can be used as ancillary data in image classification because the light used by the Lidar system falls within the NIR channel, which can be used to parse out different land covers. Lighter areas in the intensity image reflect more NIR radiation, signifying bare ground and some urban features; darker areas represent thick vegetation, and the darkest areas are water.
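
The lab used ArcMap's LAS Dataset to Raster tool for this step. As a rough open-source analogue of the same idea, the sketch below grids first returns into a crude DSM (keeping the highest point per cell) and last returns into a crude DTM (keeping the lowest), using the laspy library; the file name and 5-foot cell size are placeholders, and last returns are only a proxy for properly classified ground points:

    import numpy as np
    import laspy

    las = laspy.read("eau_claire_tile.las")   # hypothetical tile name
    x, y, z = np.asarray(las.x), np.asarray(las.y), np.asarray(las.z)
    returns = np.asarray(las.return_number)
    first = returns == 1
    last = returns == np.asarray(las.number_of_returns)

    cell = 5.0  # cell size in the data's horizontal units (feet here)

    def grid(xs, ys, zs, keep_higher):
        """Bin points into a grid, keeping the highest (DSM) or lowest (DTM) z."""
        cols = ((xs - xs.min()) / cell).astype(int)
        rows = ((ys - ys.min()) / cell).astype(int)
        out = np.full((rows.max() + 1, cols.max() + 1), np.nan)
        for r, c, v in zip(rows, cols, zs):
            cur = out[r, c]
            if np.isnan(cur) or (v > cur if keep_higher else v < cur):
                out[r, c] = v
        return out

    dsm = grid(x[first], y[first], z[first], keep_higher=True)   # canopy and buildings
    dtm = grid(x[last], y[last], z[last], keep_higher=False)     # bare-earth proxy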


Image 1: All returns symbolized by elevation


Image 2: First return symbolized by elevation


Image 3: Non-ground returns symbolized by elevation


Image 4: Last (ground) return symbolized by elevation


Image 5: TIN surface of all returns symbolized by elevation


Image 6: TIN surface of all returns symbolized by slope


Image 7: TIN surface of all returns symbolized by aspect


Image 8: Five foot contours


Image 9: 2D and 3D views of a bridge


- Results -

Image 10: Comparison of the hillshades for the DSM (left) and DTM (right) 


Image 11: Intensity image


- Discussion -

Lidar data can yield highly accurate representations of the earth's surface and can provide meaningful information on earth surface features. The quality of the Lidar data depends on the number of points collected; this dataset had an average point spacing of about 1.5 feet. As seen in the images above, water features are sometimes not modeled very well. This is because water absorbs the NIR radiation emitted by the Lidar system, resulting in fewer points and a less accurate surface. If water features are of primary concern, the light produced by the Lidar system can be changed to a wavelength around 0.53 micrometers, within the blue/green channels. Not only can the intensity of the first return be used as ancillary data for image classification, but sections of elevation can be singled out and used as well. Using first and intermediate returns in vegetated areas can give measures of forest biomass, and road networks can be easily distinguished using ground returns even when vegetation hides them in imagery. The list of applications for Lidar data goes on and on.

- Conclusion -

The applications of Lidar data are numerous and still being explored. This lab exercise was an introduction to using Lidar data that was already processed and ready for use. Pre-processing of Lidar data can be quite complicated, but once completed, the data is a valuable resource. Elevation, slope, aspect, DSM, and DTM surfaces can be generated, and by using different combinations of returns, a variety of biophysical, economic, and cultural information can be extracted.

- Sources -

Eau Claire County, 2013. Lidar point cloud and tile index.



Tuesday, December 2, 2014

Object-based Classification

- Introduction -

The last classification technique of the semester, object-based classification, is a fairly new method that attempts to succeed where per-pixel and sub-pixel classifiers fail. Pixel-based classifiers account only for spectral properties when determining informational classes, which often results in the salt and pepper effect and similarly pixelated landscape patterns. By also accounting for spatial properties like distance, texture, and shape, an object-based classifier produces a more natural-looking and oftentimes more accurate classified image. Object-based classification segments an image into areas based on both spectral and spatial homogeneity criteria. An analyst can then classify specific objects and use them as training samples to classify the entire image. In this lab exercise, a Landsat TM image of Eau Claire and Chippewa Counties, WI was classified through object-based classification and a nearest neighbor algorithm.

- Methods -

Object-based classification was performed with eCognition software. A new project was created and the image of Eau Claire and Chippewa Counties, WI was imported into the project. The image was segmented by navigating to Process > Process Tree and creating a new pair of parent and child processes. Multiresolution segmentation was chosen as the algorithm, the shape criterion was set to 0.2, and the compactness criterion was set to 0.4. Individual layer weights could also be modified at this point; however, the default values of 1 were accepted for this exercise. Clicking 'Execute' in the Edit Process window initiated the image segmentation.
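
eCognition's multiresolution segmentation algorithm is proprietary, so it cannot be reproduced exactly here; as a rough stand-in that captures the same idea of grouping pixels into spectrally and spatially homogeneous objects, the sketch below uses SLIC superpixels from scikit-image. The file name and parameter values are illustrative, not the lab's settings:

    from skimage import io, segmentation

    image = io.imread("eau_claire_chippewa_tm.tif")   # hypothetical 3-band image
    segments = segmentation.slic(
        image,
        n_segments=2000,    # rough object count, loosely analogous to a scale setting
        compactness=10.0,   # trades spectral fit against compact object shape
        start_label=1,
    )
    print(segments.max(), "image objects created")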

Figure 1: Example of the image objects
created after image segmentation

Before image objects were selected as training samples, the desired informational classes needed to be created and the classification algorithm defined. Five classes were created in the Class Hierarchy window, opened by navigating to Classification > Class Hierarchy. Nearest neighbor was selected as the classification algorithm and was configured by navigating to Classification > Nearest Neighbor > Edit Standard NN Feature Space. Certain image objects were then selected as training samples based on visual interpretation by first navigating to Classification > Samples > Select Samples. To classify an image object, the desired informational class was selected in the Class Hierarchy window and then the image object was double-clicked.

Once the training samples were collected, a new pair of parent and child processes was created in the Process Tree window for the classification. The active classes were chosen under Algorithm parameters in the Edit Process window, and clicking 'Execute' ran the classification. The result was refined by selecting new training samples as needed, performing manual editing, and re-running the classification. The final classified image was then exported and made into a map using ArcMap.
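
eCognition's standard nearest neighbor classifier assigns each unlabeled object the class of the closest training object in the chosen feature space. A minimal numpy analogue, assuming each object is represented only by its mean spectrum and that segments comes from a segmentation like the one sketched above:

    import numpy as np

    def object_features(image, segments):
        """Mean spectrum per object; returns (object ids, feature matrix)."""
        ids = np.unique(segments)
        feats = np.stack([image[segments == i].mean(axis=0) for i in ids])
        return ids, feats

    def nearest_neighbor_classify(feats, train_idx, train_labels):
        """Assign every object the label of its nearest training object."""
        dists = np.linalg.norm(feats[:, None, :] - feats[train_idx][None, :, :], axis=2)
        return np.asarray(train_labels)[dists.argmin(axis=1)]

The real feature space in eCognition can also include shape and texture measures, which is exactly what lets object-based classification outperform purely spectral methods; this sketch uses spectral means only.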

- Results -

Map 1: The final classified image produced through object-based classification

- Discussion -

The classified image produced through object-based classification was a vast improvement over the classified images produced earlier in the semester through pixel-based classification. The salt and pepper effect of inaccurate urban classification throughout the image, common in previous classifications, was eliminated, and the overall time to classify the image was drastically reduced. No bare ground class was used for this classification, which led to an overestimation of agricultural land, but the other classes represented the landscape fairly well. Spectral properties of the image were given more influence in this classification because much of the area is natural and less urbanized. For study areas consisting mostly of urban landscape, more emphasis should be placed on shape and spatial properties for a better classification.

- Conclusion -

Object-based classification produced the most natural-looking classified image of all the methods used throughout the semester and took less time to produce. The benefits of object-based classification are clear, and it is a powerful, relatively new method for determining land cover and land use. Many of the uncertainties in the accuracy of classified images produced through pixel-based methods were reduced by using the object-based method.

- Sources -

Earth Resources Observation and Science Center, USGS. Landsat TM imagery.