Sunday, November 26, 2017

Lab 6: Geometric Correction

Goals and Background 

The goal of this lab is to introduce an image preprocessing exercise called geometric correction. It develops skills with the two major types of geometric correction that are typically performed on satellite images during preprocessing, before biophysical and sociocultural information is extracted from the imagery. Both exercises use rectification: the process of converting data file coordinates to a map coordinate or grid system, known as a reference system.

  1. Image-to-Map Rectification: this type of geometric correction uses a map coordinate system to rectify/transform the image pixel coordinates.
  2. Image-to-Image Rectification: this type of geometric correction uses a previously corrected image of the same location to rectify/transform the image data pixel coordinates. 



Methods

Image-to-Map Rectification was the first type of geometric correction used. A USGS 7.5-minute digital raster graphic (DRG) was used to correct the Landsat TM image. The correction is performed with the Control Points tool in ERDAS Imagine, found under the Multispectral tab. The Geometric Model is set to Polynomial and a first-order polynomial equation is used, with the USGS DRG set as the reference map. A first-order polynomial requires a minimum of 3 ground control points (GCPs).
To correct the image, GCPs are placed at the same locations on both images. Once four GCPs were placed, they needed to be adjusted. The Root Mean Square (RMS) error measures how closely the matched locations on the two images agree. The industry standard for remote sensing is an RMS error of 0.5 or below; since this is an introductory lab, the RMS error only needed to be less than 2. The points were adjusted until the values fell below the recommended threshold, and the total RMS error for this exercise ended up at 0.1413 (Figure 1). A short sketch of the underlying computation follows the figure.

Figure 1. Image-to-Map Rectification: Multipoint Geometric Correction with 4 GCPs and a total RMS error of 0.1413.
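
To make the RMS computation concrete, here is a minimal numpy sketch that fits a first-order (affine) polynomial to four GCP pairs and reports the residuals. The coordinate values are hypothetical, and ERDAS reports its RMS in pixel units, so only the arithmetic carries over.

```python
import numpy as np

# Hypothetical GCPs: source image pixel coords (col, row) and the
# corresponding reference map coords (x, y). Real values would come
# from the points picked in ERDAS Imagine.
src = np.array([[120.0,  80.0],
                [450.0, 100.0],
                [430.0, 400.0],
                [100.0, 390.0]])
ref = np.array([[602100.0, 4882900.0],
                [605400.0, 4882750.0],
                [605200.0, 4879800.0],
                [601900.0, 4879900.0]])

# First-order polynomial: x' = a0 + a1*c + a2*r (same form for y').
# Solve the two least-squares systems with design matrix [1, c, r].
A = np.column_stack([np.ones(len(src)), src[:, 0], src[:, 1]])
coef_x, *_ = np.linalg.lstsq(A, ref[:, 0], rcond=None)
coef_y, *_ = np.linalg.lstsq(A, ref[:, 1], rcond=None)

# Per-GCP residual distances, plus the total RMS error.
pred_x, pred_y = A @ coef_x, A @ coef_y
residuals = np.hypot(pred_x - ref[:, 0], pred_y - ref[:, 1])
print("per-GCP residuals:", residuals)
print("total RMS error:  ", np.sqrt(np.mean(residuals**2)))
```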

For part one, the original image had minimal geometric error to begin with, so even with an RMS error of 0.1413 the effect of the correction is difficult to see (Figure 2). The final images shown below were resampled using the default settings of the display function to create the new geometrically corrected image.
Figure 2. Corrected image next to original image with error. 

The second method was Image-to-Image Rectification. The Control Points settings were the same except that the equation was changed to a third-order polynomial, which requires a minimum of 10 GCPs (see the formula after Figure 3). This lab used 12 GCPs to reach a total RMS error of 0.2027 (Figure 3).
Figure 3. Image-to-Image Rectification: Multipoint Geometric Correction with 12 GCPs and a total RMS error of 0.2027.
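
The GCP minimums quoted above come from the number of coefficients an order-$t$ polynomial must solve for:

$$\text{minimum GCPs} = \frac{(t+1)(t+2)}{2},$$

which gives 3 for $t = 1$ and 10 for $t = 3$.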

When the correction was completed, the image was resampled, this time using bilinear interpolation. Using the Swipe function in ERDAS Imagine, the corrected image can be viewed on top of the original image (Figure 4). A sketch of what bilinear interpolation computes follows the figure.
Figure 4. Viewer swipe with corrected image and reference image.
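
Here is a minimal sketch of what bilinear resampling does at a single output pixel, assuming a small numpy band and a fractional source coordinate; ERDAS performs this internally over the whole image.

```python
import numpy as np

def bilinear(band, c, r):
    """Weight the four nearest pixels by how close the fractional
    coordinate (c, r) sits to each of them."""
    c0, r0 = int(np.floor(c)), int(np.floor(r))
    dc, dr = c - c0, r - r0
    tl = band[r0,     c0]      # top-left neighbor
    tr = band[r0,     c0 + 1]  # top-right neighbor
    bl = band[r0 + 1, c0]      # bottom-left neighbor
    br = band[r0 + 1, c0 + 1]  # bottom-right neighbor
    top = tl * (1 - dc) + tr * dc
    bottom = bl * (1 - dc) + br * dc
    return top * (1 - dr) + bottom * dr

band = np.array([[10., 20.], [30., 40.]])
print(bilinear(band, 0.5, 0.5))  # 25.0, the average of the four neighbors
```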

Results 

This lab created a basic understanding of geometric correction. Geometrically corrected images are essential for accurate analysis: the error may not always be obvious, but zooming in reveals the differences. Locating areas that make good GCPs takes practice, and more well-placed GCPs generally improve the correction.

Sources

Satellite images are from Earth Resources Observation and Science Center, United States Geological Survey.
Digital raster graphic (DRG) is from Illinois Geospatial Data Clearing House.

Wednesday, November 8, 2017

Lab 5: LiDAR remote sensing

Goals and Background

The goal of this lab is to gain basic knowledge of LiDAR data structure and processing. The data used are LiDAR point clouds in the LAS file format. Objectives include:
  1. Processing and retrieval of various surface and terrain models.
  2. Processing and creation of intensity image and other derivative products from point cloud.
LiDAR has recently become better known as its applications in remote sensing expand, creating new jobs and significant growth in remote sensing fields.


Methods 

Importing the .las files into ERDAS Imagine first gives a visual understanding of the points collected during the flight (Figure 1). A short sketch for inspecting the same files programmatically follows the figure.
Figure 1. A display of each individual LAS file of the lidar point clouds. 
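
The same basic inspection can be done in Python with the laspy library; the tile name below is hypothetical, standing in for one of the lab's Eau Claire County tiles.

```python
import numpy as np
import laspy  # pip install laspy

# Hypothetical tile name; each .las file is one tile of the flight.
las = laspy.read("eau_claire_tile_001.las")

print("points: ", len(las.points))
print("z range:", las.z.min(), "to", las.z.max())   # elevation spread
print("returns:", np.unique(las.return_number))     # 1 = first return
print("classes:", np.unique(las.classification))    # e.g. 2 = ground, 9 = water
```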

In ArcMap, a new LAS Dataset is created and the .las files are added to it. In the LAS Dataset Properties, statistics can be calculated for all of the compiled LAS files (Figure 2); the same two steps can be scripted, as sketched after the figure.
Figure 2. The individual statistics provide additional information not reported under the Statistics tab for the entire LAS dataset. The minimum Z value is 517.85 and the maximum value is 1845.01.
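
A rough arcpy sketch of those two steps, assuming the tiles sit in a folder called "las_tiles" and using a hypothetical dataset name; the tool names are from the ArcGIS Data Management toolbox.

```python
import arcpy  # requires an ArcGIS license

# Build the LAS dataset from a folder of .las tiles, then calculate
# statistics, mirroring the LAS Dataset Properties dialog steps.
arcpy.management.CreateLasDataset("las_tiles", "eau_claire.lasd")
arcpy.management.LasDatasetStatistics("eau_claire.lasd")
```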

The coordinate system for the new LAS dataset was also changed. The horizontal coordinate system (X,Y) is NAD 1983 HARN WISCRS EauClaire County (Feet) and the vertical coordinate system (Z) was adjusted to North American Vertical Datum 1988 (Meters).

Next the LAS dataset is added to ArcMap and the classification is changed to 8 classes. There are multiple ways, using the LAS Dataset toolbar, to examine different surface data, with four display methods/conversion tools: elevation, aspect, slope, and contour. For this analysis, the elevation tool was used. Figure 3 shows the change in elevation with points; Figure 4 uses an interpolated elevation, where the points become surfaces.
Figure 3. Point Elevation using lidar point cloud data.

Figure 4. Interpolated Elevation using lidar point cloud data. 

The high elevation points within Half Moon Lake are not supposed to be there. No breaklines were defined when the data were input, so the water surface, which should be flat, was affected by spurious returns during flight.

The Profile View and the 3D View are two features on the LAS Dataset toolbar used to examine the first returns of the elevation point cloud image. One vegetated section was zoomed into (Figure 5). The low areas are expressed in green; as the points rise in elevation they likely represent shrubbery and then trees at the highest elevations. One point is an anomaly with a very high elevation value; such points can be caused by something as simple as a bird or a missed return during flight.
Figure 5. 3D View within ArcMap of lidar point cloud data.
Lidar has the capability to derive 3D images from the data. In this lab, a digital surface model (DSM) and a digital terrain model (DTM) will be created. The DSM is produced from the first-return points at a spatial resolution of 2 meters. Then a hillshade will be created from both the DTM and the DSM.

The parameters first need to be set in ArcMap. The layer is set to display the points by elevation and to use only the first returns. Using the LAS Dataset to Raster tool in ArcMap, the specifications are set as follows: Value Field = Elevation, Cell Assignment Type = Maximum, Void Filling = Natural Neighbors, Cell Size = 6.56168 feet (approximately 2 meters). Figure 6 shows a 3D model of the earth's surface, including the elevation changes of buildings, trees, and other objects.

The parameters for the DTM in the LAS Dataset to Raster tool include: Interpolation = Binning; Cell Assignment Type = Minimum; Void Fill Method = Natural_Neighbor; Sampling Type = CellSize; Sampling Value the same as set for the DSM above. Figure 7 shows only the bare earth, which makes it easier to analyze the terrain when looking at elevation changes of the ground alone. A scripted sketch of both conversions follows the figures.
Figure 6. DSM with hillshade applied.

Figure 7. DTM with hillshade applied.
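
Both conversions and their hillshades can also be scripted with arcpy. This is a rough sketch under the same parameter choices as above, with hypothetical file names; the keyword spellings follow the ArcGIS geoprocessing tool reference, so treat it as illustrative rather than the exact lab steps.

```python
import arcpy  # 3D Analyst extension needed for these tools

arcpy.CheckOutExtension("3D")
cell = 6.56168  # ~2 m expressed in the dataset's feet-based XY units

# DSM: first returns only, highest point per cell.
arcpy.management.MakeLasDatasetLayer("eau_claire.lasd", "first_returns",
                                     return_values=["1"])
arcpy.conversion.LasDatasetToRaster("first_returns", "dsm.tif", "ELEVATION",
                                    "BINNING MAXIMUM NATURAL_NEIGHBOR",
                                    sampling_type="CELLSIZE",
                                    sampling_value=cell)

# DTM: lowest point per cell approximates the bare earth.
arcpy.conversion.LasDatasetToRaster("eau_claire.lasd", "dtm.tif", "ELEVATION",
                                    "BINNING MINIMUM NATURAL_NEIGHBOR",
                                    sampling_type="CELLSIZE",
                                    sampling_value=cell)

# Hillshade each model for display.
arcpy.ddd.HillShade("dsm.tif", "dsm_hillshade.tif")
arcpy.ddd.HillShade("dtm.tif", "dtm_hillshade.tif")
```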

The final objective generates a lidar intensity image. The LAS Dataset is set to Points and filtered to First Return, since intensity is always captured by the first return echoes. Value Field is set to INTENSITY, Binning Cell Assignment Type = Average, Void Fill = Natural_Neighbor, and Cell Size is the same as used for the DTM and DSM derived products above. A sketch of this step follows.
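
The scripted equivalent, continuing the hypothetical names from the DSM/DTM sketch above:

```python
import arcpy  # 3D Analyst extension

# Intensity comes from the first-return points, averaged per cell.
arcpy.management.MakeLasDatasetLayer("eau_claire.lasd", "first_returns",
                                     return_values=["1"])
arcpy.conversion.LasDatasetToRaster("first_returns", "intensity.tif",
                                    "INTENSITY",
                                    "BINNING AVERAGE NATURAL_NEIGHBOR",
                                    sampling_type="CELLSIZE",
                                    sampling_value=6.56168)
```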


Results 

The differences between the two hillshades can be seen using the Effects toolbar. The Swipe function overlays the two images and lets one be swiped across the other to reveal variances between them (Figure 8).
Figure 8. Deriving DSM and DTM products from point clouds.

The intensity image appears dark when displayed in ArcMap. To see the true intensity of the image, it is viewed in ERDAS Imagine (Figure 9).
Figure 9. Deriving a lidar intensity image from the point cloud.

Sources 

Lidar point cloud and Tile Index are from Eau Claire County, 2013.
Eau Claire County Shapefile is from Mastering ArcGIS 6th Edition data by Maribeth Price, 2014.
