Friday, November 15, 2013

Remote Sensing Lab 5

In this lab I was introduced to a number of image processing skills, from combining multiple images into one seamless image to enhancing and comparing single images.  In Erdas Imagine, I used the following tools: RGB to IHS transform, IHS to RGB transform, image mosaic, spatial image enhancement, spectral image enhancement, band ratio, and binary change detection.

In the first part of the lab I explored color theory and how to transform an image from RGB (Red, Green, Blue) to IHS (Intensity, Hue, Saturation) and back.  I began with a regular RGB image that appeared in near-natural colors and transformed it into an IHS image.  Below, on the left is the original image and on the right is the newly transformed IHS image.

The new IHS image has completely different colors.  The regular RGB image has near-natural colors, whereas the IHS image shows only tones of red and green.  In the histogram, band one covers a larger brightness range in the IHS image; band two in the IHS image has brightness concentrated toward the edges but overall low frequencies and lower contrast than the RGB image.  Band three has a larger brightness range in the original image, but the IHS image has more large spikes in frequency.  Overall, the original image has a more uniform histogram distribution.

Next, I transformed the IHS image back to RGB and observed the differences between the new RGB image and the original RGB image.  In the histogram, the new RGB image has a wider brightness distribution, or higher contrast, in band one.  The histograms appear almost identical for band two.  For band three, the histogram of the original image is much wider at the base, but the new RGB image has higher frequencies concentrated in a narrower brightness range.  The colors in the two images appear very similar, but there are some slight differences in tone.  The band combinations are also different: the original image uses a 3, 2, 1 combination and the re-transformed image uses a 1, 2, 3 combination.
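To get a feel for what the RGB to IHS transform does behind the Erdas dialog, here is a minimal numpy sketch using the common intensity/hue/saturation formulas; the exact equations Erdas Imagine applies may differ, and the function name and 0-1 scaling are my own assumptions.

import numpy as np

def rgb_to_ihs(rgb):
    # Rough sketch of an RGB -> IHS transform. `rgb` is a (rows, cols, 3)
    # float array scaled to 0-1; Erdas Imagine's tool may use different math.
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    eps = 1e-10                              # avoids division by zero

    intensity = (r + g + b) / 3.0
    saturation = 1.0 - np.minimum(np.minimum(r, g), b) / (intensity + eps)

    # Hue from the standard arccos form; flipped when blue exceeds green.
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + eps
    hue = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    hue = np.where(b > g, 360.0 - hue, hue)

    return np.dstack([intensity, hue, saturation])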

I then repeated the IHS to RGB transformation, but this time used the Stretch I&S option, and compared the newly stretched RGB image to both the non-stretched and original RGB images.  Below is a picture of the newly stretched RGB image.


When you switch the color guns to 3, 2, 1 on the stretched and non-stretched images, they turn to brown and tan tones, while the original image has a more natural-looking color range.  You can tell there are differences in the spectral characteristics of the three images, and the histograms reflect that.  The stretched image is a bit easier to interpret, and elements are easier to identify when zoomed in.

In the second part of the lab I performed an image mosaic, which combines multiple adjacent satellite images into one seamless image.  I first performed the mosaic using Mosaic Express to produce the image below.


The colors in the output image do not have a smooth transition between the two input images.  There is a very clear boundary between one image and the next.  Both images have mostly tones of black, grey and red, but they are clearly different.  The images have different radiometric properties so they do not appear as one seamless image.

Next, I made another mosaic of the two images, this time using MosaicPro with the histogram matching tool.  I then compared the mosaics created with MosaicPro and Mosaic Express.  Below is a screenshot of the image created using MosaicPro.

 

The output image from MosaicPro is much more seamless than the Mosaic Express image.  The MosaicPro input images blend into each other much better and their tones are similar, because of the histogram matching color correction function.  The Mosaic Express image does not appear as seamless because the input images have different radiometric properties and therefore do not blend together well.
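As a rough illustration of the histogram matching idea behind that color correction, here is a small numpy sketch that remaps one band's brightness values onto another band's distribution; this shows the general technique, not MosaicPro's actual implementation.

import numpy as np

def match_histogram(source, reference):
    # Sketch of histogram matching: each source brightness value is replaced
    # by the reference value that sits at the same cumulative frequency.
    src_values, bin_idx, src_counts = np.unique(
        source.ravel(), return_inverse=True, return_counts=True)
    ref_values, ref_counts = np.unique(reference.ravel(), return_counts=True)

    # Cumulative distribution functions of both bands, scaled 0-1.
    src_cdf = np.cumsum(src_counts) / source.size
    ref_cdf = np.cumsum(ref_counts) / reference.size

    # For each source CDF level, look up the matching reference brightness.
    matched = np.interp(src_cdf, ref_cdf, ref_values)
    return matched[bin_idx].reshape(source.shape)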

In the third part of the lab I performed band ratioing using the Normalized Difference Vegetation Index (NDVI).  I then analyzed the colors in the NDVI image and found the following.

  I would expect that the very white areas would be areas with the most vegetation.  If you zoom into the image, you can tell that roads, rivers and buildings appear darker, while other areas like fields appear very white in color because of the presence of vegetation.   The areas that appear medium grey likely have very little vegetation and the areas that are black likely have almost no vegetation.  The rivers and lakes appear black in the image because no vegetation is detected.  Many of the medium grey spots in the image are more densely populated areas, so they would have less vegetation.
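For reference, NDVI is simply a ratio of the near-infrared and red bands; a minimal sketch, assuming the two bands are already loaded as numpy arrays:

import numpy as np

def ndvi(nir, red):
    # NDVI = (NIR - Red) / (NIR + Red), computed per pixel. High values mean
    # dense vegetation; water and pavement come out near or below zero, which
    # is why rivers and roads look dark in the NDVI image.
    nir = nir.astype("float64")
    red = red.astype("float64")
    return (nir - red) / (nir + red + 1e-10)   # small constant avoids 0/0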

In the fourth part of the lab I explored spatial and spectral enhancement techniques.  First I used spatial enhancement, specifically high and low pass filters, which change an image's spatial frequency.  A high frequency image has large changes, or high variance, in brightness values over short distances; this detail can be suppressed with a low pass spatial filter.  A low frequency image has small differences in brightness over larger distances; it may appear hazy and poorly defined, and a high pass filter can be used to sharpen it.

First I applied a 5x5 low pass filter and compared the output image with the original image.  The original image is clearer and more defined than the 5x5 low pass filtered image.  When you zoom in on the two images, you can see more detail in the original and it is easier to interpret.  The low pass filtered image has a lower spatial frequency, so brightness changes more gradually across the image.

Then I applied a 5x5 high pass filter and compared the output image to the original image.  The high pass filtered image is much more defined and clear than the original image.  The high pass filtered image replaces a lighter grey tone in the original with a darker, near-black tone.  When you zoom in to the images, the original appears cloudier and less defined due to its lower spatial frequency.  The high pass filtered image is easier to interpret and has more contrast.
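A rough sketch of the idea behind both filters, assuming a single band as a numpy array; a simple mean filter stands in for the 5x5 low pass, and the high pass shown here is the "original minus smoothed" form rather than Erdas's exact kernel:

import numpy as np
from scipy import ndimage

# Stand-in for one band of the image; a real band would be read from the
# .img file, but a random array keeps the sketch self-contained.
band = np.random.randint(0, 256, size=(400, 400)).astype("float64")

# 5x5 low pass (mean) filter: each output pixel is the average of its 5x5
# neighborhood, so short-distance brightness changes are smoothed away.
low_pass = ndimage.uniform_filter(band, size=5)

# High pass companion: original minus smoothed keeps only the high-frequency
# detail (edges), which is why the filtered image looks sharper.
high_pass = band - low_pass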

Next, I worked with edge enhancement, using a Laplacian convolution filter.  A Laplacian convolution filter is a linear edge enhancement tool used to delineate edges in an image and make its features clearer and easier to interpret.  It approximates the second derivative of brightness between neighboring pixels, which increases contrast at edges and produces an enhanced image.  I applied the Laplacian convolution filter to an image.  The original image appears very different from the Laplacian edge detection image.  When zoomed out, the original image has mostly green tones at the top and red at the bottom, whereas the Laplacian edge detection image appears darker, with mostly green and purple tones and a cloudy, checkered look.  When zoomed in, the Laplacian edge detection image is darker in color but has brightly colored blue and green specks and red lines.  The original image is mostly a mix of red and green features that are easier to differentiate.
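A minimal sketch of the Laplacian convolution itself, using the common 3x3 kernel rather than whatever kernel Erdas applies by default:

import numpy as np
from scipy import ndimage

# 3x3 Laplacian kernel: approximates the second derivative of brightness, so
# flat areas go to zero and edges (abrupt brightness changes) stand out.
laplacian_kernel = np.array([[ 0, -1,  0],
                             [-1,  4, -1],
                             [ 0, -1,  0]], dtype=float)

band = np.random.randint(0, 256, size=(400, 400)).astype("float64")  # stand-in band
edges = ndimage.convolve(band, laplacian_kernel)

# Adding the edge image back to the original gives the sharpened,
# edge-enhanced result rather than just an edge map.
enhanced = band + edges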

In the next section, I worked with spectral enhancement, performing two types of linear contrast stretches.  First I used a minimum-maximum contrast stretch, because the input image has a low contrast histogram with high frequencies packed into a very small brightness range, and a single peak, or mode, that is roughly Gaussian.  A min-max contrast stretch is appropriate here because it expands that narrow brightness range, giving the image higher contrast: the histogram becomes much wider and covers more brightness values.  Min-max contrast stretching works best for Gaussian or near-Gaussian histograms.  Below is a screenshot of the contrast stretch.


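A minimal sketch of the min-max stretch on a single band, assuming an 8-bit output range:

import numpy as np

def min_max_stretch(band, out_min=0, out_max=255):
    # Linear min-max stretch: the band's smallest value maps to out_min and
    # its largest to out_max, so a narrow single-peaked histogram is spread
    # across the full brightness range.
    band = band.astype("float64")
    lo, hi = band.min(), band.max()
    scaled = (band - lo) / (hi - lo + 1e-10) * (out_max - out_min) + out_min
    return scaled.astype("uint8")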
I then performed a piecewise contrast stretch on a different image.  This image's histogram has multiple peaks, representing multiple modes.  It is not Gaussian, and its brightness values are concentrated in the lower end of the range.  A piecewise contrast stretch is appropriate here because it can expand the brightness range of each mode or peak separately, which a single linear stretch cannot do for a non-Gaussian histogram.  Below is a screenshot of the piecewise contrast stretched image.

 


I then compared the piecewise contrast stretched image with the original input image.  The piecewise contrast stretched image is definitely clearer and easier to look at.  The stretch really brought out the medium grey tones so they stand out more from the light grey tones, and the blacks became much darker and more solid.  The piecewise contrast stretched image is definitely easier to interpret.
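A rough sketch of the piecewise idea, where each segment of the brightness range gets its own linear stretch; the breakpoints in the usage line are made-up examples, not the values used in the lab:

import numpy as np

def piecewise_stretch(band, breakpoints, targets):
    # Piecewise linear stretch: `breakpoints` are input brightness values
    # (for example the edges of each mode in the histogram) and `targets` are
    # the output values they map to. np.interp stretches each segment with
    # its own slope, so every peak can be expanded separately.
    return np.interp(band.astype("float64"), breakpoints, targets)

# Hypothetical usage: expand the two dark modes, compress the bright tail.
# stretched = piecewise_stretch(band, [0, 40, 90, 255], [0, 80, 200, 255])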

In the next section I used histogram equalization to improve the contrast of an image.  The original image's histogram has one mode and is fairly Gaussian; it is low contrast and occupies a very small brightness range.  The image looks a little hazy and whitish in the more urban areas and darker everywhere else.  The histogram equalized image has a better range of tones and brings out the details much more than the original; vegetation in particular is much easier to see.  The histogram of the equalized image looks completely different: it is filled in across the bottom of the brightness range, with steps going up and down at right angles, and the brightness range starts at 39 instead of 0.
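A minimal sketch of histogram equalization on an 8-bit band, assuming brightness values of 0 to 255:

import numpy as np

def equalize(band, levels=256):
    # Remap each brightness value through the band's cumulative distribution;
    # this flattens the histogram and spreads a narrow, single-mode
    # distribution across the available brightness range.
    hist, _ = np.histogram(band.ravel(), bins=levels, range=(0, levels))
    cdf = hist.cumsum().astype("float64")
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())       # normalize to 0-1
    return np.floor(cdf[band.astype(int)] * (levels - 1))   # look up new values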

In the fifth part of the lab I used binary change detection and image differencing to compare the brightness values of pixels between two images of Eau Claire, one taken in 1991 and the other in 2011.  I used the histogram of the differenced image to find the upper and lower change/no-change thresholds, shown below:

 

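A rough sketch of the differencing and thresholding steps, with random arrays standing in for the 1991 and 2011 bands; the mean plus or minus 1.5 standard deviations rule is a common convention and my assumption here, not necessarily the exact thresholds read off the histogram above:

import numpy as np

# Stand-ins for the same band of the 1991 and 2011 Eau Claire scenes; the
# real lab reads these from the Landsat image files.
band_1991 = np.random.randint(0, 256, size=(400, 400)).astype("float64")
band_2011 = np.random.randint(0, 256, size=(400, 400)).astype("float64")

# Image differencing, with a constant offset so negative change is preserved.
diff = band_2011 - band_1991 + 127.0

# Change/no-change rule (assumed): pixels more than 1.5 standard deviations
# from the mean difference count as change.
mean, std = diff.mean(), diff.std()
lower, upper = mean - 1.5 * std, mean + 1.5 * std

change_mask = (diff < lower) | (diff > upper)   # binary change map
print("Change thresholds:", lower, upper)
print("Pixels flagged as changed:", change_mask.sum())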
Finally, I used Model Maker to model the change between the two images.  This is what I found regarding the spatial distribution of areas changed over the 20 year period:

It looks like the most change is in urban centers, but there are also specks of change spread throughout the image.  The most change appears along the river and in the northwest corner of the image.  There are definitely patches with large amounts of change that appear darker, including dark patches surrounding the Eau Claire city center and around the Chippewa Falls area.

Works Cited

 NASA Landsat Program, 1991 - 2011, Landsat ETM+, SLC-Off, USGS, Sioux Falls, 11/11/13.
