Google Earth Engine is a cloud-based platform that enables large-scale processing of satellite imagery to detect changes, map trends, and quantify differences on the Earth’s surface. This course covers the full range of topics in Earth Engine to give participants practical skills to master the platform and implement their remote sensing projects.
If you already have a Google Earth Engine account, you can skip this step.
Visit https://signup.earthengine.google.com/ and sign up with your Google account. You can use your existing Gmail account to sign up. Approval usually takes 1-2 days, so complete this step as soon as possible.
Tips to ensure a smooth sign-up process:
This class needs about 2 hours of pre-work. Please watch the following videos to get a good understanding of remote sensing and how Earth Engine works. The videos are available online and can be streamed using the links below.
This video introduces remote sensing concepts, terminology and techniques.
This video gives a broad overview of Google Earth Engine with selected case studies and applications. The video also covers the Earth Engine architecture and how it differs from traditional remote sensing software.
After you watch the videos, please complete the following 2 quizzes.
The course material and exercises are in the form of Earth Engine scripts shared via a code repository. Once you add the repository to your Code Editor, you will find it named users/ujavalgandhi/End-to-End-GEE in the Scripts tab in the Reader section.
Code Editor After Adding the Class Repository
Module 1 is designed to give you basic skills to be able to find datasets you need for your project, filter them to your region of interest, apply basic processing and export the results. Mastering this will allow you to start using Earth Engine for your project quickly and save a lot of time pre-processing the data.
This script introduces basic JavaScript syntax. To learn more, visit the Introduction to JavaScript for Earth Engine section of the Earth Engine User Guide.
print('Hello World');
// Variables
var city = 'Bengaluru';
var country = 'India';
print(city, country);
var population = 8400000;
print(population);
// List
var majorCities = ['Mumbai', 'Delhi', 'Chennai', 'Kolkata'];
print(majorCities);
// Dictionary
var cityData = {
'city': city,
'population': 8400000,
'elevation': 930
};
print(cityData);
// Function
var greet = function(name) {
return 'Hello ' + name;
};
print(greet('World'));
// This is a comment
// These are the 5 largest cities in the world:
// Tokyo, Delhi, Shanghai, Mexico City, Sao Paulo
// Create a list named 'largeCities'
// The list should have names of all the above cities
// Print the list
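One possible solution sketch:
var largeCities = ['Tokyo', 'Delhi', 'Shanghai', 'Mexico City', 'Sao Paulo'];
print(largeCities);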
When you modify any script from the course repository, you may want to save a copy for yourself. If you try to click the Save button, you will get an error message like the one below.
This is because the shared class repository is a read-only repository. You can click Yes to save a copy in your own repository. If this is the first time you are using Earth Engine, you will be prompted to choose the name of your home folder. Choose the name carefully, as it cannot be changed once created.
Most datasets in Earth Engine come as an ImageCollection. An ImageCollection is a dataset that consists of images taken at different times and locations - usually from the same satellite or data provider. You can load a collection by searching the Earth Engine Data Catalog for the ImageCollection ID. Search for the Sentinel-2 Level-2A dataset and you will find its id COPERNICUS/S2_SR.
Before you do anything with the collection, it is useful to see information about a single image contained in it. The first() function extracts the first image in the collection. You can then print() the result to see the bands and metadata contained in that image.
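For example, a minimal sketch of this pattern:
var s2 = ee.ImageCollection('COPERNICUS/S2_SR');
// Extract and print the first image in the collection
var image = s2.first();
print(image);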
To see the collection on a map, we need to use the Map.addLayer() function. But before that, we need to know some visualization parameters. The Earth Engine Data Catalog provides suggested visualization parameters for all collections. Visit the Sentinel-2, Level-2A page and see the Explore in Earth Engine section to find the visualization parameters. We can use these to add the collection to the map.
/**
* Function to mask clouds using the Sentinel-2 QA band
* @param {ee.Image} image Sentinel-2 image
* @return {ee.Image} cloud masked Sentinel-2 image
*/
function maskS2clouds(image) {
var qa = image.select('QA60');
// Bits 10 and 11 are clouds and cirrus, respectively.
var cloudBitMask = 1 << 10;
var cirrusBitMask = 1 << 11;
// Both flags should be set to zero, indicating clear conditions.
var mask = qa.bitwiseAnd(cloudBitMask).eq(0)
.and(qa.bitwiseAnd(cirrusBitMask).eq(0));
return image.updateMask(mask).divide(10000);
}
var dataset = ee.ImageCollection('COPERNICUS/S2_SR')
.filterDate('2020-01-01', '2020-01-30')
// Pre-filter to get less cloudy granules.
.filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE',20))
.map(maskS2clouds);
var visualization = {
min: 0.0,
max: 0.3,
bands: ['B4', 'B3', 'B2'],
};
Map.setCenter(77.5925, 12.9407, 12);
Map.addLayer(dataset.mean(), visualization, 'RGB');
// Find the 'Sentinel-2 Level-2A' dataset page
// https://developers.google.com/earth-engine/datasets
// Copy/paste the code snippet
// Change the code to display images for your home city
The collection contains all imagery ever collected by the sensor. The entire collection is rarely useful; most applications require only a subset of the images. We use filters to select the appropriate images. There are many types of filter functions; look at the ee.Filter module to see all available filters. Select a filter and then run the filter() function with the filter parameters.
We will learn about 3 main types of filtering techniques:
Filter by metadata: You can apply a filter on the image metadata using filters such as ee.Filter.eq(), ee.Filter.lt() etc. You can filter by PATH/ROW values, Orbit number, Cloud cover etc.
Filter by date: You can select images in a particular date range using the ee.Filter.date() filter.
Filter by location: You can select the subset of images covering a particular location using the ee.Filter.bounds() filter. You can also use the drawing tools to draw a geometry for filtering.
var geometry = ee.Geometry.Point([77.60412933051538, 12.952912912328241])
var s2 = ee.ImageCollection("COPERNICUS/S2");
// Filter by metadata
var filtered = s2.filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 30))
// Filter by date
var filtered = s2.filter(ee.Filter.date('2019-01-01', '2020-01-01'))
// Filter by location
var filtered = s2.filter(ee.Filter.bounds(geometry))
// Instead of applying filters one after the other, we can 'chain' them
// Use the . notation to apply all the filters together
var filtered = s2.filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 30))
.filter(ee.Filter.date('2019-01-01', '2020-01-01'))
.filter(ee.Filter.bounds(geometry))
print(filtered.size())
var rgbVis = {
min: 0.0,
max: 3000,
bands: ['B4', 'B3', 'B2'],
};
Map.addLayer(filtered, rgbVis, 'Filtered')
var geometry = ee.Geometry.Point([77.60412933051538, 12.952912912328241])
var s2 = ee.ImageCollection("COPERNICUS/S2");
// Add one more filter that only shows image from the satellite
// Sentinel-2A
// Hint1: Use the 'SPACECRAFT_NAME' property
// Hint2: Use the ee.Filter.eq() filter
var filtered = s2.filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 30))
.filter(ee.Filter.date('2019-01-01', '2020-01-01'))
.filter(ee.Filter.bounds(geometry))
print(filtered.size())
var rgbVis = {
min: 0.0,
max: 3000,
bands: ['B4', 'B3', 'B2'],
};
Map.addLayer(filtered, rgbVis, 'Filtered')
The default order of the collection is by date. So when you display the collection, it implicitly creates a mosaic with the latest pixels on top. You can call .mosaic() on an ImageCollection to create a mosaic image from the pixels at the top.
We can also create a composite image by applying a selection criteria to each pixel from all pixels in the stack. Here we use the median() function to create a composite where each pixel value is the median of all pixels from the stack.
Tip: If you need to create a mosaic where the images are in a specific order, you can use the .sort() function to sort your collection by a property first.
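For example, a sketch assuming a filtered Sentinel-2 collection named filtered, like the one in the script below. Since mosaic() places later images in the collection on top, sorting by cloud cover in descending order puts the least cloudy pixels on top:
// Sort by cloud cover (descending) so the least cloudy pixels end up on top
var sorted = filtered.sort('CLOUDY_PIXEL_PERCENTAGE', false);
var leastCloudyMosaic = sorted.mosaic();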
var geometry = ee.Geometry.Point([77.60412933051538, 12.952912912328241])
var s2 = ee.ImageCollection("COPERNICUS/S2");
var rgbVis = {
min: 0.0,
max: 3000,
bands: ['B4', 'B3', 'B2'],
};
var filtered = s2.filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 30))
.filter(ee.Filter.date('2019-01-01', '2020-01-01'))
.filter(ee.Filter.bounds(geometry))
var mosaic = filtered.mosaic()
var medianComposite = filtered.median();
Map.addLayer(filtered, rgbVis, 'Filtered Collection');
Map.addLayer(mosaic, rgbVis, 'Mosaic');
Map.addLayer(medianComposite, rgbVis, 'Median Composite')
var geometry = ee.Geometry.Point([77.60412933051538, 12.952912912328241])
var s2 = ee.ImageCollection("COPERNICUS/S2");
var rgbVis = {
min: 0.0,
max: 3000,
bands: ['B4', 'B3', 'B2'],
};
var filtered = s2.filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 30))
.filter(ee.Filter.date('2019-01-01', '2020-01-01'))
.filter(ee.Filter.bounds(geometry))
var image2019 = filtered.median();
Map.addLayer(image2019, rgbVis, '2019')
// Create a median composite for 2020 and load it to the map
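One possible solution, sketched using the variables defined above:
// Filter the same collection to the year 2020
var filtered2020 = s2.filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 30))
  .filter(ee.Filter.date('2020-01-01', '2021-01-01'))
  .filter(ee.Filter.bounds(geometry));
var image2020 = filtered2020.median();
Map.addLayer(image2020, rgbVis, '2020');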
Feature Collections are similar to Image Collections - but they contain Features, not images. They are equivalent to Vector Layers in a GIS. We can load, filter and display Feature Collections using techniques similar to those we have learnt so far.
Search for GAUL Second Level Administrative Boundaries and load the collection. This is a global collection that contains all Admin2 boundaries. We can apply a filter using the ADM1_NAME property to get all Admin2 boundaries (i.e. Districts) from a state.
var admin2 = ee.FeatureCollection("FAO/GAUL_SIMPLIFIED_500m/2015/level2");
var karnataka = admin2.filter(ee.Filter.eq('ADM1_NAME', 'Karnataka'))
var visParams = {'color': 'red'}
Map.addLayer(karnataka, visParams, 'Karnataka Districts')
var admin2 = ee.FeatureCollection("FAO/GAUL_SIMPLIFIED_500m/2015/level2");
var karnataka = admin2.filter(ee.Filter.eq('ADM1_NAME', 'Karnataka'))
// Apply a filter to select only the 'Bangalore Urban' district
// Display only the selected district
// Hint: The district names are in ADM2_NAME property
var visParams = {'color': 'red'}
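One possible solution, sketched using the variables defined above:
var bangalore = karnataka.filter(ee.Filter.eq('ADM2_NAME', 'Bangalore Urban'));
Map.addLayer(bangalore, visParams, 'Bangalore Urban');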
You can import vector or raster data into Earth Engine. We will now import a shapefile of Urban Areas from Natural Earth. Unzip ne_10m_urban_areas.zip into a folder on your computer. In the Code Editor, go to Assets → New → Table Upload → Shape Files. Select the .shp, .shx, .dbf and .prj files. Enter ne_10m_urban_areas as the Asset Name and click Upload. Once the upload and ingest finishes, you will have a new asset in the Assets tab. The shapefile is imported as a Feature Collection in Earth Engine. Select the ne_10m_urban_areas asset and click Import. You can then visualize the imported data.
// Let's import some data to Earth Engine
// Upload the ne_10m_urban_areas.shp shapefile
// Import the collection
var urban = ee.FeatureCollection("users/ujavalgandhi/e2e/ne_10m_urban_areas");
// Visualize the collection
Map.addLayer(urban, {color: 'blue'}, 'Urban Areas')
var urban = ee.FeatureCollection("users/ujavalgandhi/e2e/ne_10m_urban_areas");
// Apply a filter to select only large urban areas
// Use the property 'area_sqkm' and select all features
// with area > 400 sq.km
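One possible solution, sketched using the collection imported above:
var largeUrbanAreas = urban.filter(ee.Filter.gt('area_sqkm', 400));
Map.addLayer(largeUrbanAreas, {color: 'blue'}, 'Large Urban Areas');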
It is often desirable to clip the images to your area of interest. You can use the clip() function to mask out an image using a geometry.
While in desktop software clipping is desirable to remove unnecessary portions of a large image and save computation time, in Earth Engine clipping can actually increase the computation time. As described in the Earth Engine Coding Best Practices guide, avoid clipping the images or do it at the end of your script.
var s2 = ee.ImageCollection("COPERNICUS/S2")
var urban = ee.FeatureCollection("users/ujavalgandhi/e2e/ne_10m_urban_areas")
// Find the feature id by adding the layer to the map and using Inspector.
var filtered = urban.filter(ee.Filter.eq('system:index', '00000000000000002bf8'))
var geometry = filtered.geometry()
var rgbVis = {
min: 0.0,
max: 3000,
bands: ['B4', 'B3', 'B2'],
};
var filtered = s2.filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 30))
.filter(ee.Filter.date('2019-01-01', '2020-01-01'))
.filter(ee.Filter.bounds(geometry))
var image = filtered.median();
var clipped = image.clip(geometry)
Map.addLayer(clipped, rgbVis, 'Clipped')
var s2 = ee.ImageCollection("COPERNICUS/S2")
var urban = ee.FeatureCollection("users/ujavalgandhi/e2e/ne_10m_urban_areas")
// Change the filter to your home city or any urban area of your choice
var filtered = urban.filter(ee.Filter.eq('system:index', '00000000000000002bf8'))
var geometry = filtered.geometry()
var rgbVis = {
min: 0.0,
max: 3000,
bands: ['B4', 'B3', 'B2'],
};
var filtered = s2.filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 30))
.filter(ee.Filter.date('2019-01-01', '2019-12-31'))
.filter(ee.Filter.bounds(geometry))
var image = filtered.median();
var clipped = image.clip(geometry)
Map.addLayer(clipped, rgbVis, 'Clipped')
Earth Engine allows for exporting both vector and raster data to be used in an external program. Vector data can be exported as a CSV or a Shapefile, while rasters can be exported as GeoTIFF files. We will now export the Sentinel-2 composite as a GeoTIFF file.
var s2 = ee.ImageCollection("COPERNICUS/S2")
var urban = ee.FeatureCollection("users/ujavalgandhi/e2e/ne_10m_urban_areas")
var filtered = urban.filter(ee.Filter.eq('system:index', '00000000000000002bf8'))
var geometry = filtered.geometry()
var rgbVis = {
min: 0.0,
max: 3000,
bands: ['B4', 'B3', 'B2'],
};
var filtered = s2.filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 30))
.filter(ee.Filter.date('2019-01-01', '2020-01-01'))
.filter(ee.Filter.bounds(geometry))
var image = filtered.median();
var clipped = image.clip(geometry)
Map.addLayer(clipped, rgbVis, 'Clipped')
var exportImage = clipped.select('B.*');
Export.image.toDrive({
  image: exportImage,
  description: 'Bangalore_Composite_Raw',
  folder: 'earthengine',
  fileNamePrefix: 'bangalore_composite_raw',
  region: geometry,
  scale: 20,
  maxPixels: 1e9
})
// Rather than exporting raw bands, we can export a rendered image
// The visualize() function allows you to apply the same visualization
// parameters used in Earth Engine and export a 3-band RGB image
print(clipped)
var visualized = clipped.visualize(rgbVis)
print(visualized)
// Now the 'visualized' image is RGB image, no need to give visParams
Map.addLayer(visualized, {}, 'Visualized Image')
Export.image.toDrive({
  image: visualized,
  description: 'Bangalore_Composite_Visualized',
  folder: 'earthengine',
  fileNamePrefix: 'bangalore_composite_visualized',
  region: geometry,
  scale: 20,
  maxPixels: 1e9
})
var s2 = ee.ImageCollection("COPERNICUS/S2")
var urban = ee.FeatureCollection("users/ujavalgandhi/e2e/ne_10m_urban_areas")
var filtered = urban.filter(ee.Filter.eq('system:index', '00000000000000002bf8'))
var geometry = filtered.geometry()
var rgbVis = {
min: 0.0,
max: 3000,
bands: ['B4', 'B3', 'B2'],
};
var filtered = s2.filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 30))
.filter(ee.Filter.date('2019-01-01', '2020-01-01'))
.filter(ee.Filter.bounds(geometry))
var image = filtered.median();
var clipped = image.clip(geometry)
Map.addLayer(clipped, rgbVis, 'Clipped')
var exportImage = clipped.select('B.*')
// Change the filter to your city
// Write the export function to export the results
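One possible solution sketch, mirroring the export from the previous script (the description and file name prefix are placeholders for your own city):
Export.image.toDrive({
  image: exportImage,
  description: 'City_Composite_Raw',
  folder: 'earthengine',
  fileNamePrefix: 'city_composite_raw',
  region: geometry,
  scale: 20,
  maxPixels: 1e9
})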
// Assignment
// Export the Night Lights images for May 2015 and May 2020
// Workflow:
// Load the VIIRS Nighttime Day/Night Band Composites collection
// Filter the collection to the date range
// Extract the 'avg_rad' band which represents the nighttime lights
// Clip the image to the geometry of your city
// Export the resulting image as a GeoTIFF file.
// Hint1:
// There are 2 VIIRS Nighttime Day/Night collections
// Use the one that corrects for stray light
// Hint2:
// The collection contains 1 global image per month
// After filtering for the month, there will be only 1 image in the collection
// You can use the following technique to extract that image
// var image = ee.Image(filtered.first())
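A possible workflow sketch for May 2015 is below; the collection id, band name and scale are assumptions based on the hints above, and the geometry is a hypothetical city extent that you should replace with your own:
// Hypothetical city extent - replace with your own geometry
var geometry = ee.Geometry.Point([77.5946, 12.9716]).buffer(20000);
// Stray-light corrected VIIRS monthly composites (assumed id)
var viirs = ee.ImageCollection('NOAA/VIIRS/DNB/MONTHLY_V1/VCMSLCFG');
var may2015 = ee.Image(viirs
  .filter(ee.Filter.date('2015-05-01', '2015-06-01'))
  .first());
var lights2015 = may2015.select('avg_rad').clip(geometry);
Export.image.toDrive({
  image: lights2015,
  description: 'Night_Lights_May2015',
  folder: 'earthengine',
  fileNamePrefix: 'night_lights_may2015',
  region: geometry,
  scale: 500,
  maxPixels: 1e9
});
// Repeat with dates '2020-05-01' to '2020-06-01' for May 2020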
Module 2 builds on the basic Earth Engine skills you have gained. This module introduces parallel programming concepts using Map/Reduce - which is key to effectively using Earth Engine for analyzing large volumes of data. You will learn how to use the Earth Engine API for calculating various spectral indices, do cloud masking, and then use map/reduce to apply these computations to collections of imagery. You will also learn how to take long time-series of data and create charts.
This script introduces the basics of the Earth Engine API. When programming in Earth Engine, you must use the Earth Engine API so that your computations can run on the Google Earth Engine servers. To learn more, visit the Earth Engine Objects and Methods section of the Earth Engine User Guide.
// Let's see how to take a list of numbers and add 1 to each element
var myList = ee.List.sequence(1, 10);
// Define a function that takes a number and adds 1 to it
var myFunction = function(number) {
return number + 1;
}
print(myFunction(1));

// Re-define the function using the Earth Engine API
var myFunction = function(number) {
return ee.Number(number).add(1);
}
// Map the function over the list
var newList = myList.map(myFunction);
print(newList);
// Extracting value from a list
var value = newList.get(0)
print(value)
// Casting
// Let's try to do some computation on the extracted value
//var newValue = value.add(1)
//print(newValue)
// You get an error because Earth Engine doesn't know the type of 'value'
// We need to cast it to appropriate type first
var value = ee.Number(value)
var newValue = value.add(1)
print(newValue)
// Dates
// For any date computation, you should use ee.Date module
var date = ee.Date('2019-01-01')
var futureDate = date.advance(1, 'year')
print(futureDate)
As a general rule, you should always use Earth Engine API methods in your code. There is one exception where you will need to use a client-side JavaScript method: getting the current time. The server doesn’t know your local time, so you need to use a JavaScript method and cast the result to an Earth Engine object.
var now = Date.now()
print(now)
var now = ee.Date(now)
print(now)
var s2 = ee.ImageCollection("COPERNICUS/S2");
var geometry = ee.Geometry.Point([77.60412933051538, 12.952912912328241]);
var now = Date.now()
var now = ee.Date(now)
// Apply another filter to the collection below to filter images
// collected in the last 1-week
// Do not hard-code the dates, it should always show images
// from the past 1-week whenever you run the script
// Hint: Use ee.Date.advance() function
// to compute the date 1 week before now
var filtered = s2.filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 30))
.filter(ee.Filter.bounds(geometry))
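One possible solution, sketched using the variables defined above:
// Compute the date 1 week before now and filter to that range
var lastWeek = now.advance(-1, 'week');
var recent = filtered.filter(ee.Filter.date(lastWeek, now));
print(recent.size());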
Spectral indices are central to many aspects of remote sensing. Whether you are studying vegetation or tracking fires - you will need to compute a pixel-wise ratio of 2 or more bands. The most commonly used formula for calculating an index is the Normalized Difference between 2 bands. Earth Engine provides a helper function normalizedDifference() to calculate normalized indices, such as the Normalized Difference Vegetation Index (NDVI). For more complex formulae, you can also use the expression() function to describe the calculation.
var s2 = ee.ImageCollection("COPERNICUS/S2");
var admin2 = ee.FeatureCollection("FAO/GAUL_SIMPLIFIED_500m/2015/level2");
var bangalore = admin2.filter(ee.Filter.eq('ADM2_NAME', 'Bangalore Urban'))
var geometry = bangalore.geometry()
var filtered = s2.filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 30))
.filter(ee.Filter.date('2019-01-01', '2019-12-31'))
.filter(ee.Filter.bounds(geometry))
var image = filtered.median();
// Calculate Normalized Difference Vegetation Index (NDVI)
// 'NIR' (B8) and 'RED' (B4)
var ndvi = image.normalizedDifference(['B8', 'B4']).rename(['ndvi']);
// Calculate Modified Normalized Difference Water Index (MNDWI)
// 'GREEN' (B3) and 'SWIR1' (B11)
var mndwi = image.normalizedDifference(['B3', 'B11']).rename(['mndwi']);
// Calculate Soil-adjusted Vegetation Index (SAVI)
// 1.5 * ((NIR - RED) / (NIR + RED + 0.5))
// For more complex indices, you can use the expression() function
var savi = image.expression(
'1.5 * ((NIR - RED) / (NIR + RED + 0.5))', {
'NIR': image.select('B8'),
'RED': image.select('B4'),
}).rename('savi');
var rgbVis = {min: 0.0, max: 3000, bands: ['B4', 'B3', 'B2']};
var ndviVis = {min:0, max:1, palette: ['white', 'green']}
var ndwiVis = {min:0, max:0.5, palette: ['white', 'blue']}
Map.addLayer(image.clip(geometry), rgbVis, 'Image');
Map.addLayer(mndwi.clip(geometry), ndwiVis, 'mndwi')
Map.addLayer(savi.clip(geometry), ndviVis, 'savi')
Map.addLayer(ndvi.clip(geometry), ndviVis, 'ndvi')
Mosaic with MNDWI, SAVI and NDVI images
var s2 = ee.ImageCollection("COPERNICUS/S2");
var admin2 = ee.FeatureCollection("FAO/GAUL_SIMPLIFIED_500m/2015/level2");
var bangalore = admin2.filter(ee.Filter.eq('ADM2_NAME', 'Bangalore Urban'))
var geometry = bangalore.geometry()
var filtered = s2.filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 30))
.filter(ee.Filter.date('2019-01-01', '2019-12-31'))
.filter(ee.Filter.bounds(geometry))
var image = filtered.median();
// Exercise
// Calculate the Normalized Difference Built-Up Index (NDBI) for the image
// Hint: NDBI = (SWIR1 – NIR) / (SWIR1 + NIR)
// Visualize the built-up area using a 'red' palette
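One possible solution, sketched using the image computed above:
// NDBI = (SWIR1 - NIR) / (SWIR1 + NIR), i.e. bands B11 and B8
var ndbi = image.normalizedDifference(['B11', 'B8']).rename(['ndbi']);
var ndbiVis = {min: 0, max: 0.5, palette: ['white', 'red']};
Map.addLayer(ndbi.clip(geometry), ndbiVis, 'ndbi');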
So far we have learnt how to run computations on single images. If you want to apply some computation - such as calculating an index - to many images, you need to use map(). You first define a function that takes 1 image and returns the result of the computation on that image. Then you can map() that function over the ImageCollection, which results in a new ImageCollection with the results of the computation. This is similar to a for-loop that you may be familiar with - but using map() allows the computation to run in parallel. Learn more at Mapping over an ImageCollection.
var s2 = ee.ImageCollection("COPERNICUS/S2");
var admin1 = ee.FeatureCollection("FAO/GAUL_SIMPLIFIED_500m/2015/level1");
var karnataka = admin1.filter(ee.Filter.eq('ADM1_NAME', 'Karnataka'))
var geometry = karnataka.geometry()
var rgbVis = {min: 0.0, max: 3000, bands: ['B4', 'B3', 'B2']};
var filtered = s2.filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 30))
.filter(ee.Filter.date('2019-01-01', '2019-12-31'))
.filter(ee.Filter.bounds(geometry))
var composite = filtered.median().clip(karnataka)
Map.addLayer(composite, rgbVis, 'Karnataka Composite')
// Write a function that computes NDVI for an image and adds it as a band
function addNDVI(image) {
var ndvi = image.normalizedDifference(['B8', 'B4']).rename('ndvi');
return image.addBands(ndvi);
}
// Map the function over the collection
var withNdvi = filtered.map(addNDVI);
var composite = withNdvi.median()
var ndviComposite = composite.select('ndvi').clip(karnataka)
var palette = [
'FFFFFF', 'CE7E45', 'DF923D', 'F1B555', 'FCD163', '99B718',
'74A901', '66A000', '529400', '3E8601', '207401', '056201',
'004C00', '023B01', '012E01', '011D01', '011301'];
var ndviVis = {min:0, max:0.5, palette: palette }
Map.addLayer(ndviComposite, ndviVis, 'ndvi')
var s2 = ee.ImageCollection("COPERNICUS/S2");
var admin1 = ee.FeatureCollection("FAO/GAUL_SIMPLIFIED_500m/2015/level1");
var karnataka = admin1.filter(ee.Filter.eq('ADM1_NAME', 'Karnataka'))
var geometry = karnataka.geometry()
var rgbVis = {min: 0.0, max: 3000, bands: ['B4', 'B3', 'B2']};
var filtered = s2.filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 30))
.filter(ee.Filter.date('2019-01-01', '2019-12-31'))
.filter(ee.Filter.bounds(geometry))
var composite = filtered.median().clip(karnataka)
Map.addLayer(composite, rgbVis, 'Karnataka Composite')
// This function calculates both NDVI and NDWI indices
// and returns an image with 2 new bands added to the original image.
function addIndices(image) {
var ndvi = image.normalizedDifference(['B8', 'B4']).rename('ndvi');
var ndwi = image.normalizedDifference(['B3', 'B8']).rename('ndwi');
return image.addBands(ndvi).addBands(ndwi);
}
// Map the function over the collection
var withIndices = filtered.map(addIndices);
// Composite
var composite = withIndices.median()
print(composite)
// Extract the 'ndwi' band and display a NDWI map
// use the palette ['white', 'blue']
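One possible solution, sketched using the composite computed above:
var ndwiComposite = composite.select('ndwi').clip(karnataka);
var ndwiVis = {min: 0, max: 0.5, palette: ['white', 'blue']};
Map.addLayer(ndwiComposite, ndwiVis, 'ndwi');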
Masking pixels in an image makes those pixels transparent and excludes them from analysis and visualization. To mask an image, we can use the updateMask() function and pass it an image with 0 and 1 values. All pixels where the mask image is 0 will be masked.
Most remote sensing datasets come with a QA or Cloud Mask band that contains information on whether a pixel is cloudy or not. Your Code Editor contains pre-defined functions for masking clouds for popular datasets under Scripts Tab → Examples → Cloud Masking.
The script below takes the Sentinel-2 masking function and shows how to apply it on an image.
var image = ee.Image('COPERNICUS/S2/20190703T050701_20190703T052312_T43PGP')
var rgbVis = {
min: 0.0,
max: 3000,
bands: ['B4', 'B3', 'B2'],
};
Map.centerObject(image)
Map.addLayer(image, rgbVis, 'Full Image')
// Write a function for Cloud masking
function maskS2clouds(image) {
var qa = image.select('QA60')
var cloudBitMask = 1 << 10;
var cirrusBitMask = 1 << 11;
var mask = qa.bitwiseAnd(cloudBitMask).eq(0).and(
    qa.bitwiseAnd(cirrusBitMask).eq(0))
return image.updateMask(mask)
.select("B.*")
.copyProperties(image, ["system:time_start"])
}
var maskedImage = ee.Image(maskS2clouds(image))
Map.addLayer(maskedImage, rgbVis, 'Masked Image')
Applying pixel-wise QA bitmask
// Get the Level-2A Surface Reflectance image
var imageSR = ee.Image('COPERNICUS/S2_SR/20190703T050701_20190703T052312_T43PGP')
var rgbVis = {
min: 0.0,
max: 3000,
bands: ['B4', 'B3', 'B2'],
};
Map.centerObject(imageSR)
Map.addLayer(imageSR, rgbVis, 'SR Image')
// Function to remove cloud and snow pixels from Sentinel-2 SR image
function maskCloudAndShadowsSR(image) {
var cloudProb = image.select('MSK_CLDPRB');
var snowProb = image.select('MSK_SNWPRB');
var cloud = cloudProb.lt(5);
var snow = snowProb.lt(5);
var scl = image.select('SCL');
var shadow = scl.eq(3); // 3 = cloud shadow
var cirrus = scl.eq(10); // 10 = cirrus
// Cloud probability less than 5% or cloud shadow classification
var mask = (cloud.and(snow)).and(cirrus.neq(1)).and(shadow.neq(1));
return image.updateMask(mask);
}
// Exercise
// Apply the above cloud masking function to SR image
// Add the masked image to the map
// Hint: After adding the masked image to the map, turn-off
// the original image layer to see the result of the masking function
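One possible solution, sketched using the function and image defined above:
var maskedImageSR = ee.Image(maskCloudAndShadowsSR(imageSR));
Map.addLayer(maskedImageSR, rgbVis, 'Masked SR Image');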
If you are using Sentinel-2 data, do check out an alternative cloud masking technique using the S2 Cloudless dataset. Learn more.
var imageSR = ee.Image('COPERNICUS/S2_SR/20190703T050701_20190703T052312_T43PGP')
var s2Cloudless = ee.Image('COPERNICUS/S2_CLOUD_PROBABILITY/20190703T050701_20190703T052312_T43PGP')
var clouds = s2Cloudless.lt(50)
var cloudlessMasked = imageSR.mask(clouds)
var rgbVis = {min: 0.0, max: 3000, bands: ['B4', 'B3', 'B2']};
Map.addLayer(cloudlessMasked, rgbVis, 'S2 Cloudless Masked Image')
When writing parallel computing code, a Reduce operation allows you to compute statistics on a large number of inputs. In Earth Engine, you need to run reduction operations when creating composites, calculating statistics, doing regression analysis etc. The Earth Engine API comes with a large number of built-in reducer functions (such as ee.Reducer.sum(), ee.Reducer.histogram(), ee.Reducer.linearFit() etc.) that can perform a variety of statistical operations on input data. You can run reducers using the reduce() function. Earth Engine supports running reducers on all data structures that can hold multiple values, such as Images (reducers run on different bands), ImageCollections, FeatureCollections, Lists, Dictionaries etc. The script below introduces basic concepts related to reducers.
var geometry = ee.Geometry.Point([77.60412933051538, 12.952912912328241])
var s2 = ee.ImageCollection("COPERNICUS/S2");
// Computing stats on a list
var myList = ee.List.sequence(1, 10);
print(myList)
// Use a reducer to compute min and max in the list
var minMax = myList.reduce(ee.Reducer.minMax());
print(minMax);
// Apply a reducer on a image collection
var filtered = s2.filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 30))
.filter(ee.Filter.date('2019-01-01', '2019-12-31'))
.filter(ee.Filter.bounds(geometry))
print(filtered.size())
var collMinMax = filtered.reduce(ee.Reducer.minMax());
print(collMinMax);
// Apply a reducer on an image
var image = ee.Image(filtered.first())
var imageMinMax = image.reduce(ee.Reducer.minMax());
print(imageMinMax)
print(image)
// If we want to compute min and max for each band, use reduceRegion instead
var stats = image.reduceRegion({
reducer: ee.Reducer.minMax(),
geometry: image.geometry(),
scale: 100,
maxPixels: 1e10
})
print(stats);
// Result of reduceRegion is a dictionary.
// We can extract the values using .get() function
print('Min value in B4', stats.get('B4_min'))
print('Max value in B4', stats.get('B4_max'))
var geometry = ee.Geometry.Polygon([[
  [82.60642647743225, 27.16350437805251],
  [82.60984897613525, 27.1618529901377],
  [82.61088967323303, 27.163695288375266],
  [82.60757446289062, 27.16517483230927]
]]);
var rgbVis = {min: 0.0, max: 3000, bands: ['B4', 'B3', 'B2']};
var image = ee.Image('COPERNICUS/S2/20190223T050811_20190223T051829_T44RPR')
Map.addLayer(image, rgbVis, 'Image')
Map.addLayer(geometry, {color: 'red'}, 'Farm')
Map.centerObject(geometry)
// Exercise
// Compute the average NDVI for the farm from the given image
// Hint: Use the reduceRegion() function
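One possible solution, sketched using the variables defined above:
// Compute NDVI and reduce it to its mean value over the farm
var ndvi = image.normalizedDifference(['B8', 'B4']).rename('ndvi');
var stats = ndvi.reduceRegion({
  reducer: ee.Reducer.mean(),
  geometry: geometry,
  scale: 10
});
print('Average NDVI for the farm', stats.get('ndvi'));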
Now we can put together all the skills we have learnt so far - filter, map, reduce, and cloud masking - to create a chart of average NDVI values for a given farm over 1 year. The Earth Engine API comes with support for charting functions based on the Google Charts API. Here we use the ui.Chart.image.series() function to create a time-series chart.
var s2 = ee.ImageCollection("COPERNICUS/S2");
var geometry = ee.Geometry.Polygon([[
  [82.60642647743225, 27.16350437805251],
  [82.60984897613525, 27.1618529901377],
  [82.61088967323303, 27.163695288375266],
  [82.60757446289062, 27.16517483230927]
]]);
Map.addLayer(geometry, {color: 'red'}, 'Farm')
Map.centerObject(geometry)
var rgbVis = {min: 0.0, max: 3000, bands: ['B4', 'B3', 'B2']};
var filtered = s2
.filter(ee.Filter.date('2017-01-01', '2017-12-31'))
.filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 30))
.filter(ee.Filter.bounds(geometry))
// Write a function for Cloud masking
function maskS2clouds(image) {
var qa = image.select('QA60')
var cloudBitMask = 1 << 10;
var cirrusBitMask = 1 << 11;
var mask = qa.bitwiseAnd(cloudBitMask).eq(0).and(
    qa.bitwiseAnd(cirrusBitMask).eq(0))
return image.updateMask(mask)//.divide(10000)
.select("B.*")
.copyProperties(image, ["system:time_start"])
}
var filtered = filtered.map(maskS2clouds)
// Write a function that computes NDVI for an image and adds it as a band
function addNDVI(image) {
var ndvi = image.normalizedDifference(['B8', 'B4']).rename('ndvi');
return image.addBands(ndvi);
}
// Map the function over the collection
var withNdvi = filtered.map(addNDVI);
// Display a time-series chart
var chart = ui.Chart.image.series({
imageCollection: withNdvi.select('ndvi'),
region: geometry,
reducer: ee.Reducer.mean(),
scale: 20})
print(chart);
// Delete the farm boundary from the previous script
// and add another farm at a location of your choice
// Print the chart.
var terraclimate = ee.ImageCollection("IDAHO_EPSCOR/TERRACLIMATE");
var geometry = ee.Geometry.Point([77.54849920033682, 12.91215102400037]);
// Assignment
// Use the Gridded Climate dataset to chart a 40+ year time series
// of temperature at any location
// Workflow
// Load the TerraClimate collection
// Select the 'tmmx' band
// Scale the band values
// Filter the scaled collection
// Use ui.Chart.image.series() function to create the chart
// Hint1
// Data needs to be scaled by 0.1
// map() a function and multiply each image
// Multiplying creates a new image that doesn't have the same properties
// Use copyProperties() function to copy timestamp to new image
var tmax = terraclimate.select('tmmx')
var tmaxScaled = tmax.map(function(image) {
return image.multiply(0.1)
.copyProperties(image,['system:time_start']);
})
// Hint2
// You will need to specify a scale in meters for charting
// Use projection().nominalScale() to find the
// image resolution in meters
var image = ee.Image(terraclimate.first())
print(image.projection().nominalScale())
Assignment 2 Expected Output
Supervised classification is arguably the most important classical machine learning technique in remote sensing. Applications range from generating Land Use/Land Cover maps to change detection. Google Earth Engine is uniquely suited to doing supervised classification at scale. The interactive nature of Earth Engine development allows for iterative development of supervised classification workflows by combining many different datasets into the model. This module covers the basic supervised classification workflow, accuracy assessment, hyperparameter tuning and change detection.
We will learn how to do a basic land cover classification using training samples collected from the Code Editor using the high-resolution basemap imagery provided by Google Maps. This method requires no prior training data and is quite effective for generating high-quality classification samples anywhere in the world. The goal is to classify each source pixel into one of the following classes - urban, bare, water or vegetation. Using the drawing tools in the Code Editor, we create 4 new feature collections with points representing pixels of each class. Each feature collection has a property called landcover with values of 0, 1, 2 or 3 indicating whether the feature collection represents urban, bare, water or vegetation respectively. We then train a Random Forest classifier using this training set to build a model and apply it to all the pixels of the image to create a 4-class image.
Fun fact: The classifiers in the Earth Engine API have names starting with smile - such as ee.Classifier.smileRandomForest(). The smile part refers to the Statistical Machine Intelligence and Learning Engine (SMILE) Java library which is used by Google Earth Engine to implement these algorithms.
var bangalore = ee.FeatureCollection("users/ujavalgandhi/public/bangalore_boundary")
var s2 = ee.ImageCollection("COPERNICUS/S2_SR")
// The following collections were created using the
// Drawing Tools in the code editor
var urban = ee.FeatureCollection("users/ujavalgandhi/e2e/urban_gcps")
var bare = ee.FeatureCollection("users/ujavalgandhi/e2e/bare_gcps")
var water = ee.FeatureCollection("users/ujavalgandhi/e2e/water_gcps")
var vegetation = ee.FeatureCollection("users/ujavalgandhi/e2e/vegetation_gcps")
var filtered = s2
.filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 30))
.filter(ee.Filter.date('2019-01-01', '2019-12-31'))
.filter(ee.Filter.bounds(bangalore))
.select('B.*')
var composite = filtered.median().clip(bangalore)
// Display the input composite.
var rgbVis = {
min: 0.0,
max: 3000,
bands: ['B4', 'B3', 'B2'],
};
Map.addLayer(composite, rgbVis, 'image');
var gcps = urban.merge(bare).merge(water).merge(vegetation)
// Overlay the point on the image to get training data.
var training = composite.sampleRegions({
collection: gcps,
properties: ['landcover'],
scale: 10
});
// Train a classifier.
var classifier = ee.Classifier.smileRandomForest(50).train({
features: training,
classProperty: 'landcover',
inputProperties: composite.bandNames()
});

// Classify the image.
var classified = composite.classify(classifier);
Map.addLayer(classified, {min: 0, max: 3, palette: ['gray', 'brown', 'blue', 'green']}, '2019');
// Display the GCPs
// We use the style() function to style the GCPs
var palette = ee.List(['gray','brown','blue','green'])
var landcover = ee.List([0, 1, 2, 3])
var gcpsStyled = ee.FeatureCollection(
  landcover.map(function(lc){
    var color = palette.get(landcover.indexOf(lc));
    var markerStyle = { color: 'white', pointShape: 'diamond',
      pointSize: 4, width: 1, fillColor: color}
    return gcps.filter(ee.Filter.eq('landcover', lc))
      .map(function(point){
        return point.set('style', markerStyle)
      })
  })).flatten();
Map.addLayer(gcpsStyled.style({styleProperty:"style"}), {}, 'GCPs')
Supervised Classification Output
var s2 = ee.ImageCollection("COPERNICUS/S2_SR")
var urbanAreas = ee.FeatureCollection("users/ujavalgandhi/e2e/ne_10m_urban_areas")
// Perform supervised classification for your city
// Find the feature id by adding the layer to the map and using Inspector.
var city = urbanAreas.filter(ee.Filter.eq('system:index', '00000000000000002bf8'))
var geometry = city.geometry()
Map.centerObject(geometry)
var filtered = s2
.filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 30))
.filter(ee.Filter.date('2019-01-01', '2019-12-31'))
.filter(ee.Filter.bounds(geometry))
.select('B.*')
var composite = filtered.median().clip(geometry)
// Display the input composite.
var rgbVis = {min: 0.0, max: 3000, bands: ['B4', 'B3', 'B2']};
Map.addLayer(composite, rgbVis, 'image');
// Exercise
// Add training points for 4 classes
// Assign the 'landcover' property as follows
// urban: 0
// bare: 1
// water: 2
// vegetation: 3
// After adding points, uncomment the lines below
// var gcps = urban.merge(bare).merge(water).merge(vegetation)
// // Overlay the point on the image to get training data.
// var training = composite.sampleRegions({
// collection: gcps,
// properties: ['landcover'],
// scale: 200,
// tileScale: 16
// });
// print(training)
// // Train a classifier.
// var classifier = ee.Classifier.smileRandomForest(50).train({
// features: training,
// classProperty: 'landcover',
// inputProperties: composite.bandNames()
// });
// // // Classify the image.
// var classified = composite.classify(classifier);
// Map.addLayer(classified, {min: 0, max: 3, palette: ['gray', 'brown', 'blue', 'green']}, '2019');
It is important to get a quantitative estimate of the accuracy of the classification. To do this, a common strategy is to divide your training samples into 2 random fractions - one used for training the model and the other for validation of the predictions. Once a classifier is trained, it can be used to classify the entire image. We can then compare the classified values with the ones in the validation fraction. We can use the errorMatrix() method to calculate a confusion matrix representing expected accuracy.
Don’t get carried away tweaking your model to give you the highest validation accuracy. You must use qualitative measures (such as visual inspection of results) along with quantitative measures to assess the results.
var s2 = ee.ImageCollection("COPERNICUS/S2_SR");
var basin = ee.FeatureCollection("WWF/HydroSHEDS/v1/Basins/hybas_7");
var gcp = ee.FeatureCollection("users/ujavalgandhi/e2e/arkavathy_gcps");
var arkavathy = basin.filter(ee.Filter.eq('HYBAS_ID', 4071139640))
var boundary = arkavathy.geometry()
var rgbVis = {
min: 0.0,
max: 3000,
bands: ['B4', 'B3', 'B2'],
};
var filtered = s2
.filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 30))
.filter(ee.Filter.date('2019-01-01', '2019-12-31'))
.filter(ee.Filter.bounds(boundary))
.select('B.*')
var composite = filtered.median().clip(boundary)
// Display the input composite.
Map.addLayer(composite, rgbVis, 'image');
// Add a random column and split the GCPs into training and validation set
var gcp = gcp.randomColumn()
// This being a simpler classification, we take 60% of the points
// for training and 40% for validation. The normal recommended
// ratio is 70% training, 30% validation
var trainingGcp = gcp.filter(ee.Filter.lt('random', 0.6));
var validationGcp = gcp.filter(ee.Filter.gte('random', 0.6));
// Overlay the point on the image to get training data.
var training = composite.sampleRegions({
collection: trainingGcp,
properties: ['landcover'],
scale: 10,
tileScale: 16
});
// Train a classifier.
var classifier = ee.Classifier.smileRandomForest(50)
.train({
features: training,
classProperty: 'landcover',
inputProperties: composite.bandNames()
});
// Classify the image.
var classified = composite.classify(classifier);
Map.addLayer(classified, {min: 0, max: 3, palette: ['gray', 'brown', 'blue', 'green']}, '2019');
//**************************************************************************
// Accuracy Assessment
//**************************************************************************
// Use classification map to assess accuracy using the validation fraction
// of the overall training set created above.
var test = classified.sampleRegions({
collection: validationGcp,
properties: ['landcover'],
tileScale: 16,
scale: 10,
});
var testConfusionMatrix = test.errorMatrix('landcover', 'classification')
// Printing of confusion matrix may time out. Alternatively, you can export it as CSV
print('Confusion Matrix', testConfusionMatrix);
print('Test Accuracy', testConfusionMatrix.accuracy());
One feature that makes the Earth Engine data model really well suited for machine learning tasks is its ability to easily incorporate data sources of different spatial resolutions, projections and data types together. By giving additional information to the classifier, it is able to separate different classes more easily. Here we take the same example and augment it with different spectral indices such as NDVI, NDBI, MNDWI and BSI. We also add slope and elevation bands from the ALOS DEM. The result is a much improved classification.
var s2 = ee.ImageCollection("COPERNICUS/S2_SR");
var basin = ee.FeatureCollection("WWF/HydroSHEDS/v1/Basins/hybas_7");
var gcp = ee.FeatureCollection("users/ujavalgandhi/e2e/arkavathy_gcps");
var alos = ee.Image("JAXA/ALOS/AW3D30/V2_2");
var arkavathy = basin.filter(ee.Filter.eq('HYBAS_ID', 4071139640))
var boundary = arkavathy.geometry()
var rgbVis = {
min: 0.0,
max: 3000,
bands: ['B4', 'B3', 'B2'],
};

// Function to remove cloud and snow pixels from Sentinel-2 SR image
function maskCloudAndShadowsSR(image) {
var cloudProb = image.select('MSK_CLDPRB');
var snowProb = image.select('MSK_SNWPRB');
var cloud = cloudProb.lt(10);
var scl = image.select('SCL');
var shadow = scl.eq(3); // 3 = cloud shadow
var cirrus = scl.eq(10); // 10 = cirrus
// Cloud probability less than 10% or cloud shadow classification
var mask = cloud.and(cirrus.neq(1)).and(shadow.neq(1));
return image.updateMask(mask).divide(10000);
}
var filtered = s2
.filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 30))
.filter(ee.Filter.date('2019-01-01', '2019-12-31'))
.filter(ee.Filter.bounds(boundary))
.map(maskCloudAndShadowsSR)
.select('B.*')
var composite = filtered.median().clip(boundary)
var addIndices = function(image) {
var ndvi = image.normalizedDifference(['B8', 'B4']).rename(['ndvi']);
var ndbi = image.normalizedDifference(['B11', 'B8']).rename(['ndbi']);
var mndwi = image.normalizedDifference(['B3', 'B11']).rename(['mndwi']);
var bsi = image.expression(
'(( X + Y ) - (A + B)) /(( X + Y ) + (A + B)) ', {
'X': image.select('B11'), //swir1
'Y': image.select('B4'), //red
'A': image.select('B8'), // nir
'B': image.select('B2'), // blue
}).rename('bsi');
return image.addBands(ndvi).addBands(ndbi).addBands(mndwi).addBands(bsi)
}
var composite = addIndices(composite);
var elev = alos.select('AVE_DSM').divide(2000).rename('elev');
var slope = ee.Terrain.slope(alos.select('AVE_DSM')).divide(30).rename('slope');
var composite = composite.addBands(elev).addBands(slope);
var visParams = {bands: ['B4', 'B3', 'B2'], min: 0, max: 0.3, gamma: 1.2};
Map.addLayer(composite, visParams, 'RGB');
// Add a random column and split the GCPs into training and validation set
var gcp = gcp.randomColumn()
// This being a simpler classification, we take 60% of the points
// for training and 40% for validation. The normal recommended
// ratio is 70% training, 30% validation
var trainingGcp = gcp.filter(ee.Filter.lt('random', 0.6));
var validationGcp = gcp.filter(ee.Filter.gte('random', 0.6));
Map.addLayer(validationGcp)
// Overlay the point on the image to get training data.
var training = composite.sampleRegions({
collection: trainingGcp,
properties: ['landcover'],
scale: 10,
tileScale: 16
});
print(training)
// Train a classifier.
var classifier = ee.Classifier.smileRandomForest(50)
.train({
features: training,
classProperty: 'landcover',
inputProperties: composite.bandNames()
});
// Classify the image.
var classified = composite.classify(classifier);
Map.addLayer(classified, {min: 0, max: 3, palette: ['gray', 'brown', 'blue', 'green']}, '2019');
//**************************************************************************
// Accuracy Assessment
//**************************************************************************
// Use classification map to assess accuracy using the validation fraction
// of the overall training set created above.
var test = classified.sampleRegions({
collection: validationGcp,
properties: ['landcover'],
scale: 10,
tileScale: 16
});
var testConfusionMatrix = test.errorMatrix('landcover', 'classification')
// Printing of confusion matrix may time out. Alternatively, you can export it as CSV
print('Confusion Matrix', testConfusionMatrix);
print('Test Accuracy', testConfusionMatrix.accuracy());
A recommended best practice for improving the accuracy of your machine learning model is to tune different parameters. For example, when using the ee.Classifier.smileRandomForest() classifier, we must specify the Number of Trees. A higher number of trees requires more computation, but doesn’t necessarily produce better results. Instead of guessing, we programmatically try a range of values and choose the smallest value that results in the highest accuracy.
var s2 = ee.ImageCollection("COPERNICUS/S2_SR");
var basin = ee.FeatureCollection("WWF/HydroSHEDS/v1/Basins/hybas_7");
var gcp = ee.FeatureCollection("users/ujavalgandhi/e2e/arkavathy_gcps");
var alos = ee.Image("JAXA/ALOS/AW3D30/V2_2");
var arkavathy = basin.filter(ee.Filter.eq('HYBAS_ID', 4071139640))
var boundary = arkavathy.geometry()
var rgbVis = {
min: 0.0,
max: 3000,
bands: ['B4', 'B3', 'B2'],
};

// Function to remove cloud and snow pixels from Sentinel-2 SR image
function maskCloudAndShadowsSR(image) {
var cloudProb = image.select('MSK_CLDPRB');
var snowProb = image.select('MSK_SNWPRB');
var cloud = cloudProb.lt(10);
var scl = image.select('SCL');
var shadow = scl.eq(3); // 3 = cloud shadow
var cirrus = scl.eq(10); // 10 = cirrus
// Cloud probability less than 10% or cloud shadow classification
var mask = cloud.and(cirrus.neq(1)).and(shadow.neq(1));
return image.updateMask(mask).divide(10000);
}
var filtered = s2
.filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 30))
.filter(ee.Filter.date('2019-01-01', '2019-12-31'))
.filter(ee.Filter.bounds(boundary))
.map(maskCloudAndShadowsSR)
.select('B.*')
var composite = filtered.median().clip(boundary)
var addIndices = function(image) {
var ndvi = image.normalizedDifference(['B8', 'B4']).rename(['ndvi']);
var ndbi = image.normalizedDifference(['B11', 'B8']).rename(['ndbi']);
var mndwi = image.normalizedDifference(['B3', 'B11']).rename(['mndwi']);
var bsi = image.expression(
'(( X + Y ) - (A + B)) /(( X + Y ) + (A + B)) ', {
'X': image.select('B11'), //swir1
'Y': image.select('B4'), //red
'A': image.select('B8'), // nir
'B': image.select('B2'), // blue
}).rename('bsi');
return image.addBands(ndvi).addBands(ndbi).addBands(mndwi).addBands(bsi)
}
var composite = addIndices(composite);
var elev = alos.select('AVE_DSM').divide(2000).rename('elev');
var slope = ee.Terrain.slope(alos.select('AVE_DSM')).divide(30).rename('slope');
var composite = composite.addBands(elev).addBands(slope);
var visParams = {bands: ['B4', 'B3', 'B2'], min: 0, max: 0.3, gamma: 1.2};
Map.addLayer(composite, visParams, 'RGB');
// Add a random column and split the GCPs into training and validation set
var gcp = gcp.randomColumn()
// This being a simpler classification, we take 60% of the points
// for training and 40% for validation. The normal recommended
// ratio is 70% training, 30% validation
var trainingGcp = gcp.filter(ee.Filter.lt('random', 0.6));
var validationGcp = gcp.filter(ee.Filter.gte('random', 0.6));
Map.addLayer(validationGcp)
// Overlay the point on the image to get training data.
var training = composite.sampleRegions({
collection: trainingGcp,
properties: ['landcover'],
scale: 10,
tileScale: 16
});
print(training)
// Train a classifier.
var classifier = ee.Classifier.smileRandomForest(50)
.train({
features: training,
classProperty: 'landcover',
inputProperties: composite.bandNames()
});
//**************************************************************************
// Hyperparameter Tuning
//**************************************************************************
// Run .explain() to see what the classifer looks like
print(classifier.explain())
var test = composite.sampleRegions({
collection: validationGcp,
properties: ['landcover'],
scale: 10,
tileScale: 16
});
// Tune the numberOfTrees parameter.
var numTreesList = ee.List.sequence(10, 150, 10);
var accuracies = numTreesList.map(function(numTrees) {
var classifier = ee.Classifier.smileRandomForest(numTrees)
.train({
features: training,
classProperty: 'landcover',
inputProperties: composite.bandNames()
});
// Here we are classifying a table instead of an image
// Classifiers work on both images and tables
return test
.classify(classifier)
.errorMatrix('landcover', 'classification')
.accuracy();
});
var chart = ui.Chart.array.values({
  array: ee.Array(accuracies),
  axis: 0,
  xLabels: numTreesList
}).setOptions({
  title: 'Hyperparameter Tuning for the numberOfTrees Parameter',
  vAxis: {title: 'Validation Accuracy'},
  hAxis: {title: 'Number of Trees', gridlines: {count: 15}}
});
print(chart)
Supervised Classification Output
When working with complex classifiers over large regions, you may get a 'User memory limit exceeded' or 'Computation timed out' error in the Code Editor. The reason for this is that there is a fixed time limit and a smaller memory allocation for code that is run in the On-Demand Computation mode. For larger computations, you can use the Batch mode with the Export functions. Exports run in the background and can run longer than the 5-minute time allocated to computation code run from the Code Editor. This allows you to process very large and complex datasets. Here’s an example showing how to export your classification results to Google Drive.
We can only export Images or FeatureCollections. What if you wanted to export a number that is the result of a long computation? A useful hack is to create a FeatureCollection with just 1 feature containing a null geometry and a property containing the number you want to export.
var s2 = ee.ImageCollection("COPERNICUS/S2_SR");
var basin = ee.FeatureCollection("WWF/HydroSHEDS/v1/Basins/hybas_7");
var gcp = ee.FeatureCollection("users/ujavalgandhi/e2e/arkavathy_gcps");
var alos = ee.Image("JAXA/ALOS/AW3D30/V2_2");
var arkavathy = basin.filter(ee.Filter.eq('HYBAS_ID', 4071139640))
var boundary = arkavathy.geometry()
var s2 = ee.ImageCollection("COPERNICUS/S2_SR")
var rgbVis = {
min: 0.0,
max: 3000,
bands: ['B4', 'B3', 'B2'],
};

// Function to remove cloud and snow pixels from Sentinel-2 SR image
function maskCloudAndShadowsSR(image) {
var cloudProb = image.select('MSK_CLDPRB');
var snowProb = image.select('MSK_SNWPRB');
var cloud = cloudProb.lt(10);
var scl = image.select('SCL');
var shadow = scl.eq(3); // 3 = cloud shadow
var cirrus = scl.eq(10); // 10 = cirrus
// Cloud probability less than 10% or cloud shadow classification
var mask = cloud.and(cirrus.neq(1)).and(shadow.neq(1));
return image.updateMask(mask).divide(10000);
}
var filtered = s2
.filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 30))
.filter(ee.Filter.date('2019-01-01', '2019-12-31'))
.filter(ee.Filter.bounds(boundary))
.map(maskCloudAndShadowsSR)
.select('B.*')
var composite = filtered.median().clip(boundary)
var addIndices = function(image) {
var ndvi = image.normalizedDifference(['B8', 'B4']).rename(['ndvi']);
var ndbi = image.normalizedDifference(['B11', 'B8']).rename(['ndbi']);
var mndwi = image.normalizedDifference(['B3', 'B11']).rename(['mndwi']);
var bsi = image.expression(
'(( X + Y ) - (A + B)) /(( X + Y ) + (A + B)) ', {
'X': image.select('B11'), //swir1
'Y': image.select('B4'), //red
'A': image.select('B8'), // nir
'B': image.select('B2'), // blue
}).rename('bsi');
return image.addBands(ndvi).addBands(ndbi).addBands(mndwi).addBands(bsi)
}
var composite = addIndices(composite);
var elev = alos.select('AVE_DSM').divide(2000).rename('elev');
var slope = ee.Terrain.slope(alos.select('AVE_DSM')).divide(30).rename('slope');
var composite = composite.addBands(elev).addBands(slope);
var visParams = {bands: ['B4', 'B3', 'B2'], min: 0, max: 0.3, gamma: 1.2};
Map.addLayer(composite, visParams, 'RGB');
// Add a random column and split the GCPs into training and validation set
var gcp = gcp.randomColumn()
// This being a simpler classification, we take 60% of the points
// for training and 40% for validation. The normal recommended
// ratio is 70% training, 30% validation
var trainingGcp = gcp.filter(ee.Filter.lt('random', 0.6));
var validationGcp = gcp.filter(ee.Filter.gte('random', 0.6));
Map.addLayer(validationGcp)
// Overlay the point on the image to get training data.
var training = composite.sampleRegions({
collection: trainingGcp,
properties: ['landcover'],
scale: 10,
tileScale: 16
});
print(training)
// Train a classifier.
var classifier = ee.Classifier.smileRandomForest(10)
.train({
features: training,
classProperty: 'landcover',
inputProperties: composite.bandNames()
});
// Classify the image.
var classified = composite.classify(classifier);
Map.addLayer(classified, {min: 0, max: 3, palette: ['gray', 'brown', 'blue', 'green']}, '2019');
// Use classification map to assess accuracy using the validation fraction
// of the overall training set created above.
var test = classified.sampleRegions({
collection: validationGcp,
properties: ['landcover'],
scale: 10,
tileScale: 16
});
var testConfusionMatrix = test.errorMatrix('landcover', 'classification')
// Printing of confusion matrix may time out. Alternatively, you can export it as CSV
print('Confusion Matrix', testConfusionMatrix);
print('Test Accuracy', testConfusionMatrix.accuracy());
//**************************************************************************
// Exporting Results
//**************************************************************************
// Create a Feature with null geometry and the value we want to export.
var fc = ee.FeatureCollection([
  ee.Feature(null, {'accuracy': testConfusionMatrix.accuracy()})
])
Export.table.toDrive({
  collection: fc,
description: 'Accuracy_Export',
folder: 'earthengine',
fileNamePrefix: 'accuracy',
fileFormat: 'CSV'
})
Export.image.toDrive({
  image: classified.visualize({min: 0, max: 3, palette: ['gray', 'brown', 'blue', 'green']}),
description: 'Classified_image',
folder: 'earthengine',
fileNamePrefix: 'classified',
region: boundary,
scale: 20,
maxPixels: 1e10,
})
Export.image.toDrive({
  image: composite.visualize(visParams),
description: 'Composite',
folder: 'earthengine',
fileNamePrefix: 'composite',
region: boundary,
scale: 20,
maxPixels: 1e10,
})
Now that we have the results of our classification, we will learn how to calculate the area of pixels in each class. Calculating area for features is done using the area() function and for images using the ee.Image.pixelArea() function. The ee.Image.pixelArea() function creates an image where each pixel’s value is the area of the pixel. We multiply this pixel area image with our image and sum up the areas using the reduceRegion() function.
var classified = ee.Image("users/ujavalgandhi/e2e/bangalore_classified");
var bangalore = ee.FeatureCollection("users/ujavalgandhi/public/bangalore_boundary");
Map.addLayer(bangalore, {color: 'blue'}, 'Bangalore City')
Map.addLayer(classified,
  {min: 0, max: 3, palette: ['gray', 'brown', 'blue', 'green']},
  'Classified Image 2019');
// Calling .geometry() on a feature collection gives the
// dissolved geometry of all features in the collection
// .area() function calculates the area in square meters
var cityArea = bangalore.geometry().area()
// We can cast the result to a ee.Number() and calculate the
// area in square kilometers
var cityAreaSqKm = ee.Number(cityArea).divide(1e6).round()
print(cityAreaSqKm)
// Area Calculation for Images
var vegetation = classified.eq(3)
// If the image contains values 0 or 1, we can calculate the
// total area using reduceRegion() function
// The result of .eq() operation is a binary image with pixels
// values of 1 where the condition matched and 0 where it didn't
Map.addLayer(vegetation, {min:0, max:1, palette: ['white', 'green']}, 'Green Cover')
// Since our image has only 0 and 1 pixel values, the vegetation
// pixels will have values equal to their area
var areaImage = vegetation.multiply(ee.Image.pixelArea())
// Now that each pixel for vegetation class in the image has the value
// equal to its area, we can sum up all the values in the region
// to get the total green cover.
var area = areaImage.reduceRegion({
reducer: ee.Reducer.sum(),
geometry: bangalore.geometry(),
scale: 10,
maxPixels: 1e10
})
// The result of the reduceRegion() function is a dictionary with the key
// being the band name. We can extract the area number and convert it to
// square kilometers
var vegetationAreaSqKm = ee.Number(area.get('classification')).divide(1e6).round()
print(vegetationAreaSqKm)
Calculating Green Cover from Classified Image
var classified = ee.Image("users/ujavalgandhi/e2e/bangalore_classified");
var bangalore = ee.FeatureCollection("users/ujavalgandhi/public/bangalore_boundary");
var cityArea = bangalore.geometry().area()
var cityAreaSqKm = ee.Number(cityArea).divide(1e6).round()
var vegetation = classified.eq(3)
var areaImage = vegetation.multiply(ee.Image.pixelArea())
var area = areaImage.reduceRegion({
reducer: ee.Reducer.sum(),
geometry: bangalore.geometry(),
scale: 10,
maxPixels: 1e10
})
var vegetationAreaSqKm = ee.Number(area.get('classification')).divide(1e6).round()
// Exercise
// Print the percentage green cover of the city
We learnt how to calculate the area for a single class. But typically, when you have a classified image, you want to compute the area covered by each class. The process is similar to the one above, but uses a Grouped Reducer.
var classified = ee.Image("users/ujavalgandhi/e2e/bangalore_classified");
var bangalore = ee.FeatureCollection("users/ujavalgandhi/public/bangalore_boundary");
Map.addLayer(classified, {min: 0, max: 3, palette: ['gray', 'brown', 'blue', 'green']}, '2019');
// Create a 2 band image with the area image and the classified image
// Divide the area image by 1e6 so area results are in Sq Km
var areaImage = ee.Image.pixelArea().divide(1e6).addBands(classified);
// Calculate Area by Class
// Using a Grouped Reducer
var areas = areaImage.reduceRegion({
reducer: ee.Reducer.sum().group({
groupField: 1,
groupName: 'classification',
  }),
  geometry: bangalore,
scale: 100,
tileScale: 4,
maxPixels: 1e10
});
var classAreas = ee.List(areas.get('groups'))
print(classAreas)
var areaChart = ui.Chart.image.byClass({
image: areaImage,
classBand: 'classification',
region: bangalore,
scale: 100,
reducer: ee.Reducer.sum(),
classLabels: ['urban', 'bare', 'water', 'vegetation'],
}).setOptions({
  hAxis: {title: 'Classes'},
vAxis: {title: 'Area Km^2'},
title: 'Area by class',
series: {
0: { color: 'gray' },
1: { color: 'brown' },
2: { color: 'blue' },
3: { color: 'green' }
  }
});
print(areaChart);
Many earth observation datasets are available at regular intervals over long periods of time. This enables us to detect changes on the Earth’s surface. Change detection techniques in remote sensing fall into several categories, including direct classification of change and post-classification comparison.
In the following sections, we will apply supervised classification techniques for change detection to detect urban growth.
This technique of change detection is also known as One-pass Classification or Direct Multi-date Classification. Here we create a single stacked image containing bands from the before and after images. We train a classifier with training data sampled from the stacked image and apply the classifier on the stacked image to find all changed pixels.
var bangalore = ee.FeatureCollection('users/ujavalgandhi/public/bangalore_boundary');
var change = ee.FeatureCollection('users/ujavalgandhi/e2e/bangalore_change_gcps');
var nochange = ee.FeatureCollection('users/ujavalgandhi/e2e/bangalore_nochange_gcps')
var s2 = ee.ImageCollection("COPERNICUS/S2")
var rgbVis = {
min: 0.0,
max: 3000,
bands: ['B4', 'B3', 'B2'],
};
// Write a function for Cloud masking
function maskS2clouds(image) {
var qa = image.select('QA60')
var cloudBitMask = 1 << 10;
var cirrusBitMask = 1 << 11;
var mask = qa.bitwiseAnd(cloudBitMask).eq(0).and(
    qa.bitwiseAnd(cirrusBitMask).eq(0))
  return image.updateMask(mask)//.divide(10000)
.select("B.*")
.copyProperties(image, ["system:time_start"])
}
var filtered = s2
.filter(ee.Filter.date('2019-01-01', '2019-02-01'))
.filter(ee.Filter.bounds(bangalore))
.map(maskS2clouds)
var image2019 = filtered.median().clip(bangalore)
// Display the input composite.
Map.addLayer(image2019, rgbVis, '2019');
var filtered = s2
.filter(ee.Filter.date('2020-01-01', '2020-02-01'))
.filter(ee.Filter.bounds(bangalore))
.map(maskS2clouds)
var image2020 = filtered.median().clip(bangalore)
Map.addLayer(image2020, rgbVis, '2020');
var stackedImage = image2019.addBands(image2020)
// Overlay the point on the image to get training data.
var training = stackedImage.sampleRegions({
collection: change.merge(nochange),
properties: ['class'],
scale: 10
});
// Train a classifier.
var classifier = ee.Classifier.smileRandomForest(50).train({
features: training,
classProperty: 'class',
inputProperties: stackedImage.bandNames()
});
// Classify the image.
var classified = stackedImage.classify(classifier);
Map.addLayer(classified, {min: 0, max: 1, palette: ['white', 'red']}, 'change');
All pixels that changed from bare ground to built-up
var bangalore = ee.FeatureCollection('users/ujavalgandhi/public/bangalore_boundary');
var change = ee.FeatureCollection('users/ujavalgandhi/e2e/bangalore_change_gcps');
var nochange = ee.FeatureCollection('users/ujavalgandhi/e2e/bangalore_nochange_gcps')
var s2 = ee.ImageCollection("COPERNICUS/S2")
Map.centerObject(bangalore)
var rgbVis = {
min: 0.0,
max: 3000,
bands: ['B4', 'B3', 'B2'],
};
// Write a function for Cloud masking
function maskS2clouds(image) {
var qa = image.select('QA60')
var cloudBitMask = 1 << 10;
var cirrusBitMask = 1 << 11;
var mask = qa.bitwiseAnd(cloudBitMask).eq(0).and(
    qa.bitwiseAnd(cirrusBitMask).eq(0))
  return image.updateMask(mask)//.divide(10000)
.select("B.*")
.copyProperties(image, ["system:time_start"])
}
var filtered = s2
.filter(ee.Filter.date('2019-01-01', '2019-02-01'))
.filter(ee.Filter.bounds(bangalore))
.map(maskS2clouds)
var image2019 = filtered.median().clip(bangalore)
// Display the input composite.
Map.addLayer(image2019, rgbVis, '2019');
var filtered = s2
.filter(ee.Filter.date('2020-01-01', '2020-02-01'))
.filter(ee.Filter.bounds(bangalore))
.map(maskS2clouds)
var image2020 = filtered.median().clip(bangalore)
Map.addLayer(image2020, rgbVis, '2020');
// Exercise
// Let's add an NDBI band that can improve the detection
var addNDBI = function(image) {
var ndbi = image.normalizedDifference(['B11', 'B8']).rename(['ndbi']);
return image.addBands(ndbi)
}
// use addNDBI() function to add the NDBI band to both 2019 and 2020 composite images
// Hint1: You can save the resulting image in the same variable to avoid changing
// a lot of code.
// var image = addNDBI(image)
var stackedImage = image2019.addBands(image2020)
// Overlay the point on the image to get training data.
var training = stackedImage.sampleRegions({
collection: change.merge(nochange),
properties: ['class'],
scale: 10
});
// Train a classifier.
var classifier = ee.Classifier.smileRandomForest(50).train({
features: training,
classProperty: 'class',
inputProperties: stackedImage.bandNames()
});
// Classify the image.
var classified = stackedImage.classify(classifier);
Map.addLayer(classified, {min: 0, max: 1, palette: ['white', 'red']}, 'change');
When dealing with multi-class images, a useful metric for change detection is knowing how many pixels from class X changed to class Y. This can be accomplished using the ee.Reducer.frequencyHistogram()
reducer as shown below.
var bangalore = ee.FeatureCollection("users/ujavalgandhi/public/bangalore_boundary")
var urban = ee.FeatureCollection("users/ujavalgandhi/e2e/urban_gcps")
var bare = ee.FeatureCollection("users/ujavalgandhi/e2e/bare_gcps")
var water = ee.FeatureCollection("users/ujavalgandhi/e2e/water_gcps")
var vegetation = ee.FeatureCollection("users/ujavalgandhi/e2e/vegetation_gcps")
var s2 = ee.ImageCollection("COPERNICUS/S2_SR")
Map.centerObject(bangalore)
var rgbVis = {
min: 0.0,
max: 3000,
bands: ['B4', 'B3', 'B2'],
};
var filtered = s2
.filter(ee.Filter.date('2019-01-01', '2019-01-31'))
.filter(ee.Filter.bounds(bangalore))
.select('B.*')
var before = filtered.median().clip(bangalore)
// Display the input composite.
Map.addLayer(before, rgbVis, 'before');
var training = urban.merge(bare).merge(water).merge(vegetation)
// Overlay the point on the image to get training data.
var training = before.sampleRegions({
collection: training,
properties: ['landcover'],
scale: 10
});
// Train a classifier.
var classifier = ee.Classifier.smileRandomForest(50).train({
features: training,
classProperty: 'landcover',
inputProperties: before.bandNames()
});
// Classify the image.
var beforeClassified = before.classify(classifier);
Map.addLayer(beforeClassified,
  {min: 0, max: 3, palette: ['gray', 'brown', 'blue', 'green']}, 'before_classified');
// 2020 Jan
var after = s2
.filter(ee.Filter.date('2020-01-01', '2020-01-31'))
.filter(ee.Filter.bounds(bangalore))
.select('B.*')
.median()
.clip(bangalore)
Map.addLayer(after, rgbVis, 'after');
// Classify the image.
var afterClassified = after.classify(classifier);
Map.addLayer(afterClassified,
  {min: 0, max: 3, palette: ['gray', 'brown', 'blue', 'green']}, 'after_classified');
// Reclassify from 0-3 to 1-4
var beforeClasses = beforeClassified.remap([0, 1, 2, 3], [1, 2, 3, 4])
var afterClasses = afterClassified.remap([0, 1, 2, 3], [1, 2, 3, 4])
// Show all changed areas
var changed = afterClasses.subtract(beforeClasses).neq(0)
Map.addLayer(changed, {min:0, max:1, palette: ['white', 'red']}, 'Change')
// We multiply the before image by 100 and add the after image.
// The resulting pixel values will be unique and will represent each
// unique transition, i.e. 102 is urban to bare, 103 is urban to water, etc.
var merged = beforeClasses.multiply(100).add(afterClasses).rename('transitions')
// Use a frequencyHistogram to get a pixel count per class
var transitionMatrix = merged.reduceRegion({
reducer: ee.Reducer.frequencyHistogram(),
geometry: bangalore,
maxPixels: 1e10,
scale:10,
tileScale: 16
})
// This prints the number of pixels for each class transition
print(transitionMatrix.get('transitions'))
var bangalore = ee.FeatureCollection("users/ujavalgandhi/public/bangalore_boundary")
var urban = ee.FeatureCollection("users/ujavalgandhi/e2e/urban_gcps")
var bare = ee.FeatureCollection("users/ujavalgandhi/e2e/bare_gcps")
var water = ee.FeatureCollection("users/ujavalgandhi/e2e/water_gcps")
var vegetation = ee.FeatureCollection("users/ujavalgandhi/e2e/vegetation_gcps")
var s2 = ee.ImageCollection("COPERNICUS/S2_SR")
Map.centerObject(bangalore)
var rgbVis = {
min: 0.0,
max: 3000,
bands: ['B4', 'B3', 'B2'],
};
var filtered = s2
.filter(ee.Filter.date('2019-01-01', '2019-01-31'))
.filter(ee.Filter.bounds(bangalore))
.select('B.*')
var before = filtered.median().clip(bangalore)
// Display the input composite.
Map.addLayer(before, rgbVis, 'before');
var training = urban.merge(bare).merge(water).merge(vegetation)
// Overlay the point on the image to get training data.
var training = before.sampleRegions({
collection: training,
properties: ['landcover'],
scale: 10
});
// Train a classifier.
var classifier = ee.Classifier.smileRandomForest(50).train({
features: training,
classProperty: 'landcover',
inputProperties: before.bandNames()
});
// Classify the image.
var beforeClassified = before.classify(classifier);
Map.addLayer(beforeClassified,
  {min: 0, max: 3, palette: ['gray', 'brown', 'blue', 'green']}, 'before_classified');
// 2020 Jan
var after = s2
.filter(ee.Filter.date('2020-01-01', '2020-01-31'))
.filter(ee.Filter.bounds(bangalore))
.select('B.*')
.median()
.clip(bangalore)
Map.addLayer(after, rgbVis, 'after');
// Classify the image.
var afterClassified = after.classify(classifier);
Map.addLayer(afterClassified,
  {min: 0, max: 3, palette: ['gray', 'brown', 'blue', 'green']}, 'after_classified');
// Reclassify from 0-3 to 1-4
var beforeClasses = beforeClassified.remap([0, 1, 2, 3], [1, 2, 3, 4])
var afterClasses = afterClassified.remap([0, 1, 2, 3], [1, 2, 3, 4])
// Show all changed areas
var changed = afterClasses.subtract(beforeClasses).neq(0)
// Exercise
// Show all areas where water became other classes and display the result
// Hint1: Select class 3 pixels from before image and NOT class 3 pixels from after image
// Hint2: use the .and() operation to select pixels matching both conditions
Lost water pixels between 2019 and 2020
Below are step-by-step, video-based walkthroughs of implementing real-world projects using Earth Engine. You can continue your learning journey by implementing these projects for your own region of interest after the class.
The project scripts are shared via the repository users/ujavalgandhi/End-to-End-Projects, which will appear in the Scripts tab in the Reader section.
Calculating Rainfall Deviation from the 30-year mean using CHIRPS Gridded Rainfall Data
Extracting a 10-year NDVI time-series over multiple polygons using MODIS data.
This course material is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License. You are free to use the material for any non-commercial purpose. Kindly give appropriate credit to the original author.
© 2020 Ujaval Gandhi www.spatialthoughts.com
This course is offered as an instructor-led online class. Visit Spatial Thoughts to know details of upcoming sessions.