Introduction

GIS and Remote Sensing play a critical role in the management of water resources. Many practitioners in this field are constrained by the availability of tools and computing resources needed to use these techniques effectively. Recent advances in cloud computing have given rise to platforms such as Google Earth Engine, which provide free access to a large pool of computational resources and datasets. This course is designed for researchers in the water sector, academicians, water managers, and stakeholders with basic knowledge of Remote Sensing. It will enable them to leverage the platform for water resource management applications.

View Presentation

View the Presentation ↗

Setting up the Environment

Sign-up for Google Earth Engine

If you already have a Google Earth Engine account, you can skip this step.

Visit https://signup.earthengine.google.com/ and sign up with your Google account. You can use your existing Gmail account to sign up. Approval usually takes 1-2 days, so do this step as soon as possible.

Tips to ensure a smooth sign-up process:

  • Use Google Chrome browser.
  • When signing up for Earth Engine, please log out of all Google accounts and ensure you are logged into only the one account you want associated with Earth Engine.
  • Access to Google Earth Engine is granted via Google Groups. The default settings should be fine, but verify you have the correct setting enabled.
    • Visit groups.google.com
    • Click on Settings (gear icon) and select Global Settings.
    • Make sure the option Allow group managers to add me to their groups is checked.

Get the Course Materials

The course material and exercises are in the form of Earth Engine scripts shared via a code repository.

  1. Click this link to open the Google Earth Engine Code Editor and add the repository to your account.
  2. If successful, you will have a new repository named users/ujavalgandhi/GEE-Water-Resources-Management in the Scripts tab under the Reader section.
  3. Verify that your Code Editor looks like the screenshot below.

If you do not see the repository in the Reader section, refresh your browser tab and it will show up.

Code Editor After Adding the Class Repository

Module 1: Google Earth Engine Fundamentals

In Module 1, you will gain the essential skills to find datasets, filter them to your region, clip them to your boundary, calculate remote sensing indices, extract water-bodies using binary thresholding, and export raster data.

01. Hello World

This script introduces basic JavaScript syntax, and the video covers the programming concepts you need when using Earth Engine. To learn more, visit the Introduction to JavaScript for Earth Engine section of the Earth Engine User Guide.

Video

The Code Editor is an Integrated Development Environment (IDE) for the Earth Engine JavaScript API. It offers an easy way to type, debug, run, and manage code. Type the code below and click Run to execute it and see the output in the Console tab.

Tip: You can use the keyboard shortcut Ctrl+Enter to run the code in the Code Editor.

Hello World

print('Hello World');

// Variables
var city = 'Bengaluru';
var country = 'India';
print(city, country);

var population = 8400000;
print(population);
 
// List
var majorCities = ['Mumbai', 'Delhi', 'Chennai', 'Kolkata'];
print(majorCities);

// Dictionary
var cityData = {
  'city': city,
  'population': 8400000,
  'elevation': 930
};
print(cityData);

// Function
var greet = function(name) {
    return 'Hello ' + name;
};
print(greet('World'));

// This is a comment

Open in Code Editor ↗

Exercise

// These are the 5 largest cities in the world: 
// Tokyo, Delhi, Shanghai, Mexico City, Sao Paulo

// Create a list named 'largeCities'
// The list should have names of all the above cities
// Print the list 

Try in Code Editor ↗

Saving Your Work

When you modify any script from the course repository, you may want to save a copy for yourself. If you click the Save button, you will get an error message like the one below.

This is because the shared class repository is read-only. You can click Yes to save a copy in your own repository. If this is the first time you are using Earth Engine, you will be prompted to choose the name of your home folder. Choose the name carefully, as it cannot be changed once created.

02. Working with Image Collections

Most datasets in Earth Engine come as an ImageCollection. An ImageCollection is a dataset that consists of images taken at different times and locations, usually from the same satellite or data provider. You can load a collection by searching the Earth Engine Data Catalog for the ImageCollection ID. Search for the Sentinel-2 Level 2A dataset and you will find its ID, COPERNICUS/S2_SR. Visit the Sentinel-2, Level 2A page and see the Explore in Earth Engine section to find the code snippet to load and visualize the collection. This snippet is a great starting point for your work with this dataset. Click the Copy Code Sample button and paste the code into the Code Editor. Click Run and you will see the image tiles load in the map.

In the code snippet, you will see the function Map.setCenter(), which sets the viewport to a specific location and zoom level. The function takes an X coordinate (longitude), a Y coordinate (latitude), and a zoom level ranging from 1 (whole world) to 22 (pixel level). Replace the X and Y coordinates with the coordinates of your city and click Run to see the images of your city.

/**
 * Function to mask clouds using the Sentinel-2 QA band
 * @param {ee.Image} image Sentinel-2 image
 * @return {ee.Image} cloud masked Sentinel-2 image
 */
function maskS2clouds(image) {
  var qa = image.select('QA60');

  // Bits 10 and 11 are clouds and cirrus, respectively.
  var cloudBitMask = 1 << 10;
  var cirrusBitMask = 1 << 11;

  // Both flags should be set to zero, indicating clear conditions.
  var mask = qa.bitwiseAnd(cloudBitMask).eq(0)
      .and(qa.bitwiseAnd(cirrusBitMask).eq(0));

  return image.updateMask(mask).divide(10000);
}

var dataset = ee.ImageCollection('COPERNICUS/S2_SR')
                  .filterDate('2020-01-01', '2020-01-30')
                  // Pre-filter to get less cloudy granules.
                  .filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE',20))
                  .map(maskS2clouds);

var visualization = {
  min: 0.0,
  max: 0.3,
  bands: ['B4', 'B3', 'B2'],
};

Map.setCenter(77.5925, 12.9407, 12);

Map.addLayer(dataset.mean(), visualization, 'RGB');

Open in Code Editor ↗

Exercise

/**
 * Function to mask clouds using the Sentinel-2 QA band
 * @param {ee.Image} image Sentinel-2 image
 * @return {ee.Image} cloud masked Sentinel-2 image
 */
function maskS2clouds(image) {
  var qa = image.select('QA60');

  // Bits 10 and 11 are clouds and cirrus, respectively.
  var cloudBitMask = 1 << 10;
  var cirrusBitMask = 1 << 11;

  // Both flags should be set to zero, indicating clear conditions.
  var mask = qa.bitwiseAnd(cloudBitMask).eq(0)
      .and(qa.bitwiseAnd(cirrusBitMask).eq(0));

  return image.updateMask(mask).divide(10000);
}

var dataset = ee.ImageCollection('COPERNICUS/S2_SR')
                  .filterDate('2020-01-01', '2020-01-30')
                  // Pre-filter to get less cloudy granules.
                  .filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE',20))
                  .map(maskS2clouds);

var visualization = {
  min: 0.0,
  max: 0.3,
  bands: ['B4', 'B3', 'B2'],
};

Map.setCenter(77.5925, 12.9407, 12);

Map.addLayer(dataset.mean(), visualization, 'RGB');

Try in Code Editor ↗

03. Filtering Image Collections

A collection contains all the imagery ever captured by the sensor, and the entire collection is rarely useful; most applications require a subset of the images. We use filters to select the appropriate images. There are many types of filter functions; look at the ee.Filter module to see all available filters. Select a filter and then call the filter() function with the filter parameters.

We will learn about 3 main types of filtering techniques:

  • Filter by metadata: You can apply a filter on the image metadata using filters such as ee.Filter.eq(), ee.Filter.lt() etc. You can filter by PATH/ROW values, Orbit number, Cloud cover etc.
  • Filter by date: You can select images in a particular date range using filters such as ee.Filter.date().
  • Filter by location: You can select the subset of images with a bounding box, location or geometry using the ee.Filter.bounds(). You can also use the drawing tools to draw a geometry for filtering.

After applying the filters, you can use the .size() function to check how many images match the filter criteria.

var geometry = ee.Geometry.Point([77.60412933051538, 12.952912912328241])
Map.centerObject(geometry, 10)

var s2 = ee.ImageCollection("COPERNICUS/S2_SR");

// Filter by metadata
var filtered = s2.filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 30))

// Filter by date
var filtered = s2.filter(ee.Filter.date('2019-01-01', '2020-01-01'))

// Filter by location
var filtered = s2.filter(ee.Filter.bounds(geometry))

// Let's apply all the 3 filters together on the collection

// First apply the metadata filter
var filtered1 = s2.filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 30))
// Apply date filter on the results
var filtered2 = filtered1.filter(
  ee.Filter.date('2019-01-01', '2020-01-01'))
// Lastly apply the location filter
var filtered3 = filtered2.filter(ee.Filter.bounds(geometry))

// Instead of applying filters one after the other, we can 'chain' them
// Use the . notation to apply all the filters together
var filtered = s2.filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 30))
  .filter(ee.Filter.date('2019-01-01', '2020-01-01'))
  .filter(ee.Filter.bounds(geometry))
  
print(filtered.size())

var rgbVis = {
  min: 0.0,
  max: 3000,
  bands: ['B4', 'B3', 'B2'],
};
Map.addLayer(filtered, rgbVis, 'Filtered') 

Open in Code Editor ↗

Exercise

// Add one more filter in the script below to select images
// from only one of the satellites - Sentinel-2A - from the
// Sentinel-2 constellation

// Hint1: Use the 'SPACECRAFT_NAME' property
// Hint2: Use the ee.Filter.eq() filter

var geometry = ee.Geometry.Point([77.60412933051538, 12.952912912328241])
Map.centerObject(geometry, 10)

var s2 = ee.ImageCollection("COPERNICUS/S2_SR");

var filtered = s2.filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 30))
  .filter(ee.Filter.date('2019-01-01', '2020-01-01'))
  .filter(ee.Filter.bounds(geometry))
print(filtered.size())

Try in Code Editor ↗

04. Mosaics and Composites

The default order of the collection is by date, so when you display the collection, it implicitly creates a mosaic with the latest pixels on top. You can call .mosaic() on an ImageCollection to create a mosaic image from the pixels at the top (i.e., the first image in the stack).

We can also create a composite image by applying a selection criterion to the stack of pixels at each location. Here we use the .median() function to create a composite where each output pixel is the median of all pixel values in the stack, computed for each band.

Tip: If you need to create a mosaic where the images are in a specific order, you can use the .sort() function to sort your collection by a property first.
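The tip above can be sketched as follows (a minimal sketch reusing the filtered Sentinel-2 collection from this section; note that mosaic() places later images in the collection on top, so sorting in descending order of cloud cover puts the least cloudy pixels on top):

```javascript
var geometry = ee.Geometry.Point([77.60412933051538, 12.952912912328241]);
var s2 = ee.ImageCollection("COPERNICUS/S2_SR");

var filtered = s2.filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 30))
  .filter(ee.Filter.date('2019-01-01', '2020-01-01'))
  .filter(ee.Filter.bounds(geometry));

// Sort descending by cloud cover so the least cloudy image is last.
// mosaic() puts later images on top, so the clearest pixels win.
var sortedMosaic = filtered
  .sort('CLOUDY_PIXEL_PERCENTAGE', false)
  .mosaic();

var rgbVis = {min: 0, max: 3000, bands: ['B4', 'B3', 'B2']};
Map.centerObject(geometry, 10);
Map.addLayer(sortedMosaic, rgbVis, 'Sorted Mosaic');
```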

Mosaic vs. Median Composite

var geometry = ee.Geometry.Point([77.60412933051538, 12.952912912328241])
var s2 = ee.ImageCollection("COPERNICUS/S2_SR");

var rgbVis = {
  min: 0.0,
  max: 3000,
  bands: ['B4', 'B3', 'B2'],
};
var filtered = s2.filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 30))
  .filter(ee.Filter.date('2019-01-01', '2020-01-01'))
  .filter(ee.Filter.bounds(geometry))
 
var mosaic = filtered.mosaic() 
 
var medianComposite = filtered.median();

Map.centerObject(geometry, 10)
Map.addLayer(filtered, rgbVis, 'Filtered Collection');
Map.addLayer(mosaic, rgbVis, 'Mosaic');
Map.addLayer(medianComposite, rgbVis, 'Median Composite')

Open in Code Editor ↗

Exercise

// Create a median composite for the year 2020 and load it to the map

// Compare both the composites to see the changes in your city

Try in Code Editor ↗

05. Feature Collection

Feature Collections are similar to Image Collections - but they contain Features, not images. They are equivalent to Vector Layers in a GIS. We can load, filter and display Feature Collections using similar techniques that we have learned so far.

Search for GAUL Second Level Administrative Boundaries and load the collection. This is a global collection that contains all Admin2 boundaries. We can apply a filter using the ADM1_NAME property to get all Admin2 boundaries (i.e. Districts) from a state.

var admin2 = ee.FeatureCollection("FAO/GAUL_SIMPLIFIED_500m/2015/level2");

var karnataka = admin2.filter(ee.Filter.eq('ADM1_NAME', 'Karnataka'))

var visParams = {'color': 'red'}
Map.addLayer(karnataka, visParams, 'Karnataka Districts')

Open in Code Editor ↗

Exercise

// Add the admin2 layer to the map using Map.addLayer() function
// Go to your home city and inspect the layer to find the name of the region
// Use the ADM2_NAME property and apply a filter
// Display only the selected polygon on the map.

Try in Code Editor ↗

06. Clipping

It is often desirable to clip images to your area of interest. You can use the clip() function to mask an image using a geometry.

While in desktop software clipping is used to remove the unnecessary portion of a large image and save computation time, in Earth Engine clipping can actually increase computation time. As described in the Earth Engine Coding Best Practices guide, avoid clipping, or do it at the very end of your process.

Full Image vs. Clipped Image

var admin2 = ee.FeatureCollection("FAO/GAUL_SIMPLIFIED_500m/2015/level2");
var s2 = ee.ImageCollection("COPERNICUS/S2_SR");

var selected = admin2.filter(ee.Filter.eq('ADM2_NAME', 'Bangalore Urban'))

var geometry = selected.geometry()
Map.centerObject(geometry)

var rgbVis = {
  min: 0.0,
  max: 3000,
  bands: ['B4', 'B3', 'B2'], 
};
var filtered = s2.filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 30))
  .filter(ee.Filter.date('2019-01-01', '2020-01-01'))
  .filter(ee.Filter.bounds(geometry))

var image = filtered.median();

var clipped = image.clip(geometry)


Map.addLayer(clipped, rgbVis, 'Clipped')

Open in Code Editor ↗

Exercise

// Add the GAUL admin boundary to the Map
// Search for your city and inspect the ADM2_NAME
// Replace the name with the ADM2_NAME of your selected region
// And display the clipped composite on the map.

Try in Code Editor ↗

07. Calculating Indices

Spectral indices are central to many aspects of remote sensing. For almost all research and applied work, you will need to compute a pixel-wise ratio of 2 or more bands. The most commonly used formula for calculating an index is the normalized difference between 2 bands. Earth Engine provides the helper function normalizedDifference() to calculate normalized indices, such as the Modified Normalized Difference Water Index (MNDWI).

For more complex formulae such as the Automated Water Extraction Index (AWEI), you can use the expression() function to describe the calculation. AWEI is a modern technique for surface water mapping with high accuracy. Learn more about AWEI

RGB, MNDWI, NDVI and AWEI images

var admin2 = ee.FeatureCollection("FAO/GAUL_SIMPLIFIED_500m/2015/level2");
var s2 = ee.ImageCollection("COPERNICUS/S2_SR");

var selected = admin2.filter(ee.Filter.eq('ADM2_NAME', 'Bangalore Urban'))
var geometry = selected.geometry()

var filtered = s2.filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 30))
  .filter(ee.Filter.date('2019-01-01', '2020-01-01'))
  .filter(ee.Filter.bounds(geometry))

var image = filtered.median().clip(geometry); 


// Calculate  Normalized Difference Vegetation Index (NDVI)
// 'NIR' (B8) and 'RED' (B4)
var ndvi = image.normalizedDifference(['B8', 'B4']).rename(['ndvi']);
 
// Calculate Modified Normalized Difference Water Index (MNDWI)
// 'GREEN' (B3) and 'SWIR1' (B11)
var mndwi = image.normalizedDifference(['B3', 'B11']).rename(['mndwi']); 

// Calculate Automated Water Extraction Index (AWEI)
// AWEI is a Multi-band Index that uses the following bands
// 'BLUE' (B2), GREEN' (B3), 'NIR' (B8), 'SWIR1' (B11) and 'SWIR2' (B12)

// Formula for AWEI is as follows
// AWEI = 4 * (GREEN - SWIR1) - (0.25 * NIR + 2.75 * SWIR2)

// For more complex indices, you can use the expression() function

// Note: 
// For the AWEI formula, the pixel values need to be converted to reflectances
// Multiplying the pixel values by the scale factor gives us the reflectance value
// The scale factor is 0.0001 for the Sentinel-2 dataset

var awei = image.expression(
    '4*(GREEN - SWIR1) - (0.25*NIR + 2.75*SWIR2)', {
      'GREEN': image.select('B3').multiply(0.0001),
      'NIR': image.select('B8').multiply(0.0001),
      'SWIR1': image.select('B11').multiply(0.0001),
      'SWIR2': image.select('B12').multiply(0.0001),

}).rename('awei');


var rgbVis = {min: 0.0, max: 3000, bands: ['B4', 'B3', 'B2']};
var ndviVis = {min:0, max:1, palette: ['white', 'green']}
var ndwiVis = {min:0, max:0.5, palette: ['white', 'blue']}

Map.centerObject(geometry, 10)
Map.addLayer(image, rgbVis, 'Image');
Map.addLayer(ndvi, ndviVis, 'ndvi')
Map.addLayer(mndwi, ndwiVis, 'mndwi')
Map.addLayer(awei, ndwiVis, 'awei') 

Open in Code Editor ↗

Exercise

var admin2 = ee.FeatureCollection("FAO/GAUL_SIMPLIFIED_500m/2015/level2");
var s2 = ee.ImageCollection("COPERNICUS/S2_SR");

var selected = admin2.filter(ee.Filter.eq('ADM2_NAME', 'Bangalore Urban'))
var geometry = selected.geometry()

var filtered = s2.filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 30))
  .filter(ee.Filter.date('2019-01-01', '2020-01-01'))
  .filter(ee.Filter.bounds(geometry))

var image = filtered.median().clip(geometry); 

// AWEI is also known as AWEInsh (no shadows)
// AWEInsh works best where there are no shadows in the scene
var awei = image.expression(
    '4*(GREEN - SWIR1) - (0.25*NIR + 2.75*SWIR2)', {
      'GREEN': image.select('B3').multiply(0.0001),
      'NIR': image.select('B8').multiply(0.0001),
      'SWIR1': image.select('B11').multiply(0.0001),
      'SWIR2': image.select('B12').multiply(0.0001),

}).rename('awei_nsh');

// There is another version of AWEI called AWEIsh
// This index is useful where there are mountain or building shadows

// The expression for AWEI (sh) is as follows
// AWEI (sh) = BLUE + 2.5*GREEN - 1.5*(NIR + SWIR1) - 0.25*SWIR2

// Works best where there are shadows but no bright reflective surfaces (snow, roof)
var aweiSh = image.expression(
    'BLUE + 2.5*GREEN - 1.5*(NIR + SWIR1) - 0.25*SWIR2', {
      'BLUE': image.select('B2').multiply(0.0001),
      'GREEN': image.select('B3').multiply(0.0001),
      'NIR': image.select('B8').multiply(0.0001),
      'SWIR1': image.select('B11').multiply(0.0001),
      'SWIR2': image.select('B12').multiply(0.0001),

}).rename('awei_sh');


var rgbVis = {min: 0.0, max: 3000, bands: ['B4', 'B3', 'B2']};
var ndviVis = {min:0, max:1, palette: ['white', 'green']}
var ndwiVis = {min:0, max:0.5, palette: ['white', 'blue']}

Map.centerObject(geometry, 10)
Map.addLayer(image, rgbVis, 'Image');

// Exercise
// Replace the name with the ADM2_NAME of your selected region
// Add both aweiSh and aweiNsh layers to the map and compare the results.

Try in Code Editor ↗

08. Computation on Images

A standard method to extract information from an image is thresholding. We can write conditions for thresholding; the result is a binary image with pixel values 1 or 0. Pixels matching the condition get the value 1, and pixels where the condition fails get 0. The conditions can be written using relational operators such as greater than .gt(), less than .lt(), equal to .eq(), greater than or equal to .gte(), etc. We can also combine conditions on multiple layers using the boolean operators .and() and .or().

RGB and Thresholded images

var admin2 = ee.FeatureCollection("FAO/GAUL_SIMPLIFIED_500m/2015/level2");
var s2 = ee.ImageCollection("COPERNICUS/S2_SR");

var selected = admin2.filter(ee.Filter.eq('ADM2_NAME', 'Bangalore Urban'))
var geometry = selected.geometry()

var filtered = s2.filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 30))
  .filter(ee.Filter.date('2019-01-01', '2020-01-01'))
  .filter(ee.Filter.bounds(geometry))

var image = filtered.median().clip(geometry); 

Map.centerObject(geometry, 10)

var ndvi = image.normalizedDifference(['B8', 'B4']).rename(['ndvi']);
var mndwi = image.normalizedDifference(['B3', 'B11']).rename(['mndwi']); 
var awei = image.expression(
    '4*(GREEN - SWIR1) - (0.25*NIR + 2.75*SWIR2)', {
      'GREEN': image.select('B3').multiply(0.0001),
      'NIR': image.select('B8').multiply(0.0001),
      'SWIR1': image.select('B11').multiply(0.0001),
      'SWIR2': image.select('B12').multiply(0.0001),
}).rename('awei');

// Simple Thresholding
var waterMndwi = mndwi.gt(0)
var waterAwei = awei.gt(0)

// Combining Multiple Conditions
var waterMultiple = ndvi.lt(0).and(mndwi.gt(0))

var rgbVis = {min: 0, max: 3000, bands: ['B4', 'B3', 'B2']};
var waterVis = {min:0, max:1, palette: ['white', 'blue']}

Map.addLayer(image, rgbVis, 'Image');
Map.addLayer(waterMndwi, waterVis, 'MNDWI - Simple threshold')
Map.addLayer(waterAwei, waterVis, 'AWEI - Simple threshold')
Map.addLayer(waterMultiple, waterVis, 'MNDWI and NDVI Threshold')

Open in Code Editor ↗

Exercise

var admin2 = ee.FeatureCollection("FAO/GAUL_SIMPLIFIED_500m/2015/level2");
var s2 = ee.ImageCollection("COPERNICUS/S2_SR");

var selected = admin2.filter(ee.Filter.eq('ADM2_NAME', 'Bangalore Urban'))
var geometry = selected.geometry()

var filtered = s2.filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 30))
  .filter(ee.Filter.date('2019-01-01', '2020-01-01'))
  .filter(ee.Filter.bounds(geometry))

var image = filtered.median().clip(geometry); 

var mndwi = image.normalizedDifference(['B3', 'B11']).rename(['mndwi']); 

// Simple Thresholding
var waterMndwi = mndwi.gt(0)

// Exercise

// Replace the selected region with your choice
// And adjust the MNDWI thresholds to extract the waterbodies
// Try different thresholds and add images to the map to compare
// Hint: Try negative thresholds -0.1 and -0.2
// Hint: Try positive thresholds 0.1 and 0.2

var rgbVis = {min: 0, max: 3000, bands: ['B4', 'B3', 'B2']};
var waterVis = {min:0, max:1, palette: ['black', 'white']}

Map.centerObject(geometry, 10)
Map.addLayer(image, rgbVis, 'Image');
Map.addLayer(waterMndwi, waterVis, 'MNDWI (Threshold 0)')

Try in Code Editor ↗

09. Export

Earth Engine allows exporting both vector and raster data for use in an external program. Vector data can be exported as a CSV or a Shapefile, while rasters can be exported as GeoTIFF files. We will now export the Sentinel-2 composite as a GeoTIFF file.

During the export, we can export either a raw image or a visualized image. The raw image retains the original reflectance value of each pixel. The visualized image is an RGB or grayscale rendering: it will look as it does in the Earth Engine map view, but the band reflectance information is lost. The visualized image should not be used for analysis.

Tip: The Code Editor supports autocompletion of API functions using the key combination Ctrl+Space. Type a few characters of a function name and press Ctrl+Space to see autocomplete suggestions. You can also use the same key combination to fill in all parameters of a function automatically.

var admin2 = ee.FeatureCollection("FAO/GAUL_SIMPLIFIED_500m/2015/level2");
var s2 = ee.ImageCollection("COPERNICUS/S2_SR");

var selected = admin2.filter(ee.Filter.eq('ADM2_NAME', 'Bangalore Urban'))
var geometry = selected.geometry()

var filtered = s2.filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 30))
  .filter(ee.Filter.date('2019-01-01', '2020-01-01'))
  .filter(ee.Filter.bounds(geometry))

var image = filtered.median().clip(geometry); 

var ndvi = image.normalizedDifference(['B8', 'B4']).rename(['ndvi']);
var mndwi = image.normalizedDifference(['B3', 'B11']).rename(['mndwi']); 

var waterMndwi = mndwi.gt(0)

var rgbVis = {min: 0, max: 3000, bands: ['B4', 'B3', 'B2']};
var waterVis = {min:0, max:1, palette: ['black', 'white']}

Map.centerObject(geometry, 10)

Map.addLayer(image, rgbVis, 'Image');
Map.addLayer(waterMndwi, waterVis, 'MNDWI')

var exportImage = image.select(['B4', 'B3', 'B2'])

Export.image.toDrive({
  image: exportImage,
  description: 'Raw_Composite_Image',
  folder: 'earthengine',
  fileNamePrefix: 'composite',
  region: geometry,
  scale: 10,
  maxPixels: 1e10})

var visualizedImage = image.visualize(rgbVis)

Export.image.toDrive({
  image: visualizedImage,
  description: 'Visualized_Composite_Image',
  folder: 'earthengine',
  fileNamePrefix: 'composite_visualized',
  region: geometry,
  scale: 10,
  maxPixels: 1e10}) 

Open in Code Editor ↗

Once you run this script, the Tasks tab will be highlighted. Switch to the tab and you will see the tasks waiting to run. Click Run next to each task to start the process. A confirmation dialog will appear; verify the settings and click Run to start the export.

Console, Tasks and Confirmation.

Once the export finishes, a GeoTIFF file for each export task will be added to your Google Drive in the specified folder. You can download them and use them in a GIS software.

Visualized vs Raw

Exercise

// Export the waterMndwi image to your drive 

Try in Code Editor ↗

Module 2: Surface Water Mapping

In this module, we will learn to create a vector surface water map for a watershed boundary from HydroSHEDS using the JRC Global Surface Water dataset, which contains accurate global-level surface water information. We will map the surface water, learn to mask water and extract specific information, execute morphological operations, and create a vector map for a river basin.

View Presentation

View the Presentation ↗

01. Load Global Surface Water Data

Let's load the JRC Global Surface Water Mapping Layers. This dataset captures the spatio-temporal distribution of surface water. It is a single image with 7 bands, where each band contains unique information. Let's use the occurrence band, which gives the frequency of water presence from 1984-2020. The pixel value ranges from 0-100, where 0 represents no trace of water in any year and 100 represents water present in all observed years.

Use the Inspector to find the frequency of water presence at a location.

Frequency of water present over Bellandur lake
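The shared script (linked below) loads the occurrence band; a minimal sketch of it looks like this (the visualization palette here is an assumption, not necessarily the course script's choice):

```javascript
var gsw = ee.Image("JRC/GSW1_3/GlobalSurfaceWater");

// Select the 'occurrence' band: frequency of water presence, 0-100
var occurrence = gsw.select('occurrence');

// Assumed visualization: light blue (rare) to dark blue (permanent)
var visParams = {min: 0, max: 100, palette: ['lightblue', 'blue']};
Map.addLayer(occurrence, visParams, 'Water Occurrence');
```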

Open in Code Editor ↗

Exercise

The max_extent band indicates whether a pixel ever contained water: 0 represents a pixel with no water present in any of the months, and 1 represents a pixel that contained water for at least 1 month.

Load the max_extent band and visualize it with correct parameters.

// Create a map showing everywhere surface water was present.
// Select the 'max_extent' band and display the results

Try in Code Editor ↗

02. Create a Water Mask

Earth Engine uses masks to represent No Data values in an image. Any masked pixel of an image will not be used for analysis or visualization. There are two masking functions: selfMask and updateMask.

selfMask removes all pixels with the value zero. In the max_extent band visualization, we saw that the pixel value is either 0 or 1: pixels with value 0 contain no information, while pixels with value 1 represent water occurrence. To remove the No Data pixels from the image, we can use the selfMask function, which masks all pixels with value 0.

updateMask masks a primary image using a mask image. A mask image is an image with pixel values of 0 or 1. Pixels of the primary image that overlap pixels with value 0 in the mask image are set to No Data and masked out.

Before applying Mask vs After applying Mask

var gsw = ee.Image("JRC/GSW1_3/GlobalSurfaceWater");
// Select the 'max_extent' band
var water = gsw.select('max_extent');

// max_extent band has values 0 or 1
// We can remove the 0 values by masking the layer
// Mask needs to have 0 or 1 values
// 0 - remove the pixel
// 1 - keep the pixel

var masked = water.updateMask(water)

// There is a handy short-hand for masking out 0 values for a layer
// You can call selfMask() which will mask the image with itself
// Effectively removing 0 values and retaining non-zero values
var masked = water.selfMask()
Map.addLayer(masked, {}, 'Water');

Open in Code Editor ↗

Exercise

The seasonality band gives the number of months in which water was present in a pixel. Pixel values range from 1 to 12, where 1 represents water present for 1 month and 12 represents water present for all 12 months of the year.

Select all pixels in which water is present for more than 5 months in a year.

var gsw = ee.Image("JRC/GSW1_3/GlobalSurfaceWater");


// Exercise
// Use the 'seasonality' band to map permanent water
// Permanent water can be defined where water is 
// present > 5 months in a year

// Mask the layer and display the map in blue color

var seasonality = gsw.select('seasonality');

Try in Code Editor ↗

03. Find Lost Waterbodies

The transition band captures water class changes between 1984 and 2020. It has 10 classes representing different types of change, of which classes 3 and 6 represent permanent and seasonal water loss, respectively. We can map the surface water lost by selecting both classes.

The relational operator eq is used to select a particular class and create a binary image representing the presence of that class. The boolean operator or is used to create a binary image where either class value is present.

var gsw = ee.Image("JRC/GSW1_3/GlobalSurfaceWater")
// Select the 'transition' band
var water = gsw.select('transition'); 

var lostWater = water.eq(3).or(water.eq(6))

var visParams = {
  min:0,
  max:1,
  palette: ['white','red']
}

// Mask '0' value pixels
var lostWater = lostWater.selfMask()
Map.addLayer(lostWater, visParams, 'Lost Water') 

Open in Code Editor ↗

Exercise

Use the transition band and find the total surface water gain.

var gsw = ee.Image("JRC/GSW1_3/GlobalSurfaceWater")
var water = gsw.select('transition'); 

// Exercise
// Find the surface water that was gained between 1984-2020 
// Display the results on the map

Try in Code Editor ↗

04. Get Yearly Water History

The JRC Yearly Water Classification History image collection contains spatio-temporal maps of surface water from 1984 to 2020, a total of 37 images with 1 image for each year.

Let us filter this collection to visualize the surface water map for the year 2020. Each image contains 4 classes, where classes 2 and 3 indicate water presence (seasonal and permanent water, respectively).

After filtering an Image Collection, the result is still a collection, even if it contains just 1 image. We can use the .first() function to extract that single image.

var gswYearly = ee.ImageCollection("JRC/GSW1_3/YearlyHistory");

// Filter using the 'year' property

var filtered = gswYearly.filter(ee.Filter.eq('year', 2020))
var gsw2020 = ee.Image(filtered.first())

// Select permanent or seasonal water
var water2020 = gsw2020.eq(2).or(gsw2020.eq(3))

// Mask '0' value pixels
var water2020 = water2020.selfMask()

var visParams = {
  min:0,
  max:1,
  palette: ['white','blue']
}
Map.addLayer(water2020, visParams, '2020 Water')

Open in Code Editor ↗

Exercise

Use the JRC Yearly Water Classification History dataset and visualize the surface water map for the year 2010.

// Get the surface water image for the year 2010
// Display it on the map

Try in Code Editor ↗

05. Locate a Sub-Basin

HydroSHEDS is a series of datasets developed primarily by the World Wildlife Fund. It contains globally geo-referenced stream networks, watershed boundaries, and drainage directions. We will use HydroBASINS, a feature collection containing drainage basin boundaries derived from SRTM DEM data. HydroBASINS comes in multiple levels, ranging from 1 to 12. At Level 1, basins are delineated as a few very large groups; at each higher level, the boundaries are subdivided into smaller sub-basins.

We will use the Level 7 HydroBASINS feature collection to filter and locate a sub-basin called Arkavathy. HydroBASINS does not contain basin names, so a basin must be located visually and then selected by filtering on its HYBAS_ID property.

var hydrobasins = ee.FeatureCollection("WWF/HydroSHEDS/v1/Basins/hybas_7");
var basin = hydrobasins.filter(ee.Filter.eq('HYBAS_ID', 4071139640))
Map.centerObject(basin)
Map.addLayer(basin, {color: 'grey'}, 'Arkavathy Sub Basin')

Open in Code Editor ↗

Exercise

Using the Level 7 hydroBASINS, locate and filter a watershed basin in your region of interest.

// Add the hydrobasins layer to the map
// Zoom to your region of interest and inspect the layer
// Find the id for a sub basin of interest and add it to the map.

Try in Code Editor ↗

06. Create a Surface Water Map

A raster output can be post-processed using morphological operations to create a clean-looking surface water map. Common operations include Morphological Opening, Morphological Closing, and Majority Filtering.

  • The opening operation reduces salt-and-pepper noise in the image.
  • The closing operation fills holes and produces complete, contiguous classes.
  • A majority filter smooths the image. This operation should be used for mapping purposes only and not while calculating statistics, as it will significantly alter the results.
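
The effect of a closing operation can be sketched in plain JavaScript (not the Earth Engine API) on a 1-D binary array; the focalOp and closing helpers below are illustrative names, not Earth Engine functions:

```javascript
// Morphological closing on a 1-D binary array:
// dilation (max over a moving window) followed by
// erosion (min over the same window) fills small holes.
function focalOp(arr, radius, op) {
  return arr.map(function(_, i) {
    var start = Math.max(0, i - radius);
    var end = Math.min(arr.length - 1, i + radius);
    return op.apply(null, arr.slice(start, end + 1));
  });
}

function closing(arr, radius) {
  var dilated = focalOp(arr, radius, Math.max); // grow water pixels
  return focalOp(dilated, radius, Math.min);    // shrink back to size
}

// A waterbody (1s) with a single-pixel hole at index 4
var water = [0, 0, 1, 1, 0, 1, 1, 0, 0];
console.log(closing(water, 1)); // [0, 0, 1, 1, 1, 1, 1, 0, 0]
```

Running the same two steps in the opposite order (min then max) gives an opening, which removes small isolated patches instead of filling holes.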

Earth Engine has many functions for morphological operations under ee.Image. Let’s use the max_extent band from Global Surface Water and fill holes to create a surface water map. A closing operation is a dilation followed by an erosion, which translates to the focalMax function chained with focalMin in Earth Engine. Several kernelType options are available; let’s use the square kernel with a search radius of 30 meters, matching the 30m spatial resolution of the JRC Global Surface Water data. We can also define a custom kernel if the required kernel is not available as a kernelType.

Before Operation vs After Operation


var gsw = ee.Image("JRC/GSW1_3/GlobalSurfaceWater");
var hydrobasins = ee.FeatureCollection("WWF/HydroSHEDS/v1/Basins/hybas_7");
var basin = hydrobasins.filter(ee.Filter.eq('HYBAS_ID', 4071139640))
var geometry = basin.geometry()
Map.centerObject(geometry, 10)

var water = gsw.select('max_extent');

var clipped = water.clip(geometry)

var visParams = {min:0, max:1, palette: ['white','blue']}
Map.addLayer(clipped, visParams, 'Surface Water') 

// Do unmask() to replace masked pixels with 0
// This avoids extra pixels along the edges
var clipped = clipped.unmask(0)
// Perform a morphological closing to fill holes in waterbodies
var waterProcessed = clipped
  .focalMax({
    'radius':30,
    'units': 'meters',
    'kernelType': 'square'})
  .focalMin({
    'radius':30,
    'units': 'meters',
    'kernelType': 'square'});

Map.addLayer(waterProcessed, visParams, 'Surface Water (Processed)')

Open in Code Editor ↗

Exercise

By default, the number of iterations is 1. We can increase it as required, so the kernel passes over the image and applies the operation that many times.

Set the iteration parameter to 2 and execute the morphological closing operation.

var gsw = ee.Image("JRC/GSW1_3/GlobalSurfaceWater");
var hydrobasins = ee.FeatureCollection("WWF/HydroSHEDS/v1/Basins/hybas_7");
var basin = hydrobasins.filter(ee.Filter.eq('HYBAS_ID', 4071139640))
var geometry = basin.geometry()
Map.centerObject(geometry, 10)

var water = gsw.select('max_extent');

var clipped = water.clip(geometry)

var visParams = {min:0, max:1, palette: ['white','blue']}
Map.addLayer(clipped, visParams, 'Surface Water') 

// Do unmask() to replace masked pixels with 0
// This avoids extra pixels along the edges
var clipped = clipped.unmask(0)
// Perform a morphological closing to fill holes in waterbodies
var waterProcessed = clipped
  .focalMax({
    'radius':30,
    'units': 'meters',
    'kernelType': 'square'})
  .focalMin({
    'radius':30,
    'units': 'meters',
    'kernelType': 'square'});
Map.addLayer(waterProcessed, visParams, 'Surface Water (Processed)')

// Exercise
// Add an 'iterations' parameter with value 2 to 
// the focalMax and focalMin functions to further process the data

Try in Code Editor ↗

07. Convert Raster to Vector

After creating a final raster map, we can vectorize it. Use the reduceToVectors function with the countEvery reducer to convert a raster to vectors. This merges all connected pixels of the same class into a single feature and returns a feature collection. By default, the connectedness of a pixel is checked in the cardinal directions only. To also treat diagonal neighbours as connected, set eightConnected to true.
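
The difference between the two connectivity rules can be sketched in plain JavaScript (not Earth Engine code); countComponents below is an illustrative flood-fill helper, not part of any library:

```javascript
// Count connected components of 1-pixels in a binary grid,
// treating diagonal neighbours as connected only when
// eightConnected is true (the distinction reduceToVectors makes).
function countComponents(grid, eightConnected) {
  var rows = grid.length, cols = grid[0].length;
  var seen = grid.map(function(row) {
    return row.map(function() { return false; });
  });
  var dirs = [[1, 0], [-1, 0], [0, 1], [0, -1]];
  if (eightConnected) {
    dirs = dirs.concat([[1, 1], [1, -1], [-1, 1], [-1, -1]]);
  }
  var count = 0;
  for (var r = 0; r < rows; r++) {
    for (var c = 0; c < cols; c++) {
      if (grid[r][c] !== 1 || seen[r][c]) continue;
      count++;
      seen[r][c] = true;
      var stack = [[r, c]];
      while (stack.length > 0) {
        var cell = stack.pop();
        dirs.forEach(function(d) {
          var nr = cell[0] + d[0], nc = cell[1] + d[1];
          if (nr >= 0 && nr < rows && nc >= 0 && nc < cols &&
              grid[nr][nc] === 1 && !seen[nr][nc]) {
            seen[nr][nc] = true;
            stack.push([nr, nc]);
          }
        });
      }
    }
  }
  return count;
}

// Two pixels touching only diagonally
var grid = [[1, 0],
            [0, 1]];
console.log(countComponents(grid, false)); // 2 (cardinal only)
console.log(countComponents(grid, true));  // 1 (diagonals connect)
```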

var gsw = ee.Image("JRC/GSW1_3/GlobalSurfaceWater");
var hydrobasins = ee.FeatureCollection("WWF/HydroSHEDS/v1/Basins/hybas_7");
var basin = hydrobasins.filter(ee.Filter.eq('HYBAS_ID', 4071139640))
var geometry = basin.geometry()
Map.centerObject(geometry, 10)

var water = gsw.select('max_extent');

var clipped = water.clip(geometry)

var visParams = {min:0, max:1, palette: ['white','blue']}
Map.addLayer(clipped, visParams, 'Surface Water', false) 

// Do unmask() to replace masked pixels with 0
// This avoids extra pixels along the edges
var clipped = clipped.unmask(0)
// Perform a morphological closing to fill holes in waterbodies
var waterProcessed = clipped
  .focalMax({
    'radius':30,
    'units': 'meters',
    'kernelType': 'square'})
  .focalMin({
    'radius':30,
    'units': 'meters',
    'kernelType': 'square'});
    

var vector = waterProcessed.reduceToVectors({
  reducer: ee.Reducer.countEvery(),
  geometry: geometry,
  scale: 30,
  maxPixels: 1e10,
  eightConnected: false
})
var visParams = {color: 'blue'}
Map.addLayer(vector, visParams, 'Surface Water Polygons')  

Open in Code Editor ↗

Exercise

The final feature collection contains the class value of the pixels as the label property. Since the raster is a binary image with values 0 and 1, features with a label value of 1 represent water bodies.

Use the eq filter to select only the water body polygons.

var gsw = ee.Image("JRC/GSW1_3/GlobalSurfaceWater");
var hydrobasins = ee.FeatureCollection("WWF/HydroSHEDS/v1/Basins/hybas_7");
var basin = hydrobasins.filter(ee.Filter.eq('HYBAS_ID', 4071139640))
var geometry = basin.geometry()
Map.centerObject(geometry, 10)

var water = gsw.select('max_extent');

var clipped = water.clip(geometry)

var visParams = {min:0, max:1, palette: ['white','blue']}
Map.addLayer(clipped, visParams, 'Surface Water', false) 

// Do unmask() to replace masked pixels with 0
// This avoids extra pixels along the edges
var clipped = clipped.unmask(0)
// Perform a morphological closing to fill holes in waterbodies
var waterProcessed = clipped
  .focalMax({
    'radius':30,
    'units': 'meters',
    'kernelType': 'square'})
  .focalMin({
    'radius':30,
    'units': 'meters',
    'kernelType': 'square'});
    

var vector = waterProcessed.reduceToVectors({
  reducer: ee.Reducer.countEvery(),
  geometry: geometry,
  scale: 30,
  maxPixels: 1e10,
  eightConnected: false
})

// Exercise
// The vector variable is a FeatureCollection
// It contains polygons for both water and non-water areas
// Apply a filter to select only water polygons
// Display the results on the map
// Hint: Use ee.Filter.eq() on the 'label' property

var visParams = {color: 'blue'}

Try in Code Editor ↗

Module 3: Precipitation Time Series Analysis

Time series analysis is a technique for studying a sequence of data collected over a period of time. Remote sensing archives are large datasets that can be aggregated to a user’s needs to create graphical representations. With Earth Engine’s computational capacity, this analysis can be done at a large scale. But to attain this computational power, Earth Engine uses a different model of computation, which we will learn in this module.

Unlike traditional procedural programming, where conditional statements and loops are used to filter data and perform a task over a collection, Earth Engine servers use the MapReduce model to increase efficiency through parallel computing. The servers operate only on Earth Engine objects, so every object must be converted to an Earth Engine type before being used inside a mapped function.
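
The same filter, map, and reduce steps exist on plain JavaScript arrays, which is a useful mental model for what Earth Engine parallelizes server-side. This is a plain JavaScript sketch with ordinary arrays, not Earth Engine objects:

```javascript
// Declarative filter -> map -> reduce pipeline, with no explicit loops.
// Each stage takes a collection and a function, just like Earth Engine.
var rainfallMm = [12, 0, 45, 3, 60, 0, 25];

// Keep only days with rain
var rainyDays = rainfallMm.filter(function(mm) { return mm > 0; });

// Convert millimeters to centimeters
var rainyCm = rainyDays.map(function(mm) { return mm / 10; });

// Aggregate to a single total
var totalCm = rainyCm.reduce(function(sum, cm) { return sum + cm; }, 0);

console.log(totalCm); // 14.5
```

The key difference is that Earth Engine evaluates such pipelines lazily on its servers, so each mapped function must build Earth Engine objects rather than compute values directly.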

In this module, we will learn about Earth Engine objects, creating user-defined functions, mapping a function, and using reducers. As a final result, we will create a time series chart and perform a spatio-temporal trend analysis of precipitation.

View Presentation

View the Presentation ↗

01. Mapping a Function

This script introduces the basics of the Earth Engine API. Here we map a function over a list of objects to perform a task on each of them. A function should return a result after computation, which can be stored in a variable. To learn more, visit the Earth Engine Objects and Methods section of the Earth Engine User Guide.

Apart from regular strings and numbers, Earth Engine also understands dates. The ee.Date module provides many functions to define dates and convert them from one format to another.

// Let's see how to take a list of numbers and add 1 to each element
var myList = ee.List.sequence(1, 10);

// Define a function that takes a number and adds 1 to it
var myFunction = function(number) {
  return number + 1;
}
print(myFunction(1));

//Re-Define a function using Earth Engine API
var myFunction = function(number) { 
  return ee.Number(number).add(1);
}

// Map the function over the list
var newList = myList.map(myFunction);
print(newList); 

// Let's write another function to square a number
var squareNumber = function(number) {
  return ee.Number(number).pow(2)
}
var squares = myList.map(squareNumber)
print(squares)

// Extracting value from a list

var value = newList.get(0)
print(value)

// Casting

// Let's try to do some computation on the extracted value
//var newValue = value.add(1)
//print(newValue)

// You get an error because Earth Engine doesn't know what is the type of 'value'
// We need to cast it to appropriate type first
var value = ee.Number(value)
var newValue = value.add(1)
print(newValue)

// Dates
// For any date computation, you should use ee.Date module

var date = ee.Date.fromYMD(2019, 1, 1)
var futureDate = date.advance(1, 'year')
print(futureDate) 

Open in Code Editor ↗

Exercise

Complete the function createDate and map it over the weeks list. The function should return the start date advanced by the given number of weeks.

var startDate = ee.Date.fromYMD(2019, 1, 1)

// Exercise
// Create a list of dates for start of every week of the year

// We first create a list of week numbers 
var weeks = ee.List.sequence(0, 51)

// Write a function that takes a week number 'n'
// and returns a date 'n' weeks in advance.
var createDate = function(n) {
  // Use the .advance() function to find the date 'n' weeks from start
}
// map() the function on list of weeks

// print the result

Try in Code Editor ↗

02. Reducers

A reduce operation allows you to compute statistics over a large number of inputs. In Earth Engine, you need to run reduction operations when creating composites, calculating statistics, doing regression analysis, etc. The Earth Engine API comes with a large number of built-in reducers (such as ee.Reducer.sum(), ee.Reducer.histogram(), ee.Reducer.linearFit(), etc.) that can perform a variety of statistical operations on input data. You run a reducer using the reduce() function. Earth Engine supports running reducers on all data structures that can hold multiple values, such as Images (reducers run over bands), ImageCollections, FeatureCollections, Lists, and Dictionaries. The script below introduces basic concepts related to reducers.

var geometry = ee.Geometry.Point([77.60412933051538, 12.952912912328241])
var s2 = ee.ImageCollection("COPERNICUS/S2");
    
// Computing stats on a list
var myList = ee.List.sequence(1, 10);
print(myList)

// Use a reducer to compute the mean of the list
var mean = myList.reduce(ee.Reducer.mean());
print(mean);


// Apply a reducer on a image collection
var filtered = s2.filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 30))
  .filter(ee.Filter.date('2019-01-01', '2020-01-01'))
  .filter(ee.Filter.bounds(geometry))
  .select('B.*')

print(filtered.size())
var collMean = filtered.reduce(ee.Reducer.mean());
print('Reducer on Collection', collMean);

var image = ee.Image(filtered.first())
// If we want to compute the average value of each band, use reduceRegion instead
var stats = image.reduceRegion({
  reducer: ee.Reducer.mean(),
  geometry: image.geometry(),
  scale: 100,
  maxPixels: 1e10
  })
print(stats);

Open in Code Editor ↗

Exercise

var geometry = ee.Geometry.Point([77.60412933051538, 12.952912912328241])
var s2 = ee.ImageCollection("COPERNICUS/S2");
    
// Computing stats on a list
var myList = ee.List.sequence(1, 10);
print(myList)

// Use a reducer to compute the mean of the list
var mean = myList.reduce(ee.Reducer.mean());
print(mean);


// Apply a reducer on a image collection
var filtered = s2.filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 30))
  .filter(ee.Filter.date('2019-01-01', '2020-01-01'))
  .filter(ee.Filter.bounds(geometry))
  .select('B.*')

print(filtered.size())
var collMean = filtered.reduce(ee.Reducer.mean());
print('Reducer on Collection', collMean);

var image = ee.Image(filtered.first())

// Compute the 'mean' of each band
var stats = image.reduceRegion({
  reducer: ee.Reducer.mean(),
  geometry: image.geometry(),
  scale: 100,
  maxPixels: 1e10
  })
print(stats);

// Extract the average value for B8 and print it.
// Tip: Result of reduceRegion is a dictionary. 

Try in Code Editor ↗

Introduction to Gridded Rainfall Datasets

The main advantage of gridded precipitation datasets is that they provide spatially continuous estimates that can be compared over any time period. Many such datasets are available in Google Earth Engine, derived from satellite data, satellite plus ground station data, or reanalysis. They come in different spatial and temporal resolutions, and users should select a dataset based on their requirements.

View Presentation

View the Presentation ↗

03. Calculating Total Rainfall

The CHIRPS Pentad dataset is a 5-day composite of rainfall at ~5.5 km (0.05°) spatial resolution. Both satellite observations and ground station values are used to compute this data. To read more about the methodology, visit nature.com

Using this image collection, let’s filter all the images for 2017. To get the yearly rainfall, we can reduce the collection using the reduce function with a sum reducer. This returns a single global image where each pixel represents the total annual rainfall in mm at that location. To get the annual average rainfall over a particular region of interest, we can run reduceRegion with a mean reducer over the region. This reduces the image over the region and returns a dictionary containing the average value of each band, which can be extracted using the get function.


var geometry = ee.Geometry.Polygon([[[77.4383382656616, 13.049764921558324],
          [77.4383382656616, 12.814196240882762],
          [77.72123621488035, 12.814196240882762],
          [77.72123621488035, 13.049764921558324]]])

Map.addLayer(geometry, {}, 'ROI')

var chirps = ee.ImageCollection("UCSB-CHG/CHIRPS/PENTAD")

var startDate = ee.Date.fromYMD(2017, 1,1)
var endDate = startDate.advance(1, 'year')
var filtered = chirps
  .filter(ee.Filter.date(startDate, endDate))

// Calculate yearly rainfall
var total = filtered.reduce(ee.Reducer.sum())

var palette = ['#ffffcc','#a1dab4','#41b6c4','#2c7fb8','#253494']
var visParams = {
  min:0,
  max: 2000,
  palette: palette
}
Map.addLayer(total, visParams, 'Total Precipitation')


Map.centerObject(geometry, 10)

// Calculate average rainfall across a basin
var stats = total.reduceRegion({
  reducer: ee.Reducer.mean(),
  geometry: geometry,
  scale: 5000,
  })
print(stats)
print('Average rainfall across ROI :', stats.get('precipitation_sum'))

Open in Code Editor ↗

Output

Average rainfall across ROI :
1345.867775034789

Exercise

Update the date range for the monsoon season (1 June - 30 September) in India.

// Update the script to calculate total rainfall for the monsoon season
// Monsoon season in India is from 1 June - 30 September

Try in Code Editor ↗

04. Aggregating Time Series

In the previous exercise, we aggregated to get the average rainfall across a region. Let’s compute a time series representing the total rainfall of each month in 2019.

Earth Engine has a function called ee.Date.fromYMD to create a date from a year, month, and day. Using this function and .advance(), we can create date ranges to filter the image collection to a particular month. Then, using the reduce function with a sum reducer, the monthly images are aggregated into a monthly total image. This newly created image won’t have any properties, so we use the .set() function to assign the year, month, system:time_start, and system:time_end properties. Putting these steps together, we can write a function that takes a month number and returns the monthly total precipitation image.

We can create the list of months using ee.List.sequence and map the function over it. Since the mapped object is of type List, the result will also be a list, containing 12 images. This list of images can be converted back into an Image Collection using ee.ImageCollection.fromImages. Printing the collection shows 12 images, each representing a month’s total precipitation.

The system:time_start property is a special property that Earth Engine understands natively. It holds the Unix timestamp (in milliseconds) that is used to place images on the time axis of a time series chart.
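
As a plain JavaScript illustration (outside Earth Engine), Date.UTC() produces the same kind of millisecond timestamps that ee.Date.millis() returns:

```javascript
// Unix timestamps count milliseconds since 1970-01-01T00:00:00 UTC.
var jan2019 = Date.UTC(2019, 0, 1); // months are 0-indexed in JavaScript
var feb2019 = Date.UTC(2019, 1, 1);

console.log(jan2019);           // 1546300800000
console.log(feb2019 - jan2019); // 2678400000 = 31 days in milliseconds
```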

var chirps = ee.ImageCollection("UCSB-CHG/CHIRPS/PENTAD")


var year = 2019
var startDate = ee.Date.fromYMD(year, 1, 1)
var endDate = startDate.advance(1, 'year')
var yearFiltered = chirps
  .filter(ee.Filter.date(startDate, endDate))
// CHIRPS collection has 1 image for every pentad (5-days)
// The collection is filtered for 1 year and the time-series
// now has 72 images
print(yearFiltered)

// We need to aggregate this time series to compute
// monthly images

// Create a list of months
var months = ee.List.sequence(1, 12)

// Write a function that takes a month number
// and returns a monthly image
var createMonthlyImage = function(month) {
  var startDate = ee.Date.fromYMD(year, month, 1)
  var endDate = startDate.advance(1, 'month')
  var monthFiltered = yearFiltered
    .filter(ee.Filter.date(startDate, endDate))
  // Calculate total precipitation
  var total = monthFiltered.reduce(ee.Reducer.sum())
  return total.set({
    'system:time_start': startDate.millis(),
    'system:time_end': endDate.millis(),
    'year': year,
    'month': month})
}

// map() the function on the list  of months
// This creates a list with images for each month in the list
var monthlyImages = months.map(createMonthlyImage)
// Create an imagecollection
var monthlyCollection = ee.ImageCollection.fromImages(monthlyImages)
print('Monthly Collection', monthlyCollection)

Open in Code Editor ↗

Exercise

Filter the monthlyCollection for the months June, July, August and September.

// Apply a filter to select images for the months
// June, July, August and September
// Print the collection to verify

Try in Code Editor ↗

05. Charting Monthly Rainfall

Now we can put together all the skills we have learned so far - filter, map, and reduce - to create a chart of monthly total rainfall in the year 2019 over a region. The Earth Engine API supports charting functions based on the Google Chart API. Of the various chart types available, we use the ui.Chart.image.series() function to create a time-series chart.

The image series chart accepts an image collection, a geometry, a scale, and a reducer. The geometry is the region of interest; the reducer specifies how pixel values within the region should be aggregated if the geometry is a polygon; and the scale is the resolution at which the image should be reduced. Using .setOptions(), we can customize the chart to our needs.

Charting is a vast topic in itself. The Google Earth Engine documentation provides examples of many chart types.

var geometry = ee.Geometry.Point([77.58253382230242, 12.942038375108362]);
Map.addLayer(geometry, {}, 'POI')

var chirps = ee.ImageCollection("UCSB-CHG/CHIRPS/PENTAD")

var year = 2019
var startDate = ee.Date.fromYMD(year, 1, 1)
var endDate = startDate.advance(1, 'year')
var yearFiltered = chirps
  .filter(ee.Filter.date(startDate, endDate))
// CHIRPS collection has 1 image for every pentad (5-days)
// The collection is filtered for 1 year and the time-series
// now has 72 images
print(yearFiltered)

// We need to aggregate this time series to compute
// monthly images

// Create a list of months
var months = ee.List.sequence(1, 12)

// Write a function that takes a month number
// and returns a monthly image
var createMonthlyImage = function(month) {
  var startDate = ee.Date.fromYMD(year, month, 1)
  var endDate = startDate.advance(1, 'month')
  var monthFiltered = yearFiltered
    .filter(ee.Filter.date(startDate, endDate))
  // Calculate total precipitation
  var total = monthFiltered.reduce(ee.Reducer.sum())
  return total.set({
    'system:time_start': startDate.millis(),
    'system:time_end': endDate.millis(),
    'year': year,
    'month': month})
}

// map() the function on the list  of months
// This creates a list with images for each month in the list
var monthlyImages = months.map(createMonthlyImage)
// Create an imagecollection
var monthlyCollection = ee.ImageCollection.fromImages(monthlyImages)
print(monthlyCollection)

// Create a chart of monthly rainfall for a location
var chart = ui.Chart.image.series({
  imageCollection: monthlyCollection,
  region: geometry,
  reducer: ee.Reducer.mean(),
  scale: 5566
}).setOptions({
      lineWidth: 1,
      pointSize: 3,
      title: 'Monthly Rainfall at Bengaluru',
      vAxis: {title: 'Rainfall (mm)'},
      hAxis: {title: 'Month', gridlines: {count: 12}}
})
print(chart)

Open in Code Editor ↗

2019 - Monthly rainfall


Exercise

Change the geometry to your location and compute the monthly total rainfall. Download the generated time series chart.

// Create a chart of monthly rainfall at your selected location
// print the chart and download it as a PNG

Try in Code Editor ↗

06. Import

You can import vector or raster data into Earth Engine. We will now import a shapefile of Taluk boundaries for a state in India. Unzip the Taluk.rar into a folder on your computer. In the Code Editor, go to Assets → New → Table Upload → Shape Files. Select the .shp, .shx, .dbf and .prj files (along with sidecar files such as .cpg and .sbn, if present). Enter Taluk_Boundary as the Asset Name and click Upload. In the Tasks tab, you can see the progress of the upload. Once the ingestion is complete, you will have a new asset in the Assets tab. The shapefile is imported as a Feature Collection in Earth Engine. Select the Taluk_Boundary asset and click Import. You can then visualize the imported data.

Importing a Shapefile


// Let's import some data to Earth Engine

// Download the 'Taluk' shapefiles from K-GIS
// https://kgis.ksrsac.in/kgis/downloads.aspx

// Uncompress and upload the 'Taluk_Boundary.shp' shapefile 


// Import the collection
var taluks = ee.FeatureCollection("users/ujavalgandhi/gee-water-resources/kgis_taluks");

// Visualize the collection
Map.centerObject(taluks)
Map.addLayer(taluks, {color: 'gray'}, 'Taluks')

// Function to add a new property 'area_sqkm'
// Function takes a feature and returns the feature
// with a new property
var addArea = function(f) {
  var area = ee.Number(f.get('SHAPE_STAr')).divide(1e6)
  return f.set('area_sqkm', area)
}
var talukWithArea = taluks.map(addArea)
print(talukWithArea)

Open in Code Editor ↗

Exercise

Explore the uploaded Feature Collection. The SHAPE_STAr property contains the area in square meters; the addArea function converts it to square kilometers and stores it as area_sqkm. Using this property, filter all regions with an area greater than 1000 sq. km.

// Let's import some data to Earth Engine

// Download the 'Taluk' shapefiles from K-GIS
// https://kgis.ksrsac.in/kgis/downloads.aspx

// Uncompress and upload the 'Taluk_Boundary.shp' shapefile 


// Import the collection
var taluks = ee.FeatureCollection("users/ujavalgandhi/gee-water-resources/kgis_taluks");

// Function to add a new property 'area_sqkm'
// Function takes a feature and returns the feature
// with a new property
var addArea = function(f) {
  var area = ee.Number(f.get('SHAPE_STAr')).divide(1e6)
  return f.set('area_sqkm', area)
}
var talukWithArea = taluks.map(addArea)
print(talukWithArea)

// Exercise
// Apply a filter to select all polygons with area > 1000 sq. km.
// Add the results to the map

Try in Code Editor ↗

07. Zonal Statistics

Now that we have uploaded a shapefile with many Taluk boundaries, let us calculate the mean rainfall for each Taluk. Earlier, with reduceRegion, we learned to reduce a single image over a single geometry. With reduceRegions, we can reduce the image over each geometry in a feature collection. This function returns a Feature Collection containing all the input features with an additional property holding the reduced value.

Zonal Statistics


var taluks = ee.FeatureCollection("users/ujavalgandhi/gee-water-resources/kgis_taluks");
var chirps = ee.ImageCollection('UCSB-CHG/CHIRPS/PENTAD')

var year = 2019
var startDate = ee.Date.fromYMD(year, 1, 1)
var endDate = startDate.advance(1, 'year')

var filtered = chirps
  .filter(ee.Filter.date(startDate, endDate))

// Calculate the total rainfall for the year
var total = filtered.reduce(ee.Reducer.sum())

// Display the total rainfall image.
var palette = ['#ffffcc','#a1dab4','#41b6c4','#2c7fb8','#253494']
var visParams = {
  min:0,
  max: 2000,
  palette: palette
}

Map.addLayer(total, visParams, 'Total Precipitation')

// Compute the total rainfall for each feature
// reduceRegions() function allows you to do zonal stats
var withRainfall = total.reduceRegions({
  collection: taluks,
  reducer: ee.Reducer.mean(),
  scale: 5566})
  
Map.centerObject(taluks)
Map.addLayer(withRainfall, {color: 'red'}, 'Taluks')

Open in Code Editor ↗

Exercise

Write an export function to export the Feature Collection as a CSV file.

var taluks = ee.FeatureCollection("users/ujavalgandhi/gee-water-resources/kgis_taluks");
var chirps = ee.ImageCollection('UCSB-CHG/CHIRPS/PENTAD')

var year = 2019
var startDate = ee.Date.fromYMD(year, 1, 1)
var endDate = startDate.advance(1, 'year')

var filtered = chirps
  .filter(ee.Filter.date(startDate, endDate))

// Calculate the total rainfall for the year
var total = filtered.reduce(ee.Reducer.sum())

// Display the total rainfall image.
var palette = ['#ffffcc','#a1dab4','#41b6c4','#2c7fb8','#253494']
var visParams = {
  min:0,
  max: 2000,
  palette: palette
}

Map.addLayer(total, visParams, 'Total Precipitation')

// Compute the total rainfall for each feature
// reduceRegions() function allows you to do zonal stats
var withRainfall = total.reduceRegions({
  collection: taluks,
  reducer: ee.Reducer.mean(),
  scale: 5566})
  
Map.centerObject(taluks)
Map.addLayer(withRainfall, {color: 'blue'}, 'Taluks') 

// Exercise
// Export the resulting FeatureCollection with average rainfall as a CSV
// Use the Export.table.toDrive() function

// You can use the .select() function to select only columns you need
// You can also rename the columns
var exportCol = withRainfall.select(['KGISTalukN', 'mean'], ['taluk', 'average_rainfall'])
print(exportCol)

// Export the exportCol as a CSV
// Hint: Give the list of fields you want to export in the 'selectors' argument

Try in Code Editor ↗

08. Trend Analysis

Trend analysis is a technique used to detect patterns in data over time. Let us perform a non-parametric trend analysis using the Sen’s slope technique to find the long-term pattern of rainfall.

Sen’s slope is a reduction operation that expects two bands, X and Y. Here X is a constant band whose value is the year, and Y is the total precipitation at each pixel for that year. After creating the image collection in this format, we can apply ee.Reducer.sensSlope() to the collection to compute the slope at each pixel.

This reducer returns an image where positive pixel values represent an increase in precipitation over time and negative values a decrease.

An advantage of doing trend analysis in Earth Engine is that the computation happens per-pixel across the whole image, which would otherwise be a computationally intensive task.
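
What ee.Reducer.sensSlope() computes at each pixel can be sketched in plain JavaScript: the Sen’s slope is the median of the slopes between every pair of (x, y) points, which makes it robust to outliers. The sensSlope helper below is illustrative, not the Earth Engine implementation:

```javascript
// Sen's slope: median of all pairwise slopes (y[j]-y[i]) / (x[j]-x[i]).
function sensSlope(xs, ys) {
  var slopes = [];
  for (var i = 0; i < xs.length; i++) {
    for (var j = i + 1; j < xs.length; j++) {
      slopes.push((ys[j] - ys[i]) / (xs[j] - xs[i]));
    }
  }
  slopes.sort(function(a, b) { return a - b; });
  var mid = Math.floor(slopes.length / 2);
  return slopes.length % 2 === 1 ?
    slopes[mid] : (slopes[mid - 1] + slopes[mid]) / 2;
}

// Yearly rainfall rising ~10 mm/year, with one outlier year.
// An ordinary least-squares fit would be pulled by the outlier;
// the median of pairwise slopes is not.
var years = [2001, 2002, 2003, 2004, 2005];
var rainfall = [800, 810, 1500, 830, 840];
console.log(sensSlope(years, rainfall)); // 10
```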

var taluks = ee.FeatureCollection("users/ujavalgandhi/gee-water-resources/kgis_taluks");
var chirps = ee.ImageCollection('UCSB-CHG/CHIRPS/PENTAD')

// We will compute the trend of total seasonal rainfall

// Rainy season is June - September
var createSeasonalImage = function(year) {
  var startDate = ee.Date.fromYMD(year, 6, 1)
  var endDate = ee.Date.fromYMD(year, 9, 30)
  var seasonFiltered = chirps
    .filter(ee.Filter.date(startDate, endDate))
  // Calculate total precipitation
  var total = seasonFiltered.reduce(ee.Reducer.sum())
  return total.set({
    'system:time_start': startDate.millis(),
    'system:time_end': endDate.millis(),
    'year': year,
  })
}

// Aggregate precipitation data for each year from 1981 to 2013
var years = ee.List.sequence(1981, 2013)
var yearlyImages = years.map(createSeasonalImage)
print(yearlyImages)

var yearlyCol = ee.ImageCollection.fromImages(yearlyImages)


// Prepare data for Sen's Slope Estimation
// We need an image with 2 bands for X and Y variables
// X = year
// Y = total precipitation

var processImage = function(image) {
  var year = image.get('year');
  var yearImage = ee.Image.constant(ee.Number(year)).toShort();
  return ee.Image.cat(yearImage, image).rename(['year', 'prcp']).set('year', year);
}

var processedCol = yearlyCol.map(processImage)
print(processedCol)

// Calculate time series slope using sensSlope().
var sens = processedCol.reduce(ee.Reducer.sensSlope());

// The resulting image has 2 bands: slope and intercept
// We select the 'slope' band
var slope = sens.select('slope')
// Set visualisation parameters
var visParams = {min: -10, max: 10, palette: ['brown', 'white', 'blue']};
Map.addLayer(slope, visParams, 'Trend');

Open in Code Editor ↗

Exercise

The trend analysis is done for a particular season. Change the filter dates to compute the annual precipitation trend.

var taluks = ee.FeatureCollection("users/ujavalgandhi/gee-water-resources/kgis_taluks");
var chirps = ee.ImageCollection('UCSB-CHG/CHIRPS/PENTAD')

// We will compute the trend of total seasonal rainfall

// Rainy season is June - September
var createSeasonalImage = function(year) {
  var startDate = ee.Date.fromYMD(year, 6, 1)
  var endDate = ee.Date.fromYMD(year, 9, 30)
  var seasonFiltered = chirps
    .filter(ee.Filter.date(startDate, endDate))
  // Calculate total precipitation
  var total = seasonFiltered.reduce(ee.Reducer.sum())
  return total.set({
    'system:time_start': startDate.millis(),
    'system:time_end': endDate.millis(),
    'year': year,
  })
}

// Aggregate precipitation data for each year from 1981 to 2013
var years = ee.List.sequence(1981, 2013)
var yearlyImages = years.map(createSeasonalImage)
print(yearlyImages)

var yearlyCol = ee.ImageCollection.fromImages(yearlyImages)


// Prepare data for Sen's Slope Estimation
// We need an image with 2 bands for X and Y variables
// X = year
// Y = total precipitation

var processImage = function(image) {
  var year = image.get('year');
  var yearImage = ee.Image.constant(ee.Number(year)).toShort();
  return ee.Image.cat(yearImage, image).rename(['year', 'prcp']).set('year', year);
}

var processedCol = yearlyCol.map(processImage)
print(processedCol)

// Calculate time series slope using sensSlope().
var sens = processedCol.reduce(ee.Reducer.sensSlope());

// The resulting image has 2 bands: slope and intercept
// We select the 'slope' band
var slope = sens.select('slope')
// Set visualisation parameters
var visParams = {min: -10, max: 10, palette: ['brown', 'white', 'blue']};
Map.addLayer(slope, visParams, 'Trend'); 

// Exercise
// Change the code above to compute annual precipitation trend

Try in Code Editor ↗

Module 4: Land Use Land Cover Classification

Introduction to Machine Learning and Supervised Classification

Supervised classification is a classical machine learning technique in remote sensing, with a wide variety of applications. Google Earth Engine is uniquely suited for supervised classification at a large scale, with all the required resources pre-loaded for convenience. The interactive nature of Earth Engine development allows for iterative development of supervised classification workflows that combine many different datasets into the model. This module covers the basic supervised classification workflow, accuracy assessment, hyperparameter tuning, and using the ESA WorldCover data.

View Presentation

View the Presentation ↗

01. Basic Supervised Classification

We will learn how to do a basic land cover classification using training points collected in the Code Editor with the help of the high-resolution basemap imagery provided by Google Maps. Even when no field data is available for training the model, this method is very effective for generating high-quality classification samples anywhere in the world. The goal is to classify each Sentinel-2 pixel into one of the following classes: urban, bare, water, or vegetation. Using the drawing tools in the Code Editor, we create 4 new feature collections with points representing pixels of each class. Each feature collection has a property landcover with values of 0, 1, 2, or 3, representing urban, bare, water, and vegetation, respectively. We use these collected samples to train a Random Forest classifier and apply it to all the image pixels to create a 4-class land cover image.

Fun fact: The classifiers in the Earth Engine API have names starting with smile - such as ee.Classifier.smileRandomForest(). The smile part refers to the Statistical Machine Intelligence and Learning Engine (SMILE) Java library which is used by Google Earth Engine to implement these algorithms.

var bangalore = ee.FeatureCollection("users/ujavalgandhi/public/bangalore_boundary")
var s2 = ee.ImageCollection("COPERNICUS/S2_SR")
// The following collections were created using the 
// Drawing Tools in the code editor 
var urban = ee.FeatureCollection("users/ujavalgandhi/e2e/urban_gcps")
var bare = ee.FeatureCollection("users/ujavalgandhi/e2e/bare_gcps")
var water = ee.FeatureCollection("users/ujavalgandhi/e2e/water_gcps")
var vegetation = ee.FeatureCollection("users/ujavalgandhi/e2e/vegetation_gcps")

 
var filtered = s2
  .filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 30))
  .filter(ee.Filter.date('2019-01-01', '2019-12-31'))
  .filter(ee.Filter.bounds(bangalore))
  .select('B.*')

var composite = filtered.median().clip(bangalore) 

// Display the input composite.
var rgbVis = {
  min: 0.0,
  max: 3000,
  bands: ['B4', 'B3', 'B2'],
};
Map.addLayer(composite, rgbVis, 'image');

var gcps = urban.merge(bare).merge(water).merge(vegetation)

// Overlay the point on the image to get training data.
var training = composite.sampleRegions({
  collection: gcps, 
  properties: ['landcover'], 
  scale: 10
});


// Train a classifier.
var classifier = ee.Classifier.smileRandomForest(50).train({
  features: training,  
  classProperty: 'landcover', 
  inputProperties: composite.bandNames()
});
// Classify the image.
var classified = composite.classify(classifier);
Map.addLayer(classified, {min: 0, max: 3, palette: ['gray', 'brown', 'blue', 'green']}, '2019'); 

// Display the GCPs
// We use the style() function to style the GCPs
var palette = ee.List(['gray','brown','blue','green'])
var landcover = ee.List([0, 1, 2, 3])

var gcpsStyled = ee.FeatureCollection(
  landcover.map(function(lc){
    var color = palette.get(landcover.indexOf(lc));
    var markerStyle = { color: 'white', pointShape: 'diamond', 
      pointSize: 4, width: 1, fillColor: color}
    return gcps.filter(ee.Filter.eq('landcover', lc))
                .map(function(point){
                  return point.set('style', markerStyle)
                })
      })).flatten();
      
Map.addLayer(gcpsStyled.style({styleProperty:"style"}), {}, 'GCPs')
Map.centerObject(gcpsStyled)


// landcover=2 is water
var waterClass = classified.eq(2)
Map.addLayer(waterClass, {min: 0, max: 1, palette: ['white', 'blue']}, 'Water');

Open in Code Editor ↗

Supervised Classification Output

Supervised Classification Output

Exercise

Select a city of your choice from the urbanAreas data. Create a feature collection with the property landcover and collect training points for each of the four classes: urban, bare, water, and vegetation. Then use the code to classify the image and create a land cover map of the city.

var s2 = ee.ImageCollection("COPERNICUS/S2_SR")
var urbanAreas = ee.FeatureCollection("users/ujavalgandhi/e2e/ne_10m_urban_areas")

// Perform supervised classification for your city
// Find the feature id by adding the layer to the map and using Inspector.
var city = urbanAreas.filter(ee.Filter.eq('system:index', '00000000000000002bf8'))
var geometry = city.geometry()
Map.centerObject(geometry)

var filtered = s2
  .filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 30))
  .filter(ee.Filter.date('2019-01-01', '2019-12-31'))
  .filter(ee.Filter.bounds(geometry))
  .select('B.*')

var composite = filtered.median().clip(geometry) 

// Display the input composite.

var rgbVis = {min: 0.0, max: 3000, bands: ['B4', 'B3', 'B2']};
Map.addLayer(composite, rgbVis, 'image');

// Exercise
// Add training points for 4 classes
// Assign the 'landcover' property as follows

// urban: 0
// bare: 1
// water: 2
// vegetation: 3

// After adding points, uncomment the lines below

// var gcps = urban.merge(bare).merge(water).merge(vegetation)



// // Overlay the point on the image to get training data.
// var training = composite.sampleRegions({
//   collection: gcps, 
//   properties: ['landcover'], 
//   scale: 200,
//   tileScale: 16
// });
// print(training)


// // Train a classifier.
// var classifier = ee.Classifier.smileRandomForest(50).train({
//   features: training,  
//   classProperty: 'landcover', 
//   inputProperties: composite.bandNames()
// });
// // Classify the image.
// var classified = composite.classify(classifier);
// Map.addLayer(classified, {min: 0, max: 3, palette: ['gray', 'brown', 'blue', 'green']}, '2019'); 

// // Extract water
// var waterClass = classified.eq(2)
// Map.addLayer(waterClass, {min: 0, max: 1, palette: ['white', 'blue']}, 'Water');

Try in Code Editor ↗

02. Accuracy Assessment

It is important to get a quantitative estimate of the accuracy of the classification. To do this, a common strategy is to divide your training samples into 2 random fractions: one used for training the model and the other for validating the predictions. Once a classifier is trained, it can be used to classify the entire image. We can then compare the classified values with the ones in the validation fraction. We can use the errorMatrix() method on the validation results to calculate a confusion matrix representing the expected accuracy.

Don’t get carried away tweaking your model to give you the highest validation accuracy. You must use qualitative measures (such as visual inspection of results) along with quantitative measures to assess the results.

var s2 = ee.ImageCollection("COPERNICUS/S2_SR");
var basin = ee.FeatureCollection("WWF/HydroSHEDS/v1/Basins/hybas_7");
var gcp = ee.FeatureCollection("users/ujavalgandhi/e2e/arkavathy_gcps");
    
var arkavathy = basin.filter(ee.Filter.eq('HYBAS_ID', 4071139640))
var boundary = arkavathy.geometry()
var rgbVis = {
  min: 0.0,
  max: 3000,
  bands: ['B4', 'B3', 'B2'],
};
 
var filtered = s2
  .filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 30))
  .filter(ee.Filter.date('2019-01-01', '2019-12-31'))
  .filter(ee.Filter.bounds(boundary))
  .select('B.*')

var composite = filtered.median().clip(boundary) 

// Display the input composite.
Map.addLayer(composite, rgbVis, 'image');


// Add a random column and split the GCPs into training and validation set
var gcp = gcp.randomColumn()

// This being a simpler classification, we use 60% of the
// points for training and 40% for validation. The normally
// recommended ratio is 70% training, 30% validation
var trainingGcp = gcp.filter(ee.Filter.lt('random', 0.6));
var validationGcp = gcp.filter(ee.Filter.gte('random', 0.6));

// Overlay the point on the image to get training data.
var training = composite.sampleRegions({
  collection: trainingGcp,
  properties: ['landcover'],
  scale: 10,
  tileScale: 16
});

// Train a classifier.
var classifier = ee.Classifier.smileRandomForest(50)
  .train({
  features: training,  
  classProperty: 'landcover',
  inputProperties: composite.bandNames()
});

// Classify the image.
var classified = composite.classify(classifier);

Map.addLayer(classified, {min: 0, max: 3, palette: ['gray', 'brown', 'blue', 'green']}, '2019');

//************************************************************************** 
// Accuracy Assessment
//************************************************************************** 

// Use classification map to assess accuracy using the validation fraction
// of the overall training set created above.
var test = classified.sampleRegions({
  collection: validationGcp,
  properties: ['landcover'],
  tileScale: 16,
  scale: 10,
});

var testConfusionMatrix = test.errorMatrix('landcover', 'classification')
// Printing of confusion matrix may time out. Alternatively, you can export it as CSV
print('Confusion Matrix', testConfusionMatrix);
print('Test Accuracy', testConfusionMatrix.accuracy());

Open in Code Editor ↗

Output

Confusion Matrix
0: [41,4,0,0]
1: [3,43,0,0]
2: [0,0,48,3]
3: [0,0,0,37]

Test Accuracy
0.9441340782122905

03. Improving the Classification

The Earth Engine data model is especially well suited for machine learning tasks because of its ability to easily combine data sources with different spatial resolutions, projections, and data types. By giving additional information to the classifier, we enable it to separate the classes more easily. Here we take the same example and augment it with the following techniques:

  • Apply Cloud Masking
  • Add Spectral Indices: We add bands for different spectral indices such as - NDVI, NDBI, MNDWI and BSI.
  • Add Elevation and Slope: We also add slope and elevation bands from the ALOS DEM.
  • Normalize the Inputs: Machine learning models work best when all the inputs have the same scale. We scale the elevation and slope bands by fixed maximum values and apply min-max normalization to the composite, so that all input values lie between 0 and 1.

Our training features now have more parameters, all with values on the same scale. The result is a much improved classification.

var s2 = ee.ImageCollection("COPERNICUS/S2_SR");
var basin = ee.FeatureCollection("WWF/HydroSHEDS/v1/Basins/hybas_7");
var gcp = ee.FeatureCollection("users/ujavalgandhi/e2e/arkavathy_gcps");
var alos = ee.Image("JAXA/ALOS/AW3D30/V2_2");


var arkavathy = basin.filter(ee.Filter.eq('HYBAS_ID', 4071139640))
var boundary = arkavathy.geometry()
var rgbVis = {
  min: 0.0,
  max: 3000,
  bands: ['B4', 'B3', 'B2'],
};
// Function to remove cloud and snow pixels from a Sentinel-2 SR image
// The function also divides the image by 10000 to ensure
// the pixel values are between 0 and 1
function maskCloudAndShadowsSR(image) {
  var cloudProb = image.select('MSK_CLDPRB');
  var snowProb = image.select('MSK_SNWPRB');
  var cloud = cloudProb.lt(10);
  var scl = image.select('SCL'); 
  var shadow = scl.eq(3); // 3 = cloud shadow
  var cirrus = scl.eq(10); // 10 = cirrus
  // Keep pixels with cloud probability < 10% that are not cirrus or cloud shadow
  var mask = cloud.and(cirrus.neq(1)).and(shadow.neq(1));
  return image.updateMask(mask).divide(10000);
}


var filtered = s2
  .filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 30))
  .filter(ee.Filter.date('2019-01-01', '2019-12-31'))
  .filter(ee.Filter.bounds(boundary))
  .map(maskCloudAndShadowsSR)
  .select('B.*')


var composite = filtered.median().clip(boundary) 


var addIndices = function(image) {
  var ndvi = image.normalizedDifference(['B8', 'B4']).rename(['ndvi']);
  var ndbi = image.normalizedDifference(['B11', 'B8']).rename(['ndbi']);
  var mndwi = image.normalizedDifference(['B3', 'B11']).rename(['mndwi']); 
  var bsi = image.expression(
      '(( X + Y ) - (A + B)) /(( X + Y ) + (A + B)) ', {
        'X': image.select('B11'), //swir1
        'Y': image.select('B4'),  //red
        'A': image.select('B8'), // nir
        'B': image.select('B2'), // blue
  }).rename('bsi');
  return image.addBands(ndvi).addBands(ndbi).addBands(mndwi).addBands(bsi)
}

var composite = addIndices(composite);

// We divide the elevation and slope with maximum values in the region
// to ensure the values are between 0 and 1
// A more robust technique for image normalization is provided in the course supplement
var elev = alos.select('AVE_DSM').divide(2000).rename('elev');
var slope = ee.Terrain.slope(alos.select('AVE_DSM')).divide(30).rename('slope');

var composite = composite.addBands(elev).addBands(slope);

// Normalize the image 

// Machine learning algorithms work best on images when all features have
// the same range

// Function to Normalize Image
// Pixel Values should be between 0 and 1
// Formula is (x - xmin) / (xmax - xmin)
//************************************************************************** 
function normalize(image){
  var bandNames = image.bandNames();
  // Compute min and max of the image
  var minDict = image.reduceRegion({
    reducer: ee.Reducer.min(),
    geometry: boundary,
    scale: 20,
    maxPixels: 1e9,
    bestEffort: true,
    tileScale: 16
  });
  var maxDict = image.reduceRegion({
    reducer: ee.Reducer.max(),
    geometry: boundary,
    scale: 20,
    maxPixels: 1e9,
    bestEffort: true,
    tileScale: 16
  });
  var mins = ee.Image.constant(minDict.values(bandNames));
  var maxs = ee.Image.constant(maxDict.values(bandNames));

  var normalized = image.subtract(mins).divide(maxs.subtract(mins))
  return normalized
}

var composite = normalize(composite);

var visParams = {bands: ['B4', 'B3', 'B2'], min: 0, max: 0.3, gamma: 1.2};
Map.addLayer(composite, visParams, 'RGB');

// Add a random column and split the GCPs into training and validation set
var gcp = gcp.randomColumn()

// This being a simpler classification, we use 60% of the points
// for training and 40% for validation.
// The normally recommended ratio is 70% training, 30% validation

var trainingGcp = gcp.filter(ee.Filter.lt('random', 0.6));
var validationGcp = gcp.filter(ee.Filter.gte('random', 0.6));
Map.addLayer(validationGcp)
// Overlay the point on the image to get training data.
var training = composite.sampleRegions({
  collection: trainingGcp,
  properties: ['landcover'],
  scale: 10,
  tileScale: 16
});
print(training)
// Train a classifier.
var classifier = ee.Classifier.smileRandomForest(50)
  .train({
  features: training,  
  classProperty: 'landcover',
  inputProperties: composite.bandNames()
});

// Classify the image.
var classified = composite.classify(classifier);

Map.addLayer(classified, {min: 0, max: 3, palette: ['gray', 'brown', 'blue', 'green']}, '2019');

//************************************************************************** 
// Accuracy Assessment
//************************************************************************** 

// Use classification map to assess accuracy using the validation fraction
// of the overall training set created above.
var test = classified.sampleRegions({
  collection: validationGcp,
  properties: ['landcover'],
  scale: 10,
  tileScale: 16
});

var testConfusionMatrix = test.errorMatrix('landcover', 'classification')
// Printing of confusion matrix may time out. Alternatively, you can export it as CSV
print('Confusion Matrix', testConfusionMatrix);
print('Test Accuracy', testConfusionMatrix.accuracy());
 

Open in Code Editor ↗

Output

Confusion Matrix
0: [41,3,0,0]
1: [3,43,0,0]
2: [0,0,49,2]
3: [0,0,0,37]

Test Accuracy
0.9550561797752809

04. Exporting Classification Results

When working with complex classifiers over large regions, you may get a User memory limit exceeded or Computation timed out error in the Code Editor. The reason is that there is a fixed time limit and a smaller memory allocation for code that is run in the On-Demand Computation mode. For larger computations, you can use the Batch mode with the Export functions. Exports run in the background and can run longer than the 5 minutes allocated to computations run from the Code Editor. This allows you to process very large and complex datasets. Here’s an example showing how to export your classification results to Google Drive.

We can only export Images or FeatureCollections. What if you wanted to export a number that is the result of a long computation? A useful hack is to create a FeatureCollection with just 1 feature containing null geometry and a property containing the number you want to export.

var s2 = ee.ImageCollection("COPERNICUS/S2_SR");
var basin = ee.FeatureCollection("WWF/HydroSHEDS/v1/Basins/hybas_7");
var gcp = ee.FeatureCollection("users/ujavalgandhi/e2e/arkavathy_gcps");
var alos = ee.Image("JAXA/ALOS/AW3D30/V2_2");

var arkavathy = basin.filter(ee.Filter.eq('HYBAS_ID', 4071139640))
var boundary = arkavathy.geometry()
var rgbVis = {
  min: 0.0,
  max: 3000,
  bands: ['B4', 'B3', 'B2'],
};

// Function to remove cloud and snow pixels from Sentinel-2 SR image
function maskCloudAndShadowsSR(image) {
  var cloudProb = image.select('MSK_CLDPRB');
  var snowProb = image.select('MSK_SNWPRB');
  var cloud = cloudProb.lt(10);
  var scl = image.select('SCL'); 
  var shadow = scl.eq(3); // 3 = cloud shadow
  var cirrus = scl.eq(10); // 10 = cirrus
  // Cloud probability less than 10% or cloud shadow classification
  var mask = cloud.and(cirrus.neq(1)).and(shadow.neq(1));
  return image.updateMask(mask).divide(10000);
}

var filtered = s2.filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 30))
                  .filter(ee.Filter.date('2019-01-01', '2019-12-31'))
                  .filter(ee.Filter.bounds(boundary))
                  .map(maskCloudAndShadowsSR)
                  .select('B.*')


var composite = filtered.median().clip(boundary) 


var addIndices = function(image) {
  var ndvi = image.normalizedDifference(['B8', 'B4']).rename(['ndvi']);
  var ndbi = image.normalizedDifference(['B11', 'B8']).rename(['ndbi']);
  var mndwi = image.normalizedDifference(['B3', 'B11']).rename(['mndwi']); 
  var bsi = image.expression(
      '(( X + Y ) - (A + B)) /(( X + Y ) + (A + B)) ', {
        'X': image.select('B11'), //swir1
        'Y': image.select('B4'),  //red
        'A': image.select('B8'), // nir
        'B': image.select('B2'), // blue
  }).rename('bsi');
  return image.addBands(ndvi).addBands(ndbi).addBands(mndwi).addBands(bsi)
}

var composite = addIndices(composite);


var elev = alos.select('AVE_DSM').divide(2000).rename('elev');
var slope = ee.Terrain.slope(alos.select('AVE_DSM')).divide(30).rename('slope');

var composite = composite.addBands(elev).addBands(slope);

function normalize(image){
  var bandNames = image.bandNames();
  // Compute min and max of the image
  var minDict = image.reduceRegion({
    reducer: ee.Reducer.min(),
    geometry: boundary,
    scale: 20,
    maxPixels: 1e9,
    bestEffort: true,
    tileScale: 16
  });
  var maxDict = image.reduceRegion({
    reducer: ee.Reducer.max(),
    geometry: boundary,
    scale: 20,
    maxPixels: 1e9,
    bestEffort: true,
    tileScale: 16
  });
  var mins = ee.Image.constant(minDict.values(bandNames));
  var maxs = ee.Image.constant(maxDict.values(bandNames));

  var normalized = image.subtract(mins).divide(maxs.subtract(mins))
  return normalized
}

var composite = normalize(composite);

var visParams = {bands: ['B4', 'B3', 'B2'], min: 0, max: 0.3, gamma: 1.2};
Map.addLayer(composite, visParams, 'RGB');

// Add a random column and split the GCPs into training and validation set
var gcp = gcp.randomColumn()

// This being a simpler classification, we use 60% of the
// points for training and 40% for validation. The normally
// recommended ratio is 70% training, 30% validation
var trainingGcp = gcp.filter(ee.Filter.lt('random', 0.6));
var validationGcp = gcp.filter(ee.Filter.gte('random', 0.6));
Map.addLayer(validationGcp)
// Overlay the point on the image to get training data.
var training = composite.sampleRegions({
  collection: trainingGcp,
  properties: ['landcover'],
  scale: 10,
  tileScale: 16
});
print(training)
// Train a classifier.
var classifier = ee.Classifier.smileRandomForest(10)
  .train({
  features: training,  
  classProperty: 'landcover',
  inputProperties: composite.bandNames()
});

// Classify the image.
var classified = composite.classify(classifier);

Map.addLayer(classified, {min: 0, max: 3, palette: ['gray', 'brown', 'blue', 'green']}, '2019');

// Use classification map to assess accuracy using the validation fraction
// of the overall training set created above.
var test = classified.sampleRegions({
  collection: validationGcp,
  properties: ['landcover'],
  scale: 10,
  tileScale: 16
});

var testConfusionMatrix = test.errorMatrix('landcover', 'classification')
// Printing of confusion matrix may time out. Alternatively, you can export it as CSV
print('Confusion Matrix', testConfusionMatrix);
print('Test Accuracy', testConfusionMatrix.accuracy()); 

//************************************************************************** 
// Exporting Results
//************************************************************************** 

// Create a Feature with null geometry and the value we want to export.
// Use .array() to convert Confusion Matrix to an Array so it can be
// exported in a CSV file
var fc = ee.FeatureCollection([
  ee.Feature(null, {
    'accuracy': testConfusionMatrix.accuracy(),
    'matrix': testConfusionMatrix.array()
  })
  ])
print(fc)  
Export.table.toDrive({
  collection: fc,
  description: 'Accuracy_Export',
  folder: 'earthengine',
  fileNamePrefix: 'accuracy',
  fileFormat: 'CSV'
})

// It is also a good idea to export the classified image as an Asset
// This allows you to import the classified image in another script
// without running the whole classification workflow.

// For images with discrete pixel values, we must set the
// pyramidingPolicy to 'mode'.

// Change the assetId parameter with the path to your asset folder.
Export.image.toAsset({
  image: classified,
  description: 'Classified_Image_Asset_Export',
  assetId: 'users/ujavalgandhi/classified',
  region: boundary,
  scale: 10,
  maxPixels: 1e10,
  pyramidingPolicy: {'.default': 'mode'}
})


 
 

Open in Code Editor ↗

Exported Classification Outputs

Exported Classification Outputs

05. Using Existing Landcover Datasets

Creating a global land cover dataset with many classes, updated regularly, is an ambitious undertaking. This type of data is essential because, except in a few countries, land cover data is not available or not shared publicly by local governments. A few organizations have therefore attempted to create global land cover products. One of them is the European Space Agency (ESA), which has created a 10m global land cover product called WorldCover with 11 classes. At present, this data is available for the year 2020, and ESA plans to develop and release updated versions for subsequent years. Watch the launch event to know more about this data and the methodology used to create it.

We can use the eq() operator to extract a particular class. Here we extract the water class (pixel value 80) and create a surface water map for the Arkavathy region.
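Conceptually, eq() is a simple per-pixel comparison that turns the classification into a binary image. A plain-JavaScript sketch of the idea (not Earth Engine API code; the pixel values are made up):

```javascript
// In the WorldCover classification, water pixels have the value 80.
// eq(80) produces a binary image: 1 where the pixel equals 80, 0 otherwise.
var pixels = [80, 40, 80, 10, 50];
var water = pixels.map(function(value) { return value === 80 ? 1 : 0; });
console.log(water); // [1, 0, 1, 0, 0]
```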

var dataset = ee.ImageCollection("ESA/WorldCover/v100").first();

// Select a Basin
var basin = ee.FeatureCollection("WWF/HydroSHEDS/v1/Basins/hybas_7");
var selected = basin.filter(ee.Filter.eq('HYBAS_ID', 4071139640))
var geometry = selected.geometry()

Map.centerObject(geometry);
// Add the classified image
var classification = dataset.select('Map').clip(geometry)

Map.addLayer(classification, {}, 'WorldCover Classification')
// Select water class
var water = classification.eq(80)
Map.addLayer(water, {min:0, max:1, palette: ['white', 'blue']}, 'Water')

Open in Code Editor ↗

WorldCover Classification

WorldCover Classification

Exercise

Use the WorldCover data and extract the Cropland class for the Arkavathy region. Then add it to the map with a green palette.

var dataset = ee.ImageCollection("ESA/WorldCover/v100").first();

// Select a Basin
var basin = ee.FeatureCollection("WWF/HydroSHEDS/v1/Basins/hybas_7");
var selected = basin.filter(ee.Filter.eq('HYBAS_ID', 4071139640))
var geometry = selected.geometry()

Map.centerObject(geometry);
// Add the classified image
var classification = dataset.select('Map').clip(geometry)

Map.addLayer(classification, {}, 'WorldCover Classification')

// Extract the 'Cropland' class
// Apply a selfMask()
// Add it to the map with the 'green' palette

Open in Code Editor ↗

06. Calculating Area

Now that we have extracted the open water from the WorldCover data, we will learn how to calculate the area of the pixels in each class. Calculating area for features is done using the area() function, and for images using the ee.Image.pixelArea() function. The ee.Image.pixelArea() function creates an image where each pixel’s value is the area of that pixel. We multiply this pixel-area image with our image and sum up the values using the reduceRegion() function.

The ee.Image.pixelArea() function uses a custom equal-area projection for area calculation. The result is area in square meters regardless of the projection of the input image. Learn more
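The multiply-and-sum approach can be sketched in plain JavaScript (not Earth Engine API code), assuming hypothetical 10m x 10m pixels of 100 sq. meters each:

```javascript
// Mimic water.multiply(ee.Image.pixelArea()).reduceRegion(sum):
// multiply a binary mask by per-pixel areas, then sum the result.
var mask = [1, 0, 1, 1, 0];                    // 1 = water pixel, 0 = other
var pixelArea = [100, 100, 100, 100, 100];     // sq. meters per pixel (10m x 10m)
var totalArea = mask.reduce(function(sum, v, i) {
  return sum + v * pixelArea[i];               // non-water pixels contribute 0
}, 0);
console.log(totalArea); // 300 sq. meters
```

In Earth Engine, the pixel areas are not constant across a region, which is why ee.Image.pixelArea() computes them per pixel in an equal-area projection.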

var dataset = ee.ImageCollection("ESA/WorldCover/v100").first();

// Select a Basin
var basin = ee.FeatureCollection("WWF/HydroSHEDS/v1/Basins/hybas_7");
var selected = basin.filter(ee.Filter.eq('HYBAS_ID', 4071139640))
var geometry = selected.geometry()

Map.centerObject(geometry);
// Add the classified image
var classification = dataset.select('Map').clip(geometry)

Map.addLayer(classification, {}, 'WorldCover Classification')

// .area() function calculates the area in square meters
var basinArea = geometry.area()

// We can cast the result to a ee.Number() and calculate the
// area in square kilometers
var basinAreaSqKm = ee.Number(basinArea).divide(1e6).round()
print(basinAreaSqKm)


// Area Calculation for Images
var water = classification.eq(80)

// If the image contains values 0 or 1, we can calculate the
// total area using reduceRegion() function

// The result of .eq() operation is a binary image with pixels
// values of 1 where the condition matched and 0 where it didn't
Map.addLayer(water, {min:0, max:1, palette: ['white', 'blue']}, 'Surface Water')

// Since our image has only 0 and 1 pixel values, the water
// pixels will have values equal to their area
var areaImage = water.multiply(ee.Image.pixelArea())


// Now that each water pixel in the image has a value equal
// to its area, we can sum up all the values in the region
// to get the total water area.

var area = areaImage.reduceRegion({
  reducer: ee.Reducer.sum(),
  geometry: geometry,
  scale: 10,
  maxPixels: 1e10
  })

// The result of the reduceRegion() function is a dictionary with the key
// being the band name. We can extract the area number and convert it to
// square kilometers
var waterAreaSqKm = ee.Number(area.get('Map')).divide(1e6).round()
print(waterAreaSqKm) 

Open in Code Editor ↗

Output

Total basin area (sq. km)
4178

Total water area in the basin (sq. km)
19

Exercise

Extract the area value as a number and convert it to hectares.
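The conversion itself is simple arithmetic, shown here in plain JavaScript with a hypothetical area value (not a result from the exercise):

```javascript
// Unit conversions from square meters
var areaSqMeters = 19000000;              // hypothetical value, ~19 sq. km
var areaSqKm = areaSqMeters / 1e6;        // 1 sq. km = 1,000,000 sq. m
var areaHectares = areaSqMeters / 10000;  // 1 hectare = 10,000 sq. m
console.log(areaSqKm, areaHectares);      // 19 1900
```

In Earth Engine, the same division is done on the server with ee.Number.divide().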

var dataset = ee.ImageCollection("ESA/WorldCover/v100").first();

// Select a Basin
var basin = ee.FeatureCollection("WWF/HydroSHEDS/v1/Basins/hybas_7");
var selected = basin.filter(ee.Filter.eq('HYBAS_ID', 4071139640))
var geometry = selected.geometry()

Map.centerObject(geometry);
// Add the classified image
var classification = dataset.select('Map').clip(geometry)

Map.addLayer(classification, {}, 'WorldCover Classification')

var basinArea = geometry.area()

var cropland = classification.eq(40)
Map.addLayer(cropland, {min:0, max:1, palette: ['white', 'green']}, 'Cropland')

// Since our image has only 0 and 1 pixel values, the cropland
// pixels will have values equal to their area
var areaImage = cropland.multiply(ee.Image.pixelArea())

var area = areaImage.reduceRegion({
  reducer: ee.Reducer.sum(),
  geometry: geometry,
  scale: 10,
  maxPixels: 1e10
  })
print('Cropland Area Sq Meters', area)

// Extract the area value from the dictionary
// Print the cropland area in hectares
// Hint: 1 Hectare = 10000 sq. meters

Try in Code Editor ↗

Module 5: Flood Mapping

This module lets us map the flooded regions and calculate the flooded area during the 2018 Kerala floods in the Ernakulam district.

We will use the Sentinel-1 GRD (Ground Range Detected) data. Sentinel-1 is an active remote sensing platform that collects data in the microwave C-band. The main advantage of microwave data is that it can penetrate clouds and be acquired both day and night, since it does not depend on sunlight. The spatial resolution of this data is 10m, and the revisit time varies between 6-12 days depending on the location.

Google Earth Engine provides Analysis Ready Data (ARD) in which all the required pre-processing has already been done. To learn more about microwave remote sensing data applications and usage, visit the Remote Sensing Tutorials by the Canada Centre for Mapping and Earth Observation.

View Presentation

View the Presentation ↗

01. Load and Filter Sentinel-1 Data

We have learned the basic skills to search for a dataset and filter it to a date range and location. Let us load the Sentinel-1 SAR GRD dataset (C-band, log scaling) and filter the collection to the flood event time period and the Ernakulam region.

Each image has 3 bands: VV, VH, and angle. Most research finds that VH (vertical transmit, horizontal receive) is best for detecting floods, as water has low backscatter in the VH band and is easy to detect. Let us select the VH band, clip it to the region of interest, and display it on the Map with appropriate visualization parameters.

// On 16 August 2018, severe floods affected the south Indian state Kerala
// due to unusually high rainfall during the monsoon season.
// It was the worst flood in Kerala in nearly a century.
// https://en.wikipedia.org/wiki/2018_Kerala_floods

// Select images by predefined dates
var beforeStart = '2018-07-15'
var beforeEnd = '2018-08-10'
var afterStart = '2018-08-10'
var afterEnd = '2018-08-23'

var admin2 = ee.FeatureCollection("FAO/GAUL_SIMPLIFIED_500m/2015/level2");
var ernakulam = admin2.filter(ee.Filter.eq('ADM2_NAME', 'Ernakulam'))
var geometry = ernakulam.geometry()
Map.centerObject(geometry)
Map.addLayer(geometry, {color: 'grey'}, 'Ernakulam District')

var collection = ee.ImageCollection('COPERNICUS/S1_GRD')
  .filter(ee.Filter.date(beforeStart, beforeEnd))
  .filter(ee.Filter.bounds(geometry))

var image = ee.Image(collection.first())

var imageVh = image.select('VH').clip(geometry)

Map.addLayer(imageVh, {min:-30, max: 0}, 'VH (dB)')

Open in Code Editor ↗

Exercise

To detect the flooded regions, we need 2 collections: one before and one after the flood. A before image and an after image have been created. Select the VH band from each image and add both images to the map.

VH single band visualization


// On 16 August 2018, severe floods affected the south Indian state Kerala
// due to unusually high rainfall during the monsoon season.
// It was the worst flood in Kerala in nearly a century.
// https://en.wikipedia.org/wiki/2018_Kerala_floods

// Select images by predefined dates
var beforeStart = '2018-07-15'
var beforeEnd = '2018-08-10'
var afterStart = '2018-08-10'
var afterEnd = '2018-08-23'

var admin2 = ee.FeatureCollection("FAO/GAUL_SIMPLIFIED_500m/2015/level2");
var ernakulam = admin2.filter(ee.Filter.eq('ADM2_NAME', 'Ernakulam'))
var geometry = ernakulam.geometry()
Map.centerObject(geometry)
Map.addLayer(geometry, {color: 'grey'}, 'Ernakulam District')

var collection= ee.ImageCollection('COPERNICUS/S1_GRD')
  .filter(ee.Filter.bounds(geometry))

var beforeCollection = collection.filterDate(beforeStart, beforeEnd)
var afterCollection = collection.filterDate(afterStart,afterEnd)

var beforeImage = ee.Image(beforeCollection.first())
var afterImage = ee.Image(afterCollection.first())

print(beforeImage)
print(afterImage)

// Exercise
// Select the 'VH' band of before and after images
// Add them to the map.
// Inspect the images and find suitable values for visualization parameters

Try in Code Editor ↗

02. Visualize RGB Composites

We visualized the VH band as a single-band image in the previous exercise. We can also visualize a SAR image as a multi-band RGB composite. The usual practice is to use the VV band in the red channel, the VH band in the green channel, and the VV/VH ratio in the blue channel.

Since the three bands have different value ranges, a single min/max value will not visualize them correctly. Instead, we can pass a list of numbers, one per band, specifying the min and max values in red, green, blue order.
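Under the hood, a per-band min/max stretch linearly rescales each band to the display range and clamps out-of-range values. A minimal sketch of that rescaling in plain JavaScript (illustrative only; Earth Engine performs this internally when rendering, and the pixel values here are hypothetical):

```javascript
// Linearly stretch a value from [min, max] to [0, 1], clamping out-of-range values
function stretch(value, min, max) {
  var scaled = (value - min) / (max - min);
  return Math.min(1, Math.max(0, scaled));
}

// Per-band ranges, as in visParams = {min: [-25, -25, 0], max: [0, 0, 2]}
var mins = [-25, -25, 0];
var maxs = [0, 0, 2];
var pixel = [-12.5, -25, 1];  // hypothetical VV, VH, VV/VH values

var display = pixel.map(function(v, i) {
  return stretch(v, mins[i], maxs[i]);
});
console.log(display);  // [0.5, 0, 0.5]
```

Each band is stretched against its own range, which is why passing a single min/max for all three bands would wash out the VV/VH ratio channel.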

RGB multi band visualization


// On 16 August 2018, severe floods affected the south Indian state Kerala
// due to unusually high rainfall during the monsoon season.
// It was the worst flood in Kerala in nearly a century.
// https://en.wikipedia.org/wiki/2018_Kerala_floods

// Select images by predefined dates
var beforeStart = '2018-07-15'
var beforeEnd = '2018-08-10'
var afterStart = '2018-08-10'
var afterEnd = '2018-08-23'

var admin2 = ee.FeatureCollection("FAO/GAUL_SIMPLIFIED_500m/2015/level2");
var ernakulam = admin2.filter(ee.Filter.eq('ADM2_NAME', 'Ernakulam'))
var geometry = ernakulam.geometry()
Map.addLayer(geometry, {color: 'grey'}, 'Ernakulam District')

var collection= ee.ImageCollection('COPERNICUS/S1_GRD')
  .filter(ee.Filter.eq('instrumentMode','IW'))
  .filter(ee.Filter.listContains('transmitterReceiverPolarisation', 'VH'))
  .filter(ee.Filter.listContains('transmitterReceiverPolarisation', 'VV'))
  .filter(ee.Filter.eq('orbitProperties_pass', 'DESCENDING')) 
  .filter(ee.Filter.eq('resolution_meters',10))
  .filter(ee.Filter.bounds(geometry))
  .select(['VV', 'VH'])


var beforeCollection = collection
  .filter(ee.Filter.date(beforeStart, beforeEnd))

var afterCollection = collection
  .filter(ee.Filter.date(afterStart, afterEnd))

var before = beforeCollection.mosaic().clip(geometry);
var after = afterCollection.mosaic().clip(geometry);

var addRatioBand = function(image) {
  var ratioBand = image.select('VV').divide(image.select('VH')).rename('VV/VH')
  return image.addBands(ratioBand)
}

var beforeRgb = addRatioBand(before)
var afterRgb = addRatioBand(after)

var visParams = {min:[-25, -25, 0] ,max:[0, 0, 2]}
Map.centerObject(geometry, 10)
Map.addLayer(beforeRgb, visParams, 'Before Floods');
Map.addLayer(afterRgb, visParams, 'After Floods'); 

Open in Code Editor ↗

Exercise

We have learned to visualize a multi-band image using the VV, VH, and VV/VH bands. Let us visualize the change by creating an RGB composite from the before VH, after VH, and before VH bands. This visualization effectively highlights the changes between the before and after images.

To create this visualization, select the required band from each image, concatenate the three bands in that order using the ee.Image.cat() function, and add the result to the map with the visualization parameters min: -25 and max: -8.

Change visualization


// On 16 August 2018, severe floods affected the south Indian state Kerala
// due to unusually high rainfall during the monsoon season.
// It was the worst flood in Kerala in nearly a century.
// https://en.wikipedia.org/wiki/2018_Kerala_floods

// Select images by predefined dates
var beforeStart = '2018-07-15'
var beforeEnd = '2018-08-10'
var afterStart = '2018-08-10'
var afterEnd = '2018-08-23'

var admin2 = ee.FeatureCollection("FAO/GAUL_SIMPLIFIED_500m/2015/level2");
var ernakulam = admin2.filter(ee.Filter.eq('ADM2_NAME', 'Ernakulam'))
var geometry = ernakulam.geometry()
Map.addLayer(geometry, {color: 'grey'}, 'Ernakulam District')

var collection= ee.ImageCollection('COPERNICUS/S1_GRD')
  .filter(ee.Filter.eq('instrumentMode','IW'))
  .filter(ee.Filter.listContains('transmitterReceiverPolarisation', 'VH'))
  .filter(ee.Filter.listContains('transmitterReceiverPolarisation', 'VV'))
  .filter(ee.Filter.eq('orbitProperties_pass', 'DESCENDING')) 
  .filter(ee.Filter.eq('resolution_meters',10))
  .filter(ee.Filter.bounds(geometry))
  .select(['VV', 'VH'])

var beforeCollection = collection
  .filter(ee.Filter.date(beforeStart, beforeEnd))

var afterCollection = collection
  .filter(ee.Filter.date(afterStart, afterEnd))

var before = beforeCollection.mosaic().clip(geometry);
var after = afterCollection.mosaic().clip(geometry);

var visParams = {min: -30, max: 0}
Map.centerObject(geometry, 10)
Map.addLayer(before.select('VH'), visParams, 'Before Floods');
Map.addLayer(after.select('VH'), visParams, 'After Floods'); 

// Change Detection  RGB Visualization
// Create a RGB composite with following Band Combination
// Before VH, After VH, Before VH

// Hint: Use ee.Image.cat() to create a 3-band image
 
  
// Display the image with a min/max range of -25 to -8

Try in Code Editor ↗

03. Apply Speckle Filter

All SAR images contain speckle, which appears like salt-and-pepper noise. Before performing any analysis, a filter is applied to the image to smooth it and reduce the noise. Many types of speckle filters exist; the Refined Lee filter is one of the more rigorous approaches.

The RefinedLee function expects the image in natural (linear) units. Hence, the log-scaled image is first converted using the toNatural function, and the filtered result is converted back to the log scale by passing it to the toDB function.

Guido Lemoine implemented the Refined Lee filter in GEE. Keeping the speckle filtering functions at the end of any SAR processing script makes them easy to reuse and saves time.
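To see what toNatural and toDB compute, note that the dB scale is just 10 times the base-10 logarithm of the linear backscatter, so the conversion round-trips exactly. A minimal sketch in plain JavaScript (outside Earth Engine, operating on a single number rather than an ee.Image):

```javascript
// Convert a backscatter value from dB (log scale) to natural (linear) units,
// mirroring the per-pixel math of the toNatural() function in the script
function toNatural(db) {
  return Math.pow(10.0, db / 10.0);
}

// Convert a natural (linear) value back to dB, mirroring toDB()
function toDB(natural) {
  return 10.0 * Math.log10(natural);
}

// A typical VH backscatter value over calm water is around -20 dB
var natural = toNatural(-20);   // 0.01 in linear units
var roundTrip = toDB(natural);  // back to -20 dB
console.log(natural, roundTrip);
```

Averaging pixel values, as the Refined Lee filter does, is only physically meaningful in linear units, which is why the script converts to natural units before filtering and back to dB afterwards.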

Speckle Filter


/**** Start of imports. If edited, may not auto-convert in the playground. ****/
var roi = 
    /* color: #d63000 */
    /* displayProperties: [
      {
        "type": "rectangle"
      }
    ] */
    ee.Geometry.Polygon(
        [[[76.34799403033244, 10.196640319195554],
          [76.34799403033244, 10.112323064791251],
          [76.43124979815471, 10.112323064791251],
          [76.43124979815471, 10.196640319195554]]], null, false);
/***** End of imports. If edited, may not auto-convert in the playground. *****/
var gsw = ee.Image("JRC/GSW1_2/GlobalSurfaceWater"),
    admin2 = ee.FeatureCollection("FAO/GAUL_SIMPLIFIED_500m/2015/level2"),
    hydrosheds = ee.Image("WWF/HydroSHEDS/03VFDEM");
    
var beforeStart = '2018-07-15'
var beforeEnd = '2018-08-10'
var afterStart = '2018-08-10'
var afterEnd = '2018-08-23'

var ernakulam = admin2.filter(ee.Filter.eq('ADM2_NAME', 'Ernakulam'))
var geometry = ernakulam.geometry()
Map.addLayer(geometry, {color: 'grey'}, 'Ernakulam District')

var collection= ee.ImageCollection('COPERNICUS/S1_GRD')
  .filter(ee.Filter.eq('instrumentMode','IW'))
  .filter(ee.Filter.listContains('transmitterReceiverPolarisation', 'VH'))
  .filter(ee.Filter.eq('orbitProperties_pass', 'DESCENDING')) 
  .filter(ee.Filter.eq('resolution_meters',10))
  .filter(ee.Filter.bounds(geometry))
  .select('VH');

var beforeCollection = collection
  .filter(ee.Filter.date(beforeStart, beforeEnd))

var afterCollection = collection
  .filter(ee.Filter.date(afterStart, afterEnd))

var before = beforeCollection.mosaic().clip(geometry);
var after = afterCollection.mosaic().clip(geometry);

var beforeFiltered = ee.Image(toDB(RefinedLee(toNatural(before))))
var afterFiltered = ee.Image(toDB(RefinedLee(toNatural(after))))

Map.centerObject(geometry, 10)
Map.addLayer(before, {min:-25,max:0}, 'Before Floods', false);
Map.addLayer(after, {min:-25,max:0}, 'After Floods', false); 
Map.addLayer(beforeFiltered, {min:-25,max:0}, 'Before Filtered', false);
Map.addLayer(afterFiltered, {min:-25,max:0}, 'After Filtered', false); 

//############################
// Speckle Filtering Functions
//############################

// Function to convert from dB
function toNatural(img) {
  return ee.Image(10.0).pow(img.select(0).divide(10.0));
}

//Function to convert to dB
function toDB(img) {
  return ee.Image(img).log10().multiply(10.0);
}

// Applying a Refined Lee Speckle filter as coded in the SNAP 3.0 S1TBX:
// https://github.com/senbox-org/s1tbx/blob/master/s1tbx-op-sar-processing/src/main/java/org/esa/s1tbx/sar/gpf/filtering/SpeckleFilters/RefinedLee.java
// Adapted by Guido Lemoine
function RefinedLee(img) {
  // img must be in natural units, i.e. not in dB!
  // Set up 3x3 kernels 
  var weights3 = ee.List.repeat(ee.List.repeat(1,3),3);
  var kernel3 = ee.Kernel.fixed(3,3, weights3, 1, 1, false);

  var mean3 = img.reduceNeighborhood(ee.Reducer.mean(), kernel3);
  var variance3 = img.reduceNeighborhood(ee.Reducer.variance(), kernel3);

  // Use a sample of the 3x3 windows inside a 7x7 windows to determine gradients and directions
  var sample_weights = ee.List([[0,0,0,0,0,0,0], [0,1,0,1,0,1,0],[0,0,0,0,0,0,0], [0,1,0,1,0,1,0], [0,0,0,0,0,0,0], [0,1,0,1,0,1,0],[0,0,0,0,0,0,0]]);

  var sample_kernel = ee.Kernel.fixed(7,7, sample_weights, 3,3, false);

  // Calculate mean and variance for the sampled windows and store as 9 bands
  var sample_mean = mean3.neighborhoodToBands(sample_kernel); 
  var sample_var = variance3.neighborhoodToBands(sample_kernel);

  // Determine the 4 gradients for the sampled windows
  var gradients = sample_mean.select(1).subtract(sample_mean.select(7)).abs();
  gradients = gradients.addBands(sample_mean.select(6).subtract(sample_mean.select(2)).abs());
  gradients = gradients.addBands(sample_mean.select(3).subtract(sample_mean.select(5)).abs());
  gradients = gradients.addBands(sample_mean.select(0).subtract(sample_mean.select(8)).abs());

  // And find the maximum gradient amongst gradient bands
  var max_gradient = gradients.reduce(ee.Reducer.max());

  // Create a mask for band pixels that are the maximum gradient
  var gradmask = gradients.eq(max_gradient);

  // duplicate gradmask bands: each gradient represents 2 directions
  gradmask = gradmask.addBands(gradmask);

  // Determine the 8 directions
  var directions = sample_mean.select(1).subtract(sample_mean.select(4)).gt(sample_mean.select(4).subtract(sample_mean.select(7))).multiply(1);
  directions = directions.addBands(sample_mean.select(6).subtract(sample_mean.select(4)).gt(sample_mean.select(4).subtract(sample_mean.select(2))).multiply(2));
  directions = directions.addBands(sample_mean.select(3).subtract(sample_mean.select(4)).gt(sample_mean.select(4).subtract(sample_mean.select(5))).multiply(3));
  directions = directions.addBands(sample_mean.select(0).subtract(sample_mean.select(4)).gt(sample_mean.select(4).subtract(sample_mean.select(8))).multiply(4));
  // The next 4 are the not() of the previous 4
  directions = directions.addBands(directions.select(0).not().multiply(5));
  directions = directions.addBands(directions.select(1).not().multiply(6));
  directions = directions.addBands(directions.select(2).not().multiply(7));
  directions = directions.addBands(directions.select(3).not().multiply(8));

  // Mask all values that are not 1-8
  directions = directions.updateMask(gradmask);

  // "collapse" the stack into a singe band image (due to masking, each pixel has just one value (1-8) in it's directional band, and is otherwise masked)
  directions = directions.reduce(ee.Reducer.sum());  

  //var pal = ['ffffff','ff0000','ffff00', '00ff00', '00ffff', '0000ff', 'ff00ff', '000000'];
  //Map.addLayer(directions.reduce(ee.Reducer.sum()), {min:1, max:8, palette: pal}, 'Directions', false);

  var sample_stats = sample_var.divide(sample_mean.multiply(sample_mean));

  // Calculate localNoiseVariance
  var sigmaV = sample_stats.toArray().arraySort().arraySlice(0,0,5).arrayReduce(ee.Reducer.mean(), [0]);

  // Set up the 7*7 kernels for directional statistics
  var rect_weights = ee.List.repeat(ee.List.repeat(0,7),3).cat(ee.List.repeat(ee.List.repeat(1,7),4));

  var diag_weights = ee.List([[1,0,0,0,0,0,0], [1,1,0,0,0,0,0], [1,1,1,0,0,0,0], 
    [1,1,1,1,0,0,0], [1,1,1,1,1,0,0], [1,1,1,1,1,1,0], [1,1,1,1,1,1,1]]);

  var rect_kernel = ee.Kernel.fixed(7,7, rect_weights, 3, 3, false);
  var diag_kernel = ee.Kernel.fixed(7,7, diag_weights, 3, 3, false);

  // Create stacks for mean and variance using the original kernels. Mask with relevant direction.
  var dir_mean = img.reduceNeighborhood(ee.Reducer.mean(), rect_kernel).updateMask(directions.eq(1));
  var dir_var = img.reduceNeighborhood(ee.Reducer.variance(), rect_kernel).updateMask(directions.eq(1));

  dir_mean = dir_mean.addBands(img.reduceNeighborhood(ee.Reducer.mean(), diag_kernel).updateMask(directions.eq(2)));
  dir_var = dir_var.addBands(img.reduceNeighborhood(ee.Reducer.variance(), diag_kernel).updateMask(directions.eq(2)));

  // and add the bands for rotated kernels
  for (var i=1; i<4; i++) {
    dir_mean = dir_mean.addBands(img.reduceNeighborhood(ee.Reducer.mean(), rect_kernel.rotate(i)).updateMask(directions.eq(2*i+1)));
    dir_var = dir_var.addBands(img.reduceNeighborhood(ee.Reducer.variance(), rect_kernel.rotate(i)).updateMask(directions.eq(2*i+1)));
    dir_mean = dir_mean.addBands(img.reduceNeighborhood(ee.Reducer.mean(), diag_kernel.rotate(i)).updateMask(directions.eq(2*i+2)));
    dir_var = dir_var.addBands(img.reduceNeighborhood(ee.Reducer.variance(), diag_kernel.rotate(i)).updateMask(directions.eq(2*i+2)));
  }

  // "collapse" the stack into a single band image (due to masking, each pixel has just one value in it's directional band, and is otherwise masked)
  dir_mean = dir_mean.reduce(ee.Reducer.sum());
  dir_var = dir_var.reduce(ee.Reducer.sum());

  // And finally generate the filtered value
  var varX = dir_var.subtract(dir_mean.multiply(dir_mean).multiply(sigmaV)).divide(sigmaV.add(1.0));

  var b = varX.divide(dir_var);

  var result = dir_mean.add(b.multiply(img.subtract(dir_mean)));
  return(result.arrayFlatten([['sum']]));
}

Open in Code Editor ↗

04. Apply a Threshold

A difference image can be created by dividing the after image by the before image. Assuming a 25% change indicates flooding, we can select all pixels with values greater than 1.25. This 1.25 threshold is arbitrary and can be tuned by trial and error for any region, but a 25% change is a common initial assumption.

The threshold can be applied using the boolean function gt(). The result is a binary image where pixels with value 1 represent the flooded region.
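To make the gt() comparison concrete, here is the same per-pixel logic in plain JavaScript for three hypothetical pixel values (illustrative numbers only, not taken from the imagery):

```javascript
var diffThreshold = 1.25;

// Hypothetical VH values (in dB) for three pixels
var before = [-20, -18, -15];
var after  = [-26, -19, -24];

// difference = after / before, as in afterFiltered.divide(beforeFiltered);
// a ratio above the threshold marks the pixel as flooded (1), else 0
var flooded = before.map(function(b, i) {
  return (after[i] / b) > diffThreshold ? 1 : 0;
});

console.log(flooded);  // [1, 0, 1]
```

In Earth Engine, gt() performs this comparison for every pixel at once, and selfMask() hides the zeros so only flooded pixels are displayed.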

Flooded Area


var gsw = ee.Image("JRC/GSW1_2/GlobalSurfaceWater"),
    admin2 = ee.FeatureCollection("FAO/GAUL_SIMPLIFIED_500m/2015/level2"),
    hydrosheds = ee.Image("WWF/HydroSHEDS/03VFDEM");

var beforeStart = '2018-07-15'
var beforeEnd = '2018-08-10'
var afterStart = '2018-08-10'
var afterEnd = '2018-08-23'

var ernakulam = admin2.filter(ee.Filter.eq('ADM2_NAME', 'Ernakulam'))
var geometry = ernakulam.geometry()
Map.addLayer(geometry, {color: 'grey'}, 'Ernakulam District')

var collection= ee.ImageCollection('COPERNICUS/S1_GRD')
  .filter(ee.Filter.eq('instrumentMode','IW'))
  .filter(ee.Filter.listContains('transmitterReceiverPolarisation', 'VH'))
  .filter(ee.Filter.eq('orbitProperties_pass', 'DESCENDING')) 
  .filter(ee.Filter.eq('resolution_meters',10))
  .filter(ee.Filter.bounds(geometry))
  .select('VH');

var beforeCollection = collection
  .filter(ee.Filter.date(beforeStart, beforeEnd))

var afterCollection = collection
  .filter(ee.Filter.date(afterStart, afterEnd))
  
var before = beforeCollection.mosaic().clip(geometry);
var after = afterCollection.mosaic().clip(geometry);

var beforeFiltered = ee.Image(toDB(RefinedLee(toNatural(before))))
var afterFiltered = ee.Image(toDB(RefinedLee(toNatural(after))))

Map.centerObject(geometry, 10)
Map.addLayer(before, {min:-25,max:0}, 'Before Floods', false);
Map.addLayer(after, {min:-25,max:0}, 'After Floods', false); 
Map.addLayer(beforeFiltered, {min:-25,max:0}, 'Before Filtered', false);
Map.addLayer(afterFiltered, {min:-25,max:0}, 'After Filtered', false); 

var difference = afterFiltered.divide(beforeFiltered);

// Define a threshold
var diffThreshold = 1.25;
// Initial estimate of flooded pixels
var flooded = difference.gt(diffThreshold).rename('water').selfMask();
Map.addLayer(flooded, {min:0, max:1, palette: ['orange']}, 'Initial Flood Area', false);


//############################
// Speckle Filtering Functions
//############################

// Function to convert from dB
function toNatural(img) {
  return ee.Image(10.0).pow(img.select(0).divide(10.0));
}

//Function to convert to dB
function toDB(img) {
  return ee.Image(img).log10().multiply(10.0);
}

// Applying a Refined Lee Speckle filter as coded in the SNAP 3.0 S1TBX:
// https://github.com/senbox-org/s1tbx/blob/master/s1tbx-op-sar-processing/src/main/java/org/esa/s1tbx/sar/gpf/filtering/SpeckleFilters/RefinedLee.java
// Adapted by Guido Lemoine
function RefinedLee(img) {
  // img must be in natural units, i.e. not in dB!
  // Set up 3x3 kernels 
  var weights3 = ee.List.repeat(ee.List.repeat(1,3),3);
  var kernel3 = ee.Kernel.fixed(3,3, weights3, 1, 1, false);

  var mean3 = img.reduceNeighborhood(ee.Reducer.mean(), kernel3);
  var variance3 = img.reduceNeighborhood(ee.Reducer.variance(), kernel3);

  // Use a sample of the 3x3 windows inside a 7x7 windows to determine gradients and directions
  var sample_weights = ee.List([[0,0,0,0,0,0,0], [0,1,0,1,0,1,0],[0,0,0,0,0,0,0], [0,1,0,1,0,1,0], [0,0,0,0,0,0,0], [0,1,0,1,0,1,0],[0,0,0,0,0,0,0]]);

  var sample_kernel = ee.Kernel.fixed(7,7, sample_weights, 3,3, false);

  // Calculate mean and variance for the sampled windows and store as 9 bands
  var sample_mean = mean3.neighborhoodToBands(sample_kernel); 
  var sample_var = variance3.neighborhoodToBands(sample_kernel);

  // Determine the 4 gradients for the sampled windows
  var gradients = sample_mean.select(1).subtract(sample_mean.select(7)).abs();
  gradients = gradients.addBands(sample_mean.select(6).subtract(sample_mean.select(2)).abs());
  gradients = gradients.addBands(sample_mean.select(3).subtract(sample_mean.select(5)).abs());
  gradients = gradients.addBands(sample_mean.select(0).subtract(sample_mean.select(8)).abs());

  // And find the maximum gradient amongst gradient bands
  var max_gradient = gradients.reduce(ee.Reducer.max());

  // Create a mask for band pixels that are the maximum gradient
  var gradmask = gradients.eq(max_gradient);

  // duplicate gradmask bands: each gradient represents 2 directions
  gradmask = gradmask.addBands(gradmask);

  // Determine the 8 directions
  var directions = sample_mean.select(1).subtract(sample_mean.select(4)).gt(sample_mean.select(4).subtract(sample_mean.select(7))).multiply(1);
  directions = directions.addBands(sample_mean.select(6).subtract(sample_mean.select(4)).gt(sample_mean.select(4).subtract(sample_mean.select(2))).multiply(2));
  directions = directions.addBands(sample_mean.select(3).subtract(sample_mean.select(4)).gt(sample_mean.select(4).subtract(sample_mean.select(5))).multiply(3));
  directions = directions.addBands(sample_mean.select(0).subtract(sample_mean.select(4)).gt(sample_mean.select(4).subtract(sample_mean.select(8))).multiply(4));
  // The next 4 are the not() of the previous 4
  directions = directions.addBands(directions.select(0).not().multiply(5));
  directions = directions.addBands(directions.select(1).not().multiply(6));
  directions = directions.addBands(directions.select(2).not().multiply(7));
  directions = directions.addBands(directions.select(3).not().multiply(8));

  // Mask all values that are not 1-8
  directions = directions.updateMask(gradmask);

  // "collapse" the stack into a singe band image (due to masking, each pixel has just one value (1-8) in it's directional band, and is otherwise masked)
  directions = directions.reduce(ee.Reducer.sum());  

  //var pal = ['ffffff','ff0000','ffff00', '00ff00', '00ffff', '0000ff', 'ff00ff', '000000'];
  //Map.addLayer(directions.reduce(ee.Reducer.sum()), {min:1, max:8, palette: pal}, 'Directions', false);

  var sample_stats = sample_var.divide(sample_mean.multiply(sample_mean));

  // Calculate localNoiseVariance
  var sigmaV = sample_stats.toArray().arraySort().arraySlice(0,0,5).arrayReduce(ee.Reducer.mean(), [0]);

  // Set up the 7*7 kernels for directional statistics
  var rect_weights = ee.List.repeat(ee.List.repeat(0,7),3).cat(ee.List.repeat(ee.List.repeat(1,7),4));

  var diag_weights = ee.List([[1,0,0,0,0,0,0], [1,1,0,0,0,0,0], [1,1,1,0,0,0,0], 
    [1,1,1,1,0,0,0], [1,1,1,1,1,0,0], [1,1,1,1,1,1,0], [1,1,1,1,1,1,1]]);

  var rect_kernel = ee.Kernel.fixed(7,7, rect_weights, 3, 3, false);
  var diag_kernel = ee.Kernel.fixed(7,7, diag_weights, 3, 3, false);

  // Create stacks for mean and variance using the original kernels. Mask with relevant direction.
  var dir_mean = img.reduceNeighborhood(ee.Reducer.mean(), rect_kernel).updateMask(directions.eq(1));
  var dir_var = img.reduceNeighborhood(ee.Reducer.variance(), rect_kernel).updateMask(directions.eq(1));

  dir_mean = dir_mean.addBands(img.reduceNeighborhood(ee.Reducer.mean(), diag_kernel).updateMask(directions.eq(2)));
  dir_var = dir_var.addBands(img.reduceNeighborhood(ee.Reducer.variance(), diag_kernel).updateMask(directions.eq(2)));

  // and add the bands for rotated kernels
  for (var i=1; i<4; i++) {
    dir_mean = dir_mean.addBands(img.reduceNeighborhood(ee.Reducer.mean(), rect_kernel.rotate(i)).updateMask(directions.eq(2*i+1)));
    dir_var = dir_var.addBands(img.reduceNeighborhood(ee.Reducer.variance(), rect_kernel.rotate(i)).updateMask(directions.eq(2*i+1)));
    dir_mean = dir_mean.addBands(img.reduceNeighborhood(ee.Reducer.mean(), diag_kernel.rotate(i)).updateMask(directions.eq(2*i+2)));
    dir_var = dir_var.addBands(img.reduceNeighborhood(ee.Reducer.variance(), diag_kernel.rotate(i)).updateMask(directions.eq(2*i+2)));
  }

  // "collapse" the stack into a single band image (due to masking, each pixel has just one value in it's directional band, and is otherwise masked)
  dir_mean = dir_mean.reduce(ee.Reducer.sum());
  dir_var = dir_var.reduce(ee.Reducer.sum());

  // And finally generate the filtered value
  var varX = dir_var.subtract(dir_mean.multiply(dir_mean).multiply(sigmaV)).divide(sigmaV.add(1.0));

  var b = varX.divide(dir_var);

  var result = dir_mean.add(b.multiply(img.subtract(dir_mean)));
  return(result.arrayFlatten([['sum']]));
}

Open in Code Editor ↗

Exercise

This exercise script is adapted from UN-SPIDER’s recommended approach to computing the flooded region.

Compute the difference image by subtracting the before image from the after image. Use a threshold of -3 (an arbitrary value) to identify the initial flooded regions. Click Try in Code Editor to view the complete script.
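Why does subtraction work here, and why does the threshold change? Because the images are in dB (a log scale), subtracting two dB values is equivalent to taking the ratio of the corresponding linear values: afterDB - beforeDB = 10 * log10(afterNatural / beforeNatural). A short sketch with hypothetical values:

```javascript
// Hypothetical VH backscatter values in natural (linear) units
var beforeNatural = 0.010;
var afterNatural = 0.005;  // backscatter halved after flooding

var beforeDB = 10 * Math.log10(beforeNatural);  // -20 dB
var afterDB = 10 * Math.log10(afterNatural);    // about -23.01 dB

// Subtracting in dB equals the dB of the linear-units ratio
var diffDB = afterDB - beforeDB;
var ratioDB = 10 * Math.log10(afterNatural / beforeNatural);
console.log(diffDB.toFixed(2), ratioDB.toFixed(2));  // -3.01 -3.01
```

A difference of about -3 dB thus corresponds to the backscatter halving, which is one way to motivate the -3 threshold used in this exercise.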

// There is an alternate way to detect change.
// Since the S1_GRD collection has log-scaled dB values, subtract() is more appropriate.
// If you use subtract(), the threshold value also changes.

// Calculate afterFiltered - beforeFiltered
// Create a flood image using a threshold of -3
// Display the results with a 'purple' palette

Try in Code Editor ↗

05. Apply Masks

The initial flood map shows all areas where water is likely present, but it contains both noise and permanent water bodies. We can mask out unwanted pixels to obtain a more accurate flooded area. In this exercise, we will mask

  1. Permanent water.

  2. Steep slope areas.

  3. Isolated pixels.

Permanent water is detected using the JRC Global Surface Water (GSW) dataset. Pixels with a seasonality value of 5 or more are considered permanent water for this computation; this value can be changed based on ground information. Using the updateMask function, we mask out all permanent water pixels.

The slope mask is created using the HydroSHEDS DEM dataset. Terrain slope can be calculated from the DEM using ee.Algorithms.Terrain. Areas with a slope greater than 5 degrees are selected and masked out, since water detected on steep slopes is mostly noise in the imagery.

The isolated pixel mask further reduces noise. A single pixel detected as flood in the middle of dry land is most likely noise, so all pixels with fewer than 8 connected neighbors are considered noise and masked out. To identify connected pixels, we use the connectedPixelCount function, which checks connectedness in all possible directions and returns an image where each pixel value is the number of pixels in its connected patch. Pixels whose connected count falls below the threshold are then masked out.

After applying all the masks to the initial flood image, the remaining pixels can be considered the actual flooded region.
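Conceptually, the chained updateMask() calls in the script keep a pixel only if every condition holds. The per-pixel logic can be sketched in plain JavaScript (the field names here are hypothetical, chosen to mirror the thresholds used in the script):

```javascript
// A pixel survives all masks only if every condition below is true
function isFlooded(pixel) {
  return pixel.initialFlood === 1 &&   // passed the change threshold
         pixel.seasonality < 5 &&      // not permanent water (GSW seasonality)
         pixel.slope <= 5 &&           // not on steep terrain (degrees)
         pixel.connectedCount >= 8;    // not an isolated speckle patch
}

var pixel = {initialFlood: 1, seasonality: 0, slope: 2, connectedCount: 12};
console.log(isFlooded(pixel));  // true

pixel.seasonality = 10;  // this pixel falls on a permanent water body
console.log(isFlooded(pixel));  // false
```

Each updateMask() call in Earth Engine applies one of these conditions to every pixel of the flooded image at once.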

Although the masks are computed from datasets with different resolutions, GEE resamples all images on the fly to the resolution of the map canvas, or to the resolution specified during export. In our case, all the images will be resampled and reprojected to the Sentinel-1 GRD projection.

Flood mask


Flooded region


var admin2 = ee.FeatureCollection('FAO/GAUL_SIMPLIFIED_500m/2015/level2')
var hydrosheds = ee.Image('WWF/HydroSHEDS/03VFDEM')
var gsw = ee.Image('JRC/GSW1_3/GlobalSurfaceWater')
    
var beforeStart = '2018-07-15'
var beforeEnd = '2018-08-10'
var afterStart = '2018-08-10'
var afterEnd = '2018-08-23'

var ernakulam = admin2.filter(ee.Filter.eq('ADM2_NAME', 'Ernakulam'))
var geometry = ernakulam.geometry()
Map.addLayer(geometry, {color: 'grey'}, 'Ernakulam District')

var collection= ee.ImageCollection('COPERNICUS/S1_GRD')
  .filter(ee.Filter.eq('instrumentMode','IW'))
  .filter(ee.Filter.listContains('transmitterReceiverPolarisation', 'VH'))
  .filter(ee.Filter.eq('orbitProperties_pass', 'DESCENDING')) 
  .filter(ee.Filter.eq('resolution_meters',10))
  .filter(ee.Filter.bounds(geometry))
  .select('VH');

var beforeCollection = collection
  .filter(ee.Filter.date(beforeStart, beforeEnd))

var afterCollection = collection
  .filter(ee.Filter.date(afterStart, afterEnd))

var before = beforeCollection.mosaic().clip(geometry);
var after = afterCollection.mosaic().clip(geometry);

var beforeFiltered = ee.Image(toDB(RefinedLee(toNatural(before))))
var afterFiltered = ee.Image(toDB(RefinedLee(toNatural(after))))

var difference = afterFiltered.divide(beforeFiltered);

// Define a threshold
var diffThreshold = 1.25;
// Initial estimate of flooded pixels
var flooded = difference.gt(diffThreshold).rename('water').selfMask();

Map.centerObject(geometry, 10)
Map.addLayer(before, {min:-25,max:0}, 'Before Floods', false);
Map.addLayer(after, {min:-25,max:0}, 'After Floods', false); 
Map.addLayer(beforeFiltered, {min:-25,max:0}, 'Before Filtered', false);
Map.addLayer(afterFiltered, {min:-25,max:0}, 'After Filtered', false); 
Map.addLayer(flooded, {min:0, max:1, palette: ['orange']}, 'Initial Flood Area', false);


// Mask out area with permanent/semi-permanent water
var permanentWater = gsw.select('seasonality').gte(5).clip(geometry)
Map.addLayer(permanentWater.selfMask(), {min:0, max:1, palette: ['blue']}, 'Permanent Water', false)

// GSW data is masked in non-water areas. Set it to 0 using unmask()
// Invert the image to set all non-permanent regions to 1
var permanentWaterMask = permanentWater.unmask(0).not()

var flooded = flooded.updateMask(permanentWaterMask)
// Mask out areas with more than 5 percent slope using the HydroSHEDS DEM
var slopeThreshold = 5;
var terrain = ee.Algorithms.Terrain(hydrosheds);
var slope = terrain.select('slope');
var steepAreas = slope.gt(slopeThreshold)
var slopeMask = steepAreas.not()
Map.addLayer(steepAreas.selfMask(), {min:0, max:1, palette: ['cyan']}, 'Steep Areas', false)

var flooded = flooded.updateMask(slopeMask)

// Remove isolated pixels
// connectedPixelCount is Zoom dependent, so visual result will vary
var connectedPixelThreshold = 8;
var connections = flooded.connectedPixelCount(25)
var disconnectedAreas = connections.lt(connectedPixelThreshold)
var disconnectedAreasMask = disconnectedAreas.not()
Map.addLayer(disconnectedAreas.selfMask(), {min:0, max:1, palette: ['yellow']}, 'Disconnected Areas', false)

var flooded = flooded.updateMask(disconnectedAreasMask)

Map.addLayer(flooded, {min:0, max:1, palette: ['red']}, 'Flooded Areas');


//############################
// Speckle Filtering Functions
//############################

// Function to convert from dB
function toNatural(img) {
  return ee.Image(10.0).pow(img.select(0).divide(10.0));
}

//Function to convert to dB
function toDB(img) {
  return ee.Image(img).log10().multiply(10.0);
}

// Applying a Refined Lee Speckle filter as coded in the SNAP 3.0 S1TBX:
// https://github.com/senbox-org/s1tbx/blob/master/s1tbx-op-sar-processing/src/main/java/org/esa/s1tbx/sar/gpf/filtering/SpeckleFilters/RefinedLee.java
// Adapted by Guido Lemoine
function RefinedLee(img) {
  // img must be in natural units, i.e. not in dB!
  // Set up 3x3 kernels 
  var weights3 = ee.List.repeat(ee.List.repeat(1,3),3);
  var kernel3 = ee.Kernel.fixed(3,3, weights3, 1, 1, false);

  var mean3 = img.reduceNeighborhood(ee.Reducer.mean(), kernel3);
  var variance3 = img.reduceNeighborhood(ee.Reducer.variance(), kernel3);

  // Use a sample of the 3x3 windows inside a 7x7 windows to determine gradients and directions
  var sample_weights = ee.List([[0,0,0,0,0,0,0], [0,1,0,1,0,1,0],[0,0,0,0,0,0,0], [0,1,0,1,0,1,0], [0,0,0,0,0,0,0], [0,1,0,1,0,1,0],[0,0,0,0,0,0,0]]);

  var sample_kernel = ee.Kernel.fixed(7,7, sample_weights, 3,3, false);

  // Calculate mean and variance for the sampled windows and store as 9 bands
  var sample_mean = mean3.neighborhoodToBands(sample_kernel); 
  var sample_var = variance3.neighborhoodToBands(sample_kernel);

  // Determine the 4 gradients for the sampled windows
  var gradients = sample_mean.select(1).subtract(sample_mean.select(7)).abs();
  gradients = gradients.addBands(sample_mean.select(6).subtract(sample_mean.select(2)).abs());
  gradients = gradients.addBands(sample_mean.select(3).subtract(sample_mean.select(5)).abs());
  gradients = gradients.addBands(sample_mean.select(0).subtract(sample_mean.select(8)).abs());

  // And find the maximum gradient amongst gradient bands
  var max_gradient = gradients.reduce(ee.Reducer.max());

  // Create a mask for band pixels that are the maximum gradient
  var gradmask = gradients.eq(max_gradient);

  // duplicate gradmask bands: each gradient represents 2 directions
  gradmask = gradmask.addBands(gradmask);

  // Determine the 8 directions
  var directions = sample_mean.select(1).subtract(sample_mean.select(4)).gt(sample_mean.select(4).subtract(sample_mean.select(7))).multiply(1);
  directions = directions.addBands(sample_mean.select(6).subtract(sample_mean.select(4)).gt(sample_mean.select(4).subtract(sample_mean.select(2))).multiply(2));
  directions = directions.addBands(sample_mean.select(3).subtract(sample_mean.select(4)).gt(sample_mean.select(4).subtract(sample_mean.select(5))).multiply(3));
  directions = directions.addBands(sample_mean.select(0).subtract(sample_mean.select(4)).gt(sample_mean.select(4).subtract(sample_mean.select(8))).multiply(4));
  // The next 4 are the not() of the previous 4
  directions = directions.addBands(directions.select(0).not().multiply(5));
  directions = directions.addBands(directions.select(1).not().multiply(6));
  directions = directions.addBands(directions.select(2).not().multiply(7));
  directions = directions.addBands(directions.select(3).not().multiply(8));

  // Mask all values that are not 1-8
  directions = directions.updateMask(gradmask);

  // "collapse" the stack into a singe band image (due to masking, each pixel has just one value (1-8) in it's directional band, and is otherwise masked)
  directions = directions.reduce(ee.Reducer.sum());  

  //var pal = ['ffffff','ff0000','ffff00', '00ff00', '00ffff', '0000ff', 'ff00ff', '000000'];
  //Map.addLayer(directions.reduce(ee.Reducer.sum()), {min:1, max:8, palette: pal}, 'Directions', false);

  var sample_stats = sample_var.divide(sample_mean.multiply(sample_mean));

  // Calculate localNoiseVariance
  var sigmaV = sample_stats.toArray().arraySort().arraySlice(0,0,5).arrayReduce(ee.Reducer.mean(), [0]);

  // Set up the 7*7 kernels for directional statistics
  var rect_weights = ee.List.repeat(ee.List.repeat(0,7),3).cat(ee.List.repeat(ee.List.repeat(1,7),4));

  var diag_weights = ee.List([[1,0,0,0,0,0,0], [1,1,0,0,0,0,0], [1,1,1,0,0,0,0], 
    [1,1,1,1,0,0,0], [1,1,1,1,1,0,0], [1,1,1,1,1,1,0], [1,1,1,1,1,1,1]]);

  var rect_kernel = ee.Kernel.fixed(7,7, rect_weights, 3, 3, false);
  var diag_kernel = ee.Kernel.fixed(7,7, diag_weights, 3, 3, false);

  // Create stacks for mean and variance using the original kernels. Mask with relevant direction.
  var dir_mean = img.reduceNeighborhood(ee.Reducer.mean(), rect_kernel).updateMask(directions.eq(1));
  var dir_var = img.reduceNeighborhood(ee.Reducer.variance(), rect_kernel).updateMask(directions.eq(1));

  dir_mean = dir_mean.addBands(img.reduceNeighborhood(ee.Reducer.mean(), diag_kernel).updateMask(directions.eq(2)));
  dir_var = dir_var.addBands(img.reduceNeighborhood(ee.Reducer.variance(), diag_kernel).updateMask(directions.eq(2)));

  // and add the bands for rotated kernels
  for (var i=1; i<4; i++) {
    dir_mean = dir_mean.addBands(img.reduceNeighborhood(ee.Reducer.mean(), rect_kernel.rotate(i)).updateMask(directions.eq(2*i+1)));
    dir_var = dir_var.addBands(img.reduceNeighborhood(ee.Reducer.variance(), rect_kernel.rotate(i)).updateMask(directions.eq(2*i+1)));
    dir_mean = dir_mean.addBands(img.reduceNeighborhood(ee.Reducer.mean(), diag_kernel.rotate(i)).updateMask(directions.eq(2*i+2)));
    dir_var = dir_var.addBands(img.reduceNeighborhood(ee.Reducer.variance(), diag_kernel.rotate(i)).updateMask(directions.eq(2*i+2)));
  }

  // "collapse" the stack into a single band image (due to masking, each pixel has just one value in it's directional band, and is otherwise masked)
  dir_mean = dir_mean.reduce(ee.Reducer.sum());
  dir_var = dir_var.reduce(ee.Reducer.sum());

  // And finally generate the filtered value
  var varX = dir_var.subtract(dir_mean.multiply(dir_mean).multiply(sigmaV)).divide(sigmaV.add(1.0));

  var b = varX.divide(dir_var);

  var result = dir_mean.add(b.multiply(img.subtract(dir_mean)));
  return(result.arrayFlatten([['sum']]));
}

Open in Code Editor ↗

Exercise

We just learned to mask out permanent water using the GSW dataset. Now extract the cropland pixels from the ESA WorldCover dataset, use them to mask the flooded layer, and add the result to the map.

var admin2 = ee.FeatureCollection('FAO/GAUL_SIMPLIFIED_500m/2015/level2')
var hydrosheds = ee.Image('WWF/HydroSHEDS/03VFDEM')
var gsw = ee.Image('JRC/GSW1_3/GlobalSurfaceWater')

var beforeStart = '2018-07-15'
var beforeEnd = '2018-08-10'
var afterStart = '2018-08-10'
var afterEnd = '2018-08-23'

var ernakulam = admin2.filter(ee.Filter.eq('ADM2_NAME', 'Ernakulam'))
var geometry = ernakulam.geometry()
Map.addLayer(geometry, {color: 'grey'}, 'Ernakulam District')

var collection= ee.ImageCollection('COPERNICUS/S1_GRD')
  .filter(ee.Filter.eq('instrumentMode','IW'))
  .filter(ee.Filter.listContains('transmitterReceiverPolarisation', 'VH'))
  .filter(ee.Filter.eq('orbitProperties_pass', 'DESCENDING')) 
  .filter(ee.Filter.eq('resolution_meters',10))
  .filter(ee.Filter.bounds(geometry))
  .select('VH');

var beforeCollection = collection
  .filter(ee.Filter.date(beforeStart, beforeEnd))

var afterCollection = collection
  .filter(ee.Filter.date(afterStart, afterEnd))

var before = beforeCollection.mosaic().clip(geometry);
var after = afterCollection.mosaic().clip(geometry);

var beforeFiltered = ee.Image(toDB(RefinedLee(toNatural(before))))
var afterFiltered = ee.Image(toDB(RefinedLee(toNatural(after))))

var difference = afterFiltered.divide(beforeFiltered);

// Define a threshold
var diffThreshold = 1.25;
// Initial estimate of flooded pixels
var flooded = difference.gt(diffThreshold).rename('water').selfMask();

//Map.centerObject(geometry, 10)
Map.addLayer(before, {min:-25,max:0}, 'Before Floods', false);
Map.addLayer(after, {min:-25,max:0}, 'After Floods', false); 
Map.addLayer(beforeFiltered, {min:-25,max:0}, 'Before Filtered', false);
Map.addLayer(afterFiltered, {min:-25,max:0}, 'After Filtered', false); 
Map.addLayer(flooded, {min:0, max:1, palette: ['orange']}, 'Initial Flood Area', false);


// Mask out area with permanent/semi-permanent water
var permanentWater = gsw.select('seasonality').gte(5).clip(geometry)
Map.addLayer(permanentWater.selfMask(), {min:0, max:1, palette: ['blue']}, 'Permanent Water', false)

// GSW data is masked in non-water areas. Set it to 0 using unmask()
// Invert the image to set all non-permanent regions to 1
var permanentWaterMask = permanentWater.unmask(0).not()

var flooded = flooded.updateMask(permanentWaterMask)
// Mask out areas with more than 5 percent slope using the HydroSHEDS DEM
var slopeThreshold = 5;
var terrain = ee.Algorithms.Terrain(hydrosheds);
var slope = terrain.select('slope');
var steepAreas = slope.gt(slopeThreshold)
var slopeMask = steepAreas.not()
Map.addLayer(steepAreas.selfMask(), {min:0, max:1, palette: ['cyan']}, 'Steep Areas', false)

var flooded = flooded.updateMask(slopeMask)

// Remove isolated pixels
// connectedPixelCount is Zoom dependent, so visual result will vary
var connectedPixelThreshold = 8;
var connections = flooded.connectedPixelCount(25)
var disconnectedAreas = connections.lt(connectedPixelThreshold)
var disconnectedAreasMask = disconnectedAreas.not()
Map.addLayer(disconnectedAreas.selfMask(), {min:0, max:1, palette: ['yellow']}, 'Disconnected Areas', false)

var flooded = flooded.updateMask(disconnectedAreasMask)

Map.addLayer(flooded, {min:0, max:1, palette: ['red']}, 'Flooded Areas');

// Exercise
// Apply a Crop Mask to assess flood damage to crops
// Use the ESA WorldCover data to extract cropland
var dataset = ee.ImageCollection("ESA/WorldCover/v100").first();
var classification = dataset.select('Map').clip(geometry)
var cropland = classification.eq(40)
Map.addLayer(cropland.selfMask(), {min:0, max:1, palette: ['white', 'green']}, 'Cropland')

// Mask all non-cropland and display flooded regions
// Display the result on the map.


//############################
// Speckle Filtering Functions
//############################

// Function to convert from dB
function toNatural(img) {
  return ee.Image(10.0).pow(img.select(0).divide(10.0));
}

//Function to convert to dB
function toDB(img) {
  return ee.Image(img).log10().multiply(10.0);
}

// Applying a Refined Lee Speckle filter as coded in the SNAP 3.0 S1TBX:

//https://github.com/senbox-org/s1tbx/blob/master/s1tbx-op-sar-processing/src/main/java/org/esa/s1tbx/sar/gpf/filtering/SpeckleFilters/RefinedLee.java
//Adapted by Guido Lemoine

// by Guido Lemoine
function RefinedLee(img) {
  // img must be in natural units, i.e. not in dB!
  // Set up 3x3 kernels 
  var weights3 = ee.List.repeat(ee.List.repeat(1,3),3);
  var kernel3 = ee.Kernel.fixed(3,3, weights3, 1, 1, false);

  var mean3 = img.reduceNeighborhood(ee.Reducer.mean(), kernel3);
  var variance3 = img.reduceNeighborhood(ee.Reducer.variance(), kernel3);

  // Use a sample of the 3x3 windows inside a 7x7 window to determine gradients and directions
  var sample_weights = ee.List([[0,0,0,0,0,0,0], [0,1,0,1,0,1,0],[0,0,0,0,0,0,0], [0,1,0,1,0,1,0], [0,0,0,0,0,0,0], [0,1,0,1,0,1,0],[0,0,0,0,0,0,0]]);

  var sample_kernel = ee.Kernel.fixed(7,7, sample_weights, 3,3, false);

  // Calculate mean and variance for the sampled windows and store as 9 bands
  var sample_mean = mean3.neighborhoodToBands(sample_kernel); 
  var sample_var = variance3.neighborhoodToBands(sample_kernel);

  // Determine the 4 gradients for the sampled windows
  var gradients = sample_mean.select(1).subtract(sample_mean.select(7)).abs();
  gradients = gradients.addBands(sample_mean.select(6).subtract(sample_mean.select(2)).abs());
  gradients = gradients.addBands(sample_mean.select(3).subtract(sample_mean.select(5)).abs());
  gradients = gradients.addBands(sample_mean.select(0).subtract(sample_mean.select(8)).abs());

  // And find the maximum gradient amongst gradient bands
  var max_gradient = gradients.reduce(ee.Reducer.max());

  // Create a mask for band pixels that are the maximum gradient
  var gradmask = gradients.eq(max_gradient);

  // duplicate gradmask bands: each gradient represents 2 directions
  gradmask = gradmask.addBands(gradmask);

  // Determine the 8 directions
  var directions = sample_mean.select(1).subtract(sample_mean.select(4)).gt(sample_mean.select(4).subtract(sample_mean.select(7))).multiply(1);
  directions = directions.addBands(sample_mean.select(6).subtract(sample_mean.select(4)).gt(sample_mean.select(4).subtract(sample_mean.select(2))).multiply(2));
  directions = directions.addBands(sample_mean.select(3).subtract(sample_mean.select(4)).gt(sample_mean.select(4).subtract(sample_mean.select(5))).multiply(3));
  directions = directions.addBands(sample_mean.select(0).subtract(sample_mean.select(4)).gt(sample_mean.select(4).subtract(sample_mean.select(8))).multiply(4));
  // The next 4 are the not() of the previous 4
  directions = directions.addBands(directions.select(0).not().multiply(5));
  directions = directions.addBands(directions.select(1).not().multiply(6));
  directions = directions.addBands(directions.select(2).not().multiply(7));
  directions = directions.addBands(directions.select(3).not().multiply(8));

  // Mask all values that are not 1-8
  directions = directions.updateMask(gradmask);

  // "collapse" the stack into a singe band image (due to masking, each pixel has just one value (1-8) in it's directional band, and is otherwise masked)
  directions = directions.reduce(ee.Reducer.sum());  

  //var pal = ['ffffff','ff0000','ffff00', '00ff00', '00ffff', '0000ff', 'ff00ff', '000000'];
  //Map.addLayer(directions.reduce(ee.Reducer.sum()), {min:1, max:8, palette: pal}, 'Directions', false);

  var sample_stats = sample_var.divide(sample_mean.multiply(sample_mean));

  // Calculate localNoiseVariance
  var sigmaV = sample_stats.toArray().arraySort().arraySlice(0,0,5).arrayReduce(ee.Reducer.mean(), [0]);

  // Set up the 7*7 kernels for directional statistics
  var rect_weights = ee.List.repeat(ee.List.repeat(0,7),3).cat(ee.List.repeat(ee.List.repeat(1,7),4));

  var diag_weights = ee.List([[1,0,0,0,0,0,0], [1,1,0,0,0,0,0], [1,1,1,0,0,0,0], 
    [1,1,1,1,0,0,0], [1,1,1,1,1,0,0], [1,1,1,1,1,1,0], [1,1,1,1,1,1,1]]);

  var rect_kernel = ee.Kernel.fixed(7,7, rect_weights, 3, 3, false);
  var diag_kernel = ee.Kernel.fixed(7,7, diag_weights, 3, 3, false);

  // Create stacks for mean and variance using the original kernels. Mask with relevant direction.
  var dir_mean = img.reduceNeighborhood(ee.Reducer.mean(), rect_kernel).updateMask(directions.eq(1));
  var dir_var = img.reduceNeighborhood(ee.Reducer.variance(), rect_kernel).updateMask(directions.eq(1));

  dir_mean = dir_mean.addBands(img.reduceNeighborhood(ee.Reducer.mean(), diag_kernel).updateMask(directions.eq(2)));
  dir_var = dir_var.addBands(img.reduceNeighborhood(ee.Reducer.variance(), diag_kernel).updateMask(directions.eq(2)));

  // and add the bands for rotated kernels
  for (var i=1; i<4; i++) {
    dir_mean = dir_mean.addBands(img.reduceNeighborhood(ee.Reducer.mean(), rect_kernel.rotate(i)).updateMask(directions.eq(2*i+1)));
    dir_var = dir_var.addBands(img.reduceNeighborhood(ee.Reducer.variance(), rect_kernel.rotate(i)).updateMask(directions.eq(2*i+1)));
    dir_mean = dir_mean.addBands(img.reduceNeighborhood(ee.Reducer.mean(), diag_kernel.rotate(i)).updateMask(directions.eq(2*i+2)));
    dir_var = dir_var.addBands(img.reduceNeighborhood(ee.Reducer.variance(), diag_kernel.rotate(i)).updateMask(directions.eq(2*i+2)));
  }

  // "collapse" the stack into a single band image (due to masking, each pixel has just one value in it's directional band, and is otherwise masked)
  dir_mean = dir_mean.reduce(ee.Reducer.sum());
  dir_var = dir_var.reduce(ee.Reducer.sum());

  // And finally generate the filtered value
  var varX = dir_var.subtract(dir_mean.multiply(dir_mean).multiply(sigmaV)).divide(sigmaV.add(1.0));

  var b = varX.divide(dir_var);

  var result = dir_mean.add(b.multiply(img.subtract(dir_mean)));
  return(result.arrayFlatten([['sum']]));
}

Try in Code Editor ↗

06. Calculate Flooded Area

We visually inspected the flooded regions in the previous exercise. Now let us calculate the total flooded area in hectares and compute the percentage of the district area that was flooded. We have already discussed the technique for calculating areas in Earth Engine.

Flooded area

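The area arithmetic in the script below multiplies the binary flood image by ee.Image.pixelArea() and sums the result with a sum reducer. The same idea can be sketched in plain JavaScript (outside Earth Engine, with made-up pixel values) to show how the hectare figure is derived:

```javascript
// Plain-JavaScript sketch of the area computation (not Earth Engine code).
// Each element represents one pixel: a flooded flag (0/1) and the pixel
// area in square meters. A 10 m Sentinel-1 pixel covers roughly 100 m^2.
var pixels = [
  { flooded: 1, areaM2: 100 },
  { flooded: 0, areaM2: 100 },
  { flooded: 1, areaM2: 100 },
  { flooded: 1, areaM2: 100 }
];

// Equivalent of flooded.multiply(ee.Image.pixelArea()) followed by
// a reduceRegion() with ee.Reducer.sum()
var floodedAreaM2 = pixels.reduce(function(sum, p) {
  return sum + p.flooded * p.areaM2;
}, 0);

// Convert square meters to hectares (1 ha = 10,000 m^2)
var floodedAreaHa = floodedAreaM2 / 10000;
console.log(floodedAreaHa); // 0.03
```

In the real script, Earth Engine performs this sum over millions of pixels on the server, which is why reduceRegion() needs maxPixels and tileScale parameters.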

var admin2 = ee.FeatureCollection('FAO/GAUL_SIMPLIFIED_500m/2015/level2')
var hydrosheds = ee.Image('WWF/HydroSHEDS/03VFDEM')
var gsw = ee.Image('JRC/GSW1_3/GlobalSurfaceWater')

var beforeStart = '2018-07-15'
var beforeEnd = '2018-08-10'
var afterStart = '2018-08-10'
var afterEnd = '2018-08-23'

var ernakulam = admin2.filter(ee.Filter.eq('ADM2_NAME', 'Ernakulam'))
var geometry = ernakulam.geometry()
Map.addLayer(geometry, {color: 'grey'}, 'Ernakulam District')

var collection= ee.ImageCollection('COPERNICUS/S1_GRD')
  .filter(ee.Filter.eq('instrumentMode','IW'))
  .filter(ee.Filter.listContains('transmitterReceiverPolarisation', 'VH'))
  .filter(ee.Filter.eq('orbitProperties_pass', 'DESCENDING')) 
  .filter(ee.Filter.eq('resolution_meters',10))
  .filter(ee.Filter.bounds(geometry))
  .select('VH');

var beforeCollection = collection
  .filter(ee.Filter.date(beforeStart, beforeEnd))

var afterCollection = collection
  .filter(ee.Filter.date(afterStart, afterEnd))
  
var before = beforeCollection.mosaic().clip(geometry);
var after = afterCollection.mosaic().clip(geometry);

var beforeFiltered = ee.Image(toDB(RefinedLee(toNatural(before))))
var afterFiltered = ee.Image(toDB(RefinedLee(toNatural(after))))

var difference = afterFiltered.divide(beforeFiltered);

// Define a threshold
var diffThreshold = 1.25;
// Initial estimate of flooded pixels
var flooded = difference.gt(diffThreshold).rename('water').selfMask();

Map.centerObject(geometry, 10)
Map.addLayer(before, {min:-25,max:0}, 'Before Floods', false);
Map.addLayer(after, {min:-25,max:0}, 'After Floods', false); 
Map.addLayer(beforeFiltered, {min:-25,max:0}, 'Before Filtered', false);
Map.addLayer(afterFiltered, {min:-25,max:0}, 'After Filtered', false); 
Map.addLayer(flooded, {min:0, max:1, palette: ['orange']}, 'Initial Flood Area', false);

// Mask out area with permanent/semi-permanent water
var permanentWater = gsw.select('seasonality').gte(5).clip(geometry)
var permanentWaterMask = permanentWater.unmask(0).not()
var flooded = flooded.updateMask(permanentWaterMask)

// Mask out areas with more than 5 percent slope using the HydroSHEDS DEM
var slopeThreshold = 5;
var terrain = ee.Algorithms.Terrain(hydrosheds);
var slope = terrain.select('slope');
var steepAreas = slope.gt(slopeThreshold)
var slopeMask = steepAreas.not()
var flooded = flooded.updateMask(slopeMask)

// Remove isolated pixels
// connectedPixelCount is Zoom dependent, so visual result will vary
var connectedPixelThreshold = 8;
var connections = flooded.connectedPixelCount(25)
var disconnectedAreas = connections.lt(connectedPixelThreshold)
var disconnectedAreasMask = disconnectedAreas.not()
var flooded = flooded.updateMask(disconnectedAreasMask)

Map.addLayer(flooded, {min:0, max:1, palette: ['red']}, 'Flooded Areas');

// Calculate Affected Area
print('Total District Area (Ha)', geometry.area().divide(10000))

var stats = flooded.multiply(ee.Image.pixelArea()).reduceRegion({
  reducer: ee.Reducer.sum(),
  geometry: geometry,
  scale: 10,
  maxPixels: 1e10,
  tileScale: 16
})
print('Flooded Area (Ha)', ee.Number(stats.get('water')).divide(10000))

//############################
// Speckle Filtering Functions
//############################

// Function to convert from dB
function toNatural(img) {
  return ee.Image(10.0).pow(img.select(0).divide(10.0));
}

//Function to convert to dB
function toDB(img) {
  return ee.Image(img).log10().multiply(10.0);
}

// Applying a Refined Lee Speckle filter as coded in the SNAP 3.0 S1TBX:

//https://github.com/senbox-org/s1tbx/blob/master/s1tbx-op-sar-processing/src/main/java/org/esa/s1tbx/sar/gpf/filtering/SpeckleFilters/RefinedLee.java
//Adapted by Guido Lemoine

// by Guido Lemoine
function RefinedLee(img) {
  // img must be in natural units, i.e. not in dB!
  // Set up 3x3 kernels 
  var weights3 = ee.List.repeat(ee.List.repeat(1,3),3);
  var kernel3 = ee.Kernel.fixed(3,3, weights3, 1, 1, false);

  var mean3 = img.reduceNeighborhood(ee.Reducer.mean(), kernel3);
  var variance3 = img.reduceNeighborhood(ee.Reducer.variance(), kernel3);

  // Use a sample of the 3x3 windows inside a 7x7 window to determine gradients and directions
  var sample_weights = ee.List([[0,0,0,0,0,0,0], [0,1,0,1,0,1,0],[0,0,0,0,0,0,0], [0,1,0,1,0,1,0], [0,0,0,0,0,0,0], [0,1,0,1,0,1,0],[0,0,0,0,0,0,0]]);

  var sample_kernel = ee.Kernel.fixed(7,7, sample_weights, 3,3, false);

  // Calculate mean and variance for the sampled windows and store as 9 bands
  var sample_mean = mean3.neighborhoodToBands(sample_kernel); 
  var sample_var = variance3.neighborhoodToBands(sample_kernel);

  // Determine the 4 gradients for the sampled windows
  var gradients = sample_mean.select(1).subtract(sample_mean.select(7)).abs();
  gradients = gradients.addBands(sample_mean.select(6).subtract(sample_mean.select(2)).abs());
  gradients = gradients.addBands(sample_mean.select(3).subtract(sample_mean.select(5)).abs());
  gradients = gradients.addBands(sample_mean.select(0).subtract(sample_mean.select(8)).abs());

  // And find the maximum gradient amongst gradient bands
  var max_gradient = gradients.reduce(ee.Reducer.max());

  // Create a mask for band pixels that are the maximum gradient
  var gradmask = gradients.eq(max_gradient);

  // duplicate gradmask bands: each gradient represents 2 directions
  gradmask = gradmask.addBands(gradmask);

  // Determine the 8 directions
  var directions = sample_mean.select(1).subtract(sample_mean.select(4)).gt(sample_mean.select(4).subtract(sample_mean.select(7))).multiply(1);
  directions = directions.addBands(sample_mean.select(6).subtract(sample_mean.select(4)).gt(sample_mean.select(4).subtract(sample_mean.select(2))).multiply(2));
  directions = directions.addBands(sample_mean.select(3).subtract(sample_mean.select(4)).gt(sample_mean.select(4).subtract(sample_mean.select(5))).multiply(3));
  directions = directions.addBands(sample_mean.select(0).subtract(sample_mean.select(4)).gt(sample_mean.select(4).subtract(sample_mean.select(8))).multiply(4));
  // The next 4 are the not() of the previous 4
  directions = directions.addBands(directions.select(0).not().multiply(5));
  directions = directions.addBands(directions.select(1).not().multiply(6));
  directions = directions.addBands(directions.select(2).not().multiply(7));
  directions = directions.addBands(directions.select(3).not().multiply(8));

  // Mask all values that are not 1-8
  directions = directions.updateMask(gradmask);

  // "collapse" the stack into a singe band image (due to masking, each pixel has just one value (1-8) in it's directional band, and is otherwise masked)
  directions = directions.reduce(ee.Reducer.sum());  

  //var pal = ['ffffff','ff0000','ffff00', '00ff00', '00ffff', '0000ff', 'ff00ff', '000000'];
  //Map.addLayer(directions.reduce(ee.Reducer.sum()), {min:1, max:8, palette: pal}, 'Directions', false);

  var sample_stats = sample_var.divide(sample_mean.multiply(sample_mean));

  // Calculate localNoiseVariance
  var sigmaV = sample_stats.toArray().arraySort().arraySlice(0,0,5).arrayReduce(ee.Reducer.mean(), [0]);

  // Set up the 7*7 kernels for directional statistics
  var rect_weights = ee.List.repeat(ee.List.repeat(0,7),3).cat(ee.List.repeat(ee.List.repeat(1,7),4));

  var diag_weights = ee.List([[1,0,0,0,0,0,0], [1,1,0,0,0,0,0], [1,1,1,0,0,0,0], 
    [1,1,1,1,0,0,0], [1,1,1,1,1,0,0], [1,1,1,1,1,1,0], [1,1,1,1,1,1,1]]);

  var rect_kernel = ee.Kernel.fixed(7,7, rect_weights, 3, 3, false);
  var diag_kernel = ee.Kernel.fixed(7,7, diag_weights, 3, 3, false);

  // Create stacks for mean and variance using the original kernels. Mask with relevant direction.
  var dir_mean = img.reduceNeighborhood(ee.Reducer.mean(), rect_kernel).updateMask(directions.eq(1));
  var dir_var = img.reduceNeighborhood(ee.Reducer.variance(), rect_kernel).updateMask(directions.eq(1));

  dir_mean = dir_mean.addBands(img.reduceNeighborhood(ee.Reducer.mean(), diag_kernel).updateMask(directions.eq(2)));
  dir_var = dir_var.addBands(img.reduceNeighborhood(ee.Reducer.variance(), diag_kernel).updateMask(directions.eq(2)));

  // and add the bands for rotated kernels
  for (var i=1; i<4; i++) {
    dir_mean = dir_mean.addBands(img.reduceNeighborhood(ee.Reducer.mean(), rect_kernel.rotate(i)).updateMask(directions.eq(2*i+1)));
    dir_var = dir_var.addBands(img.reduceNeighborhood(ee.Reducer.variance(), rect_kernel.rotate(i)).updateMask(directions.eq(2*i+1)));
    dir_mean = dir_mean.addBands(img.reduceNeighborhood(ee.Reducer.mean(), diag_kernel.rotate(i)).updateMask(directions.eq(2*i+2)));
    dir_var = dir_var.addBands(img.reduceNeighborhood(ee.Reducer.variance(), diag_kernel.rotate(i)).updateMask(directions.eq(2*i+2)));
  }

  // "collapse" the stack into a single band image (due to masking, each pixel has just one value in it's directional band, and is otherwise masked)
  dir_mean = dir_mean.reduce(ee.Reducer.sum());
  dir_var = dir_var.reduce(ee.Reducer.sum());

  // And finally generate the filtered value
  var varX = dir_var.subtract(dir_mean.multiply(dir_mean).multiply(sigmaV)).divide(sigmaV.add(1.0));

  var b = varX.divide(dir_var);

  var result = dir_mean.add(b.multiply(img.subtract(dir_mean)));
  return(result.arrayFlatten([['sum']]));
}

Open in Code Editor ↗

Exercise

Printing results in the console may time out for larger computations. Convert the flooded area value to a FeatureCollection and export the area as a CSV file. The technique for exporting numeric values is discussed in Exporting Classified Result.

// Create a FeatureCollection containing the flooded area value
// Export the FeatureCollection as a CSV

Try in Code Editor ↗

06 Drought Monitoring

There are many definitions of drought, depending on the context. Monitoring and quantifying the impact of drought at large scales is quite tedious without remote sensing data. In this module, we will learn to calculate the impact of hydrological drought using a MODIS data product.

View Presentation

View the Presentation ↗

Apart from hydrological drought, another common type is meteorological drought. Tracking meteorological drought requires calculating the rainfall deviation from the long-term normal. To know more about it, visit the Spatial Thoughts YouTube channel.
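Rainfall deviation is the percentage departure of observed rainfall from the long-term normal. A minimal plain-JavaScript sketch of that formula (the numbers are illustrative, not from any dataset):

```javascript
// Percentage departure of observed rainfall from the long-term normal.
// Negative values indicate a rainfall deficit for the period.
function rainfallDeviation(observedMm, normalMm) {
  return ((observedMm - normalMm) / normalMm) * 100;
}

// A season with 600 mm of rain against an 800 mm normal is 25% deficient.
console.log(rainfallDeviation(600, 800)); // -25
```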

01. Load MODIS NDVI

MODIS (Moderate Resolution Imaging Spectroradiometer) offers many derived products in addition to the raw imagery captured every day. The instrument flies on two satellites, Terra and Aqua. The raw images from these satellites are used to produce different data products categorized as Atmosphere, Land, Cryosphere, and Ocean products. We will use the 16-day composite NDVI data at 250m resolution for this exercise. MODIS products also follow a consistent naming pattern, which helps identify the data.

The product which we use in this exercise is MOD13Q1.006.

| Name | Meaning |
|------|---------|
| MOD  | Terra sensor |
| 13Q1 | Vegetation Index at 250m |
| .006 | Version number (use the latest version available at the time of computation) |

Load the Terra Vegetation Indices collection at 250m resolution and filter it from 2010 to 2020. Each image has 12 bands: NDVI and EVI are the two vegetation index bands, and the other bands contain additional pixel-level information about the data. NDVI is used to monitor vegetation, but over very thick vegetation such as dense forest the NDVI value saturates and no helpful information can be derived; EVI is used to track and monitor thick vegetation cover.
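The NDVI band in this product is pre-computed, but the underlying formula is just the normalized difference of the near-infrared and red reflectances. A plain-JavaScript illustration (the reflectance values are made up):

```javascript
// NDVI = (NIR - Red) / (NIR + Red), with reflectances in [0, 1].
// Healthy vegetation reflects strongly in NIR and absorbs Red,
// pushing NDVI towards 1; water typically yields negative values.
function ndvi(nir, red) {
  return (nir - red) / (nir + red);
}

console.log(ndvi(0.5, 0.1).toFixed(2)); // 0.67
```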

var modis = ee.ImageCollection("MODIS/006/MOD13Q1");

var startYear = 2010
var endYear = 2020
var startDate = ee.Date.fromYMD(startYear, 1, 1)
var endDate = ee.Date.fromYMD(endYear, 12, 31)
var filtered = modis
  .filter(ee.Filter.date(startDate, endDate))

var modisNDVI = filtered.select('NDVI')

print(modisNDVI)

Open in Code Editor ↗

Exercise

Add the first image to the map by visualizing it using a suitable color palette.

var modis = ee.ImageCollection("MODIS/006/MOD13Q1");

var startYear = 2010
var endYear = 2020
var startDate = ee.Date.fromYMD(startYear, 1, 1)
var endDate = ee.Date.fromYMD(endYear, 12, 31)
var filtered = modis
  .filter(ee.Filter.date(startDate, endDate))

var modisNDVI = filtered.select('NDVI')

print(modisNDVI)

// Exercise
// Extract 1 image from the collection using the .first() function
// Visualize and add it to the Map
var image = modisNDVI.first()

Map.addLayer(image, {}, 'MODIS NDVI') 

Try in Code Editor ↗

02. Apply Cloud Mask

Every image also has a SummaryQA band, which contains pixel-level information about the quality of the data. We want to keep only the pixels with bit values 0 or 1. We will use .leftShift() to create the bitmask for selecting good-quality data.

Bitmasking itself is a complex topic of discussion. Read the blog on Working with QA Bands and Bitmasks to know more about it.
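The bitwiseExtract() function in the script below builds a mask with .leftShift() and applies it after shifting the input right. The same arithmetic on plain numbers (rather than ee.Image objects) shows what is happening:

```javascript
// Extract the bits from fromBit to toBit (inclusive) of a QA value.
// Mirrors the logic of the Earth Engine bitwiseExtract() function,
// but operates on plain numbers for illustration.
function bitwiseExtract(input, fromBit, toBit) {
  var maskSize = toBit - fromBit + 1;
  var mask = (1 << maskSize) - 1;   // e.g. 2 bits -> binary 11 -> 3
  return (input >> fromBit) & mask;
}

// SummaryQA stores its value in bits 0-1, so for a QA value of 2
// (pixel covered with snow/ice) we extract 2, which fails the <= 1 test.
console.log(bitwiseExtract(2, 0, 1));      // 2
console.log(bitwiseExtract(2, 0, 1) <= 1); // false
```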

/**** Start of imports. If edited, may not auto-convert in the playground. ****/
var modis = ee.ImageCollection("MODIS/006/MOD13Q1");
/***** End of imports. If edited, may not auto-convert in the playground. *****/
var startYear = 2010
var endYear = 2020
var startDate = ee.Date.fromYMD(startYear, 1, 1)
var endDate = ee.Date.fromYMD(endYear, 12, 31)
var filtered = modis
  .filter(ee.Filter.date(startDate, endDate))


// Apply mask to remove low-quality or cloudy pixels
// The data comes with a SummaryQA band
// The values are stored in 2 bits - resulting in 4 possible values
// 0: Good data, use with confidence
// 1: Marginal data, useful but look at detailed QA for more information
// 2: Pixel covered with snow/ice
// 3: Pixel is cloudy
// We want to keep only the pixels with bit values 0 or 1


var bitwiseExtract = function(input, fromBit, toBit) {
  var maskSize = ee.Number(1).add(toBit).subtract(fromBit)
  var mask = ee.Number(1).leftShift(maskSize).subtract(1)
  return input.rightShift(fromBit).bitwiseAnd(mask)
}

var maskSnowAndClouds = function(image) {
  var summaryQa = image.select('SummaryQA')
  // Select pixels which are less than or equal to 1 (0 or 1)
  var qaMask = bitwiseExtract(summaryQa, 0, 1).lte(1)
  var maskedImage = image.updateMask(qaMask)
  return maskedImage.copyProperties(image, ['system:index', 'system:time_start'])
}

// Test the function
var image = ee.Image(filtered.first())
var imageMasked = ee.Image(maskSnowAndClouds(image))

Map.addLayer(image, {}, 'MODIS NDVI') 
Map.addLayer(imageMasked, {}, 'MODIS NDVI (Masked)') 

Open in Code Editor ↗

Exercise

Use the cloud masking function on the filtered collection to keep only good-quality data.

// Apply cloud mask to all images in the filtered collection

Try in Code Editor ↗

03. Extract and Scale NDVI Band

The NDVI value is a decimal number between -1 and +1, but in GEE each MOD13Q1 NDVI pixel is stored as an integer: the true value multiplied by 10,000. This is done to use storage effectively, since an integer takes less space than a floating-point number. Before using these pixel values for analysis, we divide each pixel by the same scale factor to recover the original value. The scale factor for each band is listed in the Data Catalog under the Bands section.
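The scaling itself is just a division, mirroring the image.divide(10000) call in the script below. A plain-JavaScript sketch with an illustrative stored value:

```javascript
// MOD13Q1 stores NDVI as an integer (true NDVI x 10000) to save storage.
// Dividing by 10000 (i.e. applying the scale factor) recovers the
// physical NDVI value.
function scaleNdvi(storedValue) {
  return storedValue / 10000;
}

console.log(scaleNdvi(8500)); // 0.85
```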

To scale each image, we map a function over the image collection and compute the new value for each image. Any computation on an image results in a new image that carries none of the original metadata, such as system:time_start, without which time-series charting is impossible. So, before returning the image from the function, we carry over the required properties using the .copyProperties() function.

var modis = ee.ImageCollection("MODIS/006/MOD13Q1");

var startYear = 2010
var endYear = 2020
var startDate = ee.Date.fromYMD(startYear, 1, 1)
var endDate = ee.Date.fromYMD(endYear, 12, 31)
var filtered = modis
  .filter(ee.Filter.date(startDate, endDate))

// Cloud Masking
var bitwiseExtract = function(input, fromBit, toBit) {
  var maskSize = ee.Number(1).add(toBit).subtract(fromBit)
  var mask = ee.Number(1).leftShift(maskSize).subtract(1)
  return input.rightShift(fromBit).bitwiseAnd(mask)
}

var maskSnowAndClouds = function(image) {
  var summaryQa = image.select('SummaryQA')
  // Select pixels which are less than or equal to 1 (0 or 1)
  var qaMask = bitwiseExtract(summaryQa, 0, 1).lte(1)
  var maskedImage = image.updateMask(qaMask)
  return maskedImage.copyProperties(image, ['system:index', 'system:time_start'])
}

// Apply the function to all images in the collection
var maskedCol = filtered.map(maskSnowAndClouds)

var ndviCol = maskedCol.select('NDVI')
// MODIS NDVI values come as NDVI x 10000 that need to be scaled by 0.0001
var ndviScaled = ndviCol.map(function(image) {
  var scaled = image.divide(10000)
  return scaled.copyProperties(image, ['system:index', 'system:time_start'])
});

print(ndviScaled);

Open in Code Editor ↗

Exercise

Add the scaled image to the map canvas using a suitable visualization parameter. Inspect any pixel and check the NDVI value.

// Extract 1 image from the ndviScaled collection
// Visualize it and add it to the map

Try in Code Editor ↗

04. Create Monthly Images

The MODIS Vegetation Indices product is a 16-day composite, so its images do not line up with calendar months. Let's create one image per month by aggregating all the images within each month into a mean composite. To filter the collection at the year, month, or day level, use the ee.Filter.calendarRange() function.

We use a nested map: the outer function maps over the years, and the inner function maps over the months. The inner map returns a list of monthly images, so the outer map produces a list of lists. Since we want a simple list of images, we call .flatten() on the result of the outer map to collapse the nested list into a single flat list.

For an 11-year time series, we will create a final output of 132 (11 x 12) images.
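The nested map and flatten pattern behaves just like nested array mapping in plain JavaScript, which can be a helpful mental model (Array.prototype.flat() plays the role of ee.List.flatten() here):

```javascript
// Outer map over years, inner map over months -> a nested array.
// flat() collapses it into a single list of year-month labels,
// mirroring what .flatten() does to the nested ee.List of images.
var years = [2010, 2011]; // shortened; the script uses 2010-2020
var months = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12];

var nested = years.map(function(year) {
  return months.map(function(month) {
    return year + '-' + month;
  });
});

var flat = nested.flat();
console.log(nested.length); // 2
console.log(flat.length);   // 24 (with 2010-2020, 11 x 12 = 132)
```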

var modis = ee.ImageCollection("MODIS/006/MOD13Q1");

var startYear = 2010
var endYear = 2020
var startDate = ee.Date.fromYMD(startYear, 1, 1)
var endDate = ee.Date.fromYMD(endYear, 12, 31)
var filtered = modis
  .filter(ee.Filter.date(startDate, endDate))

// Cloud Masking
var bitwiseExtract = function(input, fromBit, toBit) {
  var maskSize = ee.Number(1).add(toBit).subtract(fromBit)
  var mask = ee.Number(1).leftShift(maskSize).subtract(1)
  return input.rightShift(fromBit).bitwiseAnd(mask)
}

var maskSnowAndClouds = function(image) {
  var summaryQa = image.select('SummaryQA')
  // Select pixels which are less than or equal to 1 (0 or 1)
  var qaMask = bitwiseExtract(summaryQa, 0, 1).lte(1)
  var maskedImage = image.updateMask(qaMask)
  return maskedImage.copyProperties(image, ['system:index', 'system:time_start'])
}

// Apply the function to all images in the collection
var maskedCol = filtered.map(maskSnowAndClouds)

var ndviCol = maskedCol.select('NDVI')
// MODIS NDVI values come as NDVI x 10000 and need to be scaled by 0.0001
var ndviScaled = ndviCol.map(function(image) {
  var scaled = image.divide(10000)
  return scaled.copyProperties(image, ['system:index', 'system:time_start'])
});

print(ndviScaled);


// Introducing calendarRange()
var mayImages = ndviScaled
  .filter(ee.Filter.calendarRange(5, 5, 'month'))
print(mayImages)


// Introducing flatten()

var nestedList = ee.List([['a', 'b'], ['c', 'd'], ['e', 'f']])
print(nestedList)
print(nestedList.flatten())


// Create NDVI composite for every month
var years = ee.List.sequence(startYear,endYear);
var months = ee.List.sequence(1, 12);

// Map over the years and create a monthly average collection
var monthlyImages = years.map(function(year) {
  return months.map(function(month) {
    var filtered = ndviScaled
      .filter(ee.Filter.calendarRange(year, year, 'year'))
      .filter(ee.Filter.calendarRange(month, month, 'month'))
    var monthly = filtered.mean()
    return monthly.set({'month': month, 'year': year})
  })
}).flatten()

// We now have 1 image per month for the entire duration
var monthlyCol = ee.ImageCollection.fromImages(monthlyImages)

print(monthlyCol)

Open in Code Editor ↗

Exercise

The final output is a list of images. Convert this list into an ImageCollection.

// Create an ImageCollection from the above list of images

Try in Code Editor ↗

05. Calculate Min Max Composites

To compute the VCI, we need each calendar month's minimum and maximum NDVI values across all years. We apply the min() and max() reducers to the monthly collection to create these minimum and maximum NDVI composites.
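As a plain-JavaScript analogy with made-up NDVI numbers (not the Earth Engine API), the per-month minimum across years amounts to:

```javascript
// For one calendar month, gather the values from every year and take
// the minimum - mirroring monthlyCol.filter(ee.Filter.eq('month', m)).min()
var records = [
  {year: 2019, month: 5, ndvi: 0.31},
  {year: 2020, month: 5, ndvi: 0.27},
  {year: 2019, month: 6, ndvi: 0.40},
  {year: 2020, month: 6, ndvi: 0.45}
];

function monthlyMinNdvi(records, month) {
  var values = records
    .filter(function(r) { return r.month === month; })
    .map(function(r) { return r.ndvi; });
  return Math.min.apply(null, values);
}

console.log(monthlyMinNdvi(records, 5)); // 0.27 (the May minimum across both years)
```

In Earth Engine, the min() reducer does this independently for every pixel of every image in the filtered collection.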

var modis = ee.ImageCollection("MODIS/006/MOD13Q1");

var startYear = 2010
var endYear = 2020
var startDate = ee.Date.fromYMD(startYear, 1, 1)
var endDate = ee.Date.fromYMD(endYear, 12, 31)
var filtered = modis
  .filter(ee.Filter.date(startDate, endDate))

// Cloud Masking
var bitwiseExtract = function(input, fromBit, toBit) {
  var maskSize = ee.Number(1).add(toBit).subtract(fromBit)
  var mask = ee.Number(1).leftShift(maskSize).subtract(1)
  return input.rightShift(fromBit).bitwiseAnd(mask)
}

var maskSnowAndClouds = function(image) {
  var summaryQa = image.select('SummaryQA')
  // Select pixels which are less than or equal to 1 (0 or 1)
  var qaMask = bitwiseExtract(summaryQa, 0, 1).lte(1)
  var maskedImage = image.updateMask(qaMask)
  return maskedImage.copyProperties(image, ['system:index', 'system:time_start'])
}

// Apply the function to all images in the collection
var maskedCol = filtered.map(maskSnowAndClouds)

var ndviCol = maskedCol.select('NDVI')
// MODIS NDVI values come as NDVI x 10000 and need to be scaled by 0.0001
var ndviScaled = ndviCol.map(function(image) {
  var scaled = image.divide(10000)
  return scaled.copyProperties(image, ['system:index', 'system:time_start'])
});

// Create NDVI composite for every month
var years = ee.List.sequence(startYear,endYear);
var months = ee.List.sequence(1, 12);

// Map over the years and create a monthly average collection
var monthlyImages = years.map(function(year) {
  return months.map(function(month) {
    var filtered = ndviScaled
      .filter(ee.Filter.calendarRange(year, year, 'year'))
      .filter(ee.Filter.calendarRange(month, month, 'month'))
    var monthly = filtered.mean()
    return monthly.set({'month': month, 'year': year})
  })
}).flatten()

// We now have 1 image per month for the entire duration
var monthlyCol = ee.ImageCollection.fromImages(monthlyImages)


// We can compute Minimum NDVI for each month across all years
// i.e. Minimum NDVI for all May months in the collection
var monthlyMinImages = months.map(function(month) {
    var filtered = monthlyCol.filter(ee.Filter.eq('month', month))
    var monthlyMin = filtered.min()
    return monthlyMin.set('month', month)
})
var monthlyMin = ee.ImageCollection.fromImages(monthlyMinImages)
print(monthlyMin)
// We can compute Maximum NDVI for each month across all years
// i.e. Maximum NDVI for all May months in the collection
var monthlyMaxImages = months.map(function(month) {
    var filtered = monthlyCol.filter(ee.Filter.eq('month', month))
    var monthlyMax = filtered.max()
    return monthlyMax.set('month', month)
})
var monthlyMax = ee.ImageCollection.fromImages(monthlyMaxImages)
print(monthlyMax)

Open in Code Editor ↗

Exercise

Add the min and max NDVI images for the month of May to the map canvas using suitable visualization parameters.

// Extract the May (month 5) images from the monthlyMin and monthlyMax collections
// Visualize them and add them to the map

Try in Code Editor ↗

06. Calculate VCI

The Vegetation Condition Index (VCI) is a drought monitoring index that compares the current NDVI value at a location against the historical minimum and maximum NDVI values for that location to identify whether the area is experiencing drought. As recommended by the UN-SPIDER Drought Monitoring Recommended Practice, a VCI value of less than 40% can be considered drought.

VCI Formula: VCI = 100 * (NDVI - NDVI_min) / (NDVI_max - NDVI_min)
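A quick plain-JavaScript check of the formula with made-up single-pixel values (the Earth Engine code below applies the same expression to every pixel):

```javascript
// VCI = 100 * (NDVI - min) / (max - min)
function vci(ndvi, min, max) {
  return 100 * (ndvi - min) / (max - min);
}

// Values chosen to be exactly representable in floating point
console.log(vci(0.5, 0.25, 0.75)); // 50: current NDVI is halfway between min and max
```

A result of 50 is above the 40% drought threshold mentioned above.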

var modis = ee.ImageCollection("MODIS/006/MOD13Q1");

var startYear = 2010
var endYear = 2020
var startDate = ee.Date.fromYMD(startYear, 1, 1)
var endDate = ee.Date.fromYMD(endYear, 12, 31)
var filtered = modis
  .filter(ee.Filter.date(startDate, endDate))

// Cloud Masking
var bitwiseExtract = function(input, fromBit, toBit) {
  var maskSize = ee.Number(1).add(toBit).subtract(fromBit)
  var mask = ee.Number(1).leftShift(maskSize).subtract(1)
  return input.rightShift(fromBit).bitwiseAnd(mask)
}

var maskSnowAndClouds = function(image) {
  var summaryQa = image.select('SummaryQA')
  // Select pixels which are less than or equal to 1 (0 or 1)
  var qaMask = bitwiseExtract(summaryQa, 0, 1).lte(1)
  var maskedImage = image.updateMask(qaMask)
  return maskedImage.copyProperties(image, ['system:index', 'system:time_start'])
}

// Apply the function to all images in the collection
var maskedCol = filtered.map(maskSnowAndClouds)

var ndviCol = maskedCol.select('NDVI')
// MODIS NDVI values come as NDVI x 10000 and need to be scaled by 0.0001
var scaleNdvi = function(image) {
  var scaled = image.divide(10000)
  return scaled.copyProperties(image, ['system:index', 'system:time_start'])
};

var ndviScaled = ndviCol.map(scaleNdvi)

// Create NDVI composite for every month
var years = ee.List.sequence(startYear,endYear);
var months = ee.List.sequence(1, 12);

// Map over the years and create a monthly average collection
var monthlyImages = years.map(function(year) {
  return months.map(function(month) {
    var filtered = ndviScaled
      .filter(ee.Filter.calendarRange(year, year, 'year'))
      .filter(ee.Filter.calendarRange(month, month, 'month'))
    var monthly = filtered.mean()
    return monthly.set({'month': month, 'year': year})
  })
}).flatten()

// We now have 1 image per month for the entire duration
var monthlyCol = ee.ImageCollection.fromImages(monthlyImages)


// We can compute Minimum NDVI for each month across all years
// i.e. Minimum NDVI for all May months in the collection
var monthlyMinImages = months.map(function(month) {
    var filtered = monthlyCol.filter(ee.Filter.eq('month', month))
    var monthlyMin = filtered.min()
    return monthlyMin.set('month', month)
})
var monthlyMin = ee.ImageCollection.fromImages(monthlyMinImages)

// We can compute Maximum NDVI for each month across all years
// i.e. Maximum NDVI for all May months in the collection
var monthlyMaxImages = months.map(function(month) {
    var filtered = monthlyCol.filter(ee.Filter.eq('month', month))
    var monthlyMax = filtered.max()
    return monthlyMax.set('month', month)
})
var monthlyMax = ee.ImageCollection.fromImages(monthlyMaxImages)

// Calculate VCI for 2020

// We are interested in calculating VCI for a particular month
var currentYear = 2020
var currentMonth = 5

var filtered = monthlyCol
  .filter(ee.Filter.eq('year', currentYear))
  .filter(ee.Filter.eq('month', currentMonth))
var currentMonthNdvi = ee.Image(filtered.first())

var minNdvi = ee.Image(monthlyMin.filter(ee.Filter.eq('month', currentMonth)).first())
var maxNdvi = ee.Image(monthlyMax.filter(ee.Filter.eq('month', currentMonth)).first())

// VCI = (NDVI - min) / (max - min)
var image = ee.Image.cat([currentMonthNdvi, minNdvi, maxNdvi]).rename(['ndvi', 'min', 'max'])
var vci = image.expression('100* (ndvi - min) / (max - min)',
    {'ndvi': image.select('ndvi'),
      'min': image.select('min'),
      'max': image.select('max')
    }).rename('vci')


var visParams = {min: 0, max: 1, palette: ['white', 'green']}
var vciPalette = ['#a50026','#d73027','#f46d43','#fdae61',
  '#fee08b','#d9ef8b','#a6d96a','#66bd63','#1a9850','#006837'];
var vciVisParams = {min: 0, max: 100, palette: vciPalette}
Map.addLayer(minNdvi, visParams, 'Minimum May NDVI', false)  
Map.addLayer(maxNdvi, visParams, 'Maximum May NDVI', false)
Map.addLayer(currentMonthNdvi, visParams, 'Current May NDVI', false)
Map.addLayer(vci, vciVisParams, 'VCI')

Open in Code Editor ↗

Exercise

Since VCI uses NDVI to measure drought impact, values from non-cropland pixels are less meaningful. So, mask out all pixels that are not cropland according to the Global Food Security-support Analysis Data (GFSAD).

var modis = ee.ImageCollection("MODIS/006/MOD13Q1");

var startYear = 2010
var endYear = 2020
var startDate = ee.Date.fromYMD(startYear, 1, 1)
var endDate = ee.Date.fromYMD(endYear, 12, 31)
var filtered = modis
  .filter(ee.Filter.date(startDate, endDate))

// Cloud Masking
var bitwiseExtract = function(input, fromBit, toBit) {
  var maskSize = ee.Number(1).add(toBit).subtract(fromBit)
  var mask = ee.Number(1).leftShift(maskSize).subtract(1)
  return input.rightShift(fromBit).bitwiseAnd(mask)
}

var maskSnowAndClouds = function(image) {
  var summaryQa = image.select('SummaryQA')
  // Select pixels which are less than or equal to 1 (0 or 1)
  var qaMask = bitwiseExtract(summaryQa, 0, 1).lte(1)
  var maskedImage = image.updateMask(qaMask)
  return maskedImage.copyProperties(image, ['system:index', 'system:time_start'])
}

// Apply the function to all images in the collection
var maskedCol = filtered.map(maskSnowAndClouds)

var ndviCol = maskedCol.select('NDVI')
// MODIS NDVI values come as NDVI x 10000 and need to be scaled by 0.0001
var scaleNdvi = function(image) {
  var scaled = image.divide(10000)
  return scaled.copyProperties(image, ['system:index', 'system:time_start'])
};

var ndviScaled = ndviCol.map(scaleNdvi)

// Create NDVI composite for every month
var years = ee.List.sequence(startYear,endYear);
var months = ee.List.sequence(1, 12);

// Map over the years and create a monthly average collection
var monthlyImages = years.map(function(year) {
  return months.map(function(month) {
    var filtered = ndviScaled
      .filter(ee.Filter.calendarRange(year, year, 'year'))
      .filter(ee.Filter.calendarRange(month, month, 'month'))
    var monthly = filtered.mean()
    return monthly.set({'month': month, 'year': year})
  })
}).flatten()

// We now have 1 image per month for the entire duration
var monthlyCol = ee.ImageCollection.fromImages(monthlyImages)


// We can compute Minimum NDVI for each month across all years
// i.e. Minimum NDVI for all May months in the collection
var monthlyMinImages = months.map(function(month) {
    var filtered = monthlyCol.filter(ee.Filter.eq('month', month))
    var monthlyMin = filtered.min()
    return monthlyMin.set('month', month)
})
var monthlyMin = ee.ImageCollection.fromImages(monthlyMinImages)

// We can compute Maximum NDVI for each month across all years
// i.e. Maximum NDVI for all May months in the collection
var monthlyMaxImages = months.map(function(month) {
    var filtered = monthlyCol.filter(ee.Filter.eq('month', month))
    var monthlyMax = filtered.max()
    return monthlyMax.set('month', month)
})
var monthlyMax = ee.ImageCollection.fromImages(monthlyMaxImages)

// Calculate VCI for 2020

// We are interested in calculating VCI for a particular month
var currentYear = 2020
var currentMonth = 5

var filtered = monthlyCol
  .filter(ee.Filter.eq('year', currentYear))
  .filter(ee.Filter.eq('month', currentMonth))
var currentMonthNdvi = ee.Image(filtered.first())

var minNdvi = ee.Image(monthlyMin.filter(ee.Filter.eq('month', currentMonth)).first())
var maxNdvi = ee.Image(monthlyMax.filter(ee.Filter.eq('month', currentMonth)).first())

// VCI = (NDVI - min) / (max - min)
var image = ee.Image.cat([currentMonthNdvi, minNdvi, maxNdvi]).rename(['ndvi', 'min', 'max'])
var vci = image.expression('100* (ndvi - min) / (max - min)',
    {'ndvi': image.select('ndvi'),
      'min': image.select('min'),
      'max': image.select('max')
    }).rename('vci')


var visParams = {min: 0, max: 1, palette: ['white', 'green']}
var vciPalette = ['#a50026','#d73027','#f46d43','#fdae61',
  '#fee08b','#d9ef8b','#a6d96a','#66bd63','#1a9850','#006837'];
var vciVisParams = {min: 0, max: 100, palette: vciPalette}
Map.addLayer(minNdvi, visParams, 'Minimum May NDVI', false)  
Map.addLayer(maxNdvi, visParams, 'Maximum May NDVI', false)
Map.addLayer(currentMonthNdvi, visParams, 'Current May NDVI', false)
Map.addLayer(vci, vciVisParams, 'VCI')

// Exercise
// Mask Non-Cropland from VCI Computation
var gfsad = ee.Image("USGS/GFSAD1000_V1");
var cropLand = gfsad.select('landcover').neq(0)

Try in Code Editor ↗

07. Classify VCI Image

Let us classify the VCI image into 3 categories: Good, Fair, and Poor. To select a range of values, we can use boolean operators, and the .where() function applies such conditions to assign discrete class values across an image.
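The thresholding logic can be sketched for a single value in plain JavaScript (the .where() calls in the script below do the same thing per pixel):

```javascript
// Classify a continuous VCI value into 1 (Poor), 2 (Fair) or 3 (Good)
function classifyVci(vci) {
  if (vci < 40) return 1;  // Poor: 0-40
  if (vci < 60) return 2;  // Fair: 40-60
  return 3;                // Good: 60-100
}

console.log([10, 45, 75].map(classifyVci)); // [ 1, 2, 3 ]
```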

var modis = ee.ImageCollection("MODIS/006/MOD13Q1");
var gfsad = ee.Image("USGS/GFSAD1000_V1");

var startYear = 2010
var endYear = 2020
var startDate = ee.Date.fromYMD(startYear, 1, 1)
var endDate = ee.Date.fromYMD(endYear, 12, 31)
var filtered = modis
  .filter(ee.Filter.date(startDate, endDate))

// Cloud Masking
var bitwiseExtract = function(input, fromBit, toBit) {
  var maskSize = ee.Number(1).add(toBit).subtract(fromBit)
  var mask = ee.Number(1).leftShift(maskSize).subtract(1)
  return input.rightShift(fromBit).bitwiseAnd(mask)
}

var maskSnowAndClouds = function(image) {
  var summaryQa = image.select('SummaryQA')
  // Select pixels which are less than or equal to 1 (0 or 1)
  var qaMask = bitwiseExtract(summaryQa, 0, 1).lte(1)
  var maskedImage = image.updateMask(qaMask)
  return maskedImage.copyProperties(image, ['system:index', 'system:time_start'])
}

// Apply the function to all images in the collection
var maskedCol = filtered.map(maskSnowAndClouds)

var ndviCol = maskedCol.select('NDVI')
// MODIS NDVI values come as NDVI x 10000 and need to be scaled by 0.0001
var scaleNdvi = function(image) {
  var scaled = image.divide(10000)
  return scaled.copyProperties(image, ['system:index', 'system:time_start'])
};

var ndviScaled = ndviCol.map(scaleNdvi)

// Create NDVI composite for every month
var years = ee.List.sequence(startYear,endYear);
var months = ee.List.sequence(1, 12);

// Map over the years and create a monthly average collection
var monthlyImages = years.map(function(year) {
  return months.map(function(month) {
    var filtered = ndviScaled
      .filter(ee.Filter.calendarRange(year, year, 'year'))
      .filter(ee.Filter.calendarRange(month, month, 'month'))
    var monthly = filtered.mean()
    return monthly.set({'month': month, 'year': year})
  })
}).flatten()

// We now have 1 image per month for the entire duration
var monthlyCol = ee.ImageCollection.fromImages(monthlyImages)


// We can compute Minimum NDVI for each month across all years
// i.e. Minimum NDVI for all May months in the collection
var monthlyMinImages = months.map(function(month) {
    var filtered = monthlyCol.filter(ee.Filter.eq('month', month))
    var monthlyMin = filtered.min()
    return monthlyMin.set('month', month)
})
var monthlyMin = ee.ImageCollection.fromImages(monthlyMinImages)

// We can compute Maximum NDVI for each month across all years
// i.e. Maximum NDVI for all May months in the collection
var monthlyMaxImages = months.map(function(month) {
    var filtered = monthlyCol.filter(ee.Filter.eq('month', month))
    var monthlyMax = filtered.max()
    return monthlyMax.set('month', month)
})
var monthlyMax = ee.ImageCollection.fromImages(monthlyMaxImages)

// Calculate VCI for 2020

// We are interested in calculating VCI for a particular month
var currentYear = 2020
var currentMonth = 5

var filtered = monthlyCol
  .filter(ee.Filter.eq('year', currentYear))
  .filter(ee.Filter.eq('month', currentMonth))
var currentMonthNdvi = ee.Image(filtered.first())

var minNdvi = ee.Image(monthlyMin.filter(ee.Filter.eq('month', currentMonth)).first())
var maxNdvi = ee.Image(monthlyMax.filter(ee.Filter.eq('month', currentMonth)).first())

// VCI = (NDVI - min) / (max - min)
var image = ee.Image.cat([currentMonthNdvi, minNdvi, maxNdvi]).rename(['ndvi', 'min', 'max'])
var vci = image.expression('100* (ndvi - min) / (max - min)',
    {'ndvi': image.select('ndvi'),
      'min': image.select('min'),
      'max': image.select('max')
    }).rename('vci')

var cropLand = gfsad.select('landcover').neq(0)
var vciMasked = vci.updateMask(cropLand)

// Vegetation Condition Classification

// | VCI    | Condition |
// | 60-100 | Good      |
// | 40-60  | Fair      |
// | 0-40   | Poor      |

// Use .where() function to classify continuous images to
// discrete values
var condition = vciMasked
  .where(vciMasked.lt(40), 1)
  .where(vciMasked.gte(40).and(vciMasked.lt(60)), 2)
  .where(vciMasked.gte(60), 3)
  
var vciPalette = ['#a50026','#d73027','#f46d43','#fdae61',
  '#fee08b','#d9ef8b','#a6d96a','#66bd63','#1a9850','#006837'];
var vciVisParams = {min: 0, max: 100, palette: vciPalette}
var conditionParams = {min:1, max:3, palette: ['#d7191c','#fdae61','#91cf60']}
Map.addLayer(vciMasked, vciVisParams, 'VCI', false)
Map.addLayer(condition, conditionParams, 'Vegetation Condition')
  

Open in Code Editor ↗

07 Earth Engine Apps

A simple yet powerful feature of GEE is the ability to build Apps. We can build these apps to share our work with others as a working model instead of a static report. Earth Engine Apps use JavaScript internally. This module will introduce you to GEE Apps by creating a Surface Water Explorer App.

Visit Earth Engine Apps to explore more examples.

01. Client vs Server

The User Interface elements in your Code Editor - Map View, Drawing Tools etc. - are 'client-side' elements. They run in YOUR browser. Image Collections, feature collections, calculations on Earth Engine objects etc. are 'server-side' elements. They run in Google's data center. You cannot mix these two types of objects. To learn more, visit the Client vs. Server section of the Earth Engine User Guide.

  • To convert client-side objects to server-side objects, you can use the appropriate API function. Server-side functions start with ee., such as ee.Date(), ee.Image() etc.
  • To convert server-side objects to client-side objects, you can call .getInfo() on an Earth Engine object. For the Python API, this is the only way to extract information from a server-side object, but the JavaScript API provides a better (and preferred) method for bringing server-side objects to the client side: the evaluate() method. This method asynchronously retrieves the value of the object without blocking the user interface, meaning it lets your code continue to execute while it fetches the value.
// Client vs. Server

// User Interface elements - Map View, Drawing Tools, Buttons, Sliders etc.
// are 'client-side'. They run in YOUR browser

// Image Collections, feature collections, calculations on Earth Engine objects
// etc. are 'server-side'. They run in Google's data center.

// You cannot mix both these objects.

var date = '2020-01-01' // This is client-side
print(typeof(date))

var eedate = ee.Date('2020-01-01').format() // This is server-side
print(typeof(eedate)) 

// To bring server-side objects to client-side, you can call .getInfo()

// var clientdate = eedate.getInfo()
// print(clientdate)
// print(typeof(clientdate)) 

// getInfo() blocks the execution of your code till the value is fetched
// If the value takes time to compute, your code editor will freeze
// This is not a good user experience
var s2 = ee.ImageCollection("COPERNICUS/S2_SR")
var filtered = s2.filter(ee.Filter.date('2020-01-01', '2020-01-31'))

//var numImages = filtered.size().getInfo()
//print(numImages)

// A better approach is to use evaluate() function
// This is an 'asynchronous' function - meaning it will let your code
// continue to execute while it fetches the value

// You need to define a 'callback' function which will be called once the 
// value has been computed and ready to be used.

var myCallback = function(object) {
  print(object)
}
print('Computing the size of the collection')
filtered.size().evaluate(myCallback)

Open in Code Editor ↗

02. Building an App with UI Widgets

Earth Engine comes with a User Interface API that allows you to build an interactive web application powered by Earth Engine. Building a web mapping application typically requires the skills of a full stack developer and is out of reach for most analysts and scientists. But the Earth Engine User Interface API makes this process much more accessible by providing ready-to-use widgets, such as Buttons, Drop-down Menus, Sliders etc. - and free cloud hosting to allow anyone to publish an app with just a few lines of code.

The apps run in your browser, so they need to use client-side functions. All the user interface functions are contained in the ui. package - such as ui.Select(), ui.Button(). You can create those elements by calling these functions with appropriate parameters. The main container object is the ui.Panel() which can contain different types of widgets. Learn more in the Earth Engine User Interface API section of the Earth Engine User Guide.

The code below shows how to build an app called Surface Water Explorer that allows anyone to pick a district in Karnataka, India and then load the JRC Yearly Water Classification History for that district.

Publishing the App.

We can publish the app so that it can be viewed by anyone, even without an Earth Engine account. Click the Apps button at the top of the script panel and click New App in the Manage Apps dialog. In the Publish New App dialog, give the app a name; a public URL will be generated through which the app can be accessed. If you do not have a Google Cloud (GCP) project, create one, and select the project under which the app should be published. Then, with the default settings, click Publish. It will take a few minutes for Earth Engine to process your request, after which your app will be up and running.

/**** Start of imports. If edited, may not auto-convert in the playground. ****/
var gswYearly = ee.ImageCollection("JRC/GSW1_3/YearlyHistory"),
    admin2 = ee.FeatureCollection("FAO/GAUL_SIMPLIFIED_500m/2015/level2");
/***** End of imports. If edited, may not auto-convert in the playground. *****/

// Panels are the main container widgets
var mainPanel = ui.Panel({
  style: {width: '300px'}
});


var title = ui.Label({
  value: 'Surface Water Explorer',
  style: {'fontSize': '24px'}
});
// You can add widgets to the panel
mainPanel.add(title)

// You can even add panels to other panels
var admin2Panel = ui.Panel()
mainPanel.add(admin2Panel);

// Let's add a dropdown with the names of admin2 regions
// We need to get all admin2 names and create a ui.Select object
var filtered = admin2.filter(ee.Filter.eq('ADM1_NAME', 'Karnataka'))
var admin2Names = filtered.aggregate_array('ADM2_NAME')

// We define a function that will be called when the user selects a value
admin2Names.evaluate(function(names){
  var dropDown = ui.Select({
    placeholder: 'select a region',
    items: names,
    onChange: display
  })
  admin2Panel.add(dropDown)
})

// This function will be called when the user changes the value in the dropdown
var display = function(admin2Name) {
  var selected = ee.Feature(
    filtered.filter(ee.Filter.eq('ADM2_NAME', admin2Name)).first())
  var geometry = selected.geometry()
  Map.clear()
  Map.addLayer(geometry, {color: 'grey'}, admin2Name)
  Map.centerObject(geometry, 10)
  
  var gswYearFiltered = gswYearly.filter(ee.Filter.eq('year', 2020))
  var gsw2020 = ee.Image(gswYearFiltered.first()).clip(geometry)
  var water2020 = gsw2020.eq(2).or(gsw2020.eq(3)).rename('water').selfMask()
  
  var visParams = {min:0, max:1, palette: ['white','blue']}
  Map.addLayer(water2020, visParams, '2020 Water')
}

Map.setCenter(76.43, 12.41, 8)
ui.root.add(mainPanel);

Open in Code Editor ↗

Exercise

Create the app for your location.

// Change the ADM1_NAME filter to a State/Province in your region
// Change the Map.setCenter to center the map on the chosen region
// Test the App

Try in Code Editor ↗

03. Performing Calculations

We can also perform calculations based on user input, using the onChange parameter together with the evaluate() function. Any UI widget that accepts user input has an onChange parameter, which registers a function to be called whenever the user changes the value. We should perform calculations only after the user's input has been retrieved, which is done with the evaluate() function; it works asynchronously, without blocking other computations.

Explore App ↗
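A minimal plain-JavaScript sketch of the callback pattern that evaluate() relies on (here the value is available immediately; in Earth Engine it arrives later, from the server):

```javascript
// Instead of returning a value, the function hands it to a callback
// once the value is ready; the caller decides what to do with it.
var label = '';

function computeSize(items, callback) {
  callback(items.length);
}

computeSize(['img1', 'img2', 'img3'], function(size) {
  label = 'Number of images: ' + size;
});

console.log(label); // Number of images: 3
```

In the app below, the same pattern appears when the computed water area is handed to a callback that updates a ui.Label.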

/**** Start of imports. If edited, may not auto-convert in the playground. ****/
var admin2 = ee.FeatureCollection("FAO/GAUL_SIMPLIFIED_500m/2015/level2"),
    gswYearly = ee.ImageCollection("JRC/GSW1_3/YearlyHistory");
/***** End of imports. If edited, may not auto-convert in the playground. *****/
// Panels are the main container widgets
var mainPanel = ui.Panel({
  style: {width: '300px'}
});


var title = ui.Label({
  value: 'Surface Water Explorer',
  style: {'fontSize': '24px'}
});
// You can add widgets to the panel
mainPanel.add(title)

// You can even add panels to other panels
var admin2Panel = ui.Panel()
mainPanel.add(admin2Panel);

// Let's add a dropdown with the names of admin2 regions
// We need to get all admin2 names and create a ui.Select object
var filtered = admin2.filter(ee.Filter.eq('ADM1_NAME', 'Karnataka'))
var admin2Names = filtered.aggregate_array('ADM2_NAME')

// We define a function that will be called when the user selects a value
admin2Names.evaluate(function(names){
  var dropDown = ui.Select({
    placeholder: 'select a region',
    items: names,
    onChange: display
  })
  admin2Panel.add(dropDown)
})

var resultPanel = ui.Panel();
var areaLabel = ui.Label()
resultPanel.add(areaLabel)

mainPanel.add(resultPanel);

// This function will be called when the user changes the value in the dropdown
var display = function(admin2Name) {
  areaLabel.setValue('')
  var selected = ee.Feature(
    filtered.filter(ee.Filter.eq('ADM2_NAME', admin2Name)).first())
  var geometry = selected.geometry()
  Map.clear()
  Map.addLayer(geometry, {color: 'grey'}, admin2Name)
  Map.centerObject(geometry, 10)
  
  var gswYearFiltered = gswYearly.filter(ee.Filter.eq('year', 2020))
  var gsw2020 = ee.Image(gswYearFiltered.first()).clip(geometry)
  var water2020 = gsw2020.eq(2).or(gsw2020.eq(3)).rename('water').selfMask()
  
  var visParams = {min:0, max:1, palette: ['white','blue']}
  Map.addLayer(water2020, visParams, '2020 Water')
  
  var area = water2020.multiply(ee.Image.pixelArea()).reduceRegion({
    reducer: ee.Reducer.sum(),
    geometry: geometry,
    scale: 30,
    maxPixels: 1e9
  });
  
  var waterArea = ee.Number(area.get('water')).divide(10000).round();
  waterArea.evaluate(function(result) {
    var text = 'Surface Water Area 2020: ' + result + ' Hectares'
    areaLabel.setValue(text)
  })
}

Map.setCenter(76.43, 12.41, 8)
ui.root.add(mainPanel);

Open in Code Editor ↗

Exercise

Before publishing your app, add your name as the Author.

// Add a new panel called authorPanel
// Add a new label called authorLabel with your name
// Add the authorLabel to authorPanel
// Add the authorPanel to the mainPanel
// Test the app

Try in Code Editor ↗

Supplement

01. Drought Monitoring

Pie Chart Group Statistics

var modis = ee.ImageCollection("MODIS/006/MOD13Q1");
var gfsad = ee.Image("USGS/GFSAD1000_V1");

var startYear = 2010
var endYear = 2020
var startDate = ee.Date.fromYMD(startYear, 1, 1)
var endDate = ee.Date.fromYMD(endYear, 12, 31)
var filtered = modis
  .filter(ee.Filter.date(startDate, endDate))

// Cloud Masking
var bitwiseExtract = function(input, fromBit, toBit) {
  var maskSize = ee.Number(1).add(toBit).subtract(fromBit)
  var mask = ee.Number(1).leftShift(maskSize).subtract(1)
  return input.rightShift(fromBit).bitwiseAnd(mask)
}

var maskSnowAndClouds = function(image) {
  var summaryQa = image.select('SummaryQA')
  // Select pixels which are less than or equal to 1 (0 or 1)
  var qaMask = bitwiseExtract(summaryQa, 0, 1).lte(1)
  var maskedImage = image.updateMask(qaMask)
  return maskedImage.copyProperties(image, ['system:index', 'system:time_start'])
}

// Apply the function to all images in the collection
var maskedCol = filtered.map(maskSnowAndClouds)

var ndviCol = maskedCol.select('NDVI')
// MODIS NDVI values come as NDVI x 10000 and need to be scaled by 0.0001
var scaleNdvi = function(image) {
  var scaled = image.divide(10000)
  return scaled.copyProperties(image, ['system:index', 'system:time_start'])
};

var ndviScaled = ndviCol.map(scaleNdvi)

// Create NDVI composite for every month
var years = ee.List.sequence(startYear,endYear);
var months = ee.List.sequence(1, 12);

// Map over the years and create a monthly average collection
var monthlyImages = years.map(function(year) {
  return months.map(function(month) {
    var filtered = ndviScaled
      .filter(ee.Filter.calendarRange(year, year, 'year'))
      .filter(ee.Filter.calendarRange(month, month, 'month'))
    var monthly = filtered.mean()
    return monthly.set({'month': month, 'year': year})
  })
}).flatten()

// We now have 1 image per month for the entire duration
var monthlyCol = ee.ImageCollection.fromImages(monthlyImages)


// We can compute Minimum NDVI for each month across all years
// i.e. Minimum NDVI for all May months in the collection
var monthlyMinImages = months.map(function(month) {
    var filtered = monthlyCol.filter(ee.Filter.eq('month', month))
    var monthlyMin = filtered.min()
    return monthlyMin.set('month', month)
})
var monthlyMin = ee.ImageCollection.fromImages(monthlyMinImages)

// We can compute Maximum NDVI for each month across all years
// i.e. Maximum NDVI for all May months in the collection
var monthlyMaxImages = months.map(function(month) {
    var filtered = monthlyCol.filter(ee.Filter.eq('month', month))
    var monthlyMax = filtered.max()
    return monthlyMax.set('month', month)
})
var monthlyMax = ee.ImageCollection.fromImages(monthlyMaxImages)

// Calculate VCI for 2020

// We are interested in calculating VCI for a particular month
var currentYear = 2020
var currentMonth = 5

var filtered = monthlyCol
  .filter(ee.Filter.eq('year', currentYear))
  .filter(ee.Filter.eq('month', currentMonth))
var currentMonthNdvi = ee.Image(filtered.first())

var minNdvi = ee.Image(monthlyMin.filter(ee.Filter.eq('month', currentMonth)).first())
var maxNdvi = ee.Image(monthlyMax.filter(ee.Filter.eq('month', currentMonth)).first())

// VCI = 100 * (NDVI - min) / (max - min)
var image = ee.Image.cat([currentMonthNdvi, minNdvi, maxNdvi]).rename(['ndvi', 'min', 'max'])
var vci = image.expression('100* (ndvi - min) / (max - min)',
    {'ndvi': image.select('ndvi'),
      'min': image.select('min'),
      'max': image.select('max')
    }).rename('vci')

var cropLand = gfsad.select('landcover').neq(0)
var vciMasked = vci.updateMask(cropLand)

// Vegetation Condition Classification

// | VCI    | Condition |
// | 60-100 | Good      |
// | 40-60  | Fair      |
// | 0-40   | Poor      |

// Use .where() function to classify continuous images to
// discrete values
var condition = vciMasked
  .where(vciMasked.lt(40), 1)
  .where(vciMasked.gte(40).and(vciMasked.lt(60)), 2)
  .where(vciMasked.gte(60), 3)
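
The per-pixel arithmetic behind the expression and the .where() thresholds can be written as a plain-JavaScript sketch (the function names here are illustrative, not part of the Earth Engine API):

```javascript
// VCI rescales the current NDVI against the historical min/max
// for the same calendar month: 0 = historical worst, 100 = best
function computeVci(ndvi, min, max) {
  return 100 * (ndvi - min) / (max - min);
}

// Same thresholds as the .where() calls:
// VCI < 40 -> 1 (Poor), 40-60 -> 2 (Fair), >= 60 -> 3 (Good)
function classifyCondition(vci) {
  if (vci < 40) return 1;
  if (vci < 60) return 2;
  return 3;
}

console.log(computeVci(0.5, 0.25, 0.75)); // 50
console.log(classifyCondition(50));       // 2 (Fair)
```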


var admin1 = ee.FeatureCollection("FAO/GAUL_SIMPLIFIED_500m/2015/level1");
var karnataka = admin1.filter(ee.Filter.eq('ADM1_NAME', 'Karnataka'))
var geometry = karnataka.geometry()
Map.centerObject(geometry)


var conditionParams = {min:1, max:3, palette: ['#d7191c','#fdae61','#91cf60']}
Map.addLayer(condition.clip(geometry), conditionParams, 'Vegetation Condition')

// Create a Pie-Chart of Vegetation Condition for the State
// Code adapted from 
// https://developers.google.com/earth-engine/tutorials/tutorial_global_surface_water_04

// Divide the pixel area by 1e4 so the results are in hectares
var areaImage = ee.Image.pixelArea().divide(1e4).addBands(condition);

// Run a Grouped Reducer to get area by class
var stats = areaImage.reduceRegion({
  reducer: ee.Reducer.sum().group({
    groupField: 1,
    groupName: 'class',
  }),
  geometry: geometry,
  scale: 500,
});

var areaStatsList = ee.List(stats.get('groups'));

// Define Dictionaries for chart labels and colors
var classNames = ee.Dictionary({
  '1': 'Poor',
  '2': 'Fair',
  '3': 'Good'
})
var classPalette = ee.Dictionary({
  '1': '#d7191c',
  '2': '#fdae61',
  '3': '#91cf60'
})

// Convert the group stats results to a feature collection
function createFeature(areaStats) {
  areaStats = ee.Dictionary(areaStats);
  var classNumber = ee.Number(areaStats.get('class')).format('%d');
  var result = {
      'class': classNumber,
      'class_name': classNames.get(classNumber),
      'palette': classPalette.get(classNumber),
      'area_hectares': areaStats.get('sum')
  };
  return ee.Feature(null, result);   // Creates a feature without a geometry.
}
var fc = ee.FeatureCollection(areaStatsList.map(createFeature))
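
createFeature() is essentially a dictionary lookup keyed on the formatted class number. A plain-JavaScript sketch of the same transformation, using made-up area values:

```javascript
var classNames = { '1': 'Poor', '2': 'Fair', '3': 'Good' };
var classPalette = { '1': '#d7191c', '2': '#fdae61', '3': '#91cf60' };

// Shape of the grouped reducer output (sample numbers)
var areaStatsList = [
  { 'class': 1, 'sum': 1200.5 },
  { 'class': 2, 'sum': 3400.25 },
  { 'class': 3, 'sum': 560.0 }
];

// Attach a readable name and a color to each class result
var records = areaStatsList.map(function(areaStats) {
  var classNumber = String(areaStats['class']);
  return {
    'class': classNumber,
    'class_name': classNames[classNumber],
    'palette': classPalette[classNumber],
    'area_hectares': areaStats['sum']
  };
});

console.log(records[0].class_name); // 'Poor'
```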

// Create a JSON dictionary that defines piechart colors based on the
// transition class palette.
// https://developers.google.com/chart/interactive/docs/gallery/piechart
// .getInfo() is needed to convert the server-side object to JSON
function createPieChartSliceDictionary(fc) {
  return ee.List(fc.aggregate_array('palette'))
    .map(function(p) { return {'color': p}; }).getInfo();
}
var chart = ui.Chart.feature.byFeature({
    features: fc,
    xProperty: 'class_name',
    yProperties: ['area_hectares', 'class']
  })
  .setChartType('PieChart')
  .setOptions({
    title: 'Summary of Vegetation Condition',
    slices: createPieChartSliceDictionary(fc),
    sliceVisibilityThreshold: 0, // don't group small slices
    is3D: true
  });
print(chart);

02. Hydrology

Chart Cumulative Rainfall

var chirpsDaily = ee.ImageCollection("UCSB-CHG/CHIRPS/DAILY");

// Define the monsoon season period
var startDate = ee.Date('2020-06-01')
var endDate = ee.Date('2020-09-30')

var days = endDate.difference(startDate, 'day')
var daysList = ee.List.sequence(1, days)

// map() a function over the list of days

var cumulativeImages = daysList.map(function(day) {
  // Filter the collection from start date till the day of computation
  var begin = startDate
  var current = startDate.advance(day, 'day')
  var filtered = chirpsDaily.filter(ee.Filter.date(begin, current))
  // Use sum() to calculate total rainfall in the period
  // Make sure to set the start_time for the image
  var cumulativeImage = filtered.reduce(ee.Reducer.sum())
    .set('system:time_start', current.millis())
  return cumulativeImage
})

// We now have a list of images containing cumulative rainfall
// Put them in a collection
var cumulativeCol = ee.ImageCollection.fromImages(cumulativeImages)
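
Conceptually, each image in the cumulative collection holds the running total of rainfall from the start date up to that day. In plain JavaScript the same idea is a running sum over a daily series (sample values, not real data):

```javascript
// Daily rainfall totals in mm (made-up sample data)
var dailyRainfall = [0, 12.5, 3.0, 0, 25.0, 8.5];

// cumulative[i] = sum of dailyRainfall[0..i]
var total = 0;
var cumulative = dailyRainfall.map(function(mm) {
  total += mm;
  return total;
});

console.log(cumulative); // [0, 12.5, 15.5, 15.5, 40.5, 49]
```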

// Create a point with coordinates for the city of Bengaluru, India
var point = ee.Geometry.Point(77.5946, 12.9716)

// Chart cumulative rainfall
var chart = ui.Chart.image.series({
  imageCollection: cumulativeCol, 
  region: point, 
  reducer: ee.Reducer.mean(), 
  scale: 5566,
}).setOptions({
      interpolateNulls: true,
      lineWidth: 1,
      pointSize: 3,
      title: 'Cumulative Monsoon Rainfall at Bengaluru (2020)',
      vAxis: {title: 'Cumulative Rainfall (mm)'},
      hAxis: {title: 'Month', format: 'YYYY-MMM'}
});
print(chart);

Exporting Precipitation Data

// This script demonstrates how to create and export
// country/continent-wide precipitation data from CHIRPS
// The goal is to create 1 yearly image with 12 bands -
// with each band having the monthly total precipitation
var chirps = ee.ImageCollection("UCSB-CHG/CHIRPS/PENTAD");
var lsib = ee.FeatureCollection("USDOS/LSIB_SIMPLE/2017");

var region = lsib.filter(ee.Filter.eq('wld_rgn', 'South America'))
var geometry = region.geometry()
Map.centerObject(geometry)

var year = 2020
var months = ee.List.sequence(1, 12)

// Map over the years and create a monthly totals collection
var monthlyImages = months.map(function(month) {
    var filtered = chirps
      .filter(ee.Filter.calendarRange(year, year, 'year'))
      .filter(ee.Filter.calendarRange(month, month, 'month'))
    var monthly = filtered.sum()
    return monthly.set({'month': month, 'year': year})
  })

// We now have 1 image per month for the chosen year
var monthlyCol = ee.ImageCollection.fromImages(monthlyImages)
print('Image Collection:', monthlyCol)

// Use the toBands() function to create a multi-band image
var yearImage = monthlyCol.toBands()

// Rename the bands so they have names '1', '2' ... '12'
var bandNames = months.map(function(month) {
  return ee.Number(month).format('%d')
})
var yearImage = yearImage.rename(bandNames)
print('Multi-band Image', yearImage)

// Export to GeoTiff
// The resulting region is large and its geometry is complex.
// Extract the bounding-box to be used in the export.
var exportGeometry = geometry.bounds()

Export.image.toDrive({
  image: yearImage,
  description: 'multi_band_image_export',
  folder: 'earthengine',
  fileNamePrefix: 'precipitation',
  region: exportGeometry,
  scale: 5566,
  crs: 'EPSG:4326',
  maxPixels: 1e10
})

// Visualize the collection by creating a yearly total image
var visParams = {
  min:0,
  max: 2500,
  palette: ['#f1eef6','#bdc9e1','#74a9cf','#2b8cbe','#045a8d']
}
Map.addLayer(monthlyCol.sum().clip(geometry), visParams, 'Yearly Precipitation')

GPM Precipitation Time Series

/**** Start of imports. If edited, may not auto-convert in the playground. ****/
var gpm = ee.ImageCollection("NASA/GPM_L3/IMERG_V06");
/***** End of imports. If edited, may not auto-convert in the playground. *****/
// Display GPM Precipitation Time Series
var startDate = ee.Date.fromYMD(2021, 8, 4)
var endDate = startDate.advance(1, 'day')

// Select the Calibrated Precipitation Band
var collection = gpm.select('precipitationCal');

var filtered = collection.filter(ee.Filter.date(startDate, endDate))

// GPM values are rates in mm/hour, but each image represents
// a 30-minute period. Divide by 2 to get the total
// precipitation depth within each 30-minute period.
var filtered = filtered.map(function(image) {
  return image.divide(2).copyProperties(image, ['system:time_start'])
})
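
Since each IMERG image reports a rate in mm/hour over a 30-minute window, the rainfall depth in that window is rate × 0.5 hours, which is the divide-by-2 above. A plain-JavaScript sketch of the conversion (sample values):

```javascript
// Convert a 30-minute precipitation rate (mm/hour) to the
// depth of rain that fell in those 30 minutes (mm)
function rateToHalfHourDepth(mmPerHour) {
  return mmPerHour / 2;
}

// Summing converted half-hourly values gives the total depth
var rates = [0, 4.0, 10.0, 2.0]; // sample mm/hour readings
var depthMm = rates.reduce(function(sum, r) {
  return sum + rateToHalfHourDepth(r);
}, 0);
console.log(depthMm); // 8
```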

var geometry = ee.Geometry.Point([77.60412933051538, 12.952912912328241])


// Display a time-series chart
var chart = ui.Chart.image.series({
  imageCollection: filtered,
  region: geometry,
  reducer: ee.Reducer.mean(),
  scale: 20
}).setOptions({
      lineWidth: 1,
      title: 'Precipitation Time Series',
      interpolateNulls: true,
      vAxis: {title: '30-min Precipitation (mm)'},
      hAxis: {title: '', format: 'HH:mm'}
    })
print(chart);

IMD Number of Rainy Days

/**** Start of imports. If edited, may not auto-convert in the playground. ****/
var imdCol = ee.ImageCollection("users/ujavalgandhi/imd/rainfall"),
    india = ee.FeatureCollection("users/ujavalgandhi/public/soi_india_boundary");
/***** End of imports. If edited, may not auto-convert in the playground. *****/
// IMD Collection is a processed version of raw IMD Rainfall Images
// See https://github.com/spatialthoughts/projects/tree/master/imd
// It has 1 image per day with a 'rainfall' band with precipitation values in mm

var filtered = imdCol
  .filter(ee.Filter.date('2017-01-01', '2017-12-31'))
  .filter(ee.Filter.bounds(india))

// A rainy day is a day when the total daily rainfall is > 2.5mm
// Let's process the collection to find all pixels which are 'rainy'
var rainyCol = filtered.map(function(dayImage){
  // selfMask() is important as it will mask all 0 values
  // Otherwise it will still count as a valid pixel
  return dayImage.gt(2.5).selfMask()
})

// We can now apply a reducer to count all valid pixels from the stack of images
var raindayCount = rainyCol.reduce(ee.Reducer.count())
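
The gt(2.5).selfMask() followed by ee.Reducer.count() is equivalent, for each pixel, to counting the days whose value exceeds the threshold. A plain-JavaScript sketch for one pixel's daily series (sample values):

```javascript
// IMD convention: a rainy day has total daily rainfall > 2.5 mm
var RAINY_DAY_THRESHOLD_MM = 2.5;

// One pixel's daily rainfall for a sample week (mm)
var dailySeries = [0, 1.2, 7.8, 2.5, 30.0, 0, 5.1];

// Only days strictly above the threshold are counted,
// mirroring gt(2.5).selfMask() + ee.Reducer.count()
var rainyDays = dailySeries.filter(function(mm) {
  return mm > RAINY_DAY_THRESHOLD_MM;
}).length;

console.log(rainyDays); // 3 (7.8, 30.0 and 5.1; 2.5 is not > 2.5)
```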

var visParams = {
  min: 30,
  max: 120,
  palette: ['#feebe2','#fbb4b9','#f768a1','#c51b8a','#7a0177']
}
Map.addLayer(raindayCount, visParams, 'Number of Rainy Days')

03. Land Use Land Cover Classification

Calculating Area by Class

var classified = ee.Image("users/ujavalgandhi/e2e/bangalore_classified");
var bangalore = ee.FeatureCollection("users/ujavalgandhi/public/bangalore_boundary");

Map.addLayer(classified, {min: 0, max: 3, palette: ['gray', 'brown', 'blue', 'green']}, '2019');

// Create a 2 band image with the area image and the classified image
// Divide the area image by 1e6 so area results are in Sq Km
var areaImage = ee.Image.pixelArea().divide(1e6).addBands(classified);

// Calculate Area by Class
// Using a Grouped Reducer
var areas = areaImage.reduceRegion({
  reducer: ee.Reducer.sum().group({
    groupField: 1,
    groupName: 'classification',
  }),
  geometry: bangalore,
  scale: 100,
  tileScale: 4,
  maxPixels: 1e10
});

var classAreas = ee.List(areas.get('groups'))
print(classAreas)
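
A grouped reducer sums the first band (pixel area) separately for each value of the grouping band (the class). A plain-JavaScript sketch of that grouping over a handful of made-up pixels:

```javascript
// Each entry is one pixel: its area (sq km) and its class
var pixels = [
  { 'area': 0.25, 'classification': 0 },
  { 'area': 0.25, 'classification': 2 },
  { 'area': 0.25, 'classification': 0 },
  { 'area': 0.25, 'classification': 3 }
];

// Equivalent of ee.Reducer.sum().group(): sum 'area' per class
var groups = {};
pixels.forEach(function(p) {
  var key = p.classification;
  groups[key] = (groups[key] || 0) + p.area;
});

console.log(groups); // { '0': 0.5, '2': 0.25, '3': 0.25 }
```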

var areaChart = ui.Chart.image.byClass({
  image: areaImage,
  classBand: 'classification', 
  region: bangalore,
  scale: 100,
  reducer: ee.Reducer.sum(),
  classLabels: ['urban', 'bare', 'water', 'vegetation'],
}).setOptions({
  hAxis: {title: 'Classes'},
  vAxis: {title: 'Area Km^2'},
  title: 'Area by class',
  series: {
    0: { color: 'gray' },
    1: { color: 'brown' },
    2: { color: 'blue' },
    3: { color: 'green' }
  }
});
print(areaChart); 

Hyperparameter Tuning

var s2 = ee.ImageCollection("COPERNICUS/S2_SR");
var basin = ee.FeatureCollection("WWF/HydroSHEDS/v1/Basins/hybas_7");
var gcp = ee.FeatureCollection("users/ujavalgandhi/e2e/arkavathy_gcps");
var alos = ee.Image("JAXA/ALOS/AW3D30/V2_2");

var arkavathy = basin.filter(ee.Filter.eq('HYBAS_ID', 4071139640))
var boundary = arkavathy.geometry()
var rgbVis = {
  min: 0.0,
  max: 3000,
  bands: ['B4', 'B3', 'B2'],
};
// Function to mask cloud and cloud shadow pixels from a Sentinel-2 SR image
function maskCloudAndShadowsSR(image) {
  var cloudProb = image.select('MSK_CLDPRB');
  var cloud = cloudProb.lt(10);
  var scl = image.select('SCL'); 
  var shadow = scl.eq(3); // 3 = cloud shadow
  var cirrus = scl.eq(10); // 10 = cirrus
  // Keep pixels with cloud probability < 10% that are
  // neither cirrus nor cloud shadow
  var mask = cloud.and(cirrus.neq(1)).and(shadow.neq(1));
  return image.updateMask(mask).divide(10000);
}


var filtered = s2
  .filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 30))
  .filter(ee.Filter.date('2019-01-01', '2019-12-31'))
  .filter(ee.Filter.bounds(boundary))
  .map(maskCloudAndShadowsSR)
  .select('B.*')


var composite = filtered.median().clip(boundary) 

var addIndices = function(image) {
  var ndvi = image.normalizedDifference(['B8', 'B4']).rename(['ndvi']);
  var ndbi = image.normalizedDifference(['B11', 'B8']).rename(['ndbi']);
  var mndwi = image.normalizedDifference(['B3', 'B11']).rename(['mndwi']); 
  var bsi = image.expression(
      '(( X + Y ) - (A + B)) /(( X + Y ) + (A + B)) ', {
        'X': image.select('B11'), //swir1
        'Y': image.select('B4'),  //red
        'A': image.select('B8'), // nir
        'B': image.select('B2'), // blue
  }).rename('bsi');
  return image.addBands(ndvi).addBands(ndbi).addBands(mndwi).addBands(bsi)
}
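
All four indices are band arithmetic on reflectances. A plain-JavaScript sketch of the normalized difference used for NDVI/NDBI/MNDWI, plus the BSI expression (the reflectance values are illustrative):

```javascript
// Normalized difference: (a - b) / (a + b), range [-1, 1]
function normalizedDifference(a, b) {
  return (a - b) / (a + b);
}

// Bare Soil Index, matching the expression above:
// ((swir1 + red) - (nir + blue)) / ((swir1 + red) + (nir + blue))
function bsi(swir1, red, nir, blue) {
  return ((swir1 + red) - (nir + blue)) /
         ((swir1 + red) + (nir + blue));
}

// Healthy vegetation: high NIR, low red reflectance
console.log(normalizedDifference(0.75, 0.25)); // NDVI = 0.5
```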

var composite = addIndices(composite);

var elev = alos.select('AVE_DSM').divide(2000).rename('elev');
var slope = ee.Terrain.slope(alos.select('AVE_DSM')).divide(30).rename('slope');

var composite = composite.addBands(elev).addBands(slope);
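
Dividing elevation by 2000 and slope by 30 rescales the terrain bands to roughly the [0, 1] range of the reflectance bands, so no single band dominates the classifier. A sketch of that rescaling (the 2000 m and 30 degree maxima are the assumptions baked into the script above):

```javascript
// Rescale a terrain value to roughly [0, 1] given an
// assumed maximum for the study area
function rescale(value, assumedMax) {
  return value / assumedMax;
}

console.log(rescale(900, 2000)); // elevation 900 m -> 0.45
console.log(rescale(15, 30));    // slope 15 degrees -> 0.5
```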

var visParams = {bands: ['B4', 'B3', 'B2'], min: 0, max: 0.3, gamma: 1.2};
Map.addLayer(composite, visParams, 'RGB');

// Add a random column and split the GCPs into training and validation set
var gcp = gcp.randomColumn()

// Here we take 60% of the points for training and
// 40% for validation. The commonly recommended split is
// 70% training, 30% validation
var trainingGcp = gcp.filter(ee.Filter.lt('random', 0.6));
var validationGcp = gcp.filter(ee.Filter.gte('random', 0.6));
Map.addLayer(validationGcp)
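
randomColumn() attaches a uniform random value in [0, 1) to each feature, so filtering at 0.6 yields an approximately 60/40 split. A plain-JavaScript sketch with fixed 'random' values for illustration:

```javascript
// Each feature carries a uniform random value in [0, 1)
var points = [
  { landcover: 1, random: 0.12 },
  { landcover: 0, random: 0.58 },
  { landcover: 2, random: 0.61 },
  { landcover: 1, random: 0.93 }
];

// random < 0.6 -> training (~60%); random >= 0.6 -> validation
var training = points.filter(function(p) { return p.random < 0.6; });
var validation = points.filter(function(p) { return p.random >= 0.6; });

console.log(training.length, validation.length); // 2 2
```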
// Overlay the point on the image to get training data.
var training = composite.sampleRegions({
  collection: trainingGcp,
  properties: ['landcover'],
  s