How to see wildfires from space using GOES-16 data

December 03, 2019

At XY, we use diverse streams of large-scale data to understand how the myriad exposures we encounter every day affect our health and well-being. One of the most important sources of data we use is satellite imagery, which when coupled with deep learning techniques can be used to accurately predict health and disease outcomes at varying geographical scales. We use satellite imagery to predict both chronic and acute disease, and also to monitor natural and man-made disasters like wildfires. In this post, we provide a step-by-step Python tutorial for researchers to use satellite imagery in order to see wildfires from space, using the devastating Thomas Fire that started in late 2017 in California as our main example.

The Thomas Fire, 2017-2018

Wildfires can be ignited naturally or by human activity, and can lead to massive releases of CO2, aerosols, and other toxic gases. While wildfires often start in relatively remote areas, winds can carry the resulting smoke for hundreds of miles and push the fire itself toward local municipalities.

One such fire, the Thomas Fire, burned from December 4, 2017 to January 12, 2018 in Santa Barbara and Ventura counties. Fueled by that year's unusually dry conditions, the fire consumed approximately 440 square miles, increasing the risk of large-scale landslides and releasing large amounts of particulate matter into the air.

Scorched mountains that were in the path of the wildfire

GOES-R Background

There are several sources of freely available satellite imagery useful for monitoring wildfires, including MODIS, Sentinel, VIIRS, and the GOES-R series satellites. Here we use GOES-16 data to make a true-color animation of California's 2017 Thomas Fire.

The GOES program has been in operation since 1975 as part of NOAA's effort to monitor severe storms and support weather forecasting and meteorological research. GOES-16 (East) and GOES-17 (West), the first satellites of the GOES-R series, are the latest in operation and carry an improved detector known as the Advanced Baseline Imager (ABI). GOES-16 was declared NOAA's operational GOES-East satellite on December 18, 2017, while the Thomas Fire was still burning, and its imagery is archived for the fire's entire duration.

GOES image data can be retrieved from either AWS or NOAA's CLASS ordering system. Navigate to the AWS download page and download the CONUS L2 Cloud and Moisture Imagery, specifically the Multi-Band Format files for December 17th after 14 UTC (~7GB). GEONETCast has also provided a shell script for downloading data automatically.
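If you prefer scripting the download, the GOES-16 archive lives in a public S3 bucket (noaa-goes16) organized by product, year, day-of-year, and hour; the multi-band CONUS product is ABI-L2-MCMIPC. A minimal sketch that builds the key prefix for the start of our scan window (bucket layout as of this writing; pair it with s3fs or the AWS CLI to actually fetch the files):

```python
from datetime import datetime

def goes_prefix(product, when):
    """Build the S3 key prefix for a GOES-16 product at a given hour.

    The public archive is laid out as <product>/<year>/<day-of-year>/<hour>/.
    """
    return '{}/{:%Y}/{:%j}/{:%H}/'.format(product, when, when, when)

# first hour of our December 17, 2017 window (14 UTC)
prefix = goes_prefix('ABI-L2-MCMIPC', datetime(2017, 12, 17, 14))
print(prefix)  # ABI-L2-MCMIPC/2017/351/14/
```

December 17 is day 351 of 2017, which is why the paths in the bucket listing will not mention the calendar date directly.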

The Advanced Baseline Imager (ABI) onboard GOES has three scan modes: Full Disk (F), CONUS (C), and Mesoscale (M). For any mode, the data come in the form of single-channel or multi-channel NetCDF4 files (extension .nc). For this post we use multi-channel CONUS data. Filenames have a general form of:

OR_ABI-[Product Level]-[Product Short Name]-M[Scanning Mode]_G[OES Satellite]_s[tart time]_e[nd time]_c[reation time].nc
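As a concrete illustration, here is one way to pull the scan start time out of a filename of this form. The timestamps encode year, day-of-year, hour, minute, second, and a final tenths-of-a-second digit (the filename below is hypothetical but correctly formatted):

```python
from datetime import datetime

# hypothetical but correctly formatted multi-band CONUS filename
fname = 'OR_ABI-L2-MCMIPC-M3_G16_s20173511402189_e20173511404562_c20173511405064.nc'

# the s-field holds the scan start as YYYYJJJHHMMSS plus a tenths digit
start_field = fname.split('_')[3]                          # 's20173511402189'
scan_start = datetime.strptime(start_field[1:-1], '%Y%j%H%M%S')
print(scan_start)  # 2017-12-17 14:02:18
```

Because the start time is embedded in the name, a plain lexicographic sort of the filenames also sorts them chronologically, which is convenient when assembling animation frames.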

Python Code Snippets

To run the following code snippets, several Python packages are needed, including cartopy, matplotlib, metpy, numpy, and xarray (along with the netcdf4 library, which xarray uses to read the .nc files).

We assume below that the GOES data have been downloaded to a subdirectory of the project directory, namely data/thomas_fire/dec17/Multi_Channel/.

from datetime import datetime
from glob import glob
import os

import cartopy.crs as ccrs
import matplotlib.pyplot as plt
import metpy
import numpy as np
import xarray

date = 'dec17'
# path to multichannel .nc files
BASE_DIR = 'data/thomas_fire/{}/Multi_Channel'.format(date)
# collect .nc files, sorted so the frames come out in chronological order
filepaths = sorted(glob(os.path.join(BASE_DIR, '*.nc')))

Because we want to zoom in on the Thomas Fire and monitor its smoke trails, we define a latitude/longitude window of interest, which we will use in the matplotlib snippet below.

lon_extent = [-121.5, -117.5]
lat_extent = [32.5, 36.5]
lonlat_extent = lon_extent + lat_extent

We can now process each image. The remaining snippets sit within a for-loop: for i, f in enumerate(filepaths): [...]

# Open the GOES-16 NetCDF CONUS file
CF = xarray.open_dataset(f)
# convert the scan starttime to a datetime object
scan_start = datetime.strptime(CF.time_coverage_start, '%Y-%m-%dT%H:%M:%S.%fZ')

GOES multi-channel data includes all 16 ABI spectral bands. Here we are only interested in the first three, which correspond to the Blue, Red, and Near-Infrared spectral regions:

B = CF['CMI_C01'].data
R = CF['CMI_C02'].data
NIR = CF['CMI_C03'].data
# Clip reflectances to the valid [0, 1] range to remove noise
B = np.clip(B, 0, 1)
R = np.clip(R, 0, 1)
NIR = np.clip(NIR, 0, 1)
# Apply gamma correction to brighten the image for display
gamma = 2.2
B = np.power(B, 1/gamma)
R = np.power(R, 1/gamma)
NIR = np.power(NIR, 1/gamma)

At this point we can construct a true-color RGB image. The ABI detector does not in fact have a green channel, but the following recipe generates a "pseudo-green" channel from the bands it does have.

G_pseudo = 0.45 * R + 0.1 * NIR + 0.45 * B
G_pseudo = np.clip(G_pseudo, 0, 1)  # ensure proper range
# create an RGB array
RGB = np.dstack([R, G_pseudo, B])
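A quick sanity check on this recipe: the weights sum to 1, so a spectrally flat (white) scene stays white, and with the inputs already clipped the pseudo-green channel cannot leave the [0, 1] range (the tiny arrays below are only for illustration):

```python
import numpy as np

# a spectrally flat, fully bright scene in all three bands
R = np.ones((2, 2))
NIR = np.ones((2, 2))
B = np.ones((2, 2))

G_pseudo = 0.45 * R + 0.1 * NIR + 0.45 * B
print(G_pseudo.max())  # ~1.0: the weights 0.45 + 0.1 + 0.45 sum to 1
```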

In order to plot GOES data and zoom in using latitude/longitude coordinates, we have to obtain the geostationary projection metadata and the x/y coordinates of the ABI fixed grid.

dat = CF.metpy.parse_cf('CMI_C02')
geos = dat.metpy.cartopy_crs
x = dat.x
y = dat.y
# set up figure parameters
fig = plt.figure(figsize=(8, 8))
pc = ccrs.PlateCarree()
ax = fig.add_subplot(1, 1, 1, projection=pc)
# plot the RGB image in its native geostationary projection
ax.imshow(RGB, origin='upper',
          extent=(x.min(), x.max(), y.min(), y.max()),
          transform=geos, interpolation='none')
# crop to our region of interest
ax.set_extent(lonlat_extent, crs=pc)
# annotate figure with arrow and label for the Thomas Fire
plt.arrow(-119.2, 34.1, 0, 0.25, linewidth=0.9, edgecolor='k', head_width=0.1, head_length=0.1, facecolor='k')
ax.annotate('Thomas Fire', xy=(-119.4, 34))
# add title along with timestamp
plt.title('GOES-16 True Color', loc='left', fontweight='bold', fontsize=15)
plt.title(scan_start.strftime('%d %B %Y %H:%M UTC'), loc='right')

Before ending the for-loop over all GOES .nc files, we need to save the images. Below we use ffmpeg to generate an mp4 movie using all images. To make this easier, save the image with a name of the form image-[NUMBER].png.
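Note that savefig will fail if the images/ directory does not exist yet, so it is worth creating it once before entering the loop (the path is the one assumed in the snippets above):

```python
import os

date = 'dec17'
out_dir = 'data/thomas_fire/{}/images'.format(date)
os.makedirs(out_dir, exist_ok=True)  # no-op if the directory already exists
```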

plt.savefig('data/thomas_fire/' + date + '/images/image-' + str(i) + '.png',
            bbox_inches='tight', pad_inches=0)
plt.close(fig)  # release the figure so memory does not grow across iterations
Around 19 UTC: neighboring Santa Barbara to the West

Animation using ffmpeg

Once all images are processed, the mp4 movie can be generated using ffmpeg; see the ffmpeg documentation for installation instructions. There are numerous parameters that can be used with ffmpeg to control the frame rate, frame size, input file name, video codec, etc. The movie shown here was created with the following shell command:

ffmpeg -framerate 12 -f image2 -s 500x500 -start_number 25 -i image-%d.png -vframes 95 -vcodec libx264 -crf 25 -pix_fmt yuv420p CONUS_20171217_500x500_fr12_crf25.mp4
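For context on those numbers: in the scan mode used here in 2017 (Mode 3), ABI produces a CONUS image every 5 minutes, i.e. 12 frames per hour, so -framerate 12 plays back roughly one hour of real time per second of video, and -vframes 95 covers about eight hours of scans. A quick check of that arithmetic:

```python
# ABI CONUS cadence in Mode 3: one scan every 5 minutes
scans_per_hour = 60 // 5
print(scans_per_hour)                  # 12, matching -framerate 12

# -vframes 95 limits the movie to 95 scans
hours_covered = 95 / scans_per_hour
print(round(hours_covered, 1))         # ~7.9 hours of real time
```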