
Heatwaves#

Content creators: Wil Laura

Content reviewers: Will Gregory, Paul Heubel, Laura Paccini, Jenna Pearson, Ohad Zivan

Content editors: Paul Heubel

Production editors: Paul Heubel, Konstantine Tsafatinos

Our 2024 Sponsors: CMIP, NFDI4Earth

Project Background#

In this project, you will characterize heatwaves using near-surface air temperature reanalysis data. Since heatwaves are extreme events in which the temperature exceeds a certain threshold for several consecutive days, we will first analyze the global spatial and temporal distribution of air temperature. Next, we will calculate the number and timing of heatwaves for a local area, and then focus on determining the percentage of a region under heatwaves. Additionally, you will be able to explore their relationship with other climate drivers. You are also encouraged to analyze the health impact of heatwaves using an available mortality dataset. Finally, enjoy exploring the heatwaves!

Project Template#

../../_images/2024_Heatwaves.svg

Data Exploration Notebook#

Project Setup#

# this cell enables anaconda in google colab and has to be run first
# it will force the kernel to restart; this is necessary to install all system dependencies of cfgrib,
# which in turn allows us to open grib files via xarray
#!pip install -q condacolab
#import condacolab
#condacolab.install()
# google colab installs

#!mamba install --quiet cartopy cdsapi cfgrib eccodes numpy==1.26.4
# import packages
#import xclim
import numpy as np
import pandas as pd
import xarray as xr
import matplotlib.pyplot as plt
#import matplotlib.dates as mdates
import cartopy.feature as cfeature
import cartopy.crs as ccrs

#from xclim.core.calendar import percentile_doy

Helper functions#

# @title Helper functions
import os
import pooch
import tempfile

def pooch_load(filelocation=None, filename=None, processor=None):
    shared_location = "/home/jovyan/shared/Data/projects/Heatwaves"  # this is different for each day
    user_temp_cache = tempfile.gettempdir()

    if os.path.exists(os.path.join(shared_location, filename)):
        file = os.path.join(shared_location, filename)
    else:
        file = pooch.retrieve(
            filelocation,
            known_hash=None,
            fname=os.path.join(user_temp_cache, filename),
            processor=processor,
        )

    return file

Figure settings#

# @title Figure settings

import ipywidgets as widgets       # interactive display

%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/neuromatch/climate-course-content/main/cma.mplstyle")

ECMWF Reanalysis v5 (ERA5): Air Temperature at 2m#

You will utilize the ERA5 dataset to examine temperature trends and heatwaves, applying the loading methods introduced in W1D1. Please see the W1D2 course material for more information on reanalysis data. You can also read more about ERA5 here: Climate reanalysis.

Specifically, in this project, you will focus on near-surface temperature, which refers to the temperature of the air at \(2 \text{m}\) above the surface of land, sea, or inland waters, given in units of Kelvin \(\left(\text{K}\right)\).

You will access the following subsampled data through the OSF cloud storage to simplify downloading. If you are interested in exploring other regional subsets or variables, you will need to download the data yourself. Please have a look at the get_ERA5_reanalysis_data.ipynb notebook, where we show how to use the Climate Data Store (CDS) API to get a subset of the huge ECMWF ERA5 Reanalysis data set.

In the following, we show how to load, explore, and visualize ERA5 data using a small subsample file that was downloaded beforehand.

# loading a subsample of the ERA5 reanalysis dataset, daily from 1991 to 2000
link_id = "z9xfv"
url_ERA5 = f"https://osf.io/download/{link_id}/"
#filepath = "/content/file_sample.nc"
fname_ERA5 = "file_sample.nc"
ds = xr.open_dataset(pooch_load(url_ERA5, fname_ERA5))
ds

Let’s visualize the distribution of the annual mean near-surface temperature for the year 2000 in the given area around the equator. After calculating the anomaly according to the hints included in the template, you should be able to visualize the answer to Question 1 similarly.

# calculate the annual average of the year selected
year_to_use = ds.t2m.loc["2000-01-01":"2000-12-31",:,:].mean(dim="time")
year_to_use
# plot temperature
fig, ax = plt.subplots(1, subplot_kw={'projection':ccrs.PlateCarree()}, layout='constrained')

im = year_to_use.plot(ax=ax,
                      transform=ccrs.PlateCarree(),
                      cmap="coolwarm",
                      add_colorbar=False)

# add coastlines, labeled gridlines and continent boundaries
ax.coastlines()
ax.gridlines(draw_labels={"bottom": "x", "left": "y"})
ax.add_feature(cfeature.BORDERS, linestyle='-.')

# create colorbar and set cbar label
cbar = plt.colorbar(im, ax=ax, orientation='vertical', shrink=0.9, pad=0.1)
cbar.set_label('Air temperature at 2m (K)')

# set title
plt.title("Annual mean temperature in 2000")
plt.show()
../../_images/06c9330294d856dbf560b421f4b16f1a42473578167e084493a9dc5e0a8a8702.png
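For Question 1, the anomaly hinted at in the template can be approached as follows; a minimal sketch, assuming ds and year_to_use from the cells above, and assuming the full 1991–2000 subsample as the reference climatology (a choice you should adapt to the template's hints).

# a minimal anomaly sketch (assumption: the full 1991-2000 subsample serves
# as the reference climatology; adapt this choice to the template's hints)
climatology = ds.t2m.mean(dim="time")        # time-mean reference field
anomaly_2000 = year_to_use - climatology     # deviation of the 2000 annual mean

# visualize the anomaly analogously to the mean temperature above
fig, ax = plt.subplots(1, subplot_kw={'projection': ccrs.PlateCarree()}, layout='constrained')
im = anomaly_2000.plot(ax=ax, transform=ccrs.PlateCarree(), cmap="coolwarm", add_colorbar=False)
ax.coastlines()
ax.gridlines(draw_labels={"bottom": "x", "left": "y"})
cbar = plt.colorbar(im, ax=ax, orientation='vertical', shrink=0.9, pad=0.1)
cbar.set_label('Air temperature anomaly at 2m (K)')
plt.title("Annual mean temperature anomaly in 2000")
plt.show()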

Additionally, you can calculate the air temperature trend. For this purpose, we choose a longer time period, and therefore another subsample, from 1991 to 2020, which we load in the next cell:

link_id = "3xbq8"
fname_91_20 = "data_sample_91_2020.nc"
url_data_sample = f"https://osf.io/download/{link_id}/"
ds_long = xr.open_dataset(pooch_load(url_data_sample, fname_91_20))
ds_long
# find the last year of the dataset
last_year = ds_long['time.year'].max().item()

# filter the last 30 years
ds_30y = ds_long.sel(time=slice(str(last_year-30+1),str(last_year)))

# calculate the mean temperature for each year
mean_time_dim = ds_30y['t2m'].resample(time="Y").mean(dim="time")

# apply cosine of latitude as weights to the dataset variables
weights = np.cos(np.deg2rad(mean_time_dim.latitude))
weighted_mean_time_dim = mean_time_dim.weighted(weights)

# calculate the global mean in degrees celsius
weighted_global_mean_temp = weighted_mean_time_dim.mean(dim=["longitude","latitude"])
weighted_global_mean_temp_c = weighted_global_mean_temp - 273.15

# calculate the trend line
years = weighted_global_mean_temp_c['time'].dt.year.values
annual_temperature = weighted_global_mean_temp_c.values
trend_coefficients = np.polyfit(years, annual_temperature, 1)
trend_line = np.poly1d(trend_coefficients)

# draw data
plt.plot(years, annual_temperature, color="blue", label="ERA5 Reanalysis - annually resampled")
plt.plot(years, trend_line(years), color="red", linestyle="--", label='Trend line')

# aesthetics
plt.xlabel("Time (years)")
plt.ylabel("Air temperature at 2m (K)")
plt.legend()

ERA5-Land hourly data from 1950 to present#

This ERA5 reanalysis dataset of hourly data has an increased spatial resolution and focuses on the evolution of land variables over the last decades. It is updated regularly by ECMWF and accessible via the CDS API (cf. get_ERA5_reanalysis_data.ipynb). A similar dataset of lower temporal resolution, i.e., monthly averages, can be found here.

Depending on your research question, it is essential to choose an adequate frequency. However, due to the huge amount of data available, it might become necessary to focus on a regional subset of the ERA5 dataset.

In the following, we show how we downloaded global ERA5-Land data via the CDS API to answer Q2 by calculating global temperature trends. Please note that the API request serves as an example, and the downloading process should not be triggered unless necessary. Please think about an adequate frequency and domain of interest beforehand, so that you request a subset that is sufficient to answer your questions.

# In your home directory (e.g., /home/jovyan/), create a file called .cdsapirc with the following content:
# Replace your-api-key with your actual API key. Do not eliminate the "\n".

# uncomment the following code:
#with open("/home/jovyan/.cdsapirc", "w") as f:
#    f.write("url: https://cds.climate.copernicus.eu/api\n")
#    f.write("key: your-api-key\n")  # replace your-api-key with your actual API key

# to find your API key in CDS, go to: https://cds.climate.copernicus.eu/how-to-api
import cdsapi

#c = cdsapi.Client()

# Uncomment the following block after adjusting it according to your research question
# and after successfully working through the `get_ERA5_reanalysis_data.ipynb` notebook.

#c.retrieve(
#    'reanalysis-era5-land',
#    {
#        'variable': '2m_temperature',
#        'year': ['1974', '1975', '1976', '1977', '1978',
#                 '1979', '1980', '1981', '1982', '1983',
#                 '1984', '1985', '1986', '1987', '1988',
#                 '1989', '1990', '1991', '1992', '1993',
#                 '1994', '1995', '1996', '1997', '1998',
#                 '1999', '2000', '2001', '2002', '2003',
#                 '2004', '2005', '2006', '2007', '2008',
#                 '2009', '2010', '2011', '2012', '2013',
#                 '2014', '2015', '2016', '2017', '2018',
#                 '2019', '2020', '2021', '2022', '2023'],
#        'month': ['01','02','03','04','05','06','07','08','09','10','11','12'],
#        'day': '15',
#        'time': '12:00',
#        'grid': ['0.4', '0.4'],
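#        # optionally restrict the request to a region of interest via an 'area'
#        # key given as [North, West, South, East] in degrees; the placeholder
#        # values below are an assumption to adapt to your own domain
#        #'area': [15, -85, -20, -30],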
#        'format': 'grib',
#        'download_format': 'unarchived',
#    },
#    'reanalysis-era5-land_1974_2023_04x04.grib')

As you can see in the request code block, we downloaded the 2m_temperature / \(\text{t2m}\) variable from reanalysis-era5-land at noon on every 15th of the month over the last 50 years. In other words, the requested data is not averaged over the whole month but is just a sample. To reduce the resolution, we chose a grid of 0.4° in both spatial dimensions. As we want to calculate global trends over the whole time period, this choice should be adequate and saves us a few computationally intensive averaging calculations. Furthermore, it helps to illustrate what the downloaded data looks like.

The output is a file named reanalysis-era5-land_1974_2023_04x04.grib in the GRIB format, which requires a small addition to the familiar file-reading method: xr.open_dataset(path, engine='cfgrib'); check out this resource for more information. Additionally, we get an idx file, which is experimental and useful if the file is opened repeatedly, but it can be ignored or deleted.

Again, we uploaded these files to the OSF cloud for simple data retrieval, and we converted the file to the .nc (NetCDF) format as an additional option. The following lines help to open both file types.

Note that the frequency of our example file reanalysis-era5-land_1974_2023_04x04.grib is monthly, not daily, hence no further question of the template can be answered using it. Please increase the frequency and spatial resolution, and reduce the domain of interest, when downloading the data to allow for investigations of regional heatwaves.

# specify filename and filetype
filetype = 'grib'
#filetype = 'nc'
fname_ERA5 = f"reanalysis-era5-land_1974_2023_04x04.{filetype}"

# check whether the specified path/file exists or not (locally or in the JupyterHub)
isExist = os.path.exists(fname_ERA5)

# load data and create data set
if isExist:
    _ = print(f'The file {fname_ERA5} exists locally.\n Loading the data ...\n')
    if filetype == 'grib':
        ds_global = xr.open_dataset(fname_ERA5, engine='cfgrib')
    elif filetype == 'nc':
        ds_global = xr.open_dataset(fname_ERA5)
    else:
        raise ("Please choose an appropriate file type: 'nc' or 'grib'.")

else:
    _ = print(f'The file {fname_ERA5} does not exist locally and has to be downloaded from OSF.\nDownloading the data ...\n')

    # retrieve the grib file from the OSF cloud storage
    if filetype == 'grib':
        link_id = "6d9mf"
    elif filetype == 'nc':
        link_id = "8v63z"
    else:
        raise ("Please choose an appropriate file type: 'nc' or 'grib'.")

    url_grib = f"https://osf.io/download/{link_id}/"

    # The following line is the correct approach; however, it sometimes raises an error that could not be solved by the curriculum team
    # (cf. https://github.com/ecmwf/cfgrib/blob/master/README.rst & https://github.com/pydata/xarray/issues/6512).
    # We therefore recommend downloading the file separately if this EOFError arises.

    fcached = pooch_load(url_grib, fname_ERA5)

    try:
        if filetype == 'grib':
            ds_global = xr.open_dataset(fcached, engine='cfgrib')
        elif filetype == 'nc':
            ds_global = xr.open_dataset(fcached)
    except EOFError:
        print(f'The cached .grib file could not be parsed with Xarray.\nPlease download the file to your local directory via {url_grib} or download its NetCDF equivalent.')

print(ds_global)
# plot the mean 2m temperature of 1974 as example
t2m_1974 = ds_global.sel(time='1974').mean(dim='time')
_ = t2m_1974[list(t2m_1974.keys())[0]].plot()

Weekly Mortality Data (Optional)#

The Organisation for Economic Co-operation and Development (OECD) provides weekly mortality data for 38 countries. The list of countries can be found in the OECD data explorer, in the filters section under reference area. This dataset can be used to analyze the impact of heatwaves on health through a country's general mortality.

# read the mortality data from a csv file
link_id = "rh3mp"
url = f"https://osf.io/download/{link_id}/"
#data_mortality = pd.read_csv("Weekly_mortality_OECD.csv")
data_mortality = pd.read_csv(url)
data_mortality.info()
data_mortality.head()
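Once the table is loaded, it can be reduced to a single country's weekly time series. The following is a sketch under assumptions: the column names REF_AREA, TIME_PERIOD, and OBS_VALUE follow the usual SDMX-CSV export layout of the OECD data explorer, and weekly periods are encoded like '2019-W26'; check data_mortality.head() and adapt accordingly.

# a minimal wrangling sketch (assumptions: SDMX-CSV columns REF_AREA, TIME_PERIOD,
# OBS_VALUE and weekly periods like '2019-W26'; verify with data_mortality.head())
country = "FRA"  # placeholder ISO3 country code
subset = data_mortality[data_mortality["REF_AREA"] == country].copy()

# parse ISO week labels into timestamps (Monday of each week)
subset["time"] = pd.to_datetime(subset["TIME_PERIOD"] + "-1", format="%G-W%V-%u")
weekly_deaths = subset.set_index("time")["OBS_VALUE"].sort_index()
weekly_deaths.plot(ylabel="Weekly deaths", title=f"Weekly mortality: {country}")
plt.show()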

Hint for Q3#

For this question, you will calculate percentiles; you can read more about percentiles here and, e.g., in W2D3 Tutorial 1. Furthermore, a definition to calculate heatwaves was given as a recommendation for this question; however, there is a great diversity of definitions, which you can read about in the following article: Perkins & Alexander (2013).
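As an illustration of such a percentile-based definition, a heatwave count for one grid point of the daily subsample ds could look like the sketch below; the single fixed 90th-percentile threshold and the 3-day minimum duration are assumptions, not the template's prescribed choices.

# a minimal sketch of a percentile-based heatwave count for one grid point
# (assumptions: fixed 90th percentile threshold, >= 3 consecutive hot days)
da = ds.t2m.isel(latitude=0, longitude=0)    # placeholder grid point
threshold = da.quantile(0.9, dim="time")     # fixed percentile threshold
hot_day = (da > threshold).values            # boolean: daily threshold exceedance

# count events of at least 3 consecutive hot days
events, run = 0, 0
for hot in hot_day:
    run = run + 1 if hot else 0
    if run == 3:                             # a run just reached heatwave length
        events += 1
print(f"Number of heatwave events: {events}")

In practice, a calendar-day-dependent threshold (e.g., via xclim's percentile_doy, commented out in the imports above) is often preferred over a single fixed percentile.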

Hint for Q4#

For Question 4, to understand the method of calculating the percentage of an area under heatwaves, please read the following article: Silva et al. (2022).
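To make the idea tangible, a latitude-weighted area fraction can be computed per time step; a minimal sketch (not the method of Silva et al., 2022), assuming a hypothetical boolean field hw_mask with dimensions (time, latitude, longitude) that flags grid cells under heatwave:

# a minimal sketch: percentage of the domain under heatwave per time step
# (assumption: `hw_mask` is a boolean DataArray with dims time, latitude, longitude)
weights = np.cos(np.deg2rad(hw_mask.latitude))  # grid-cell area shrinks poleward
area_frac = hw_mask.astype(float).weighted(weights).mean(dim=["latitude", "longitude"])
percent_under_hw = 100 * area_frac              # percentage of the region over time
percent_under_hw.plot()
plt.ylabel("Area under heatwave (%)")
plt.show()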

Hint for Q5#

The following articles will be helpful: Heo & Bell (2019) (not open access) and Smith et al. (2012).

Hint for Q6#

The following article will be helpful: Reddy et al. (2022).

Hint for Q7#

The following article will help you learn about a method to determine the influence of heatwaves on health by analyzing mortality: Nori-Sarma et al. (2019).
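As a deliberately naive starting point (the cited article uses a more careful design that controls for seasonality and trends), one could contrast mean weekly mortality in heatwave weeks with all other weeks; a sketch assuming weekly_deaths from the mortality section above and a hypothetical boolean Series hw_week on the same weekly index:

# a naive sketch: mean weekly mortality in heatwave vs. non-heatwave weeks
# (assumption: `hw_week` is a boolean Series aligned with `weekly_deaths`)
mean_hw = weekly_deaths[hw_week].mean()
mean_normal = weekly_deaths[~hw_week].mean()
print(f"Mean weekly deaths, heatwave weeks:     {mean_hw:.0f}")
print(f"Mean weekly deaths, non-heatwave weeks: {mean_normal:.0f}")
print(f"Excess mortality ratio: {mean_hw / mean_normal:.2f}")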

Further reading#