{ "cells": [ { "cell_type": "markdown", "metadata": { "execution": {} }, "source": [ "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/neuromatch/climate-course-content/blob/main/tutorials/W2D3_ExtremesandVariability/student/W2D3_Tutorial6.ipynb)   \"Open" ] }, { "cell_type": "markdown", "metadata": { "execution": {} }, "source": [ "# Tutorial 6: Scenario-dependence of Future Changes in Extremes\n", "\n", "**Week 2, Day 3, Extremes & Variability**\n", "\n", "**Content creators:** Matthias Aengenheyster, Joeri Reinders\n", "\n", "**Content reviewers:** Younkap Nina Duplex, Sloane Garelick, Paul Heubel, Zahra Khodakaramimaghsoud, Peter Ohue, Laura Paccini, Jenna Pearson, Agustina Pesce, Derick Temfack, Peizhen Yang, Cheng Zhang, Chi Zhang, Ohad Zivan\n", "\n", "**Content editors:** Paul Heubel, Jenna Pearson, Chi Zhang, Ohad Zivan\n", "\n", "**Production editors:** Wesley Banfield, Paul Heubel, Jenna Pearson, Konstantine Tsafatinos, Chi Zhang, Ohad Zivan\n", "\n", "**Our 2024 Sponsors:** CMIP, NFDI4Earth" ] }, { "cell_type": "markdown", "metadata": { "execution": {} }, "source": [ "# Tutorial Objectives" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": { "execution": {} }, "source": [ "*Estimated timing of tutorial:* 35 minutes\n", "\n", "In this tutorial, we will analyze climate model output for various cities worldwide to investigate the changes in extreme temperature and precipitation patterns over time under different emission scenarios.\n", "\n", "The data we will be using consists of climate model simulations for the historical period (hist) and three future climate scenarios (SSP1-2.6, SSP2-4.5, and SSP5-8.5). 
These scenarios were already introduced in W2D1.\n", "\n", "By the end of this tutorial, you will be able to:\n", "\n", "- Utilize climate model output from scenario runs to assess changes during the historical period.\n", "- Compare potential future climate scenarios, focusing on their impact on extreme events." ] }, { "cell_type": "markdown", "metadata": { "execution": {} }, "source": [ "# Setup" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "execution": {}, "executionInfo": { "elapsed": 2969, "status": "ok", "timestamp": 1682954246617, "user": { "displayName": "Matthias Aengenheyster", "userId": "16322208118439170907" }, "user_tz": -120 }, "tags": [] }, "outputs": [], "source": [ "# imports\n", "import xarray as xr\n", "import numpy as np\n", "import matplotlib.pyplot as plt\n", "import seaborn as sns\n", "import pandas as pd\n", "from scipy import stats\n", "from scipy.stats import genextreme as gev\n", "from datetime import datetime\n", "import os\n", "import pooch\n", "import tempfile" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Install and import feedback gadget\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "cellView": "form", "execution": {}, "tags": [ "hide-input" ] }, "outputs": [], "source": [ "# @title Install and import feedback gadget\n", "\n", "!pip3 install vibecheck datatops --quiet\n", "\n", "from vibecheck import DatatopsContentReviewContainer\n", "def content_review(notebook_section: str):\n", " return DatatopsContentReviewContainer(\n", " \"\", # No text prompt\n", " notebook_section,\n", " {\n", " \"url\": \"https://pmyvdlilci.execute-api.us-east-1.amazonaws.com/klab\",\n", " \"name\": \"comptools_4clim\",\n", " \"user_key\": \"l5jpxuee\",\n", " },\n", " ).render()\n", "\n", "\n", "feedback_prefix = \"W2D3_T6\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Figure Settings\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "cellView": 
"form", "execution": {}, "tags": [ "hide-input" ] }, "outputs": [], "source": [ "# @title Figure Settings\n", "import ipywidgets as widgets # interactive display\n", "\n", "%config InlineBackend.figure_format = 'retina'\n", "plt.style.use(\n", " \"https://raw.githubusercontent.com/neuromatch/climate-course-content/main/cma.mplstyle\"\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Helper functions\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "cellView": "form", "execution": {}, "tags": [ "hide-input" ] }, "outputs": [], "source": [ "# @title Helper functions\n", "\n", "\n", "def pooch_load(filelocation=None, filename=None, processor=None):\n", " shared_location = \"/home/jovyan/shared/Data/tutorials/W2D3_ExtremesandVariability\" # this is different for each day\n", " user_temp_cache = tempfile.gettempdir()\n", "\n", " if os.path.exists(os.path.join(shared_location, filename)):\n", " file = os.path.join(shared_location, filename)\n", " else:\n", " file = pooch.retrieve(\n", " filelocation,\n", " known_hash='ec51d1c9a8eb97e4506107be1b7aa36939d87762cb590cf127c6ce768b75c609',\n", " fname=os.path.join(user_temp_cache, filename),\n", " processor=processor,\n", " )\n", "\n", " return file\n", "\n", "def estimate_return_level_period(period, loc, scale, shape):\n", " \"\"\"\n", " Compute GEV-based return level for a given return period, and GEV parameters\n", " \"\"\"\n", " return stats.genextreme.ppf(1 - 1 / period, shape, loc=loc, scale=scale)\n", "\n", "\n", "def empirical_return_level(data):\n", " \"\"\"\n", " Compute empirical return level using the algorithm introduced in Tutorial 2\n", " \"\"\"\n", " df = pd.DataFrame(index=np.arange(data.size))\n", " # sort the data\n", " df[\"sorted\"] = np.sort(data)[::-1]\n", " # rank via scipy instead to deal with duplicate values\n", " df[\"ranks_sp\"] = np.sort(stats.rankdata(-data))\n", " # find exceedence probability\n", " n = data.size\n", " df[\"exceedance\"] = 
df[\"ranks_sp\"] / (n + 1)\n", " # find return period\n", " df[\"period\"] = 1 / df[\"exceedance\"]\n", "\n", " df = df[::-1]\n", "\n", " out = xr.DataArray(\n", " dims=[\"period\"],\n", " coords={\"period\": df[\"period\"]},\n", " data=df[\"sorted\"],\n", " name=\"level\",\n", " )\n", " return out\n", "\n", "\n", "def fit_return_levels(data, years, N_boot=None, alpha=0.05):\n", " \"\"\"\n", " Fit GEV to data, compute return levels and confidence intervals\n", " \"\"\"\n", " empirical = (\n", " empirical_return_level(data)\n", " .rename({\"period\": \"period_emp\"})\n", " .rename(\"empirical\")\n", " )\n", " shape, loc, scale = gev.fit(data, 0)\n", " print(\"Location: %.1e, scale: %.1e, shape: %.1e\" % (loc, scale, shape))\n", " central = estimate_return_level_period(years, loc, scale, shape)\n", "\n", " out = xr.Dataset(\n", " # dims = ['period'],\n", " coords={\"period\": years, \"period_emp\": empirical[\"period_emp\"]},\n", " data_vars={\n", " \"empirical\": ([\"period_emp\"], empirical.data),\n", " \"GEV\": ([\"period\"], central),\n", " },\n", " )\n", "\n", " if N_boot:\n", " levels = []\n", " shapes, locs, scales = [], [], []\n", " for i in range(N_boot):\n", " datai = np.random.choice(data, size=data.size, replace=True)\n", " # print(datai.mean())\n", " shapei, loci, scalei = gev.fit(datai, 0)\n", " shapes.append(shapei)\n", " locs.append(loci)\n", " scales.append(scalei)\n", " leveli = estimate_return_level_period(years, loci, scalei, shapei)\n", " levels.append(leveli)\n", "\n", " levels = np.array(levels)\n", " quant = alpha / 2, 1 - alpha / 2\n", " quantiles = np.quantile(levels, quant, axis=0)\n", "\n", " print('')\n", " print(\"Ranges with alpha = %.3f :\" % alpha)\n", " print(\"Location: [%.2f , %.2f]\" % tuple(np.quantile(locs, quant).tolist()))\n", " print(\"Scale: [%.2f , %.2f]\" % tuple(np.quantile(scales, quant).tolist()))\n", " print(\"Shape: [%.2f , %.2f]\" % tuple(np.quantile(shapes, quant).tolist()))\n", "\n", " quantiles = xr.DataArray(\n", 
" dims=[\"period\", \"quantiles\"],\n", " coords={\"period\": out.period, \"quantiles\": np.array(quant)},\n", " data=quantiles.T,\n", " )\n", " out[\"range\"] = quantiles\n", " return out\n", "\n", "\n", "def plot_return_levels(obj, c=\"C0\", label=\"\", ax=None):\n", " \"\"\"\n", " Plot fitted data:\n", " - empirical return level\n", " - GEV-fitted return level\n", " - alpha-confidence ranges with bootstrapping (if N_boot is given)\n", " \"\"\"\n", " if not ax:\n", " ax = plt.gca()\n", " obj[\"GEV\"].plot.line(\"%s-\" % c, lw=3, _labels=False, label=label, ax=ax)\n", " obj[\"empirical\"].plot.line(\"%so\" % c, mec=\"k\", markersize=5, _labels=False, ax=ax)\n", " if \"range\" in obj:\n", " # obj['range'].plot.line('k--',hue='quantiles',label=obj['quantiles'].values)\n", " ax.fill_between(obj[\"period\"], *obj[\"range\"].T, alpha=0.3, lw=0, color=c)\n", " ax.semilogx()\n", " ax.legend()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Video 1: Scenario-dependence of Future Changes in Extremes\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "cellView": "form", "execution": {}, "tags": [ "remove-input" ] }, "outputs": [], "source": [ "# @title Video 1: Scenario-dependence of Future Changes in Extremes\n", "\n", "from ipywidgets import widgets\n", "from IPython.display import YouTubeVideo\n", "from IPython.display import IFrame\n", "from IPython.display import display\n", "\n", "\n", "class PlayVideo(IFrame):\n", " def __init__(self, id, source, page=1, width=400, height=300, **kwargs):\n", " self.id = id\n", " if source == 'Bilibili':\n", " src = f'https://player.bilibili.com/player.html?bvid={id}&page={page}'\n", " elif source == 'Osf':\n", " src = f'https://mfr.ca-1.osf.io/render?url=https://osf.io/download/{id}/?direct%26mode=render'\n", " super(PlayVideo, self).__init__(src, width, height, **kwargs)\n", "\n", "\n", "def display_videos(video_ids, W=400, H=300, fs=1):\n", " tab_contents = []\n", " for i, video_id in 
enumerate(video_ids):\n", " out = widgets.Output()\n", " with out:\n", " if video_ids[i][0] == 'Youtube':\n", " video = YouTubeVideo(id=video_ids[i][1], width=W,\n", " height=H, fs=fs, rel=0)\n", " print(f'Video available at https://youtube.com/watch?v={video.id}')\n", " else:\n", " video = PlayVideo(id=video_ids[i][1], source=video_ids[i][0], width=W,\n", " height=H, fs=fs, autoplay=False)\n", " if video_ids[i][0] == 'Bilibili':\n", " print(f'Video available at https://www.bilibili.com/video/{video.id}')\n", " elif video_ids[i][0] == 'Osf':\n", " print(f'Video available at https://osf.io/{video.id}')\n", " display(video)\n", " tab_contents.append(out)\n", " return tab_contents\n", "\n", "\n", "video_ids = [('Youtube', 'YtRFXri4t4s'), ('Bilibili', 'BV1yW4y1o7zA')]\n", "tab_contents = display_videos(video_ids, W=730, H=410)\n", "tabs = widgets.Tab()\n", "tabs.children = tab_contents\n", "for i in range(len(tab_contents)):\n", " tabs.set_title(i, video_ids[i][0])\n", "display(tabs)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Submit your feedback\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "cellView": "form", "execution": {}, "tags": [ "hide-input" ] }, "outputs": [], "source": [ "# @title Submit your feedback\n", "content_review(f\"{feedback_prefix}_Future_Changes_in_Extremes_Video\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "cellView": "form", "execution": {}, "tags": [ "remove-input" ] }, "outputs": [], "source": [ "# @markdown\n", "from ipywidgets import widgets\n", "from IPython.display import IFrame\n", "\n", "link_id = \"gzsde\"\n", "\n", "print(f\"If you want to download the slides: https://osf.io/download/{link_id}/\")\n", "IFrame(src=f\"https://mfr.ca-1.osf.io/render?url=https://osf.io/{link_id}/?direct%26mode=render%26action=download%26mode=render\", width=854, height=480)" ] }, { "cell_type": "markdown", "metadata": {}, 
"source": [ "## Submit your feedback\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "cellView": "form", "execution": {}, "tags": [ "hide-input" ] }, "outputs": [], "source": [ "# @title Submit your feedback\n", "content_review(f\"{feedback_prefix}_Future_Changes_in_Extremes_Slides\")" ] }, { "cell_type": "markdown", "metadata": { "execution": {} }, "source": [ "# Section 1: Load CMIP6 Data\n", "\n", "\n", "As in W2D1 (IPCC Physical Basis), you will be loading CMIP6 data from [Pangeo](https://pangeo.io/). In this way, you can access large amounts of climate model output that has been stored in the cloud. Here, we have already accessed the data of interest and collected it into a [.nc](https://en.wikipedia.org/wiki/NetCDF) file for you. However, the information on how to access this data directly is provided in the Resources section at the end of this notebook.\n", "\n", "You can learn more about CMIP, including additional methods to access CMIP data, through our [CMIP Resource Bank](https://github.com/neuromatch/climate-course-content/blob/main/tutorials/CMIP/CMIP_resource_bank.md) and the [CMIP website](https://wcrp-cmip.org/)." ] }, { "cell_type": "markdown", "metadata": { "execution": {} }, "source": [ "
\n", " Click here for a detailed description of the four scenarios.\n", " \n", "1. **Historical (hist)**: This scenario covers the time range from 1851 to 2014 and incorporates information about volcanic eruptions, greenhouse gas emissions, and other factors relevant to historical climate conditions.\n", "\n", "The Socio-economic pathway (SSP) scenarios represent potential climate futures beyond 2014. It's important to note that these scenarios are predictions (\"this is what we think will happen\") but are not certainties (\"this is plausible given the assumptions\"). Each scenario is based on different assumptions, primarily concerning the speed and effectiveness of global efforts to address global warming and reduce greenhouse gas emissions and other pollutants. Here are the scenarios we will be using today, along with descriptions taken from [(Cross-Chapter Box 1.4, Table 1, Section 1.6 of the recent IPCC AR6 WG1 report)](https://www.ipcc.ch/report/ar6/wg1/chapter/chapter-1/#1.6).\n", "\n", "2. **SSP1-2.6** *(ambitious climate scenario)*: This scenario stays below 2.0°C warming relative to 1850–1900 (median) with implied net zero CO2 emissions in the second half of the century.\n", "3. **SSP2-4.5** *(middle-ground climate scenario)*: This scenario is in line with the upper end of aggregate NDC (nationally-defined contribution) emissions levels by 2030. CO2 emissions remain at current levels until the middle of the century. The SSP2-4.5 scenario deviates mildly from a ‘no-additional-climate-policy’ reference scenario, resulting in a best-estimate warming of around 2.7°C by the end of the 21st century relative to 1850–1900 [(see Chapter 4)](https://www.ipcc.ch/report/ar6/wg1/chapter/chapter-4/).\n", "4. **SSP5-8.5** *(pessimistic climate scenario)*: The highest emissions scenario with no additional climate policy and where CO2 emissions roughly double from current levels by 2050. 
Emissions levels as high as SSP5-8.5 are not obtained by integrated assessment models (IAMs) under any of the SSPs other than the fossil-fuelled SSP5 socio-economic development pathway. It exhibits the highest level of warming among all scenarios and is often used as a \"worst-case scenario\" (though it may not be the most likely outcome). It is worth noting that many experts today consider this scenario to be unlikely due to significant improvements in mitigation policies over the past decade or so.\n", "
" ] }, { "cell_type": "markdown", "metadata": { "execution": {} }, "source": [ "Here you will load precipitation and maximum daily near-surface air temperature." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "execution": {}, "tags": [] }, "outputs": [], "source": [ "# download file: 'cmip6_data_city_daily_scenarios_tasmax_pr_models.nc'\n", "filename_cmip6_data = \"cmip6_data_city_daily_scenarios_tasmax_pr_models.nc\"\n", "url_cmip6_data = \"https://osf.io/ngafk/download\"\n", "\n", "data = xr.open_dataset(pooch_load(url_cmip6_data, filename_cmip6_data))" ] }, { "cell_type": "markdown", "metadata": { "execution": {} }, "source": [ "# Section 2: Inspect Data" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "execution": {}, "tags": [] }, "outputs": [], "source": [ "data" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "execution": {}, "tags": [] }, "outputs": [], "source": [ "data.pr" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "execution": {}, "tags": [] }, "outputs": [], "source": [ "data.tasmax" ] }, { "cell_type": "markdown", "metadata": { "execution": {} }, "source": [ "You can take a little time to explore the different variables within this dataset." ] }, { "cell_type": "markdown", "metadata": { "execution": {} }, "source": [ "# Section 3: Processing" ] }, { "cell_type": "markdown", "metadata": { "execution": {} }, "source": [ "Let's look at the data for one selected city, for one climate model. In this case here, we choose Madrid and the `MPI-ESM1-2-HR` earth system model (ESM)." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "execution": {}, "executionInfo": { "elapsed": 441, "status": "ok", "timestamp": 1682954270647, "user": { "displayName": "Matthias Aengenheyster", "userId": "16322208118439170907" }, "user_tz": -120 }, "tags": [] }, "outputs": [], "source": [ "city = \"Madrid\"\n", "data_city = data.sel(city=city, model=\"MPI-ESM1-2-HR\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "execution": {}, "executionInfo": { "elapsed": 1456, "status": "ok", "timestamp": 1682954272100, "user": { "displayName": "Matthias Aengenheyster", "userId": "16322208118439170907" }, "user_tz": -120 }, "tags": [] }, "outputs": [], "source": [ "data_city" ] }, { "cell_type": "markdown", "metadata": { "execution": {} }, "source": [ "The data has daily resolution, for three climate scenarios. Until 2014 the scenarios are identical (the 'historical' scenario). After 2014 they diverge given different climate change trajectories. Let's plot these two variables over time to get a sense of this." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "execution": {}, "executionInfo": { "elapsed": 13213, "status": "ok", "timestamp": 1682954288444, "user": { "displayName": "Matthias Aengenheyster", "userId": "16322208118439170907" }, "user_tz": -120 }, "tags": [] }, "outputs": [], "source": [ "# setup plot\n", "fig, ax = plt.subplots(2, sharex=True, figsize=(10, 5), constrained_layout=True)\n", "\n", "# plot maximum daily near surface air temperature 'tasmax' time series\n", "# of all scenarios\n", "data_city[\"tasmax\"].plot(hue=\"scenario\", ax=ax[0], lw=0.5)\n", "\n", "# plot precipitation 'pr' time series of all scenarios\n", "data_city[\"pr\"].plot(hue=\"scenario\", ax=ax[1], lw=0.5, add_legend=False)\n", "\n", "# plot aesthetics\n", "ax[0].set_title(\"Maximum Daily Near-Surface Air Temperature\")\n", "ax[1].set_title(\"Precipitation\")\n", "ax[0].set_xlabel(\"\")\n", "ax[1].set_xlabel(\"Time (days)\")\n", "ax[0].set_ylabel(\"(K)\") # Kelvin\n", "ax[1].set_ylabel(\"(mm/day)\")\n", "# set limits\n", "ax[0].set_xlim(datetime(1850, 1, 1), datetime(2100, 12, 31))\n", "ax[1].set_ylim(0, None);" ] }, { "cell_type": "markdown", "metadata": { "execution": {} }, "source": [ "In the previous tutorials, we have been operating on annual maxima data - looking at the most extreme event observed in each year. We will do the same here: for each year, we take the day with the highest temperature or the largest amount of rainfall, respectively." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "execution": {}, "tags": [] }, "outputs": [], "source": [ "# setup plot\n", "fig, ax = plt.subplots(2, sharex=True, figsize=(10, 5), constrained_layout=True)\n", "\n", "# choose a variable, take the annual maximum and plot time series of all scenarios\n", "data_city[\"tasmax\"].resample(time=\"1Y\").max().plot(\n", " hue=\"scenario\", ax=ax[0], lw=2)\n", "data_city[\"pr\"].resample(time=\"1Y\").max().plot(\n", " hue=\"scenario\", ax=ax[1], lw=2, add_legend=False\n", ")\n", "\n", "# plot aesthetics\n", "ax[0].set_title(\"Annual Maximum of Daily Maximum Near-Surface Air Temperature\")\n", "ax[1].set_title(\"Annual Maximum of Daily Precipitation\")\n", "ax[0].set_xlabel(\"\")\n", "ax[1].set_xlabel(\"Time (years)\")\n", "ax[0].set_ylabel(\"(K)\")\n", "ax[1].set_ylabel(\"(mm/day)\")\n", "# set limits\n", "ax[0].set_xlim(datetime(1850, 1, 1), datetime(2100, 12, 31))\n", "ax[1].set_ylim(0, None);" ] }, { "cell_type": "markdown", "metadata": { "execution": {} }, "source": [ "## Questions 3\n", "1. Describe the plot - what do you see for the two variables, over time, between scenarios?" 
] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "execution": {}, "tags": [] }, "source": [ "[*Click for solution*](https://github.com/neuromatch/climate-course-content/tree/main/tutorials/W2D3_ExtremesandVariability/solutions/W2D3_Tutorial6_Solution_66d5a702.py)\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Submit your feedback\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "cellView": "form", "execution": {}, "tags": [ "hide-input" ] }, "outputs": [], "source": [ "# @title Submit your feedback\n", "content_review(f\"{feedback_prefix}_Questions_3\")" ] }, { "cell_type": "markdown", "metadata": { "execution": {} }, "source": [ "# Section 4: Differences in the Historical Period" ] }, { "cell_type": "markdown", "metadata": { "execution": {} }, "source": [ "As in the previous tutorial we want to compare consecutive 30-year periods in the past: therefore take the historical run (1850-2014), and look at the annual maximum daily precipitation for the last three 30-year periods. We only need to look at one scenario because they all use the historical run until 2014." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "execution": {}, "executionInfo": { "elapsed": 971, "status": "ok", "timestamp": 1682954330425, "user": { "displayName": "Matthias Aengenheyster", "userId": "16322208118439170907" }, "user_tz": -120 }, "tags": [] }, "outputs": [], "source": [ "# take max daily precip values from Madrid for three climate normal periods\n", "pr_city = data_city[\"pr\"]\n", "pr_city_max = pr_city.resample(time=\"1Y\").max()\n", "\n", "data_period1 = (\n", " pr_city_max.sel(scenario=\"ssp245\", time=slice(\"2014\"))\n", " .sel(time=slice(\"1925\", \"1954\"))\n", " .to_dataframe()[\"pr\"]\n", ")\n", "data_period2 = (\n", " pr_city_max.sel(scenario=\"ssp245\", time=slice(\"2014\"))\n", " .sel(time=slice(\"1955\", \"1984\"))\n", " .to_dataframe()[\"pr\"]\n", ")\n", "data_period3 = (\n", " pr_city_max.sel(scenario=\"ssp245\", time=slice(\"2014\"))\n", " .sel(time=slice(\"1985\", \"2014\"))\n", " .to_dataframe()[\"pr\"]\n", ")" ] }, { "cell_type": "markdown", "metadata": { "execution": {} }, "source": [ "Plot the histograms of annual maximum daily precipitation for the three climate normals covering the historical period. What do you see? Compare to the analysis in the previous tutorial where we analyzed sea level height. Any similarities or differences? Why do you think that is?" 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "execution": {}, "executionInfo": { "elapsed": 444, "status": "ok", "timestamp": 1682954338685, "user": { "displayName": "Matthias Aengenheyster", "userId": "16322208118439170907" }, "user_tz": -120 }, "tags": [] }, "outputs": [], "source": [ "# collect scenario data of all climate normal periods in historical time frame\n", "periods_data = [data_period1, data_period2, data_period3]\n", "periods_labels = [\"1925-1954\", \"1955-1984\", \"1985-2014\"]\n", "\n", "# plot histograms for climate normals during historical period\n", "fig, ax = plt.subplots()\n", "colors = ['C0','C1','C2']\n", "\n", "# repeat histogram/PDF plot for all periods\n", "for counter, period in enumerate(periods_data):\n", " sns.histplot(\n", " period,\n", " bins = np.arange(20, 90, 5),\n", " color = colors[counter],\n", " element = \"step\",\n", " alpha = 0.5,\n", " kde = True,\n", " label = periods_labels[counter],\n", " ax = ax,\n", " )\n", "\n", "# aesthetics\n", "ax.legend()\n", "ax.set_xlabel(\"Annual Maximum Daily Precipitation (mm/day)\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "execution": {}, "executionInfo": { "elapsed": 7, "status": "ok", "timestamp": 1682954338884, "user": { "displayName": "Matthias Aengenheyster", "userId": "16322208118439170907" }, "user_tz": -120 }, "tags": [] }, "outputs": [], "source": [ "# calculate moments of the data\n", "periods_stats = pd.DataFrame(index=[\"Mean\", \"Standard Deviation\", \"Skew\"])\n", "\n", "# collect data of periods\n", "periods_data = [data_period1, data_period2, data_period3]\n", "periods_labels = [\"1925-1954\", \"1955-1984\", \"1985-2014\"]\n", "\n", "# repeat statistics calculation for all periods\n", "for counter, period in enumerate(periods_data):\n", " # calculate mean, std and skew and put it into DataFrame\n", " periods_stats[periods_labels[counter]] = [\n", " period.mean(),\n", " period.std(),\n", " period.skew(),\n", " ]\n", "\n", 
"periods_stats = periods_stats.T\n", "periods_stats" ] }, { "cell_type": "markdown", "metadata": { "execution": {} }, "source": [ "Now, we fit a GEV to the three time periods, and plot the distributions using the gev.pdf function:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "execution": {}, "tags": [] }, "outputs": [], "source": [ "# reminder of how the fit function works\n", "shape, loc, scale = gev.fit(data_period1.values, 0)\n", "print(f\"Fitted parameters:\\nShape: {shape:.5f}, Location: {loc:.5f}, Scale: {scale:.5f}\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "execution": {}, "tags": [] }, "outputs": [], "source": [ "# fit GEV distribution for three climate normals during historical period\n", "params_period1 = gev.fit(data_period1, 0)\n", "shape_period1, loc_period1, scale_period1 = params_period1\n", "params_period2 = gev.fit(data_period2, 0)\n", "shape_period2, loc_period2, scale_period2 = params_period2\n", "params_period3 = gev.fit(data_period3, 0)\n", "shape_period3, loc_period3, scale_period3 = params_period3" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "execution": {}, "tags": [] }, "outputs": [], "source": [ "# plot corresponding PDFs of GEV distribution fits\n", "\n", "# create x-data in precipitation range\n", "x = np.linspace(20, 90, 1000)\n", "\n", "# collect all fitted GEV parameters\n", "shape_all = [shape_period1, shape_period2, shape_period3]\n", "loc_all = [loc_period1, loc_period2, loc_period3]\n", "scale_all = [scale_period1, scale_period2, scale_period3]\n", "\n", "periods_labels = [\"1925-1954\", \"1955-1984\", \"1985-2014\"]\n", "colors = ['C0','C1','C2']\n", "\n", "# setup plot\n", "fig, ax = plt.subplots()\n", "\n", "# repeat plotting for all climate normal periods\n", "for i in range(len(periods_labels)):\n", " # plot GEV PDFs\n", " ax.plot(\n", " x,\n", " gev.pdf(x, shape_all[i], loc=loc_all[i], scale=scale_all[i]),\n", " c=colors[i],\n", " lw=3,\n", " 
label=periods_labels[i],\n", " )\n", "\n", "# aesthetics\n", "ax.legend()\n", "ax.set_xlabel(\"Annual Maximum Daily Precipitation (mm/day)\")\n", "ax.set_ylabel(\"Density\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "execution": {}, "tags": [] }, "outputs": [], "source": [ "# show the parameters of the GEV fit\n", "parameters = pd.DataFrame(index=[\"Location\", \"Scale\", \"Shape\"])\n", "parameters[\"1925-1954\"] = [loc_period1, scale_period1, shape_period1]\n", "parameters[\"1955-1984\"] = [loc_period2, scale_period2, shape_period2]\n", "parameters[\"1985-2014\"] = [loc_period3, scale_period3, shape_period3]\n", "\n", "parameters = parameters.T\n", "parameters.round(4) # .astype('%.2f')" ] }, { "cell_type": "markdown", "metadata": { "execution": {} }, "source": [ "Now we will create a return level plot for the three periods. To do so we will be using some helper functions defined at the beginning of the tutorial, most of which you have seen before. In particular we will use `fit_return_levels()` to generate an xr.Dataset that contains empirical and GEV fits, as well as confidence intervals, and `plot_return_levels()` to generate a plot from this xr.Dataset with calculated confidence intervals shaded (alpha printed below).\n", "\n", "These functions can also be found in `gev_functions.py`." 
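, "\n",
"\n",
"The core relation inside these helpers is worth keeping in mind: a T-year return level is the value exceeded with probability 1/T in any given year, i.e. the (1 - 1/T) quantile of the fitted GEV, which is what `stats.genextreme.ppf(1 - 1/T, ...)` computes in `estimate_return_level_period()`. A small self-contained sketch with synthetic data (the sample size and parameters are illustrative):\n",
"\n",
"```python\n",
"import numpy as np\n",
"from scipy import stats\n",
"\n",
"# synthetic 'annual maxima' drawn from a known GEV\n",
"sample = stats.genextreme.rvs(-0.1, loc=50, scale=10, size=500, random_state=42)\n",
"shape, loc, scale = stats.genextreme.fit(sample, 0)\n",
"\n",
"# 100-year return level = (1 - 1/100) quantile of the fitted distribution\n",
"level_100 = stats.genextreme.ppf(1 - 1 / 100, shape, loc=loc, scale=scale)\n",
"\n",
"# consistency check: the fitted exceedance probability of that level is 1/100\n",
"p_exceed = stats.genextreme.sf(level_100, shape, loc=loc, scale=scale)\n",
"print(round(p_exceed, 6))  # 0.01\n",
"```"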
] }, { "cell_type": "code", "execution_count": null, "metadata": { "execution": {}, "tags": [] }, "outputs": [], "source": [ "fit_period1 = fit_return_levels(\n", " data_period1, np.arange(1.1, 1000, 0.1), N_boot=100, alpha=0.05\n", ")\n", "fit_period2 = fit_return_levels(\n", " data_period2, np.arange(1.1, 1000, 0.1), N_boot=100, alpha=0.05\n", ")\n", "fit_period3 = fit_return_levels(\n", " data_period3, np.arange(1.1, 1000, 0.1), N_boot=100, alpha=0.05\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "execution": {}, "tags": [] }, "outputs": [], "source": [ "fig, ax = plt.subplots()\n", "plot_return_levels(fit_period1, c=\"C0\", label=\"1925-1954\", ax=ax)\n", "plot_return_levels(fit_period2, c=\"C1\", label=\"1955-1984\", ax=ax)\n", "plot_return_levels(fit_period3, c=\"C2\", label=\"1985-2014\", ax=ax)\n", "ax.set_xlim(1.5, 1000)\n", "ax.set_ylim(40, 140)\n", "\n", "ax.legend()\n", "ax.set_ylabel(\"Return Level (mm/day)\")\n", "ax.set_xlabel(\"Return Period (years)\")" ] }, { "cell_type": "markdown", "metadata": { "execution": {} }, "source": [ "## Questions 4\n", "\n", "1. What do you conclude for the historical change in extreme precipitation in this city? What possible limitations could this analysis have? (How) could we address this?" 
] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "execution": {}, "tags": [] }, "source": [ "[*Click for solution*](https://github.com/neuromatch/climate-course-content/tree/main/tutorials/W2D3_ExtremesandVariability/solutions/W2D3_Tutorial6_Solution_63eff724.py)\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Submit your feedback\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "cellView": "form", "execution": {}, "tags": [ "hide-input" ] }, "outputs": [], "source": [ "# @title Submit your feedback\n", "content_review(f\"{feedback_prefix}_Questions_1\")" ] }, { "cell_type": "markdown", "metadata": { "execution": {} }, "source": [ "# Section 5: Climate Futures" ] }, { "cell_type": "markdown", "metadata": { "execution": {} }, "source": [ "Now let's look at maximum precipitation in possible climate futures: the years 2071-2100 (the last 30 years). For comparison we use the historical period, 1850-2014." ] }, { "cell_type": "markdown", "metadata": { "execution": {} }, "source": [ "In the next box we select the data, then you will plot it as a coding exercise:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "execution": {}, "executionInfo": { "elapsed": 238, "status": "ok", "timestamp": 1682954382837, "user": { "displayName": "Matthias Aengenheyster", "userId": "16322208118439170907" }, "user_tz": -120 }, "tags": [] }, "outputs": [], "source": [ "data_city = data.sel(city=city, model=\"MPI-ESM1-2-HR\")\n", "\n", "# select the different scenarios and periods, get the annual maximum, store in DataFrames\n", "data_hist = (\n", " data_city[\"pr\"]\n", " .sel(scenario=\"ssp126\", time=slice(\"1850\", \"2014\"))\n", " .resample(time=\"1Y\")\n", " .max()\n", " .to_dataframe()[\"pr\"]\n", ")\n", "data_city_fut = (\n", " data_city[\"pr\"].sel(time=slice(\"2071\", \"2100\")).resample(time=\"1Y\").max()\n", ")\n", "data_ssp126 = data_city_fut.sel(scenario=\"ssp126\").to_dataframe()[\"pr\"]\n", 
"data_ssp245 = data_city_fut.sel(scenario=\"ssp245\").to_dataframe()[\"pr\"]\n", "data_ssp585 = data_city_fut.sel(scenario=\"ssp585\").to_dataframe()[\"pr\"]" ] }, { "cell_type": "markdown", "metadata": { "execution": {} }, "source": [ "## Coding Exercises 5\n", "Differences between climate scenarios:\n", "\n", "Repeat the analysis that we did above for three different periods, but now for three different climate scenarios, and the historical period for comparison. The four cells below are prepared for the following purposes:\n", "1. Create a figure that displays the histograms of the four records. Find a useful number and spacing of bins (via the `bins=` keyword to `sns.histplot()`). Calculate the moments.\n", "2. Fit GEV distributions to the four records using the same commands as above. Use the `gev.pdf()` function to plot the fitted distributions.\n", "3. Inspect location, scale and shape parameters\n", "4. Create a return-level plot using the `ef.plot_levels_from_obj()` function." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "execution": {} }, "outputs": [], "source": [ "# setup plot\n", "fig, ax = plt.subplots()\n", "\n", "#################################################\n", "## TODO for students: ##\n", "## Put the data, labels, and colors you want to plot in lists. ##\n", "## Additionally, create an array with a useful number and spacing of bins. ##\n", "## Below the plotting procedure, calculate the moments for all scenarios. 
##\n", "# Remove or comment the following line of code once you have completed the exercise:\n", "raise NotImplementedError(\"Student exercise: Put the data, labels, and colors you want to plot in lists, create an array with a useful number and spacing of bins, and calculate the moments for all scenarios.\")\n", "#################################################\n", "\n", "# collect data of all scenarios, labels, colors in lists, and define bin_range\n", "scenario_data = ...\n", "scenario_labels = ...\n", "colors = ...\n", "bin_range = ...\n", "\n", "# create histograms/ PDFs for each scenario and historical\n", "for counter, data_src in enumerate(scenario_data):\n", " sns.histplot(\n", " data_src,\n", " bins=bin_range,\n", " color=colors[counter],\n", " element=\"step\",\n", " stat=\"density\",\n", " alpha=0.3,\n", " lw=0.5,\n", " line_kws=dict(lw=3),\n", " kde=True,\n", " label=scenario_labels[counter],\n", " ax=ax,\n", " )\n", "\n", "# aesthetics\n", "ax.legend()\n", "ax.set_xlabel(\"Annual Maximum Daily Precipitation (mm/day)\")\n", "\n", "# calculate moments\n", "periods_stats = pd.DataFrame(index=[\"Mean\", \"Standard Deviation\", \"Skew\"])\n", "column_names = ...\n", "\n", "for counter, data_src in enumerate(scenario_data):\n", " periods_stats[column_names[counter]] = ...\n", "\n", "periods_stats = periods_stats.T\n", "periods_stats" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "execution": {}, "tags": [] }, "source": [ "[*Click for solution*](https://github.com/neuromatch/climate-course-content/tree/main/tutorials/W2D3_ExtremesandVariability/solutions/W2D3_Tutorial6_Solution_f0cfaef4.py)\n", "\n", "*Example output:*\n", "\n", "Solution hint\n", "\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "execution": {} }, "outputs": [], "source": [ "#################################################\n", "## TODO for students: ##\n", "## Put the data you want to plot in a list. 
##\n", "## Then complete the for loop by adding a fitting and plotting procedure. ##\n", "# Remove or comment the following line of code once you have completed the exercise:\n", "raise NotImplementedError(\"Student exercise: Put the data you want to plot in a list. Then complete the for loop by adding a fitting and a plotting procedure for the GEV distributions.\")\n", "#################################################\n", "\n", "# collect data of all scenarios in a list, define labels and colors\n", "scenario_data = ...\n", "scenario_labels = [\"Historical, 1850-2014\", \"SSP-126, 2071-2100\", \"SSP-245, 2071-2100\", \"SSP-585, 2071-2100\"]\n", "colors = [\"k\", \"C0\", \"C1\", \"C2\"]\n", "\n", "# initialize list of scenario_data length\n", "shape_all = [x*0 for x in range(len(scenario_data))]\n", "loc_all = [x*0 for x in range(len(scenario_data))]\n", "scale_all = [x*0 for x in range(len(scenario_data))]\n", "\n", "fig, ax = plt.subplots()\n", "x = np.linspace(20, 120, 1000)\n", "\n", "# repeat fitting and plotting for all scenarios\n", "for counter, scenario in enumerate(scenario_data):\n", " # fit GEV distribution\n", " shape_all[counter],loc_all[counter], scale_all[counter] = ...\n", " # make plots\n", " _ = ...\n", "\n", "# aesthetics\n", "ax.legend()\n", "ax.set_xlabel(\"Annual Maximum Daily Precipitation (mm/day)\")\n", "ax.set_ylabel(\"Density\");" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "execution": {}, "executionInfo": { "elapsed": 42287, "status": "ok", "timestamp": 1682954432623, "user": { "displayName": "Matthias Aengenheyster", "userId": "16322208118439170907" }, "user_tz": -120 }, "tags": [] }, "source": [ "[*Click for solution*](https://github.com/neuromatch/climate-course-content/tree/main/tutorials/W2D3_ExtremesandVariability/solutions/W2D3_Tutorial6_Solution_d93c8bdf.py)\n", "\n", "*Example output:*\n", "\n", "Solution hint\n", "\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "execution": {} 
}, "outputs": [], "source": [ "#################################################\n", "## TODO for students: ##\n", "## Put the labels you want to show in a list. ##\n", "## Then complete the for loop by adding a fitting and plotting procedure ##\n", "## with previously applied functions: fit_return_levels() and plot_return_levels(). ##\n", "# Remove or comment the following line of code once you have completed the exercise:\n", "raise NotImplementedError(\"Student exercise: Put the labels you want to show in a list. Then complete the for loop by adding a fitting and a plotting procedure with previously applied functions: fit_return_levels() and plot_return_levels().\")\n", "#################################################\n", "\n", "# collect data of all scenarios in a list, define labels and colors\n", "scenario_data = [data_hist, data_ssp126, data_ssp245, data_ssp585]\n", "scenario_labels = ...\n", "colors = [\"k\", \"C0\", \"C1\", \"C2\"]\n", "\n", "# initialize list for fit output\n", "fit_all_scenarios = [0, 0, 0, 0]\n", "\n", "# setup plot\n", "fig, ax = plt.subplots()\n", "\n", "# repeat fitting and plotting of the return levels for all scenarios\n", "# using fit_return_levels() and plot_return_levels()\n", "for counter, scenario in enumerate(scenario_data):\n", " fit_all_scenarios[counter] = ...\n", " _ = ...\n", "\n", "# aesthetics\n", "ax.set_xlim(1, 200)\n", "ax.set_ylim(30, 110)\n", "ax.set_ylabel(\"Return Level (mm/day)\")\n", "ax.set_xlabel(\"Return Period (years)\")" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "execution": {}, "tags": [] }, "source": [ "[*Click for solution*](https://github.com/neuromatch/climate-course-content/tree/main/tutorials/W2D3_ExtremesandVariability/solutions/W2D3_Tutorial6_Solution_b99eab92.py)\n", "\n", "*Example output:*\n", "\n", "Solution hint\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Submit your feedback\n" ] }, { "cell_type": "code", "execution_count": null, 
"metadata": { "cellView": "form", "execution": {}, "tags": [ "hide-input" ] }, "outputs": [], "source": [ "# @title Submit your feedback\n", "content_review(f\"{feedback_prefix}_Coding_Exercise_5\")" ] }, { "cell_type": "markdown", "metadata": { "execution": {} }, "source": [ "## Questions 5\n", "\n", "1. What can you say about how extreme precipitation differs between the climate scenarios? Are the differences large or small compared to periods in the historical records? What are the limitations? Consider the x-axis in the return-level plot compared to the space covered by the data (only 30 years). How could we get more information for longer return periods?" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "execution": {} }, "source": [ "[*Click for solution*](https://github.com/neuromatch/climate-course-content/tree/main/tutorials/W2D3_ExtremesandVariability/solutions/W2D3_Tutorial6_Solution_e25db290.py)\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Submit your feedback\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "cellView": "form", "execution": {}, "tags": [ "hide-input" ] }, "outputs": [], "source": [ "# @title Submit your feedback\n", "content_review(f\"{feedback_prefix}_Questions_5\")" ] }, { "cell_type": "markdown", "metadata": { "execution": {} }, "source": [ "# Summary\n", "In this tutorial, you've learned how to analyze climate model output to investigate the changes in extreme precipitation patterns over time under various emission scenarios and the historical period. Specifically, we've focused on three future Shared Socioeconomic Pathways (SSPs) scenarios, which represent potential futures based on different assumptions about greenhouse gas emissions.\n", "\n", "You've explored how to fit Generalized Extreme Value (GEV) distributions to the data and used these fitted distributions to create return-level plots. 
These plots allow us to visualize the probability of extreme events under different climate scenarios." ] }, { "cell_type": "markdown", "metadata": { "execution": {} }, "source": [ "# Resources\n", "\n", "The data for this tutorial was accessed through the [Pangeo Cloud platform](https://pangeo.io/cloud.html). Additionally, a [notebook](https://github.com/neuromatch/climate-course-content/blob/main/tutorials/W2D3_ExtremesandVariability/get_CMIP6_data_from_pangeo.ipynb) is available here that downloads the specific datasets we used in this tutorial.\n", " \n", "This tutorial uses data from the simulations conducted as part of the [CMIP6](https://pcmdi.llnl.gov/CMIP6/) multi-model ensemble, in particular the models MPI-ESM1-2-HR and MIROC6. \n", "\n", "[MPI-ESM1-2-HR](https://gmd.copernicus.org/articles/12/3241/2019/) was developed and the runs conducted by the [Max Planck Institute for Meteorology](https://mpimet.mpg.de/en/homepage) in Hamburg, Germany. \n", "[MIROC6](https://doi.org/10.5194/gmd-12-2727-2019) was developed and the runs conducted by a japanese modeling community including the Japan Agency for Marine-Earth Science and Technology [(JAMSTEC)](https://www.jamstec.go.jp/e/), Kanagawa, Japan, Atmosphere and Ocean Research Institute [(AORI)](https://www.aori.u-tokyo.ac.jp/english/), The University of Tokyo, Chiba, Japan, National Institute for Environmental Studies [(NIES)](https://www.nies.go.jp/index-e.html), Ibaraki, Japan, and [RIKEN Center for Computational Science](https://www.riken.jp/en/), Hyogo, Japan.\n", "\n", "For references on particular model experiments see this [database](https://www.wdc-climate.de/ords/f?p=127:2).\n", "\n", "For more information on what CMIP is and how to access the data, please see this [page](https://github.com/neuromatch/climate-course-content/blob/main/tutorials/CMIP/CMIP_resource_bank.md)." 
] } ], "metadata": { "colab": { "collapsed_sections": [], "include_colab_link": true, "name": "W2D3_Tutorial6", "provenance": [], "toc_visible": true }, "kernel": { "display_name": "Python 3", "language": "python", "name": "python3" }, "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.19" } }, "nbformat": 4, "nbformat_minor": 4 }