Introduction to Seasonal Climate Prediction
Liqiang Sun, International Research Institute for Climate and Society (IRI)

Weather forecast: an initial-condition problem. Climate forecast: primarily a boundary-forcing problem.

Climate Forecasts should:
• be probabilistic (ensembling)
• be reliable and skillful (calibration and verification)
• address relevant scales and quantities (downscaling)

OUTLINE
• Fundamentals of probabilistic forecasts
• Identifying and correcting model errors
  - systematic errors
  - random errors
  - conditional errors
• Forecast verification
• Summary

Fundamentals of Probabilistic Forecasts

Basis of Seasonal Climate Prediction
Changes in boundary conditions, such as SST and land-surface characteristics, can influence the characteristics of weather (e.g., strength, persistence, or absence), and thus influence the seasonal climate.

Influence of SST on the tropical atmosphere

IRI DYNAMICAL CLIMATE FORECAST SYSTEM (2-tiered)
[schematic] Ocean tier: forecast SST for the tropical Pacific (multi-model, dynamical and statistical), tropical Atlantic and Indian Oceans (statistical), and extratropics (damped persistence), as forecast SST ensembles at 3/6-month lead, plus persisted global SST anomaly ensembles at 3-month lead.
Atmosphere tier: global atmospheric models forced with these SSTs, with ensembles of 10-30 members each: ECPC (Scripps), ECHAM4.5 (MPI), CCM3.6 (NCAR), NCEP (MRF9), NSIPP (NASA), COLA, GFDL.
Post-processing: multimodel ensembling and regional models.

Probability Calculated Using the Ensemble-Mean Contingency Table

        Bf    Nf    Af
  Bo    50    33    15
  No    30    34    25
  Ao    20    33    60

Each column gives the historical relative frequency (%) of observing below (Bo), near (No), or above (Ao) normal, given the category of the ensemble-mean forecast (Bf, Nf, Af); columns sum to 100%.
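
A minimal sketch of how such a table could be used (a hypothetical lookup, assuming each column holds the observed-category frequencies for a given ensemble-mean category, as reconstructed above):

```python
# Hypothetical lookup of tercile forecast probabilities from the contingency
# table above: given the category of the ensemble mean, issue the historical
# relative frequencies (%) of the observed categories.
table = {
    "Bf": {"B": 50, "N": 30, "A": 20},
    "Nf": {"B": 33, "N": 34, "A": 33},
    "Af": {"B": 15, "N": 25, "A": 60},
}
ensemble_mean_category = "Af"
print(table[ensemble_mean_category])  # {'B': 15, 'N': 25, 'A': 60}
```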

Probability obtained from ensemble spread
1) Count the number of ensemble members in each category. For example, with 100 ensemble members in total: 40 members in category "A", 35 in category "N", and 25 in category "B".
2) Calibration.
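
A minimal sketch of the counting step (synthetic data; the variable names and the use of a hindcast to define tercile boundaries are illustrative assumptions):

```python
import numpy as np

# Tercile probabilities by counting ensemble members in each category.
rng = np.random.default_rng(42)
hindcast = rng.normal(size=1000)          # stand-in for historical climatology
ensemble = rng.normal(loc=0.3, size=100)  # stand-in for 100 forecast members

# Climatological tercile boundaries from the hindcast distribution.
lower, upper = np.percentile(hindcast, [100 / 3, 200 / 3])

p_below = np.mean(ensemble < lower)
p_above = np.mean(ensemble > upper)
p_normal = 1.0 - p_below - p_above
print(f"B: {p_below:.0%}  N: {p_normal:.0%}  A: {p_above:.0%}")
```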

Example of seasonal rainfall forecast (3-month average, probabilistic)

Why seasonal averages?
Rainfall correlation skill: ECHAM4.5 vs. CRU observations (1951-95). Should we only be forecasting for February for the SW US and N Mexico?

Why seasonal averages?
Partial correlation maps for individual months: there is no independent skill for individual months.

Why seasonal averages?

Why probabilistic?
Observed rainfall (SON 2004) vs. model forecasts (SON 2004) made in Aug 2004: RUN #1 and RUN #4 (units are mm/season). These are two ensemble members from the same AGCM, with the same SST forcing, just different initial conditions.

Why probabilistic?
Model forecasts (SON 2004) made in Aug 2004, ensemble members 1-8, vs. observed rainfall Sep-Oct-Nov 2004 (CAMS-OPI). Seasonal climate is a combination of a boundary-forced SIGNAL and chaotic NOISE from the internal dynamics of the atmosphere.

Why probabilistic?
Ensemble mean of the model forecasts (SON 2004) made in Aug 2004, vs. observed rainfall Sep-Oct-Nov 2004 (CAMS-OPI). The average model response, or SIGNAL, due to the prescribed SSTs was for normal to below-normal rainfall over the southern US and northern Mexico in this season. We also need to communicate the fact that some of the ensemble-member predictions were actually wet in this region. Thus, there may be a 'most likely outcome', but there is also a 'range of possibilities' that must be quantified.

Climate Forecast: Signal + Uncertainty
[schematic: historical (climatological) distribution and forecast distribution, with below-normal, near-normal, and above-normal categories, the climatological average, and the forecast mean]
The SIGNAL represents the 'most likely' outcome. The NOISE represents internal atmospheric chaos, uncertainties in the boundary conditions, and random errors in the models.

Probabilistic Forecasts
Reliability: forecasts should "mean what they say".
Resolution: probabilities should differ from climatology as much as possible, when appropriate.
Reliability diagrams show the consistency between the a priori stated probabilities of an event and the a posteriori observed relative frequencies of that event. Good reliability is indicated by a 45° diagonal.

Identifying and Correcting Model Errors

Optimizing Probabilistic Information
• Eliminate the 'bad' uncertainty:
  - Reduce systematic errors, e.g., MOS correction, calibration.
• Reliably estimate the 'good' uncertainty:
  - Reduce probability sampling errors, e.g., Gaussian fitting and generalized linear models (GLMs).
  - Minimize the random errors, e.g., a multi-model approach (for both response and forcing).
  - Minimize the conditional errors, e.g., Conditional Exceedance Probabilities (CEPs).

Systematic Spatial Errors
A systematic error in the location of mean rainfall leads to a spatial error in interannual rainfall variability, and thus to a resulting lack of skill locally.

Systematic Calibration Errors
ORIGINAL vs. RESCALED: dynamical models may have quantitative errors in the mean climate, as well as in the magnitude of its interannual variability.
ORIGINAL vs. RECALIBRATED: statistical recalibration of the model's climate and its response characteristics can improve model reliability.

Reducing Systematic Errors: MOS Correction
DJFM rainfall anomaly correlation before and after statistical correction (Tippett et al., 2003, Int. J. Climatol.).

Sampling Error of the Ensemble Mean (N = 8, 16, 24, 39)
The ensemble-mean estimate of the boundary-forced signal converges toward its "true" (infinite-ensemble) value like 1/√N, where S is the signal-to-noise ratio and N is the ensemble size.
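
A quick numerical illustration of this 1/√N behavior (a sketch with synthetic data; the signal and noise parameters are arbitrary assumptions):

```python
import numpy as np

# Monte Carlo check: the rms error of an N-member ensemble mean, as an
# estimate of the fixed boundary-forced signal, shrinks like 1/sqrt(N).
rng = np.random.default_rng(0)
signal, noise_sd, trials = 1.0, 1.0, 20000

for n in (8, 16, 24, 39):
    members = signal + noise_sd * rng.normal(size=(trials, n))
    rms = np.sqrt(np.mean((members.mean(axis=1) - signal) ** 2))
    print(f"N={n:2d}  rms error={rms:.3f}  expected={noise_sd / np.sqrt(n):.3f}")
```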

Fitting with a Gaussian
Two types of error:
• The PDF is not really Gaussian!
• Sampling error
  - fit only the mean
  - fit the mean and the variance
Error(Gaussian fit, N=24) = Error(Counting, N=40)
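
A minimal sketch of the Gaussian-fit alternative to counting (synthetic data; the names and numbers are illustrative):

```python
import numpy as np
from scipy.stats import norm

# Tercile probabilities from a Gaussian fitted to a small ensemble,
# instead of counting members in each category.
rng = np.random.default_rng(1)
hindcast = rng.normal(size=1000)         # defines climatological terciles
ensemble = rng.normal(loc=0.4, size=24)  # small forecast ensemble, N=24

lower, upper = np.percentile(hindcast, [100 / 3, 200 / 3])
mu, sigma = ensemble.mean(), ensemble.std(ddof=1)  # fit mean and variance

p_below = norm.cdf(lower, mu, sigma)
p_above = 1.0 - norm.cdf(upper, mu, sigma)
p_normal = 1.0 - p_below - p_above
print(f"B: {p_below:.0%}  N: {p_normal:.0%}  A: {p_above:.0%}")
```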

Minimizing Random Errors: Multi-Model Ensembling
Probabilistic skill scores (RPSS) for 2 m temperature (JFM 1950-1995). Combining models reduces the deficiencies of individual models.
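
One simple combination scheme, shown as a sketch (equal weights; this is illustrative, not necessarily the weighting used operationally at IRI):

```python
import numpy as np

# Equal-weight multi-model combination of tercile probabilities (B, N, A).
model_probs = np.array([
    [0.50, 0.30, 0.20],  # model 1
    [0.25, 0.35, 0.40],  # model 2
    [0.40, 0.35, 0.25],  # model 3
])
combined = model_probs.mean(axis=0)
print(combined.round(3))  # [0.383 0.333 0.283]
```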

A Major Goal of Probabilistic Forecasts: Reliability!
Reliability diagrams show the consistency between the a priori stated probabilities of an event and the a posteriori observed relative frequencies of that event. Good reliability is indicated by a 45° diagonal.

Benefit of Increasing the Number of AGCMs in the Multi-Model Combination
JAS temperature and JAS precipitation (Robertson et al. 2004).

Correcting Conditional Biases: Methodology

Conditional Exceedance Probabilities
The probability that the observation exceeds the amount forecast depends upon the skill of the model. If the model were perfect, this probability would be constant; if it is imperfect, it will depend on the ensemble member's value. The task is to identify whether the exceedance probability is conditional upon the value indicated. Generalized linear models with binomial errors can be used, e.g.,

    logit[P(obs > x_f)] = β0 + β1·x_f,

where x_f is the ensemble-member value. Tests can be performed on β1 to identify conditional biases: if β1 = 0 then the system is reliable, while β0 can indicate unconditional bias. (Mason et al. 2007, Mon. Wea. Rev.)
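
A minimal sketch of fitting such a CEP model with a binomial GLM (synthetic data; the data-generation details are illustrative assumptions, not Mason et al.'s setup):

```python
import numpy as np
import statsmodels.api as sm

# For every ensemble-member value x, record whether the observation
# exceeded x, then regress these binary outcomes on x (binomial GLM).
rng = np.random.default_rng(2)
n_years, n_members = 40, 24
signal = rng.normal(size=(n_years, 1))
members = 0.5 * signal + rng.normal(size=(n_years, n_members))  # weakened signal
obs = signal[:, 0] + rng.normal(size=n_years)

x = members.ravel()
y = (np.repeat(obs, n_members) > x).astype(float)

res = sm.GLM(y, sm.add_constant(x), family=sm.families.Binomial()).fit()
b0, b1 = res.params
print(f"beta0 = {b0:.2f}, beta1 = {b1:.2f}")  # beta1 != 0 flags conditional bias
```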

Idealized CEPs (from Mason et al. 2007, Mon. Wea. Rev.)
• Positive skill, SIGNAL too weak: β1 > 0
• PERFECT reliability: β1 = 0
• Positive skill, SIGNAL too strong: β1 < 0
• Negative skill: β1 < 0, |β1| > |Clim|
• NO skill: β1 = Clim

Conditional Exceedance Probabilities (CEPs): Recalibration
Use the CEPs to determine the biased probability of exceedance (on a standardized-anomaly scale, from 0% to 100%), then shift the model-predicted PDF towards the goal of a 50% exceedance probability. Note that the scale is a parameter determined in minimizing the model-CEP slope.

CEP recalibration can either strengthen or weaken the SIGNAL (the adjustment decreases the signal in some regions and increases it in others), but it consistently reduces the MSE (maps contrast regions where the adjustment increases vs. decreases MSE).

Effect of Conditional Bias Correction

Forecast Verification

Verification of probabilistic forecasts
How do we know if a probabilistic forecast was "correct"? One might argue that "a probabilistic forecast can never be wrong!": as soon as a forecast is expressed probabilistically, all possible outcomes are forecast. However, the forecaster's level of confidence can be "correct" or "incorrect", i.e., reliable or not. Is the forecaster over- or under-confident?

Forecast verification: reliability and resolution
• If forecasts are reliable, the probability that the event will occur is the same as the forecast probability.
• Forecasts have good resolution if the probability that the event will occur changes as the forecast probability changes.
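
The computation behind a reliability curve can be sketched as follows (synthetic forecasts and outcomes; the bin edges are an arbitrary choice):

```python
import numpy as np

# Bin the issued probabilities and compare each bin's mean forecast
# probability with the observed relative frequency of the event.
rng = np.random.default_rng(3)
p_fcst = rng.uniform(0, 1, size=5000)             # issued probabilities
occurred = rng.uniform(0, 1, size=5000) < p_fcst  # outcomes of a reliable system

bins = np.linspace(0, 1, 11)
idx = np.digitize(p_fcst, bins) - 1
for k in range(10):
    sel = idx == k
    if sel.any():
        print(f"fcst {p_fcst[sel].mean():.2f} -> obs freq {occurred[sel].mean():.2f}")
```

Points falling near the 45° diagonal indicate good reliability.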

Reliability diagram

Ranked Probability Skill Score (RPSS)
The RPSS measures the cumulative squared error between the categorical forecast probabilities and the observed category, relative to some reference forecast (Epstein 1969). The most widely used reference strategy is "climatology". The RPSS is defined as

    RPSS = 1 - Σ_{k=1..N} (F_k - O_k)² / Σ_{k=1..N} (R_k - O_k)²,

where N = 3 for tercile forecasts and F_k, R_k, and O_k are the cumulative sums, over categories 1 through k, of the forecast probabilities f_j, the reference forecast probabilities r_j, and the observed probabilities o_j, respectively. The probability distribution of the observation is 100% for the category that was observed and 0% for the other two categories. The reference forecast of climatology assigns 33.3% to each tercile category.
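
A worked sketch of this definition for a single tercile forecast (the probabilities are invented for illustration):

```python
import numpy as np

def rps(probs, obs_category):
    """Ranked probability score: squared error of cumulative probabilities."""
    o = np.zeros(len(probs))
    o[obs_category] = 1.0  # observation: 100% in the observed category
    return np.sum((np.cumsum(probs) - np.cumsum(o)) ** 2)

fcst = [0.20, 0.30, 0.50]     # P(Below), P(Normal), P(Above)
clim = [1 / 3, 1 / 3, 1 / 3]  # climatological reference forecast
obs = 2                       # "Above" was observed

rpss = 1.0 - rps(fcst, obs) / rps(clim, obs)
print(f"RPSS = {rpss:.1%}")   # ~47.8% for this example
```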

Ranked Probability Skill Score (RPSS): Interpretation
The RPSS gives credit for forecasting the observed category with high probabilities, and penalizes forecasting the wrong category with high probabilities.
• By definition, the maximum RPSS is 100%, which can only be obtained by consistently forecasting the observed category with a 100% probability.
• A score of zero implies no skill in the forecasts; it is the same score one would get by consistently issuing a forecast of climatology. For three-category forecasts, a forecast of climatology implies no information beyond the historically expected 33.3% probability in each category.
• A negative score suggests that the forecasts are underperforming climatology.
• The skill of seasonal precipitation forecasts is generally modest. For example, IRI seasonal forecasts at 0-month lead for the period 1997-2000 scored 1.8% and 4.8% on the RPSS for global and tropical (30°S-30°N) land areas, respectively (Wilks and Godfrey 2002).

Real-Time Forecast Validation

Ranked Probability Skill Score (RPSS): A Problem
The expected RPSS with climatology as the reference forecast strategy is less than 0 for any forecast that differs from the climatological probabilities; the score lacks equitability. There are two important implications:
• The expected RPSS can be optimized by issuing climatological forecast probabilities.
• The forecasts may contain some potentially usable information even when the RPSS is less than 0, especially if the sharpness of the forecasts is high.

There is no single measure that gives a comprehensive summary of forecast quality.

GHACOF SOND Forecasts: Reliability-Diagram Annotations
Hedging (-5%); no skill; weak bias (+5%); bias (+15%), but reasonable because of the forecasts' sharpness. Hedging; good resolution (above-normal +10%, below-normal +6%); serious bias (-20%); 0% resolution because of large biases. Are the sharpest forecasts believable?

Summary
• Seasonal forecasts are necessarily probabilistic.
• The models used to predict the climate are not perfect, but by identifying and minimizing their errors we can maximize their utility.
• The two attributes of probabilistic forecasts are reliability and resolution; both require verification.
• Skill in seasonal climate prediction varies with season and geographic region. This requires research!