This report describes the accuracy of the forecasts issued by several forecasting agencies at the San Francisco International Airport (San Francisco, California, United States) weather station. The error metrics are computed by comparing the forecasts issued during December 2011 to what actually happened, as recorded by the weather station.
| Agency | Full Name | Number of Forecasts | Forecast Length |
|--------|-----------|---------------------|-----------------|
| NOAA   | National Weather Service | 30 | 6.8 days |
| Met.no | Norwegian Meteorological Institute | 43 | 9.4 days |
| WWO    | World Weather Online | 27 | 4.8 days |
| WXC    | Weather Central | 31 | 15.4 days |
In the table above, the number of forecasts listed in the third column is the count of forecasts in our archive. Each agency may have issued more forecasts during the month, depending on its forecasting schedule.
The table below reports the three-day hourly root mean square error (RMSE) for each of the forecasting agencies. For reference, we include the baseline error rate resulting from forecasting each variable to be the average value of that variable from the same time in past years. For each variable, the agency with the smallest error (winner) is indicated in bold.
| Variable | NOAA | Met.no | WWO | WXC | Baseline | Winner |
|----------|------|--------|-----|-----|----------|--------|
| Wind Speed | 4 mph | 5 mph | **4 mph** | 5 mph | 6 mph | WWO |
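The hourly RMSE described above can be sketched as a simple function. This is a minimal illustration, not the agencies' exact methodology; the function name and example values are ours.

```python
import math

def hourly_rmse(forecast, observed):
    """Root mean square error between hourly forecast and observed values.

    forecast, observed: equal-length sequences of hourly values
    (e.g. wind speed in mph).
    """
    if len(forecast) != len(observed):
        raise ValueError("forecast and observed must be the same length")
    # Square each hourly error, average, then take the square root.
    return math.sqrt(
        sum((f - o) ** 2 for f, o in zip(forecast, observed)) / len(forecast)
    )
```

A perfect forecast yields an RMSE of 0; because errors are squared before averaging, a few large misses raise the RMSE more than many small ones.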
Precipitation differs from the other weather variables in that it consists of discrete events separated by periods of inactivity, so it is not well represented by a smoothly varying number the way, for example, temperature is. The hourly RMSE is consequently not a very good measure of forecast accuracy. Instead, we break the forecasts into 24-hour chunks (days) and categorize each forecast as either predicting precipitation or not on each day. We then compare those yes-or-no predictions to what actually happened and count the cases where the forecasts were correct or incorrect in predicting precipitation or not.
The table below reports several precipitation forecast error metrics over the first three days of the forecasts.
| Metric | NOAA | Met.no | WWO | WXC | Winner |
|--------|------|--------|-----|-----|--------|
| Overall Error Rate | **13%** | 14% | 15% | 18% | NOAA |
| Predicted but Not Observed\* | **31%** | 34% | 52% | 53% | NOAA |
| Observed but Not Predicted\*\* | 11% | 11% | **9%** | 9% | WWO |

\* As a fraction of all days where precipitation was predicted.
\*\* As a fraction of all days where precipitation was not predicted.
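The daily yes-or-no comparison described above can be sketched as follows. This is an illustrative implementation, assuming boolean lists of daily predictions and observations; the denominators follow the footnotes above.

```python
def precipitation_error_rates(predicted, observed):
    """Compare daily yes/no precipitation forecasts to observations.

    predicted, observed: equal-length lists of booleans, one per day.
    Returns (overall_error, false_alarm_rate, miss_rate):
      overall_error    -- fraction of all days the forecast was wrong
      false_alarm_rate -- precipitation predicted but not observed, as a
                          fraction of days precipitation was predicted
      miss_rate        -- precipitation observed but not predicted, as a
                          fraction of days precipitation was not predicted
    """
    days = len(predicted)
    wrong = sum(p != o for p, o in zip(predicted, observed))
    predicted_yes = sum(predicted)
    predicted_no = days - predicted_yes
    false_alarms = sum(p and not o for p, o in zip(predicted, observed))
    misses = sum(o and not p for p, o in zip(predicted, observed))
    return (
        wrong / days,
        false_alarms / predicted_yes if predicted_yes else 0.0,
        misses / predicted_no if predicted_no else 0.0,
    )
```

For example, a forecast that predicts rain on two of four days and gets one of each outcome wrong produces an overall error rate of 50%.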
Depending on your preferences, each of these error rates may be important: if unexpected rain is what you most want to avoid, the observed-but-not-predicted rate matters most; if false alarms are costlier to you, focus on the predicted-but-not-observed rate.
In addition to the RMSE reported above, we present two further error metrics for the three-day forecast. The first of these (shown below) is the maximum absolute error, which provides a measure of how far off each agency was at its worst.
| Variable | NOAA | Met.no | WWO | WXC | Winner |
|----------|------|--------|-----|-----|--------|
| Wind Speed | 21 mph | 17 mph | **14 mph** | 17 mph | WWO |
The second (shown below) is the mean error, a measure of how biased the forecasts were. A negative bias means the forecasting agency predicted, on average, a lower value than was observed; a positive bias means it predicted higher values.
| Variable | NOAA | Met.no | WWO | WXC | Winner |
|----------|------|--------|-----|-----|--------|
| Wind Speed | **0 mph** | 1 mph | 1 mph | -1 mph | NOAA |
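These two metrics can be sketched in a few lines. This is an illustration of the definitions above, not the agencies' exact code; the example values are hypothetical.

```python
def max_abs_error(forecast, observed):
    """Largest absolute difference: how far off the forecast was at its worst."""
    return max(abs(f - o) for f, o in zip(forecast, observed))

def mean_error(forecast, observed):
    """Average signed difference: negative means the forecast ran low on average."""
    return sum(f - o for f, o in zip(forecast, observed)) / len(forecast)
```

Note that the two measure different things: a forecast can have a mean error near zero (low bias) while still having a large maximum absolute error, because positive and negative misses cancel in the mean.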
While the RMSE is probably the most generally useful metric for understanding the accuracy of the various forecasting agencies, these supplementary metrics add depth to the picture.
The graphs below show how the forecasting error evolves over time. They are useful for understanding how far out a forecast should be trusted, and provide a generally more nuanced picture than the three-day error rates reported above.
In the graphs below, we provide the baseline as a reference. The baseline is the error rate resulting from forecasting each variable to be the average from the same time in past years. If the forecasts consistently perform worse than the baseline beyond a certain number of days out, we recommend relying on the historical averages rather than the forecast to get a sense of what that aspect of the weather will be like.
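The baseline described above can be sketched as a simple climatological lookup. This is an illustrative construction, assuming historical records keyed by time of year; the data layout is our assumption, not the report's.

```python
from collections import defaultdict

def climatological_baseline(history):
    """Build a baseline 'forecast' from past-year observations.

    history: iterable of (month, day, hour, value) tuples drawn from
    past years (our assumed layout).
    Returns a dict mapping (month, day, hour) to the mean value observed
    at that time of year, which serves as the baseline prediction.
    """
    sums = defaultdict(float)
    counts = defaultdict(int)
    for month, day, hour, value in history:
        key = (month, day, hour)
        sums[key] += value
        counts[key] += 1
    # Average all past-year values recorded at the same time of year.
    return {key: sums[key] / counts[key] for key in sums}
```

The baseline's error is then computed the same way as the agencies' errors, by comparing these averages to what the station actually recorded.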
For the reasons discussed in the Three-Day Forecast Accuracy section above, the daily precipitation forecast error graphs differ from those of the other variables.