|View the Climate Trends Tool|
There is great concern in the world about global warming. Since we have collected a lot of the available high-resolution historical temperature record, we wanted to give you access to this information in graphical form, so that you can look at the data for yourself.
There are a couple of things we want to be clear on:
But there are a host of reasons why an apparent trend may or may not indicate that the climate is actually changing:
Climatologists typically try to correct for these kinds of effects. What to correct, and how, is a subject of great debate.
So there are no simple answers. This tool will hopefully help shed some light on a very difficult issue, and we hope that you will find it as interesting as we do!
That data is even less reliable than the station record we do have, and it comes with several separate caveats. Ice cores and tree rings are aggregate yearly indicators, derived from models of the contents of ice deposits and of how trees grow. Any such model is subject to debate.
The written temperature record prior to 1940 is geographically very sparse, and typically records only the daily high and low temperature. That is a fundamentally different kind of sampling from that done by the weather stations we show. Changing the sampling method means introducing some form of model for how the two series relate. Again, any such model is subject to debate.
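To illustrate the sampling difference, here is a small sketch using a made-up, slightly asymmetric diurnal temperature curve (not real station data), comparing the mean of hourly readings with the mean of the daily high and low:

```python
import math

# One made-up day of hourly temperatures. The second harmonic makes the
# diurnal curve asymmetric, as real ones are; purely illustrative data.
hourly = [10
          + 8 * math.sin(2 * math.pi * (h - 9) / 24)
          + 2 * math.sin(4 * math.pi * h / 24)
          for h in range(24)]

hourly_mean = sum(hourly) / len(hourly)        # modern-station style sampling
minmax_mean = (min(hourly) + max(hourly)) / 2  # written-record style sampling

# The two summaries disagree; relating them requires a model of the
# diurnal cycle, which is exactly the debatable part.
print(hourly_mean, minmax_mean)
```

For this particular curve the min/max average overstates the true hourly mean; with a differently shaped curve it could just as easily understate it.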
You can configure the quality controls to filter out "bad" stations. The data for the stations that pass your checks over the period chosen is then used to compute the rate of temperature change.
The rate of temperature change is reported in degrees per century so as to make the results more comparable over different start/end year choices.
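As a sketch of what such a computation can look like (the tool's actual fitting procedure may differ), an ordinary least-squares trend scaled to degrees per century:

```python
def rate_per_century(years, annual_means):
    # Ordinary least-squares slope of temperature against year, scaled
    # from degrees per year to degrees per century.
    n = len(years)
    mx = sum(years) / n
    my = sum(annual_means) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(years, annual_means))
    var = sum((x - mx) ** 2 for x in years)
    return 100 * cov / var

# Made-up series warming at a steady 0.01 degrees per year:
years = list(range(1950, 2001))
temps = [10 + 0.01 * (y - 1950) for y in years]
print(rate_per_century(years, temps))  # about 1.0 degrees per century
```

Because the result is a per-century rate, a 20-year window and a 60-year window produce numbers on the same scale.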
When people talk about "averages" they typically mean "the mean". However, stations occasionally report bogus data that can be very hard to filter out, and since such readings are often far above or below what is "normal", they can badly bias the computed mean.
The median is another form of average, and it is arguably more robust to those types of errors. We therefore give you the option of which one to use.
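A quick illustration with made-up readings of how a single bogus report drags the mean but barely moves the median:

```python
def mean(xs):
    return sum(xs) / len(xs)

def median(xs):
    s = sorted(xs)
    mid = len(s) // 2
    return s[mid] if len(s) % 2 else (s[mid - 1] + s[mid]) / 2

# Nine plausible readings plus one bogus report of 999 degrees.
readings = [11.2, 11.5, 10.9, 11.1, 11.3, 11.0, 11.4, 11.2, 11.1, 999.0]
print(mean(readings))    # dragged far above the typical value
print(median(readings))  # stays near the typical value
```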
Almost all stations have some outages every year. Outages can bias the results, but too few stations can also introduce bias. Use this slider to trade off between these two.
Note that 100 hours is about 4 full days, and 2000 hours of outage corresponds to about 83 full days, roughly the nominal amount of time a person spends working per year. So that's a lot of outage, and a lot of risk of bias, in either direction.
The minimum number of years deemed valid (passing the "max missing hours per year" threshold) that a station must have to be included.
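The two quality controls together might be sketched like this (the function and parameter names are illustrative, not the tool's actual code):

```python
# Hypothetical station record: year -> hours of missing data that year.
# Years absent from the record are treated as fully missing (8760 hours).
def passes_quality_controls(missing_hours_by_year, start, end,
                            max_missing_hours=100, min_good_years=5):
    good_years = sum(
        1 for year in range(start, end + 1)
        if missing_hours_by_year.get(year, 8760) <= max_missing_hours
    )
    return good_years >= min_good_years

station = {1990: 40, 1991: 3000, 1992: 12, 1993: 80,
           1994: 0, 1995: 90, 1996: 500}
print(passes_quality_controls(station, 1990, 1996))  # 5 good years -> True
```

Loosening either slider (more allowed missing hours, or fewer required good years) lets more stations through.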
Stations have good years and bad years. If a station had one good year a long time ago and then a couple of good years recently, that isolated early year gets outsized influence on the fitted trend (a high-leverage point, a standard problem in regression), increasing the uncertainty of the result.
Requiring more good years to use a station counteracts that, but can lead to too many stations being filtered out.
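A sketch of that leverage effect with made-up data: nudging the isolated early year by one degree shifts the fitted trend far more than nudging a recent year by the same amount.

```python
def slope(xs, ys):
    # Ordinary least-squares slope (degrees per year).
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# One good year in 1950, then good years only from 2015 on (hypothetical).
years = [1950, 2015, 2016, 2017, 2018, 2019, 2020]
temps = [10.0, 10.5, 10.6, 10.4, 10.5, 10.7, 10.6]

base = slope(years, temps)
early = slope(years, [11.0] + temps[1:])       # early year +1 degree
recent = slope(years, temps[:-1] + [11.6])     # recent year +1 degree
print(abs(early - base), abs(recent - base))   # early shift is far larger
```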
These sliders let you pick the specific interval you'd like to do the analysis over.
A longer range should yield a more reliable rate-of-change result - unless, of course, there have been changes in the equipment or the surrounding environment that introduce bias and throw the computation off.
Changing the "min good years" affects the valid range for these.
The yellow dots are stations that have been filtered out by the quality-control settings. Allowing longer outages, requiring fewer good years, or widening the start/end range will admit more stations.
Funding cuts in the US led to an almost decade-long outage from 1964 to 1972.