The Problem of Proper Forecasting

  LowDown
    Years ago, when the IPCC was new, there was a lot of discussion about whether forecasting the climate 50 to 100 years into the future was even possible. It was pointed out that the climate scientists publishing these predictions had not consulted forecasting experts. No doubt this was because forecasting experts would have told them that what they were trying to do was not possible. But then climate science got caught up in the political storm, and the whole problem of proper forecasting got lost.

    A 2009 paper by a group of experts in business forecasting showed that if you assume global temperatures will remain the same from year to year, or decade to decade, you get a forecast of the climate that is several times more accurate than the computer models used by the IPCC. They call this no-change model the "benchmark model." If a computer model can do no better than the benchmark model, they argued, it should not be used to make policy. Their forecast for the rest of this century is therefore that temperatures won't vary more than 0.5 degrees from the 2008 value. They point out that it is probably not possible to come up with a computer model that is more accurate than the benchmark model.
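
    To make the paper's comparison concrete, here is a minimal sketch in Python of how a no-change benchmark can be scored against a trend-projecting model. The anomaly series and the 0.03-degree yearly trend are placeholders of my own, not data from the paper; only the scoring logic is the point.

```python
# Score a "no change" benchmark forecast against a fixed-trend model.
# The observed anomalies below are hypothetical, for illustration only.

def mae(forecasts, actuals):
    """Mean absolute error between two equal-length sequences."""
    return sum(abs(f - a) for f, a in zip(forecasts, actuals)) / len(actuals)

# Hypothetical annual global temperature anomalies (degrees C).
observed = [0.30, 0.31, 0.30, 0.32, 0.31, 0.33, 0.32, 0.34]

# Benchmark: each year is forecast to equal the previous year's value.
benchmark = observed[:-1]
actual = observed[1:]

# Stand-in for a model that projects a fixed 0.03-degree rise every year.
trend_model = [t + 0.03 for t in observed[:-1]]

print("benchmark   MAE:", round(mae(benchmark, actual), 4))
print("trend model MAE:", round(mae(trend_model, actual), 4))
# If the trend model's error is no lower than the benchmark's, the
# argument is that it adds no forecasting skill.
```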

    An expert on chaos theory chimed in recently to point out that iterative computer models of the weather cannot predict the weather more than about 10 days in advance because of the way successive errors accumulate. Short-term weather forecasts have been, and continue to be, tested against real-world data and refined, but they still can't provide accurate forecasts beyond a week or so. Some have said that climate models are different, but it's not clear to me how, since they use the same process of successive calculation to step conditions forward from one time interval to the next that the weather forecasting models do.
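
    A toy iteration makes the error-accumulation point visible. The logistic map below is not a weather model, just about the simplest chaotic iteration there is; the 0.000001 initial error is an arbitrary stand-in for measurement uncertainty.

```python
# Two runs of a chaotic iteration from almost-identical starting states.
# Each step feeds its output into the next, as weather models do, so a
# tiny initial error eventually swamps the signal.

def step(x, r=3.9):
    """One iteration of the logistic map in its chaotic regime."""
    return r * x * (1.0 - x)

x_true = 0.500000   # "true" initial state
x_meas = 0.500001   # same state with a tiny measurement error

for n in range(1, 31):
    x_true = step(x_true)
    x_meas = step(x_meas)
    if n % 5 == 0:
        print(f"step {n:2d}: divergence = {abs(x_true - x_meas):.6f}")
# The divergence grows roughly exponentially until the two trajectories
# are unrelated, the analogue of the roughly 10-day weather limit.
```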

    But there may be ways to do it. Assumptions about the climate put some constraints on forecasts, which might make longer-term forecasts possible. For one thing, the earth is in energy balance, the energy inputs are pretty constant, and something is known about what affects the outputs. And it isn't necessary to provide a detailed forecast of every potentially relevant detail of the climate when people are mainly interested in temperature. One can throw the computers out, disregard spatial data, and dispense with the problem of chaos. It is valid, for example, to think that an increase in CO2 will increase global temperatures because of the greenhouse effect. CO2 does, in fact, absorb heat radiating up from the surface and re-emit it back toward the earth, so warming is a certainty. In calculating this, one has not so much made a forecast as made a statement about energy balance.

    G.S. Callendar came up with a simple model that assumed this effect, and only this effect, would cause warming, and his estimates of warming have been very accurate going all the way back to 1938. His estimates of how long it would take atmospheric CO2 to reach 400 ppm were way off (he thought it would take until 2200), but he made clear that he was only guessing about that. His model is the only one that even comes close to being as good as the forecasters' benchmark model. He recognized that water vapor has a greenhouse effect, but he dismissed the idea that water vapor might have a net heat-trapping effect, because the hydrological cycle would render the effect of water in all its forms a wash. It appears that he was right about that. Modern climate scientists are being brought to that conclusion kicking and screaming, by degrees.
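
    For illustration, here is a sketch of a CO2-only warming estimate in the spirit of Callendar's model. The logarithmic dependence of warming on concentration is the standard modern simplification, and the 2-degrees-per-doubling sensitivity is my assumption, not Callendar's exact figure.

```python
import math

# CO2-only warming estimate: temperature response scales with the
# logarithm of the concentration ratio. The sensitivity value is assumed.

SENSITIVITY = 2.0   # assumed warming (deg C) per doubling of CO2
C0 = 280.0          # approximate pre-industrial CO2, ppm

def warming(c_ppm, c0=C0, s=SENSITIVITY):
    """Warming (deg C) for CO2 rising from c0 to c_ppm."""
    return s * math.log(c_ppm / c0, 2)

print(f"280 -> 400 ppm: {warming(400.0):+.2f} deg C")
print(f"280 -> 560 ppm: {warming(560.0):+.2f} deg C (one doubling)")
```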

    But, in fact, regardless of the assumptions we make about how the climate works, and regardless of how many teraflops our supercomputers are capable of processing, we could do just as well or better at forecasting global temperatures by assuming that there won't be any change in temperature from year to year. That is a fascinating finding.

    So how is it that the computer models go wrong? How could assuming no change give better results? It appears to me that their mistake is in predicting a year-to-year increase in temperature that is too big. They predict a yearly increase of about 0.03 degrees on average. From 1980 to the present, the observed increase has been more like 0.01 degrees per year, which is closer to no change. Callendar's model predicts a change, based on the increase in CO2, of about 0.014 degrees per year, and that's a calculation you can do on a paper napkin.
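
    And here is one way to do that napkin calculation, assuming CO2 rises about 2 ppm per year from roughly 400 ppm and that warming runs about 2 degrees per doubling. Both numbers are my assumptions, but they reproduce a figure of about 0.014 degrees per year.

```python
import math

# Back-of-the-napkin yearly warming from the yearly CO2 increase.
# 2 ppm/yr growth and 2 deg C per doubling are assumed round numbers.

sensitivity = 2.0               # deg C per doubling of CO2 (assumed)
c_now, c_next = 400.0, 402.0    # ppm, one year apart (assumed growth)

dT = sensitivity * math.log(c_next / c_now, 2)
print(f"~{dT:.3f} deg C per year")   # prints ~0.014
```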