Monday, October 31, 2011

Why Eco Models Are Wrong

When it comes to assigning blame for the current economic doldrums, the quants who build the complicated mathematical financial risk models, and the traders who rely on them, deserve their share of the blame. [See "A Formula For Economic Calamity" in the November 2011 issue]. But what if there were a way to come up with simpler models that perfectly reflected reality? And what if we had perfect financial data to plug into them?

Incredibly, even under those utterly unrealizable conditions, we'd still get bad predictions from models.

The reason is that current methods used to “calibrate” models often render them inaccurate.

That's what Jonathan Carter stumbled on in his study of geophysical models. Carter wanted to observe what happens to models when they're slightly flawed--that is, when they don't get the physics just right. But doing so required having a perfect model to establish a baseline. So Carter set up a model that described the conditions of a hypothetical oil field, and simply declared the model to perfectly represent what would happen in that field--since the field was hypothetical, he could take the physics to be whatever the model said it was. Then he had his perfect model generate three years of data describing what would happen in the field: perfect data from a perfect model. So far so good.
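
To make the setup concrete, here is a toy sketch of the same idea. The "reservoir" model and its parameter values below are invented for illustration--this is not Carter's actual geophysical model--but the structure of the experiment is the same: declare a model to be the truth, then let it generate the history.

    import numpy as np

    def decline_model(t, a, b, c, d):
        # Hypothetical production-rate "truth": a sum of two exponential declines.
        return a * np.exp(-b * t) + c * np.exp(-d * t)

    true_params = np.array([500.0, 1.0, 500.0, 0.4])   # declared, by fiat, to be reality
    t_hist = np.linspace(0.0, 3.0, 37)                 # three years of monthly observations
    history = decline_model(t_hist, *true_params)      # the "perfect data"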

The next step was "calibrating" the model. Almost all models have parameters that have to be adjusted to make a model applicable to the specific conditions to which it's being applied--the spring constant in Hooke's law, for example, or the resistance in an electrical circuit. Calibrating a complex model whose parameters can't be directly measured usually involves taking historical data and, using various computational techniques, adjusting the parameters so that the model would have "predicted" that historical data. At that point the model is considered calibrated, and, in theory, should predict what will happen going forward.
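
The Hooke's-law case is the simplest possible version of this. Here's roughly what that calibration looks like as code; the force and extension measurements are made up for illustration.

    import numpy as np

    x = np.array([0.01, 0.02, 0.03, 0.04, 0.05])   # spring extensions (m)
    F = np.array([0.52, 0.99, 1.55, 2.01, 2.47])   # measured forces (N), invented numbers

    # Least-squares calibration of F = k * x: choose k to minimize sum((F - k*x)^2).
    k = np.sum(F * x) / np.sum(x * x)
    print(f"calibrated spring constant k = {k:.1f} N/m")   # about 50 N/m for this data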

Carter had initially used arbitrary parameters in his perfect model to generate perfect data, but now, in order to assess his model in a realistic way, he threw those parameters out and used standard calibration techniques to match his perfect model to his perfect data. It was supposed to be a formality--he assumed, reasonably, that the process would simply produce the same parameters that had been used to produce the data in the first place. But it didn't. It turned out that there were many different sets of parameters that seemed to fit the historical data. And that made sense, he realized--given a mathematical expression with many terms and parameters, and thus many different ways to add up to the same result, you'd expect different ways of tweaking the parameters to produce similar sets of data over some limited time period.

The problem, of course, is that while these different versions of the model might all match the historical data, they would in general generate different predictions going forward--and sure enough, his calibrated model produced terrible predictions compared to the "reality" originally generated by the perfect model. Calibration--a standard procedure used by all modelers in all fields, including finance--had rendered a perfect model seriously flawed. Though taken aback, he continued his study, and found that having even tiny flaws in the model or the historical data made the situation far worse. "As far as I can tell, you'd have exactly the same situation with any model that has to be calibrated," says Carter.
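
Here is a sketch of that experiment carried through on the toy model above (sums of exponentials are a classically ill-conditioned thing to fit, which is exactly why they make a good stand-in). The same model is calibrated to the same perfect history from several different starting guesses, and the history misfits are compared with the long-range forecasts.

    import numpy as np
    from scipy.optimize import least_squares

    def decline_model(t, a, b, c, d):
        return a * np.exp(-b * t) + c * np.exp(-d * t)

    true_params = np.array([500.0, 1.0, 500.0, 0.4])
    t_hist = np.linspace(0.0, 3.0, 37)
    history = decline_model(t_hist, *true_params)        # the "perfect data" again

    def residuals(p):
        return decline_model(t_hist, *p) - history

    t_future = 8.0                                       # years, well past the history
    starts = [[400.0, 0.5, 600.0, 0.2],                  # several plausible initial guesses
              [800.0, 1.5, 200.0, 0.6],
              [500.0, 2.0, 500.0, 0.1]]

    for x0 in starts:
        fit = least_squares(residuals, x0, bounds=(0.0, np.inf))
        rms = np.sqrt(np.mean(fit.fun ** 2))             # how well the past is matched
        fcst = decline_model(t_future, *fit.x)           # what the future is claimed to be
        print(f"history RMS misfit {rms:9.2e}   forecast at t={t_future}: {fcst:7.2f}")

    print(f"'true' value at t={t_future}: {decline_model(t_future, *true_params):.2f}")

Depending on the starting guess, calibrations of this sort can land on parameter sets whose history misfits are all tiny but whose forecasts disagree--the Carter result in miniature.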

Essentially, this boils down to my central problem with models of all sorts. By definition, a model is an attempted simulation of "the truth": it uses the available data, the rules of physics, and additional rules that appear to govern the behavior of the system (often statistical rules, which have no fundamental mechanism behind them) to predict future behavior. What Carter has shown is that the necessary incompleteness of the data, together with the fact that data is always measured with some error (by error I mean statistical error, the difference between a measured value and a theoretical "true" value), causes errors to propagate through the modeling process until the results no longer predict the future behavior of the system well.
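
A bare-bones way to see the error-propagation point is to skip the fancy model entirely: fit a straight-line trend to a few years of slightly noisy data, extrapolate, and watch the spread of the extrapolations dwarf the measurement error that caused it. All numbers below are invented.

    import numpy as np

    rng = np.random.default_rng(0)
    t_hist = np.linspace(0.0, 3.0, 37)                   # three years of observations
    true_slope, true_intercept = 2.0, 10.0
    noise_sd = 0.5                                       # small measurement error

    year30_forecasts = []
    for _ in range(1000):                                # many hypothetical datasets
        obs = true_intercept + true_slope * t_hist + rng.normal(0.0, noise_sd, t_hist.size)
        slope, intercept = np.polyfit(t_hist, obs, 1)    # "calibrate" the trend
        year30_forecasts.append(intercept + slope * 30.0)

    print(f"measurement error (1 sd): {noise_sd}")
    print(f"spread of year-30 forecasts (1 sd): {np.std(year30_forecasts):.2f}")
    print(f"true year-30 value: {true_intercept + true_slope * 30.0}")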

Scientific American would like you to think that this applies uniquely to the economic models used to predict outcomes in the financial world. It does not. It applies broadly to a wide range of models used in public policy. Ecological, meteorological and climatological modeling all depend on the same calibration procedures.

Such models are being used to estimate the effects of nutrients on Chesapeake Bay, and to set rather firm targets for the amounts of nutrients that different industries and jurisdictions are permitted to release. The industries and jurisdictions affected all say the models are inadequate. The regulators insist they're good enough to base important financial decisions (for someone else) on.

Climate modelers insist that, even though they don't really understand the effects of clouds on the earth's climate, or how cosmic rays interact with the atmosphere to alter them, their models are good enough to govern the world's industrial output--even though their past predictions have, without exception, failed to come true. "This time, we're right," they insist.

Meteorological modelers seem, at least, to be the most realistic. They don't predict too far in advance, and they have a lot of different models that they use for guidance. The best example is tropical storm/hurricane forecasting. The classic "spaghetti" tracks of their multiple models tell you two things: first, a general indication of where a storm is likely to go, and second, and possibly more important, how much confidence to place in those forecasts. If the tracks all agree pretty well, and you're in the path, get your storm shutters out. If only one of ten tracks has you in the storm, at the outer edge of a tangled mess of predictions, you might want to wait a little longer before going out to buy them. The nice thing about weather models is that tomorrow, or even ten days out, comes pretty soon, and you get to test your models against the truth pretty often. That's not true with most ecological or climatological models.
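
The spaghetti-plot logic is easy to mimic in a few lines: treat each model's predicted landfall longitude as one ensemble member (the numbers below are made up), and let the spread of the ensemble tell you how much to trust the mean.

    import numpy as np

    tight = np.array([-80.1, -80.4, -79.8, -80.3, -80.0,
                      -80.2, -79.9, -80.5, -80.1, -80.2])    # tracks that mostly agree
    tangled = np.array([-84.0, -80.2, -76.5, -88.1, -79.0,
                        -82.3, -74.9, -81.0, -90.2, -78.4])   # a tangled mess

    for name, lons in (("tight ensemble", tight), ("tangled ensemble", tangled)):
        print(f"{name}: mean landfall {lons.mean():.1f} deg lon, spread (1 sd) {lons.std():.1f} deg")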

"Remember that all models are wrong; the practical question is how wrong do they have to be to not be useful." - George E. Box
