Friday, August 19, 2011

Skeptical Science on modelling

A great review of what climate models actually are (expressions of the known principles of physics at work in the climate), how they work (really well, actually), and the terrific hypocrisy of "skeptics" who look down their noses at modelling but embrace spectacularly crude and error-riddled models touted by Spencer et al.

Money quote:

However, instead of constraining his variables using physical measurements and then running his model to see if it fits observations, Dr. Spencer just runs his model without limits and tweaks the parameters until it matches the data. This is a practice known as "curve fitting" or "cooking a graph". In one instance where he concluded the climate is not sensitive to changes in greenhouse gases, Dr. Spencer's results used a mixed layer depth of 700 meters. In a recent study in which he concluded that more heat is lost to space than climate models show, amongst numerous other problems, Dr. Spencer's model used a mixed layer depth of 25 meters. In other cases, Dr. Spencer has used models with as many as 30 fully adjustable, unconstrained parameters. With so many variables and apparently no desire to match physical reality, Dr. Spencer's model could spit out literally any answer. As the famous mathematician John von Neumann said,

"With four parameters I can fit an elephant, and with five I can make him wiggle his trunk."

And as Dr. Barry Bickmore added,

"give me more than 30 parameters, and I can fit a trans-dimensional lizard-goat and make rainbow monkeys shoot out its rear end."
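Von Neumann's point is easy to demonstrate. Here's a toy sketch (generic curve fitting in NumPy, not Spencer's actual model): give a polynomial as many free coefficients as there are data points, and it will "fit" pure random noise essentially perfectly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ten arbitrary "observations" -- pure noise, no underlying physics at all.
x = np.linspace(0, 1, 10)
y = rng.normal(size=10)

# A model with as many free parameters as data points (a degree-9
# polynomial has 10 coefficients) can match the data almost exactly.
coeffs = np.polyfit(x, y, deg=9)
fitted = np.polyval(coeffs, x)

# Maximum misfit is down near floating-point noise: a "perfect" fit
# that tells us nothing whatsoever about how the world works.
print(np.max(np.abs(fitted - y)))
```

The fit is flawless precisely because the model is unconstrained -- which is the whole problem.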

I can forgive the hypocrisy of those who hate models that give answers they don't like and fall in love with those that do. What is frustrating is the irrational hostility towards models, period. Deniers loathe models, they say, because they substitute computer calculations for experiment. They promote, and I'm sure they sincerely believe, the idea that climate scientists are exploring the virtual world in their computers rather than investigating the real one.

Again, it's easy to be pulled up short by the staggering hypocrisy of deniers trying to promote their "blog science" curve fitting from the comfort of their local WiFi-enabled Starbucks, while climate scientists, to take one example, have trekked to some of the most remote and inaccessible regions of the earth, moving 21st century scientific equipment by pack animal, in order to create the ice core data sets of temperature and CO2.

That is a fundamental mistake, and it is compounded, unfortunately, when scientists talk carelessly about experiments using models.

But again, we're ignoring the hypocrisy -- move along, move along. The key point about models is that they are not experiments, they are hypotheses. They are simply a set of rules and descriptions of conditions, the unfolding of which requires more calculations than a person can do comfortably on a sheet of paper. When Svante Arrhenius created the first real estimate of climate sensitivity, in 1906, he used a climate model, and a "run" took him years, as he laboriously worked out, in tens of thousands of calculations broken into individual "cells," the consequences of doubling atmospheric CO2 (considering what he was working with, he did amazingly well, getting a climate sensitivity of about 2.1C).
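The modern back-of-envelope version of that kind of calculation fits in a few lines. This is only the standard rule of thumb that warming grows with the logarithm of CO2 concentration -- not Arrhenius's actual cell-by-cell method -- and the function name and the 2.1C default (the figure attributed to him above) are illustrative:

```python
import math

def warming(conc_ratio, sensitivity_per_doubling=2.1):
    """Equilibrium warming (C) under the logarithmic-forcing rule of
    thumb: every doubling of CO2 adds the same temperature increment.
    conc_ratio is the CO2 concentration relative to the baseline."""
    return sensitivity_per_doubling * math.log(conc_ratio) / math.log(2)

print(warming(2.0))  # one doubling  -> 2.1
print(warming(4.0))  # two doublings -> 4.2
```

What took Arrhenius years of hand arithmetic, a computer does instantly -- which is the only real difference between his "model run" and a modern one.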

A model is a hypothesis about how the world works. Its rules are nothing more than guesses about the nature of the world, and the necessary simplifications, when we know a model doesn't reflect the full complexity of reality, represent our guesses about which factors matter most in determining the outcome. We use models because the world is too complicated to test our guesses on a blackboard or in our heads. We need a way to bring vast amounts of information and a large number of hypotheses about physics together and see if they fit. That's what models do, that's what computers do in general, and if you don't like it, I don't want to hear from you about it via the web, because modelling is nothing more than using computers to do exactly what computers are made to do -- keep track of things, and do simple calculations really fast -- and if you hate models, you hate computers. So posting your Luddite views to the nearest blog is kind of like -- what's the word again -- begins with "H" . . .

1 comment:

  1. > getting a climate sensitivity of about 2.1C

    Actually Arrhenius found a climate sensitivity of 3.4C (page 265 bottom), but for multiplying CO2 concentration by 1.5. Conversion to doubling sensitivity gives 5.8C, almost twice the current estimate (can also be seen in Table VII middle column). And part of this was luck, as his method could only capture the greenhouse gas (CO2 + H2O) contributions, not, e.g., the cloud cover feedback.

    But given the primitive means, pretty good still...
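The commenter's conversion checks out with one line of arithmetic, again assuming warming scales with the logarithm of concentration:

```python
import math

# Scale 3.4C (the warming Arrhenius found for a 1.5x increase in CO2)
# to a full doubling, assuming logarithmic dependence on concentration.
per_1p5 = 3.4
per_doubling = per_1p5 * math.log(2) / math.log(1.5)
print(round(per_doubling, 1))  # 5.8
```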