Buy it now.
It isn't so much what he has to say specifically on the subject of climate. He doesn't dig into that too deeply. What he is very concerned with is how we evaluate evidence, how we assess and make use of expert predictions and computer models both, and how we recognize the difference between serious prediction and entertaining spin.
Relevant? I thought so. Here are Silver's main points, as I see them:
1. Experts work better with models, and just as importantly, models work better with experts. Neither one is as strong as both of them together.
2. Numbers don't speak for themselves. Without an underlying theory of what might be happening and why, you can't propose a reasonable pretest probability, and without a reasonable estimate of the pretest probability, you can't get much useful information out of statistical tests of your data.
I don't know if Silver has even heard the term "mathturbation." If he has, he's far too classy to make use of it. But he shows us in a very compelling way why statistical analysis without theory is useless in a Bayesian universe.
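To make point 2 concrete, here's a toy Bayes calculation (my numbers, not Silver's): imagine a statistical "signal" that fires 90% of the time when an effect is real, and 10% of the time when it isn't. The same positive result is either convincing or nearly meaningless depending on the pretest probability your theory justifies.

```python
# Toy illustration of why the pretest (prior) probability matters.
# The sensitivity and false-positive rate are hypothetical numbers.

def posterior(prior, sensitivity=0.9, false_positive=0.1):
    """Bayes' rule: P(effect is real | positive signal)."""
    p_signal = sensitivity * prior + false_positive * (1 - prior)
    return sensitivity * prior / p_signal

# With a well-grounded theory (prior = 0.5), a positive signal is convincing:
print(round(posterior(0.5), 3))   # 0.9
# With no theory at all (prior = 0.01), the same signal means very little:
print(round(posterior(0.01), 3))  # 0.083
```

Same data, same test, wildly different conclusions. The only thing that changed was the prior, and the prior comes from theory, not from the data.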
3. In prediction, an average of many estimates from many different models and experts is typically better than just picking your favorite.
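A quick Monte Carlo sketch of point 3, with entirely made-up numbers (not Silver's): five unbiased but noisy "models" each estimate one quantity, and we compare the ensemble mean against simply trusting one model as your favorite.

```python
# Toy Monte Carlo: does averaging five noisy estimates beat picking one?
# All numbers here are hypothetical, chosen only to illustrate the effect.
import random

random.seed(42)
truth = 3.0          # hypothetical true value of the quantity being estimated
trials = 10_000
wins = 0

for _ in range(trials):
    # Five unbiased models, each with the same noise level
    estimates = [truth + random.gauss(0, 0.8) for _ in range(5)]
    ensemble_mean = sum(estimates) / len(estimates)
    favorite = estimates[0]  # "your favorite": one arbitrary single model
    if abs(ensemble_mean - truth) < abs(favorite - truth):
        wins += 1

print(f"ensemble mean beat the single model in {wins / trials:.0%} of trials")
```

Under these toy assumptions the ensemble mean wins roughly three quarters of the time, because independent errors partially cancel when averaged. Real models have correlated errors, so the gain is smaller in practice, but the direction of the effect is the same.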
4. Many predictions from self-described experts are made for entertainment value and should be judged as such. Professionals are not inherently better or smarter than amateurs, but they are less likely to be subject to perverse incentives that reward them for being grossly wrong over and over.
Especially in our connected, information-rich world, the first task of a talking head is to call attention to their prediction -- to get noticed. That incentive favors extreme predictions, not accurate ones. A professional, however, who needs to maintain relationships with a smaller community paying closer attention over many years and many predictions, has a strong incentive to get things right. Which is one of the reasons we will not be replacing Nature with climate blogs any time soon.
Michael Mann points out that Silver says some nice things about Scott Armstrong, a creepy, statistically illiterate, self-declared "forecasting expert." That guy makes my skin crawl, but as far as I could see Silver did not side with him, and correctly dismissed his climate "bet" thought experiment as dubious.
If you look past the scant remarks on climate to how the larger argument applies to the climate debate, the points Silver makes are powerful arguments for the practices of the so-called "climate establishment," and a devastating takedown of the "skeptic" argument.
He shows why we need experts, not just blind data analysis. He shows that statistics in the real world depend on a reasonable estimate of the pretest probability, which (and this is a simple but powerful point) means that a clear theory of the underlying process -- not a vague appeal to "natural causes"! -- is necessary to make intelligent use of the data.
Silver explains how the science of statistics justifies taking as many models and methods as possible into account when developing estimates of complex outputs like climate sensitivity and sea level rise.
Finally, by dissecting the pundit model of prediction, where the little-known and little-remembered predictor/entertainer makes dramatic declarations and evades responsibility for mistakes, Silver strikes a rare and welcome blow for professionalism in a culture that sometimes seems to worship the amateur as a higher and purer source of insight. Professionals don't just know their stuff, Silver argues; they themselves are known, and valued, only insofar as their predictions are more successful than not. This gives them a strong incentive to get it right that amateurs struggling to be noticed and political voices pushing an agenda simply do not have.
Silver is worth reading, and I hope the valid caveats expressed by Dr Mann don't discourage pro-science voices from picking him up.