
Saturday, March 12, 2016

Lucia's sadly selective statistical showpersonship

Lucia Liljegren is spending the twilight of lukewarmism as a non-laughable position mostly posting recipes, and notifying her followers about the arrival of major holidays (three of her last ten posts.) But 'twas not always thus. During the "hiatus" Lucia was fond of comparing the IPCC's multi-model mean with global temperatures, despite the fact that these were models of climate, not weather forecasts, and that patient people had explained to her over and over that "about 0.2C/decade over the next several decades" was not a prediction one could falsify based on a few years of data.
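Since the short-window point comes up again and again, it's worth seeing why it holds. Here's a minimal simulation (the noise level is a hypothetical round number, not fitted to any real temperature record) showing that five-year trend estimates scatter so widely around a true 0.2C/decade trend that flat or negative stretches are entirely routine:

```python
# Toy illustration (made-up numbers, not any real dataset) of why a few
# years of data can't falsify a ~0.2 C/decade long-term trend: the
# short-window trend estimate is swamped by year-to-year noise.
import random
random.seed(0)

TRUE_TREND = 0.02  # C per year, i.e. 0.2 C/decade
NOISE_SD = 0.1     # assumed year-to-year weather noise, in C

def fitted_trend(n_years):
    """OLS slope (C/yr) fit to n_years of noisy annual anomalies."""
    t = list(range(n_years))
    y = [TRUE_TREND * ti + random.gauss(0, NOISE_SD) for ti in t]
    tbar = sum(t) / n_years
    ybar = sum(y) / n_years
    num = sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, y))
    den = sum((ti - tbar) ** 2 for ti in t)
    return num / den

# Distribution of 5-year trends across 1000 simulated worlds, every one
# of which shares the same 0.02 C/yr underlying trend:
slopes = sorted(fitted_trend(5) for _ in range(1000))
print("5th-95th percentile of fitted 5-yr trends (C/yr):",
      round(slopes[50], 3), "to", round(slopes[950], 3))
```

In this toy setup the fitted five-year slopes routinely come out negative even though the underlying trend never changes, which is the whole problem with calling a few flat years a falsification.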

She liked making this mistake so much, she did it again and again and again (and again and again…and again.) But this all stopped rather abruptly in November of 2014, which funnily enough was exactly the time El Nino came to a stuttering start after an unprecedented 50-month absence:


As a result, every month after November 2014 (anomaly 0.68C) has been hotter (anomaly-wise) than November 2014 itself:

And yet despite the rather dramatic turn in the data, and despite the fact that she liked making this comparison well enough to make it over and over again with different temperature records and updates to the present, she never updated her final graph, which looked like this:


Why did Dr Liljegren suddenly lose interest in this exercise? Why the statistical-torture hiatus? We may never know, but said graph with more recent GISTEMP measurements superimposed looks like this:



It's a mystery, really.

UPDATE: MartinM has better graph-fu than I and has updated lucia's graph of the multi-model mean vs the 13-month mean.

 

Thursday, March 27, 2014

Jar Jar Pielke

Meesa gonna normalize those costs, boss.


So Roger has joined Nate Silver over at the new 538, and the reviews are not good.

There's not really much to say about Pielke. He is what he is. His posts are error-ridden, cherry-picked and logically incoherent, which will surprise no one who has read his blog (or his twitter feed.) (Although I wish I had come up with Ryan Cooper's description of Pielke's output: "the Breakthrough Institute program for hippie-punching your way to fame and fortune.")

The real question of the hour is how Nate Silver, who became the intellectual crush of thinking people everywhere by taking a hard-nosed statistical approach to the squishy world of political analysis, has now seemingly embarked on a career, as the poet said, of "peddling freakonomics-lite contrarianism."

I guess we're seeing another example of successful people not understanding why they were successful in the first place, and either totally neglecting the stuff that made them great or overdoing it to the point of nausea.

You can point to Silver's hiring of Pielke as a mistake, but really, why is he making that mistake? There are a number of factors:

A) He does not have expertise in this area himself.
B) It is a lot more time consuming and difficult to become an expert in climate science and policy than, say, the dynamics of running for Congress.
C) He wanted a contrarian take, which he wrongly believes is what people are looking for from 538.

Unlike politics and sports analysis, where contrarianism is easy and fun because they are saturated with sloppy methodology and magical thinking, climate science is populated mostly by an elite group of highly trained specialists, and that makes successful contrarianism much, much harder.

One can imagine how this might be done. You could get someone very, very good at statistics (not Pielke, obviously) and go through important climate papers, and see what shakes out. One of the troubles with that, obviously, is that to all but a select few, that sort of thing is boring as hell.

Or you could do what I do, and what a lot of other much better informed and wittier people do, and be a contrarian to the contrarians. That's far easier. Their mistakes are glaring; their personality disorders, amusing and dramatic. And since many of the worst offenders are public officials, and those that are not get a relentless stream of free publicity from the right-wing hate machine, many people know who they are, in contrast to the scientists.

But perhaps Nate did not like all the competition in this space, or perhaps he is shy of embarking on a course which, yet again, would enrage reality-phobic conservatives. But for whatever reason, the new 538 is looking like a caricature of the old 538, leaving bewildered former admirers to ask, do you really not see the difference between the great stuff you were doing before, and the shlock you're putting your name to now?





Sunday, December 9, 2012

The Signal and the Noise: the King of the Nerds on Climate Change

Buy it now.
This book had already fluttered the needle in the climate community when Michael Mann expressed concern that not all was well with Silver's chapter on climate. Having read the book, I can say Silver may not have everything right, but he's made a strong contribution to the world of reality-based thinking.

It isn't so much what he has to say specifically on the subject of climate. He doesn't dig into that too deeply. What he is very concerned with is how we evaluate evidence, how we assess and make use of expert predictions and computer models both, and how we recognize the difference between serious prediction and entertaining spin.

Relevant? I thought so. Here are Silver's main points, as I see them:

1. Experts work better with models, and just as importantly, models work better with experts. Neither one is as strong as both of them together.

2. Numbers don't speak for themselves. Without an underlying theory of what might be happening and why, you can't propose a reasonable pretest probability, and without a reasonable estimate of the pretest probability, you can't get much useful information out of statistical tests of your data.

I don't know if Silver has even heard the term "mathturbation." If he has, he's far too classy to make use of it. But he shows us in a very compelling way why statistical analysis without theory is useless in a Bayesian universe.
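The pretest-probability point can be made concrete with two lines of Bayes' rule. In this sketch the power and false-positive numbers are illustrative, not taken from Silver's book, but the qualitative lesson is his: a "statistically significant" result is nearly worthless when the hypothesis was implausible to begin with.

```python
# Illustrative numbers only: P(hypothesis true | significant result)
# for a test with 80% power and a 5% false-positive rate.
def posterior(prior, power=0.8, alpha=0.05):
    """Bayes' rule: share of significant results that are true positives."""
    true_pos = prior * power
    false_pos = (1 - prior) * alpha
    return true_pos / (true_pos + false_pos)

for prior in (0.5, 0.1, 0.01):
    print(f"pretest probability {prior:.2f} -> posterior {posterior(prior):.2f}")
```

The same p < 0.05 result moves you to roughly 94% confidence if the hypothesis started at even odds, but leaves you well under 20% if the prior was one in a hundred, which is exactly why "the numbers speak for themselves" is not a methodology.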

3. In prediction, an average of many estimates from many different models and experts is typically better than just picking your favorite.
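A quick toy simulation shows why the average tends to win (the model biases and noise level here are hypothetical, chosen only for illustration): individual biases partially cancel, and independent errors shrink when pooled.

```python
# Toy setup: each "model" forecasts truth plus its own bias plus
# independent noise. Averaging the forecasts usually beats even the
# single best model.
import random
random.seed(1)

TRUTH = 1.0
biases = [-0.3, -0.1, 0.1, 0.2, 0.4]  # hypothetical model biases

def rmse(errors):
    return (sum(e * e for e in errors) / len(errors)) ** 0.5

single_err = {i: [] for i in range(len(biases))}
ensemble_err = []
for _ in range(5000):
    forecasts = [TRUTH + b + random.gauss(0, 0.3) for b in biases]
    for i, f in enumerate(forecasts):
        single_err[i].append(f - TRUTH)
    ensemble_err.append(sum(forecasts) / len(forecasts) - TRUTH)

best_single = min(rmse(e) for e in single_err.values())
print("best single model RMSE:", round(best_single, 3))
print("ensemble mean RMSE:   ", round(rmse(ensemble_err), 3))
```

In this sketch the ensemble mean beats the best individual model by a wide margin, even though nothing in the averaging step "knows" which model is best.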

4. Many predictions from self-described experts are made for entertainment value and should be judged as such. Professionals are not inherently better or smarter than amateurs, but they are less likely to be subject to perverse incentives that reward them for being grossly wrong over and over.

Especially in our connected, information-rich world, the first task of a talking head is to call attention to their prediction -- to get noticed. That incentive favors extreme predictions, not accurate ones. A professional, however, who needs to maintain relationships with a smaller community paying closer attention over many years and many predictions, has a strong incentive to get things right. Which is one of the reasons we will not be replacing Nature with climate blogs any time soon.

Michael Mann points out that Silver says some nice things about Scott Armstrong, a creepy, statistically illiterate, self-declared "forecasting expert." That guy makes my skin crawl, but as far as I could see Silver did not side with him, and correctly dismissed his climate "bet" thought experiment as dubious.

If you look away from the scant remarks on climate and look at how the larger argument applies to the climate debate, the points Silver makes are powerful arguments for the practices of the quote-unquote "climate establishment," and a devastating takedown of the "skeptic" argument.

He shows why we need experts, not just blind data analysis. He shows that statistics in the real world depend on a reasonable estimate of the pretest probability, which (and this is a simple but powerful point) means that a clear theory of the underlying process -- not a vague appeal to "natural causes"! -- is necessary to make intelligent use of the data.

Silver explains how the science of statistics justifies taking as many models and methods as possible into account when developing estimates of complex outputs like climate sensitivity and sea level rise.

Finally, by dissecting the pundit model of prediction, where the little known and little remembered predictor/entertainer makes dramatic declarations and evades responsibility for mistakes, Silver strikes a rare and welcome blow for professionalism in a culture that sometimes seems to worship the amateur as a higher and purer source of insight. Professionals don't just know their stuff, Silver argues, they themselves are known, and valued, only as far as their predictions are more successful than not. This gives them a strong incentive to get it right that amateurs struggling to be noticed and political voices pushing an agenda simply do not have.

Silver is worth reading, and I hope the valid caveats expressed by Dr Mann don't discourage pro-science voices from picking him up.