Thursday, May 5, 2016

SteveF makes a hash of climate sensitivity; I propose a solution



Over at the Blackboard they are hawking a “heat balance based empirical estimate of climate sensitivity” which delightfully uses the IPCC's own numbers to show that climate sensitivity has got to be low!!! OMG!!! 
In SteveF's words: "I will show how the IPCC AR5 estimate of human forcing (and its uncertainty) leads to an empirical probability density function for climate sensitivity with a relatively 'long tail', but with most probable and median values near the low end of the IPCC 'likely range' of 1.5C to 4.5C per doubling."
And how is Steve going to do that? Well, he's going to give us both barrels of the lukewarmer shotgun, oversimplification and argument from incredulity.

Let's leave aside for today the argument from incredulity (in which his own method produces a fat tail of dangerously high climate sensitivities, and he says, basically, "but that can't be true, because hand-waving") and look at the oversimplification at the heart of SteveF's method for estimating climate sensitivity.
We can translate any net forcing value to a corresponding estimate of effective climate sensitivity via a simple heat balance if:
1) We know how much the average surface temperature has warmed since the start of significant human forcing.
2) We assume how much of that warming has been due to human forcing (as opposed to natural variation).
3) We know how much of the net forcing is being currently accumulated on Earth as added heat.
The Effective Sensitivity, in degrees per watt per sq meter, is given by:
ES = ΔT/(F – A)      (eq. 1)
That is fairly close to true, leaving aside changes in the albedo of the earth over time, and problems with taking the average temperature, which I discuss below. The chief problem is that he doesn't know any of those things with sufficient accuracy to constrain climate sensitivity in any meaningful way.
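The arithmetic itself is trivial. Here is a minimal sketch of eq. 1 with illustrative numbers of my own choosing (assumptions for the sake of the example, not Steve's actual inputs):

```python
# Eq. 1 with illustrative values (assumptions for this sketch, not Steve's inputs).
delta_T = 0.9   # assumed warming since preindustrial, degrees C
F = 2.3         # assumed net forcing, W/m^2
A = 0.6         # assumed rate of heat accumulation (mostly ocean uptake), W/m^2

ES = delta_T / (F - A)   # effective sensitivity, C per (W/m^2)
F_2X = 3.7               # approximate forcing from a doubling of CO2, W/m^2
print(f"ES = {ES:.2f} C per W/m^2  ->  {ES * F_2X:.2f} C per doubling")
```

With those made-up inputs you land near 2C per doubling; shift any of the three numbers within their real-world uncertainties and the answer swings wildly, which is the point of everything below.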

Climate scientists who do actual work with the climate are doing a fine job of reducing the uncertainty in these numbers, but the opening move is always to average the available measurements over a period of time. A heat balance, though, by definition requires an accurate accounting of how much heat is coming in and how much is going out at a given time, and those numbers are changing over time.

How much has the average surface temperature warmed since the start of significant anthropogenic forcing? Right now, the answer is about 1.5C (not 0.9C, as Steve estimates). El Nino will subside and that number will (temporarily) fall, but that is beside the point: if you are using present-day forcings, then you have to use present-day temperatures.

The same goes for ocean heat uptake: if you are going to compare that to surface temperatures, you need to know what the uptake was at the moment when you took values for the forcings and the total warming. That's tricky, because we know ocean heat uptake varies significantly over time. It's lower than usual right now, because of El Nino, but how low?

If you are going to use ocean heat uptake averaged over X number of years (which to my understanding is basically mandatory to get any kind of an accurate number), then you also have to average the net forcing over those same years, and you also have to average the warming compared to preindustrial over those years.
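If you do go the averaging route, the windows at minimum have to line up. Here is a sketch of that bookkeeping, with made-up arrays standing in for the real records:

```python
import numpy as np

# Made-up annual series standing in for the real records (illustration only).
years   = np.arange(1993, 2016)
uptake  = np.random.default_rng(0).normal(0.6, 0.15, years.size)  # W/m^2
forcing = np.linspace(1.6, 2.3, years.size)                        # W/m^2
warming = np.linspace(0.55, 0.90, years.size)                      # C vs preindustrial

# Whatever window you average ocean heat uptake over, the forcing and the
# warming must be averaged over the SAME window.
window = years >= 2003
ES = warming[window].mean() / (forcing[window].mean() - uptake[window].mean())
print(f"ES averaged over {years[window][0]}-{years[window][-1]}: {ES:.2f} C per W/m^2")
```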

But simple averaging still will not work, because heat loss varies with the fourth power of absolute temperature (the Stefan-Boltzmann law). An average warming of +0.9C could reflect a steady linear increase, or a long period of flat temperatures followed by a prolonged spike to +4C. The latter will radiate more heat into space than the former: the average temperature is the same, but the heat balance is not. So we had better stick to instants of time if this method is to have any hope of delivering accurate results.
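You can check the T-to-the-fourth effect in a few lines. Two made-up warming paths with identical average anomalies radiate different amounts of heat (the baseline temperature and the paths are assumptions for illustration):

```python
import numpy as np

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/m^2/K^4
T0 = 287.0         # illustrative baseline surface temperature, K (assumed)

# Two made-up warming paths with the SAME +0.9 C average anomaly:
steady = T0 + np.linspace(0.0, 1.8, 1000)              # steady linear rise
spiky  = T0 + np.r_[np.zeros(700), np.full(300, 3.0)]  # flat, then a spike

for name, T in [("steady", steady), ("spiky", spiky)]:
    print(name, f"mean anomaly {T.mean() - T0:+.2f} C,",
          f"mean emission {(SIGMA * T**4).mean():.3f} W/m^2")
# The spiky path radiates more despite the identical average temperature.
```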

To reiterate: for an estimate of heat balance to give you climate sensitivity, you need the heat balance AT THAT MOMENT, not averaged over time. The amount of heat the oceans were absorbing ten or twenty years ago can only be compared to the temperature ten or twenty years ago, and the net forcing of ten or twenty years ago. If you are comparing the average heat uptake by the oceans since 1993 with the last 13 years of temperatures and forcing estimates from a moment in 2010, you are comparing apples and oranges. All of these things change over time, and to use the relationship between them to estimate climate sensitivity, only contemporaneous estimates can hope to be valid.

Consider a building occupied by an unknown number of people. You want to know how many people are inside. If you know how many people were in the building at midnight Tuesday, how many entered on Wednesday, and how many left on Wednesday, you know how many are in the building when Thursday dawns (assuming no births or deaths, nerds).

On the other hand, knowing how many people started in the building, the number who left the building on Sunday, and the average number of people entering the building each day over the last year, doesn't help you a hell of a lot. But this is what Steve has tried to do with his "heat balance based empirical estimate of climate sensitivity."
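For the nerds, the occupancy arithmetic spelled out (all numbers invented):

```python
# Contemporaneous bookkeeping works (numbers invented for the example):
midnight_tuesday  = 40                 # occupants at the start
entered_wednesday = 120
left_wednesday    = 115
thursday_dawn = midnight_tuesday + entered_wednesday - left_wednesday
print(thursday_dawn)                   # 45: a real answer

# Mixed time frames do not: one Sunday's outflow plus a year-averaged
# inflow cannot be combined into a head count for any particular day.
left_sunday = 95
avg_daily_inflow_last_year = 103.2
# ...no expression using these two yields Thursday's occupancy.
```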

For this to work, do you need to use the present moment? No, you do not. In fact, there may be excellent reasons to use some instant in the past, such as the benefit of hindsight in estimating warming or forcings, or the presence of an exceptional event (like a large volcanic eruption) that lets you really test your theories about forcings, heat balance, and temperature.

In other words, you need a series of “moments,” for which you estimate the levels of various forcings and the heat uptake of the oceans. Then you can predict what the temperature “should” be, based upon the inputs, and compare it to what the temperature was (and is.)

Since resolving the state of the climate in each of these moments is a fairly powerful way of determining the inputs for the next "moment" (and whether it can do so, checked against historical observations, is a good test of how accurate it is), we might want to calculate a series of moments, each derived from the one before, based upon our best estimates of forcings, ocean heat uptake, and the like.
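In code, that chain of moments is just a time-stepped energy balance. Here is a toy zero-dimensional sketch; every parameter value is an assumption chosen for illustration, not a tuned result:

```python
import numpy as np

# Toy zero-dimensional energy balance model, stepped year by year.
C = 8.0          # effective heat capacity, W yr m^-2 K^-1 (assumed)
LAMBDA = 1.3     # feedback parameter, W m^-2 K^-1 (assumed)

years = np.arange(1850, 2016)
forcing = 2.3 * (years - 1850) / (2015 - 1850)   # made-up ramp to 2.3 W/m^2

T = 0.0
for F in forcing:
    imbalance = F - LAMBDA * T   # heat uptake at THIS moment (the oceans)
    T += imbalance / C           # the next "moment", derived from this one
print(f"modeled warming by 2015: {T:.2f} C")
```

Each step's imbalance is that moment's ocean heat uptake, and each step's temperature feeds the next: exactly the series of moments described above.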

Since everything we said about average temperatures over time could also be said about temperatures averaged over the global surface (i.e., different local temperatures can yield the same average but different heat loss), we had better break the earth into boxes, or "cells", and calculate temperature, heat uptake, heat loss, etc., for each cell.
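The same T-to-the-fourth logic applies across the surface. A sketch with two made-up grids (same global mean, different distribution of warmth):

```python
import numpy as np

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/m^2/K^4
rng = np.random.default_rng(1)

# Two made-up grids of cell temperatures (K) with the same ~288 K mean;
# a real model would also weight cells by area.
uniform = np.full((36, 72), 288.0)
varied  = 288.0 + rng.normal(0.0, 15.0, (36, 72))   # tropics-to-poles spread

for name, grid in [("uniform", uniform), ("varied", varied)]:
    print(name, f"mean T {grid.mean():.1f} K,",
          f"mean emission {(SIGMA * grid**4).mean():.1f} W/m^2")
# The varied grid radiates several W/m^2 more at the same mean temperature.
```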

[Image: "Not all the same color"]

With these modest adjustments the calculations will have a much better chance of doing what Steve wants them to do, which is to take measurements of the climate system, account for ocean heat storage, and estimate climate sensitivity.

I call it a climate model. Trademark pending.

2 comments:

  1. I've used a similar line of logic with deniers in the past. I.e., we'd need to write a computer program that takes the actual physical equations to calculate A and B and C and D and E ..... integrate spatially and temporally .... and --- oh wait - someone's already done that :)

    ReplyDelete
  2. Yes! I have to say it dawned on me rather slowly that the "solution" to Stevie's oversimplification was a climate model.

    Oversimplification of course has its uses and its place, just not as a test or corrective to a more accurate model, which is how Stevie wanted to deploy it.

    ReplyDelete