The Economist muses over the impact of Fukushima and the nuclear big picture:
And if the blow is harder than the previous one, the recipient is less robust than it once was. In liberalised energy markets, building nuclear power plants is no longer a commercially feasible option: they are simply too expensive. Existing reactors can be run very profitably; their capacity can be upgraded and their lives extended. But forecast reductions in the capital costs of new reactors in America and Europe have failed to materialise and construction periods have lengthened. Nobody will now build one without some form of subsidy to finance it or a promise of a favourable deal for selling the electricity. And at the same time as the cost of new nuclear plants has become prohibitive in much of the world, worries about the dark side of nuclear power are resurgent, thanks to what is happening in Iran.

After reading Burton Richter's incredible "Beyond Smoke and Mirrors" (which I hope to formally review pretty soon) I had a rekindled enthusiasm for nuclear energy. The problems of disposal can be greatly mitigated by running fuel through plants twice, albeit at the cost of a greater threat of nuclear proliferation if the stuff goes wandering. The actual threat posed by plant malfunctions to human safety is minimal, far smaller than even the deaths caused by inhaling the smoke from burning oil or coal. Nuclear is readily scalable, with many promising technologies on the horizon. But for those hoping for modular reactors or micro reactors or molten sodium reactors or thorium on a white horse, another article in the review points to an important problem, beyond a nervous public or heavy-handed regulation, a problem that I, for one, had failed to consider:
Such homogeneity in a 70-year-old high-technology enterprise is remarkable. Seven decades after the Wright brothers’ first flight there were warplanes that could travel at three times the speed of sound, rockets that could send men to the moon, airliner fleets that carried hundreds of thousands of passengers a day, helicopters that could land on top of skyscrapers. Include unmanned spacecraft, and there really were flights a billion times as long as the Wright brothers’ first and lasting for years. But aircraft were capable of diversity and evolution and could be developed cheaply by small teams of engineers. It is estimated that during the 1920s and 1930s some 100,000 types of aircraft were tried out. Developing a nuclear reactor, on the other hand, has never been a matter for barnstorming experimentation, partly because of the risks and partly because of the links to the technologies of the bomb.

Wind and solar, not without their disadvantages compared to nuclear power and other energy sources, have this great advantage: they are relatively safe and easy to tinker with. There are hundreds if not thousands of designs of solar panels and wind turbines, at every stage of development. You will never be able to build and test 100,000 different types of nuclear energy generation, not if your regulators were raving Randian paleolibertarians. If you don't like the shape of your wind turbine you can radically redesign it and throw it up in the wind and see what it does; will you ever be able to do that with a nuclear reactor? And if not, what can we reasonably expect in the coming decades except that renewable technology will continue to advance and nuclear will be (at best, if the public's fears can be allayed) a bridge technology on the way to a low-carbon future?
I thought it might be worthwhile to examine more carefully evidence related to a centerpiece of Lindzen’s claim that climate models overstate climate sensitivity by means of “fudge factors” involving aerosols. . . .
In communicating with the public about Climate Change, Richard Lindzen has consistently claimed that climate scientists are overestimating the warming potential of CO2. Central to this claim is his assertion, unqualified by any caveats, that aerosol forcing is “unknown” but is “arbitrarily adjusted” in climate models to make them match observed trends. In particular, he suggests that most often the adjustments deliberately overstate the cooling effect of aerosols to bring the model trends down to the observed trends. We can therefore ask the following relevant questions:

(a) Is aerosol forcing “unknown”?
(b) Is there acknowledgment by modelers that they adjust the aerosol forcings for the purpose of matching observed trends?
(c) If not, are the aerosol parameterizations they make justifiable on some other basis or are they “arbitrary”?
(d) Is there independent evidence that can only be reasonably interpreted to mean that the adjustments are made to match observed trends?
(e) If choices are made that are not clearly justified by the evidence, are they in the direction of exaggerated aerosol cooling?

The answers can help us decide if what Lindzen states as fact is indeed a fact or if Lindzen’s claim in this regard is untruthful.
Before proceeding, it’s worth noting that there is no way to conclusively exclude the possibility that some model choices have on occasion been influenced, perhaps subconsciously, by an intent to match observed temperature trends. We can, however, ask whether this is likely to be true in general, and more importantly whether stating it as an established fact rather than a conjecture can be supported. I suggest that the evidence, taken in total, refutes Lindzen’s statement with high probability.
(a) Is aerosol forcing unknown? A frequent fallacy in the blogosphere and in some media criticism of mainstream scientific conclusions is the implication that if we don’t know everything, we know nothing. Clearly, if we knew nothing about aerosol forcing, any choices in models would necessarily be “arbitrary”. In fact, however, much is known about aerosol data in general, and in particular about how such data are incorporated into models. An example of the latter is found in Schmidt et al 2006, which includes extensive evidence based on physical principles and empirical data. Much also remains to be learned, but the evidence refutes the absolutist proposition that our ignorance is total.
(b) Is there acknowledgment by modelers that they adjust the aerosol forcings for the purpose of matching observed trends? One source on this issue is Gavin Schmidt, in both an exchange on collide-a-scape 334-378 and in the details of how aerosol forcing is developed in the GISS E model described in Schmidt et al 2006 (with coauthors who include modelers Jim Hansen and Andy Lacis). It’s hard to read what Gavin Schmidt wrote without concluding that he flatly rejects any motivation designed to match trends, and that he rejects the notion that such a motivation exists as a general phenomenon among the modelers. (A similar point has been made elsewhere specifically regarding GFDL and CCSM models – see Chapter 5 in the 2008 USCCSP report). What Gavin Schmidt says about how he and other modelers incorporate aerosol forcing into models contradicts Lindzen’s claim about their motivation, unless Gavin is either lying or engaged in self-delusion. His statements of course can’t exclude the possibility of exceptions among a few modelers that Schmidt et al are unaware of.
(c) Are the aerosol parameterizations modelers make justifiable on some empirical basis or are they “arbitrary”? The empirical basis was illustrated in the Schmidt et al reference cited above.
(d) Is there independent evidence that can only be reasonably interpreted to mean that the adjustments are made to match observed trends? An important argument that there is some, perhaps unconscious, choice of aerosol parameters made with trends in mind among some modelers comes from papers by Kiehl 2007 and Knutti 2008, both of which report an inverse correlation (a weak one) between model climate sensitivity and total anthropogenic forcing in models that simulate 20th century trends fairly well – a low total forcing reflects primarily strong negative aerosol forcing. Certainly, one explanation for this might be a choice of aerosol forcings made with an eye toward matching observed trends. Since we have statements cited above that trend matching isn’t done, this creates a conflict that would be difficult to resolve if there were not other plausible explanations for the reported inverse correlation. We can explore this possibility.
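To see concretely why matching a fixed observed trend would produce such an inverse relationship, here is a back-of-envelope sketch in Python. The numbers (3.7 W/m^2 per CO2 doubling, 2.5 W/m^2 of greenhouse-gas forcing, 0.75 K of observed warming, and a range of transient sensitivities) are illustrative round values of my own, not figures taken from Kiehl or Knutti:

    # Back-of-envelope illustration: if a model must reproduce the observed warming,
    # a simple energy-balance relation dT ~ S * F_total / F_2x implies
    # F_total ~ dT_obs * F_2x / S, so higher sensitivity pairs with lower total
    # forcing, i.e. stronger aerosol cooling. All numbers are assumed round values.
    F_2X = 3.7      # W/m^2 per CO2 doubling
    F_GHG = 2.5     # assumed greenhouse-gas forcing over the 20th century, W/m^2
    DT_OBS = 0.75   # assumed observed 20th-century warming, K

    for s in (1.0, 1.5, 2.0, 2.5):          # transient warming per doubling, K
        f_total = DT_OBS * F_2X / s         # total forcing needed to match the trend
        f_aerosol = f_total - F_GHG         # aerosol forcing implied by that total
        print(f"S = {s:.1f} K -> F_total = {f_total:.2f} W/m^2, "
              f"F_aerosol = {f_aerosol:+.2f} W/m^2")

The point is purely arithmetic: the lower the sensitivity, the less aerosol cooling is needed to reach the same warming, and vice versa, so an ensemble of trend-matched models would show just the kind of inverse correlation Kiehl and Knutti report.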
At least two mechanisms might explain the correlation without invoking specific choices designed to match observed trends. The first is based on the principle that models are parameterized to match existing climatology in the absence of an imposed perturbation such as CO2-mediated forcing. This includes seasonal changes, for example, whereby temperature variation must be explained on the basis of forcings (including aerosols that affect albedo) and feedbacks (which affect climate sensitivity). It is conceivable that different modelers have made choices that permitted that matching but which varied inversely in the relative strengths of forcing and climate sensitivity, and which then carried over into the trend simulations even though that was not the reason for the choice of parameters. In fact, it is possible that a choice involving a single parameter set could affect both aerosol forcing and sensitivity. For example, Knutti points out that in the case of aerosol indirect effects, both climate sensitivity and these indirect effects depend to some extent on a common hydrology, so that parameterization in that realm could create a correlation of the type observed.
A second mechanism that might contribute to the inverse correlation independent of modeler choice is selection bias. Many models have attempted to hindcast 20th century temperature trends. Those reported by Kiehl 2007 and the subset of CMIP3 models cited by Knutti 2008 do a fairly good job in this regard, but almost certainly others do less well. If, for example, the pairing of climate sensitivity strength and total aerosol forcing in models occurred in a random manner, those that paired them in the same direction (both high or both low) would do poorly and those that paired them inversely would perform better. In preferentially citing the latter, possibly because the poor simulations were less available, these authors have ensured that this type of randomness, if it occurred, would lead to the selective citing of the models that happened to “come out right” even if all models – skillful and unskillful combined – made their pairings at random, or at least independent of observed trends. It would be incumbent on anyone claiming deliberate, non-random pairing to provide direct evidence for that claim, particularly in light of the contradictory statements (see b above) that such deliberate choices were not part of model design. Note, however, that if some models matched temperature trends accurately “by chance”, the apparent accuracy probably overstates the actual skill of the models to make future predictions unless the same compensating errors exist in future simulations.
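The selection-bias argument can be made concrete with a toy Monte Carlo, again using assumed round values rather than anything taken from the cited papers: draw sensitivity and aerosol forcing independently at random, keep only the "models" whose crude energy-balance hindcast falls near the observed warming, and compare correlations in the full ensemble and the surviving subset.

    # Toy Monte Carlo of the selection-bias mechanism. Not any published model set;
    # all parameter ranges and the observational target are assumed for illustration.
    import random

    F_2X = 3.7          # W/m^2 per CO2 doubling
    F_GHG = 2.5         # assumed greenhouse-gas forcing, W/m^2
    OBS_WARMING = 0.75  # assumed observed 20th-century warming, K
    TOLERANCE = 0.15    # a "fairly good" hindcast lands within this of the target, K

    random.seed(0)
    ensemble = []
    for _ in range(10000):
        s = random.uniform(1.0, 2.5)          # transient warming per doubling, K
        f_aer = random.uniform(-1.5, 0.0)     # aerosol forcing, W/m^2
        f_total = F_GHG + f_aer
        simulated = s * f_total / F_2X        # crude energy-balance hindcast, K
        ensemble.append((s, f_total, simulated))

    skillful = [m for m in ensemble if abs(m[2] - OBS_WARMING) <= TOLERANCE]

    def correlation(pairs):
        xs, ys = zip(*pairs)
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in pairs)
        vx = sum((x - mx) ** 2 for x in xs)
        vy = sum((y - my) ** 2 for y in ys)
        return cov / (vx * vy) ** 0.5

    print("full ensemble:   ", round(correlation([(m[0], m[1]) for m in ensemble]), 2))
    print("skillful subset: ", round(correlation([(m[0], m[1]) for m in skillful]), 2))

Even though sensitivity and forcing were paired entirely at random, the subset that happens to hindcast the observations well shows a strong inverse correlation between the two, while the full ensemble shows essentially none.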
(e) If choices are made that are not clearly justified by the evidence, are they in the direction of exaggerated aerosol cooling? Remember that one of the implications of Lindzen’s “arbitrary adjustments” claim is that they were needed to make the model simulations come out cool enough to match trends without requiring low climate sensitivity. However, one of the choices that most significantly affects the simulations is that most of these older models did not incorporate indirect aerosol effects into their negative forcing estimates. These effects are thought almost universally to be real, albeit fairly small. However, failure to include them makes the models run too warm, contrary to the implication by Lindzen that modelers are trying to overstate climate sensitivity by exaggerating aerosol cooling. Including the indirect effects cools the simulation, and so their absence in the majority of the models implies that the actual climate sensitivity might be higher than estimated from the earlier models. Whatever the practical reasons for excluding indirect aerosol effects, it is hard to see how it could have been motivated by a desire to exaggerate cooling. The omission of indirect effects is likely to be rectified in the current group of models. The absence of indirect effects in most models and their inclusion in others renders interpretation of model/observational relationships problematic. It’s not clear to me that we would see the same inverse relationship if all models had incorporated indirect aerosol effects.
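A toy calculation, with assumed forcing values rather than any particular model's numbers, shows the direction of this effect: holding the sensitivity parameter fixed, the run that omits the (negative) indirect aerosol forcing comes out warmer, which is the opposite of what one would do if the aim were to exaggerate aerosol cooling.

    # Toy comparison of a simulation with and without an indirect aerosol effect,
    # at fixed sensitivity. All numbers are assumed round values for illustration.
    F_2X = 3.7             # W/m^2 per CO2 doubling
    F_GHG = 2.5            # assumed greenhouse-gas forcing, W/m^2
    F_AER_DIRECT = -0.5    # assumed direct aerosol forcing, W/m^2
    F_AER_INDIRECT = -0.7  # assumed indirect aerosol forcing, W/m^2
    S = 1.8                # assumed transient warming per doubling, K

    omitted = S * (F_GHG + F_AER_DIRECT) / F_2X
    included = S * (F_GHG + F_AER_DIRECT + F_AER_INDIRECT) / F_2X
    print(f"indirect effect omitted:  {omitted:.2f} K")   # the warmer simulation
    print(f"indirect effect included: {included:.2f} K")  # the cooler simulation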
Based on all the above, I find the most plausible interpretation to be the following. (1) Lindzen’s claim that modelers “arbitrarily” adjust “unknown” aerosol forcings to exaggerate the cooling effect of aerosols is unsupportable. (2) There is no convincing reason to doubt claims from modelers (e.g., Schmidt et al) that choices of aerosol parameters are based on available empirical evidence and are not designed to affect the trend simulations. However, the possibility of exceptions to this generalization among some modelers can’t be excluded. (3) The omission of indirect aerosol effects from models is a choice that would understate rather than exaggerate aerosol cooling. (4) Correlations between aerosol forcing and climate sensitivity are difficult to interpret from model simulations that include indirect effects in some cases but exclude them in others (the majority)*. (5) To the extent the inverse correlation would persist even if indirect effects were uniformly included, it can be explained at least in part without invoking deliberate choices by modelers designed to make simulated trends match observed ones. The assertion by modelers that they don’t engage in that type of “tuning” is not refuted.
*In an email conversation with Dr. Knutti, he informs me that the data from many models are inadequate to determine exactly what went into their forcings, and so categorizing the models may not be possible. Dr. Knutti repeats the inference he drew in his paper that some but not all models were guided by observed trends. My conclusion, based on the above analysis, is that at least many were not, and the possibility that some were is still unproven.
________________________________________
I asked Fred Moolten, whose carefully argued and exhaustively researched comments are a highlight of Climate Etc., for permission to reproduce his comment on Lindzen as a post. This makes him our first guest poster at IT. Very exciting!
Some relevant links:
Lacis, A., J. Hansen, and M. Sato (1992), Climate forcing by stratospheric aerosols, Geophys. Res. Lett., 19(15), 1607–1610, doi:10.1029/92GL01620.
Myhre, G. (2009), Consistency between satellite-derived and modeled estimates of the direct aerosol effect, Science, 325(5937), 187–190.