Briefly said, knowledge mining is data mining that goes steps further. It is a data-driven modeling approach, but in addition to data mining, self-organizing knowledge mining also builds the model itself, autonomously, including self-selection of relevant inputs by extracting the necessary “knowledge” to develop it from observational noisy data, only, most objectively in an inductive, self-organizing way. It generates optimal complex models according to the noise dispersion in the data, which systematically avoids overfitting the data. This is a very important condition for prediction. These models are available then explicitly in form of nonlinear difference equations, for example.
So this approach is different from the vast majority of climate models, which are based on theories.
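For background: the machinery Lemke is describing is GMDH-style self-organizing modeling, which is his actual field. Its central (and perfectly respectable) idea is to select model complexity by performance on held-out data rather than by fit to the training data. A minimal sketch of that idea in Python -- synthetic data, with polynomial candidates standing in for GMDH's network of partial models:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic noisy observations of a simple underlying law (a quadratic).
x = np.linspace(-1.0, 1.0, 40)
y = 1.0 - 2.0 * x + 3.0 * x**2 + rng.normal(scale=0.2, size=x.size)

# Candidate models compete on data they were not fitted to (GMDH's
# "external criterion"), which penalizes complexity that fits only noise.
train, valid = np.arange(0, 40, 2), np.arange(1, 40, 2)

best_degree, best_err = None, np.inf
for degree in range(1, 10):
    coeffs = np.polyfit(x[train], y[train], degree)
    err = np.mean((np.polyval(coeffs, x[valid]) - y[valid]) ** 2)
    if err < best_err:
        best_degree, best_err = degree, err

print(best_degree, best_err)
```

Note what this does and doesn't buy you: validating against held-out slices of the same short record guards against the crudest overfitting, but it says nothing about whether the model captures any physics that will hold outside the record.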
Frank Lemke -- a face you can trust.
There seems to be a rash of trying to explain global warming by theories that either ignore, or flatly contradict, the science called “physics.” Lemke brings this unphysical approach forward as an exciting new take on scientific inquiry. It's not. Translating Mr. Lemke into English: he is curve-fitting, ignoring physical reality, and making predictions about the future and assertions about which elements of the physical system matter according to the mathematical games he is playing with himself. Real scientists are quite familiar with how incredibly easy it is to tune and tweak a made-up mathematical construct to say anything you want it to say. As the poet said:
"With four parameters I can fit an elephant, and with five I can make him wiggle his trunk."
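The quip -- usually attributed to John von Neumann -- is easy to make concrete: n free parameters let a model pass exactly through n arbitrary data points, physics not required. A minimal sketch, using made-up numbers that have nothing to do with any real dataset:

```python
import numpy as np

# Five arbitrary "observations" -- any five numbers will do.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([3.1, -0.7, 2.4, 8.8, -5.2])

# A degree-4 polynomial has five free parameters, so it can pass
# exactly through these five points -- a "perfect" fit that encodes
# no understanding of anything.
coeffs = np.polyfit(x, y, deg=4)
max_residual = np.max(np.abs(np.polyval(coeffs, x) - y))
print(max_residual)  # ~0 (machine precision)
```

Swap in any other five numbers and the fit is just as "perfect," which is exactly the problem.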
Judith Curry, who has given this clown her increasingly promiscuously bestowed "fascinating"[!], slips in a backhanded acknowledgement that Lemke is selling snake oil:
Conclusions regarding AGW and the role of CO2 cannot be drawn from 23 years of data, but this methodology in principle could be extended to longer time periods.
Of course Lemke has already drawn such conclusions. That's the point of the whole pointless exercise:
Looking at observational data by high-performance self-organizing predictive knowledge mining, it is not confirmed that atmospheric CO2 is the major force of global warming. In fact, no direct influence of CO2 on global temperature has been identified for the best models.

So Lemke's main argument gets no traction even with Dr. Curry -- why, then, push this huckster into the limelight?
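Curry's caveat about 23 years of data is the whole ballgame. A toy illustration -- synthetic numbers, not Lemke's algorithm and not any real temperature series -- of why a few decades of curve-fitting licenses no conclusions about the future:

```python
import numpy as np

rng = np.random.default_rng(0)

# A synthetic stand-in for a short record: 23 annual "observations"
# of a modest linear trend plus noise. (Made-up numbers, not real data.)
years = np.arange(23)
record = 0.02 * years + rng.normal(scale=0.1, size=years.size)

# A purely data-driven fit (here a 4-parameter cubic) tracks the
# record nicely in-sample...
coeffs = np.polyfit(years, record, deg=3)
in_sample_err = np.max(np.abs(np.polyval(coeffs, years) - record))

# ...but extrapolated 30 years past the data, nothing constrains it
# to follow the true trend: the "prediction" is an artifact of the fit.
future = np.arange(23, 53)
extrapolation_err = np.max(np.abs(np.polyval(coeffs, future) - 0.02 * future))
print(in_sample_err, extrapolation_err)
```

The in-sample fit looks fine; the extrapolation error is another matter entirely. That asymmetry is what "this methodology in principle could be extended to longer time periods" politely declines to spell out.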
A while back -- in fact, I'm the only one who even seems to remember this -- Dr. Curry came out with a post called "Meta-expertise" that provided an excellent list of questions with which to evaluate a self-described expert:
I think these are excellent questions. Put them to your favorite "skeptic" expert today and see how they get on. Based on the howls of outrage that followed when I brought them up, I think maybe they are only intended for use "outside the family," as it were. But no matter. Let's explore Frank Lemke's claims to climate modelling expertise.
Finally, here are a few tests that can be used to evaluate the “experts” in your life:
- Credentials: Does the expert possess credentials that have involved testable criteria for demonstrating proficiency?
- Walking the walk: Is the expert an active practitioner in their domain (versus being a critic or a commentator)?
- Overconfidence: Ask your expert to make yes-no predictions in their domain of expertise, and before any of these predictions can be tested ask them to estimate the percentage of time they’re going to be correct. Compare that estimate with the resulting percentage correct. If their estimate was too high then your expert may suffer from over-confidence.
- Confirmation bias: We’re all prone to this, but some more so than others. Is your expert reasonably open to evidence or viewpoints contrary to their own views?
- Hedgehog-Fox test: Tetlock found that Foxes were better-calibrated and more able to entertain self-disconfirming counterfactuals than hedgehogs, but allowed that hedgehogs can occasionally be “stunningly right” in a way that foxes cannot. Is your expert a fox or a hedgehog?
- Willingness to own up to error: Bad luck is a far more popular explanation for being wrong than good luck is for being right. Is your expert balanced, i.e., equally critical, when assessing their own successes and failures?
Credentials: Does the expert possess credentials that have involved testable criteria for demonstrating proficiency?

From Lemke's LinkedIn profile:
Education: Master, Electrical Engineering and Technical Informatics
Exciting day, huh? An electrical engineer come to teach us about climate science. Never seen that before. Question two:
Walking the walk: Is the expert an active practitioner in their domain (versus being a critic or a commentator)?

The LinkedIn profile gives no publications. I did a Google Scholar search for publications authored by "Frank Lemke" in the past five years:
"A Unified DAQ Interconnection Network with Precise Time Synchronization"
"Modeling tool wear in end-milling using enhanced GMDH learning networks"
"A unified interconnection network with precise time synchronization for the CBM DAQ-system"
"Knowledge Extraction From High Dimensional Data Using Multileveled Self-organization" (self-published on his own website)
"High-density active optical cable: from a new concept to a prototype"
"Parallel Self-organizing Modeling" (self-published on his own website)
"Algorithms for (Q)SAR model building"
Lemke is not a climate scientist and has not published anything on climate science. He publishes in his field -- electrical engineering. He has tried to apply his mathturbation model to climate systems, but nobody published that stuff -- he posted it on his own website.
Overconfidence: Ask your expert to make yes-no predictions in their domain of expertise, and before any of these predictions can be tested ask them to estimate the percentage of time they’re going to be correct. Compare that estimate with the resulting percentage correct. If their estimate was too high then your expert may suffer from over-confidence.

I doubt Lemke's going to help us with this one, but it's hardly necessary. Curry herself has already called Lemke out for sweeping conclusions not justified by his data. #4 and #6 are pretty similar to one another:

Confirmation bias: We’re all prone to this, but some more so than others. Is your expert reasonably open to evidence or viewpoints contrary to their own views?
Willingness to own up to error: Bad luck is a far more popular explanation for being wrong than good luck is for being right. Is your expert balanced, i.e., equally critical, when assessing their own successes and failures?

Fortunately Lemke is participating in the discussion at Climate Etc, so we can see his response to criticism:
@anander
“After reading a bit more on the subject, I get a hunch that this is not that mainstream – this is just a hunch mainly based on lowish citation counts. What I was able to find (quickly) were written almost solely by Frank Lemke himself. This doesn’t mean it is somehow false or anything, just wondering!”

knowledgeminer
Well, this is really an ill-posed task. Try solving it with the info you have and observe yourself how assuming different aspects impacts your answers.

Kind of brittle, sarcastic and not to the point, huh? Let's try another:
“There is also some danger in the application of theoretically derived paradigms by individuals without reasonably detailed knowledge of climate physics, because this can easily lead to small misinterpretations that generate inaccurate conclusions.”
Fred, to make it clear: There is no application of theoretically derived paradigms in this modeling approach at all! EVERYTHING, including the model and the composition of inputs is derived from observations, only. Observations, measurements of system variables, hide essential information about the behavior of the system. This knowledge can be extracted by self-organizing modeling and transformed into predictive models.
Not a lot of humility there either -- the questioner is totally wrong, everything (EVERYTHING) Lemke did is right and correct, so there you go. Further down MattStat has a nice critique:
The objective function of the algorithm is not well described. To continue a thought posted by Vaughan Pratt, within a range of 90% of the optimal objective function achieved, and the optimal itself, you can usually find a large set of models that have different entities included, different parameter estimates, and different interpretations. The difference in the objective function between the best and all these second and third raters can’t be known to be other than chance variation. With many variables and few observations, it is next to impossible to avoid overfitting and over-interpreting. So tell us more about how you are not excessively fitting noise.
FWIW, this post reads like an advertisement.
Which leaves Lemke sputtering with indignation, but not admitting any shortcomings:
knowledgeminer | October 17, 2011 at 7:10 pm
1. There is a proven history of this approach of more than 40 years. References are also given in this discussion. 2. Follow the “advertisement”. Sometimes people also call it transparency.

Is Lemke a hedgehog or a fox (#5)? Hard to care, particularly when his performance on #1-#4 and #6 is a series of epic fails.
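MattStat's point about near-ties among models is easy to reproduce in miniature. This is a hedged toy experiment -- not his setup and not Lemke's -- but it shows the mechanism: with 23 observations and ten candidate predictors that are pure noise, an exhaustive search over three-variable regressions still turns up a "best model" with a respectable objective function, trailed by near-ties whose ranking is chance.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)

# 23 observations (the length of Lemke's record) and 10 candidate
# predictors, every one of them pure noise, unrelated to the target.
n_obs, n_vars = 23, 10
X = rng.normal(size=(n_obs, n_vars))
y = rng.normal(size=n_obs)

def r_squared(cols):
    # Ordinary least squares with an intercept on the chosen columns.
    A = np.column_stack([np.ones(n_obs), X[:, list(cols)]])
    resid = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
    tss = (y - y.mean()) @ (y - y.mean())
    return 1.0 - (resid @ resid) / tss

scores = sorted(r_squared(c) for c in combinations(range(n_vars), 3))
best = scores[-1]
near_ties = sum(1 for s in scores if s > 0.9 * best)
print(best, near_ties)
```

A nontrivial R² from a pile of pure noise, with a crowd of nearly-as-good alternatives behind it, is exactly the over-interpretation MattStat is warning about.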
Lemke's not an expert, not a scientist, not even a PhD. What does he actually do? Well, funnily enough, he sells software. What kind of software? Modelling software -- the same kind he's using to argue that CO2 doesn't warm the planet. Public mathturbation may be "fascinating" to Dr. Curry, but I doubt very much it will either transform climate science or even sell his software.
Did you know he's now released an app? Pretty much the same as the web site.