Fury Scarcely Suffices
h/t RtO
By and large, when it comes to climate, and the changing thereof, my ignorance is fairly prodigious. I know some physics and chemistry, and can sometimes discern statistics from cuisine. However, I must admit that my disinclination towards AGW is derived at least as much from temperamental and non-climate bases as from any knowledgeably reasoned conclusions about climatologists' consensus.
Since ClimateGate, though, I can add knowledge to reflex. While I bow to AOG in all computer matters, my ignorance here is not total: I have a graduate degree in computer science, and have spent a couple previous lives in the field putting daily bread on the table.
With regard to the programming that is, in effect, the climate of the future:
Ben Santer, still at the Lawrence Livermore National Lab, did not want to spend his time making his line-by-line computer program accessible to public perusal. He knew – and obviously so did his attackers – that the programming codes would be virtually useless to any one trying to replicate his results. … Personal codes are so idiosyncratic to the programmers that it could take months to explain them to others who could, in much shorter time, do an independent audit by building their own code using the same equations or data sets.

Ben Santer is a climate modeler at the Lawrence Livermore Laboratory; the quote comes from Steven Schneider's autobiographical Science as a Contact Sport, pages 147-8.
This is complete, unadulterated, distilled, thoroughgoing essence of nonsense. There is simply no charitable explanation that doesn't involve lavish accusations of ignorance or disingenuousness.
Most climate modeling is done in FORTRAN. Despite being one of the oldest high-level languages, it is still one of the most popular for numerically intensive applications. This is largely due to its long-term presence on mainframe computers; by mainframes' very nature, languages developed for them (see also COBOL) will be much more persistent than those developed since the advent of personal computers. (Update: per rchrd's comment, I should add that Fortran is still widely used because it is efficient, portable, and, perhaps most importantly, well understood. For demanding numerical applications, SFAIK, it is the gold standard. Corrections are underlined below.)
However, the FORTRAN versions used in most, if not all, climate models have some significant shortcomings.
Foremost among them is that, prior to FORTRAN 2003, it was a procedural language, as opposed to structured. FORTRAN allows code that is as difficult to follow as a bowl of spaghetti; modern structured languages essentially enforce programming that much more closely resembles Legos.
Complicating that problem is that FORTRAN is relatively cryptic, even to cognoscenti. That means intelligibility to any reviewer, or even the author more than a few days after the fact, is very dependent upon comments embedded in the code. In contrast, well written code in a structured language is essentially self-documenting; no comments are required because the program statements are self-evident.
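The contrast can be sketched even without Fortran. Below is a Python stand-in (the task and function names are purely illustrative, not from any climate model): the first version mimics FORTRAN 77's jump-driven control flow with an explicit index and GOTO-style jumps, while the second says the same thing structurally, with no comments needed.

```python
def mean_positive_spaghetti(values):
    """FORTRAN-77-flavored: jumps mimicked with an index and GOTO-style flow."""
    i = 0
    total = 0.0
    count = 0
    while True:
        if i >= len(values):
            break                  # GOTO 999 (exit)
        if values[i] <= 0:
            i += 1
            continue               # GOTO 10 (back to loop top)
        total = total + values[i]
        count = count + 1
        i += 1
    if count == 0:
        return 0.0
    return total / count

def mean_positive_structured(values):
    """Structured: the intent is legible from the statements themselves."""
    positives = [v for v in values if v > 0]
    return sum(positives) / len(positives) if positives else 0.0
```

Both functions compute the same answer; the difference is how much effort a reviewer (or the author, a month later) must spend tracing the control flow to convince himself of that.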
This is about far more than scoring style points. The modeled climate is not really the squiggly line depicting temperature deviation over time: that line -- the "results" -- is the visual manifestation of the interaction between data and code. It has no independent existence whatsoever. Therefore, in order to replicate the results, both data and code must be readily available AND comprehensible.
Hiding behind idiosyncratic programming writes a blank check for all manner of programming sins; suggesting that an independent audit yielding contradictory results would be considered as disproof of accepted wisdom is, at best, touchingly naive. As the development of Linux has shown, by far the best approach would have been to open-source the software development. Instead, what we have is the worst method imaginable: secretive development by those whose specialty, whatever it might be, is most assuredly not software engineering.
Back in the day, when I was dealing with Structured Query Language, had I produced anything within a cannon-shot's distance of being as bad as the CRU's climate modeling programs, I would have been fired faster than that cannonball came out of the barrel.
Google "cru climategate source code fortran". You will not find anything that is as close to complimentary as I have been. Here is just one example.
24 Comments:
"FORTRAN allows code that is as difficult to follow as a bowl of spaghetti; modern structured languages essentially enforce programming that much more closely resembles Legos."
All languages allow code that is as difficult to follow as a bowl of spaghetti and a good and disciplined FORTRAN programmer can, in my opinion, for many applications, write perfectly legible and maintainable code.
I will agree that FORTRAN is probably harder than more modern languages to keep legible, and it wasn't originally designed to be reentrant, recursive, or threaded, or to handle data easily (maybe it's been upgraded for these features; I'll admit I don't know), but for something like a climate simulation it should be adequate, in my opinion.
Good catch. It just gets worse for these people, what with the Himalayan balls-up too this week.
We'd better crack on with mitigation and technological solutions, as no-one's going to tolerate any meaningful attempts to reduce carbon use for the foreseeable future.
Interesting discussion of energy independence here.
Have you thought to send this post to the newspapers as an op-ed? It might open a lot of eyes.
Bret;
As they say, "You can write FORTRAN in any language".
I have found that over the years, even as I write in structured languages, my ratio of comment to code has gradually been increasing. Perhaps that's a side effect of writing a weblog :-)
I completely agree with Skipper that the only way you get idiosyncratic code that is this personal is by writing really bad code, and that if Skipper wrote code like that for me, I'd drop kick him out the door.
P.S. Did you all know that SWIPIAW's thesis work was building FORTRAN compilers for parallel supercomputers? It's one of the reasons for FORTRAN's longevity, the amount of work that's been put in over the decades on making very smart compilers.
Are you talking about FORTRAN or Fortran?
FORTRAN, that is FORTRAN 77, is unstructured.
But Fortran 95, and not Fortran 2003, are structured. A lot has happened to Fortran since 1977.
The language itself is not like a bowl of spaghetti. But many programmers have bad programming practices. The latest versions of Fortran do force more structure.
But I don't think your criticisms are current.
Being illiterate in any programming language, I was equally interested in Schneider's other comment re Santer, which was that climate models include 'many undocumented subroutines.'
Evidently, climate modelers, unlike real scientists, don't feel the need to keep lab notebooks.
Their output, therefore, would be uncheckable in any programming language.
(I am not certain of this, but believe that a tiny flaw in a subroutine could result in big errors, since in a model they are iterated millions of times.)
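Harry's parenthetical is easy to demonstrate. Here is a Python sketch (the bias figure is invented for illustration, not taken from any real model): a 0.01% multiplicative error per step, harmless in a single iteration, multiplies the result by roughly 22,000 after 100,000 iterations.

```python
def iterate(x0, factor, steps):
    """Apply the same multiplicative update over and over."""
    x = x0
    for _ in range(steps):
        x *= factor
    return x

correct = iterate(1.0, 1.0, 100_000)      # no bias: stays exactly 1.0
flawed = iterate(1.0, 1.0001, 100_000)    # 0.01% error per step

print(flawed / correct)                    # roughly 22,000x drift
```

The tiny per-step flaw is invisible in any one call of the subroutine; only iteration makes it catastrophic, which is exactly why each subroutine needs to be individually checkable.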
Harry, I tried to log in to your website to read your book review; although my log-in was recognized, I was unable to actually get in. Where can your book reviews be found?
Bret:
All languages allow code that is as difficult to follow as a bowl of spaghetti and a good and disciplined FORTRAN programmer can, in my opinion, for many applications, write perfectly legible and maintainable code.
A disciplined programmer can write perfectly legible and maintainable code in BASIC. However, eliminating things like GOTO, then adding all the things that come with structured and object oriented languages makes it far easier for the rest to do what disciplined programmers managed all along.
For things like climate simulation, or computational fluid dynamics, it is adequate; however, in the hands of a tyro it is like giving a box of hand grenades to a twelve year old boy.
Gaw:
We'd better crack on with mitigation and technological solutions, as no-one's going to tolerate any meaningful attempts to reduce carbon use for the foreseeable future.
That statement right there is precisely why I don't think warmenists are serious. Given the perfectly foreseeable outcome of Copenhagen, and the near certainty that sort of failure will continue absent global tyranny (which I don't think is on the cards), then warmenists should be demanding 100s of billions for geo-engineering research.
They don't. Why is that?
erp:
Have you thought to send this post to the newspapers as an op-ed? It might open a lot of eyes.
I should start with the Anchorage Daily News, a McClatchy organ. They are completely in the tank.
SH:
I have found that over the years, even as I write in structured languages, my ratio of comment to code has gradually been increasing. Perhaps that's a side effect of writing a weblog :-)
In contrast, my goal was to write code that was self-documenting. Maybe a line or two describing what a block or module did, but as little beyond that as possible.
Of course, SQL is a higher level language, which makes it easier to read. Also, my approach probably has the drawback of being less parsimonious than yours.
Are you talking about FORTRAN or Fortran?
FORTRAN, that is FORTRAN 77, is unstructured.
But Fortran 95, and not Fortran 2003, are structured. A lot has happened to Fortran since 1977.
I was talking about FORTRAN. However, to be grammatically correct, I should have been talking about Fortran, because it is a portmanteau, not an acronym. Fortran is not NATO.
However: I was referring to Fortran before 2003, because all the climate models that are in question were written before then. And, given that climatologists are demonstrably not software engineers, I'll bet the impact of Fortran 95 has scarcely been felt. (Also, IIRC, F95 still has a GOTO statement.)
So, you are correct that my criticisms of Fortran are not current, but neither are the models.
But wait, there's more. Here is another quote (courtesy of RtO) that is two sentences of pure fallacy:
Steven Schneider says: The beauty of systems science is that we come to conclusions through independent efforts that confirm one another – it's not merely a matter of rerunning someone else's computer code or models. We like independent groups using independent models coded by each separate group to try the same experiments or look at the same data set, and if reasonably conforming, we increase our confidence in the conclusions.
The fallacy is the conformation of code to foregone conclusion. If one takes as an entering argument that the future climate must warm in response to CO2, then in order for the model to be correct, it must produce warming (or droughts, floods, positive feedback, etc). Consequent models, in order to be correct, must also conform. Remember, though, that the climate modeled does not exist, so the conformation is not against reality, but rather a preconception of what reality will eventually become.
Contrast with computational fluid dynamics. CFD is used to design race cars. The CFD code is a model of reality that is useful to the extent that it conforms to reality. If it doesn't, then the race car might develop enough lift to be a bad airplane.
That Schneider says what he does without any apparent irony is shocking.
Harry:
Undocumented subroutines don't really change the issue -- sloppy idiosyncratic code is trouble, no matter where it lies.
What is just as great a problem, which I left out due to length considerations, is the handling of data. More specifically, exception handling. The data is not well behaved. From site to site, out-of-range values and not-available values could be the same, or different, or non-existent. Date / Time values (a minor subject in its own right) are different between sites, and within sites over time.
I could go on, but the point here is that exception handling was both undocumented and non-existent. Some sub-routines ran to completion despite invalid data.
If I did that sort of thing for SH, after being drop-kicked out the door, I should expect an old CRT monitor to come flying out the house straight at my head.
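The missing-value problem described above can be sketched in a few lines of Python (the sentinel codes and temperature readings are hypothetical, chosen only to illustrate): a subroutine that averages without screening sentinel values "runs to completion" and quietly produces garbage.

```python
# Missing observations are often encoded as sentinel values like -9999,
# and the codes can differ from site to site and over time.
MISSING_CODES = {-9999.0, -99.9}   # hypothetical; varies between sources

def mean_naive(temps):
    """What a subroutine that runs to completion regardless does."""
    return sum(temps) / len(temps)

def mean_validated(temps):
    """Exclude sentinel values; fail loudly if nothing valid remains."""
    valid = [t for t in temps if t not in MISSING_CODES]
    if not valid:
        raise ValueError("no valid observations")
    return sum(valid) / len(valid)

readings = [12.1, 11.8, -9999.0, 12.4]
print(mean_naive(readings))        # wildly wrong: about -2490.7
print(mean_validated(readings))    # about 12.1
```

The naive version raises no error and produces a number, which is precisely why undocumented, unvalidated exception handling is so insidious: nothing visibly fails.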
There was a typo in my comment above that may have obfuscated the meaning.
Instead of "Fortran 95 and not Fortran 2003, ..." that should have read "Fortran 95 and NOW Fortran 2003 .."
Also, the latest Fortran specs include a form of object oriented data structures.
Fortran is still a very powerful language for numerical computation, and most Fortran compilers are highly optimized and do automatic parallelization.
I second erp's proposal for you to submit your piece as an op-ed. You don't have to aim as low as McClatchy, either.
erp, you shouldn't have to log in just to read my blog posts. (I agree with Guy, the log-in is irritating, but I don't control that.)
Go to www.mauinews.com, click on the 'blogs' button, then 'Restating the Obvious.'
That will bring up the current post. On the right rail, there is a button for 'See all my blogs.'
Click that. The review of Schneider is 'Book review 109,' Dec. 9; and the fisking that Skipper used to launch his post is 'Admissions against interest,' Dec. 29.
I won't apologize for this clunky system, since it ain't mine, but I do regret it.
Skipper, I agree your criticism is the main point, but the one about no lab notebooks is the kind of thing even a computer illiterate should be able to get.
Skipper, if your piece gets printed in the Anchorage paper, it might get picked up by other papers as well.
It's worth a try. People need to know the truth.
rchrd:
Based upon your previous comment, I updated the original post. I hope it captures your point, with which I agree, adequately.
Harry, erp:
I second erp's proposal for you to submit your piece as an op-ed. You don't have to aim as low as McClatchy, either.
I have approximately as much idea as to how to go about that as a dog does calculating logarithms.
Mr. Eagar;
You can get a huge error in a single iteration if, as Skipper mentions, you use garbage data or create an error condition.
Skipper;
I don't believe in self documenting code :-). I will not infrequently spend multiple hours doing a proof to determine whether something like doing an increment, and only the increment, at a specific spot in the code is correct. There is no way the bare increment statement can explain why it is correct, and I don't want to re-derive my proof 4 months later when I look at it and think "now why the heck don't I check for out of range with that increment?".
I should also note that since I use Doxygen, I generate all my API documents from source code comments, which certainly boosts comment density (especially in header files).
Skipper, give the task to your beautiful daughter. She'll handle it brilliantly.
I don't believe in self documenting code :-).
I think the higher level a language (SQL is about as high level as they come, IMHO), the more code can be self documenting. Also, that goes more to the what, as opposed to the why, of a particular piece of code. High level and simple whys (my personal experience) make most internal documentation statements of the bleeding obvious.
In this regard, no matter how skilled the programmer, FORTRAN should be much more internally documented, particularly with regard to the kind of complex numerical problems FORTRAN is used to solve.
My biggest problem with internal documentation is that, in a production environment, it often did not change with modifications to the underlying code.
This goes to Harry's comment about notebooks: the what and why of every step along the way needed to be scrupulously documented and readily available.
Otherwise, fury scarcely suffices.
I can help with the submission, Skipper.
My favorite example of why you need to keep a lab notebook goes back to Germany in the '20s. After the hyperinflation, the government needed gold.
Some lab work in Hamburg showed that gold could be extracted from seawater, and a ship was outfitted and spent three years wandering around the Pacific extracting gold.
You can extract gold from seawater, but the amounts were a thousandth of what the lab experiments showed. It turned out they had been contaminated by the wedding bands of the researchers.
Harry:
How many words should I aim for?
Anchorage Daily News has rules at www.adn.com. They call their op-eds 'Compass' and limit it to 675 words.
Also, a digital mug shot.
Like most papers, they prefer something that hasn't appeared elsewhere. (Your own blog shouldn't count against that, I think.)
Although the editors don't say that, most papers prefer op-eds from their own subscribers.
However, yours is good enough to compete in the big papers, if you wanted to give the WP, NYT, WSJ etc. a shot first.
On the other hand, time's a-wastin'.
You could go with ADN and hope that the Brothers Judd Alliance would then recommend it around.
Once in print and at an online site, you could hope that, say, Watts would pick it up.
You could submit direct to Watts, but -- despite the distaste for newspapers around here -- they are still better for getting ideas out to people who don't know yet they are in need of a new idea. The Internet is incestuous.
The missing concept here is "data torture," which, as my statistics professor said, is something you do alone in a dark room with the door closed.
What it refers to is the practice, once you've collected your data, done the statistical test you always intended to do, and it hasn't worked out, of adding and subtracting variables and controls until you do get the result you wanted.
There is clear evidence of data torture in the CRU emails.
The problem with it is that, with alpha = .05 (that is, significance set at the point where the chance of error is 5% or less), 5% of the false statistical models you run will appear significant. So if you torture the data and run 100 different models, 5 will erroneously appear to be true.
Another way of thinking about this is that we accept a 5% chance of false significance if the scientist's hypotheses are false because we assume that the test is designed ex ante, but there's no enforcement mechanism. If the tests are designed post hoc, then 5% is way too high for alpha.
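The 5%-of-false-models point is easy to verify by simulation. A Python sketch (the sample size, number of trials, and seed are arbitrary choices for illustration): run a simple z-test on batch after batch of pure noise, and about 5% come up "significant" at alpha = 0.05 even though there is no effect at all.

```python
import math
import random

def z_test_p(sample):
    """Two-tailed p-value for H0: mean = 0, with known sd = 1."""
    n = len(sample)
    z = (sum(sample) / n) * math.sqrt(n)
    phi = 0.5 * (1 + math.erf(abs(z) / math.sqrt(2)))  # standard normal CDF
    return 2 * (1 - phi)

random.seed(42)
trials = 2000
false_hits = sum(
    1 for _ in range(trials)
    if z_test_p([random.gauss(0, 1) for _ in range(30)]) < 0.05
)
print(false_hits / trials)   # close to 0.05, despite no real effect anywhere
```

Each of those "hits" would look publishable in isolation, which is why running 100 re-specified models until one works is torture, not testing.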
The missing concept here is "data torture,"
That is because I don't remember nearly enough about statistics to speak anything like knowledgeably.
What you say reminds me of the studies behind risks attending second hand cigarette smoke.
IIRC, in order to get the numbers they wanted, they accepted confidence levels far lower than would ordinarily suffice.
They used a one-tailed test rather than a two-tailed test. Alpha is still 0.05, but it's all stuck on one end of the distribution. As a result, probabilities on the right end of the distribution that would fall between 0.05 and 0.10 (that is, above significance) with a two-tailed test will fall between 0.025 and 0.05 (significant) on a one-tailed test.
A one-tailed test is not per se invalid. You can use it if both theory and prior results indicate that there's no chance that the result will fall at the other end (here, that second-hand smoke would increase life expectancy), but it should always be treated gingerly. There is a good (but not generally accepted) argument that alpha should be reduced for one-tailed tests.
So, if someone brings up second-hand smoke, you can say that, "I only accept results from a one-tailed test if alpha is less than 0.025."
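The one- vs two-tailed arithmetic can be shown directly. A short Python sketch (the z value of 1.8 is purely illustrative): for the same test statistic, the one-tailed p-value is half the two-tailed one, so a result that misses two-tailed significance can clear the bar one-tailed.

```python
import math

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

z = 1.8                       # illustrative test statistic
p_two = 2 * (1 - phi(z))      # about 0.072: not significant at alpha = 0.05
p_one = 1 - phi(z)            # about 0.036: "significant" one-tailed
print(p_two, p_one)
```

This is why a one-tailed test should be treated gingerly: the same data, under the same alpha, flips from "not significant" to "significant" purely by assumption about which tail matters.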