Is the true issue about all those leaked climate emails that the underlying data sets used to make far-reaching UN and other climate predictions are so badly done that they are unable to withstand intelligent expert scrutiny?
And that that’s why the scientists concerned were worried about FOI requests and other transparency demands?
And, if so, so what?
Fascinating technical exchanges on all this for us puny mortals in the comments below Megan McArdle’s searching but fair article over at Atlantic magazine. The Internet in fine form, including reader TW Andrews, who makes what seems to me to be a Damn Big Point (my emphasis):
Even really good, experienced software developers make mistakes. That’s why most software that gets shipped has a battery of tests that it needs to pass before release. Frequently more man-months go into testing and QA than into development.
Most academic-type programs don’t get anything like this level of scrutiny. In general that’s fine, but once we start using these programs to make multi-trillion dollar decisions, the code should be open-sourced, and tested within an inch of its life. It’s well worth it to get the best information possible.
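For readers who don’t write software: the “battery of tests” Andrews describes looks something like the sketch below. The trend-fitting function and its test data here are entirely hypothetical, invented for illustration; they are not from any actual climate code. The point is only that each test pins down one property the code must always satisfy, so a later change that breaks it gets caught.

```python
import unittest

def linear_trend(series):
    """Least-squares slope of (year, value) points.

    Hypothetical stand-in for the kind of trend-estimation routine
    an analysis program might contain; illustrative only.
    """
    n = len(series)
    mean_x = sum(x for x, _ in series) / n
    mean_y = sum(y for _, y in series) / n
    numerator = sum((x - mean_x) * (y - mean_y) for x, y in series)
    denominator = sum((x - mean_x) ** 2 for x, _ in series)
    return numerator / denominator

class TestLinearTrend(unittest.TestCase):
    def test_flat_series_has_zero_slope(self):
        # A constant series must report no trend at all.
        data = [(2000, 1.0), (2001, 1.0), (2002, 1.0)]
        self.assertAlmostEqual(linear_trend(data), 0.0)

    def test_known_slope_is_recovered(self):
        # Points lying exactly on y = 0.5x + c must yield slope 0.5.
        data = [(2000, 0.0), (2001, 0.5), (2002, 1.0)]
        self.assertAlmostEqual(linear_trend(data), 0.5)
```

A real release-quality suite would run hundreds of such checks automatically (here, via `python -m unittest`) on every change, which is exactly the scrutiny Andrews says most academic code never gets.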