Very odd

[Image: utopian sky, by Rebecca L. Daily]
L&C, GRL, comments on peer review and peer-reviewed comments:
[Via RealClimate]

I said on Friday that I didn’t think that Lindzen and Choi (2009) was obviously nonsense. Well, a number of people have disagreed with me, and in doing so, have presented some of the back story on how the response was handled. I think this deserves to be more widely known, in the hope that it will generate some discussion in the community about how such situations might be dealt with in the future.

From Chris O’Dell:

Given the large number of comments on the peer-review process in general and in the LC09 case in particular, it is probably worthwhile to give a bit more backstory to our Trenberth et al. paper. On my first reading of LC09, I was quite amazed and thought if the results were true, it would be incredible (and, in fact, a good thing!) and hence warranted independent checking. Very simple attempts to reproduce the LC09 numbers simply didn’t work out and revealed some flaws in their process. To find out more, I contacted Dr. Takmeng Wong at NASA Langley, a member of the CERES and ERBE science teams (and major player in the ERBE data set) and found out to my surprise that no one on these teams was a reviewer of LC09. Dr. Wong was doing his own verification of LC09 and so we decided to team up.

After some further checking, I came across a paper very similar to LC09 but written 3 years earlier – Forster & Gregory (2006), hereafter FG06. FG06, however, came to essentially opposite conclusions from LC09, namely that the data implied an overall positive feedback to the earth’s climate system, though the results were somewhat uncertain for various reasons as described in the paper (they attempted a proper error analysis). The big question of course was, how is it that LC09 did not even bother to reference FG06, let alone explain the major differences in their results? Maybe Lindzen & Choi didn’t know about the existence of FG06, but certainly at least one reviewer should have. And if they also didn’t, well then, a very poor choice of reviewers was made.


It is very strange when a paper does not reference previous work that examines the same questions as the present paper, particularly when the results are diametrically opposed. As O’Dell stated, even if the authors were not aware of the previous work, a good reviewer should have been. The fact that none of this happened may demonstrate one of the possible holes in peer review: the authors can suggest very favorable reviewers who will not rock the boat.

The response of the editors to these events is not very encouraging. Luckily for them, the reviewers are anonymous, so their poor handling of the review cannot be linked to anyone specific. But the editors are known and will have to stand by their work.

This episode does indicate the power of the scientific method. It does not matter what the reasons were for ignoring prior work: scientists can examine the approach and replicate it to see how robust the work is and what assumptions, right or wrong, were made.

Here they appear to have shown that the work was not very robust, and that others who try to reproduce the research will find very different results. It was a case where the original authors were probably not well served by the peer review process. Stronger reviewers would have caught these problems before publication and allowed the authors to make their own corrections, rather than having other scientists do it for them.
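To make the idea of “replicating the approach” a bit more concrete, here is a minimal, hypothetical sketch of the kind of check being described: regressing top-of-atmosphere flux anomalies against surface temperature anomalies to estimate a feedback parameter. The synthetic data, variable names, and the simple least-squares fit are all illustrative assumptions on my part, not the actual method or data of LC09 or FG06.

```python
# Illustrative sketch only: estimating a radiative feedback parameter by
# regressing top-of-atmosphere (TOA) flux anomalies on surface temperature
# anomalies. Synthetic data stands in for the satellite flux and SST records;
# this is NOT the actual LC09 or FG06 analysis.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic monthly anomalies (assumed units: K for temperature, W/m^2 for flux)
n_months = 240
temp_anom = rng.normal(0.0, 0.3, n_months)      # surface temperature anomaly
true_lambda = 1.5                               # assumed feedback parameter (W/m^2/K)
flux_anom = true_lambda * temp_anom + rng.normal(0.0, 1.0, n_months)

# Ordinary least-squares slope: the estimated feedback parameter
slope, intercept = np.polyfit(temp_anom, flux_anom, 1)

# A crude bootstrap gives a sense of the uncertainty, in the spirit of the
# "proper error analysis" attempted by FG06.
boot = []
for _ in range(1000):
    idx = rng.integers(0, n_months, n_months)
    b, _ = np.polyfit(temp_anom[idx], flux_anom[idx], 1)
    boot.append(b)

print(f"estimated feedback: {slope:.2f} W/m^2/K "
      f"(bootstrap 2.5-97.5%: {np.percentile(boot, 2.5):.2f} "
      f"to {np.percentile(boot, 97.5):.2f})")
```

The point of such a toy exercise is not the numbers themselves but the habit: if a published result can be reproduced from the underlying data with a transparent calculation, small choices (time periods, endpoints, error treatment) quickly reveal how robust the conclusion really is.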
