Traditionally, science holds itself to account, primarily through internal systems of peer review. But the recent retraction of two papers on stem-cell research by the journal Nature highlights weaknesses in this self-regulatory framework that scientists need to address.
To err is human, so why should science be any different? The frailties of science can be easy to overlook because it remains one of humankind's greatest cultural and intellectual achievements; working hand in hand with technology, it has transformed our understanding of the world and our capacity to shape it. But as any scientist will tell you, the daily grind of research is often laborious and repetitive, and regularly punctuated by failure, whether through error or miscalculation, or when our cherished theories cannot withstand the pitiless exactitude of experiment. What keeps us going are the moments of revelation or insight that every now and then swell the heart and the head with a warm pulse of satisfaction. Those small victories are all the more important because science is an intensely competitive career; the endless struggles for funding, or for space to publish in the most acclaimed journals, which have rejection rates as high as 80 or 90%, mean that there are demons of disappointment crouching in every laboratory.
The human side of science was thrown into harsh relief by news on 5 August of the suicide of Japanese stem-cell researcher Yoshiki Sasai. Sasai was a senior co-author on two papers published in January this year by the high-profile journal Nature that reported a remarkable breakthrough: the generation of stem cells by subjecting mouse cells to mild stresses such as pressure or acidic conditions, a procedure dubbed stimulus-triggered acquisition of pluripotency (STAP). But soon after publication the claims made in the papers came under intense scrutiny; there were concerns about reproducibility, a key test of any scientific report, and accusations of image manipulation and plagiarism. By the beginning of April, an investigation by the RIKEN Center for Developmental Biology (CDB), where most of the work had been carried out, found lead author Haruko Obokata guilty of misconduct for having manipulated data with the intent to deceive. Sasai was cleared of misconduct but criticised in the investigation report for not properly checking the experimental data. On 2 July both papers were formally retracted by Nature for reasons of plagiarism. A month later, a serious and unfortunate incident became a desperate human tragedy when Sasai took his own life.
At its heart, then, this is about a series of stem-cell papers published by Nature that had to be retracted because, apparently, they were simply wrong.
Nature is a journal presented as being without peer for its impact, with a scarcity of pages that ensures it has its pick of the best science. Getting published in Nature can earn a researcher tenure.
So there is an obvious incentive to write the perfect Nature paper, even if that means making up the data. Peer review is not supposed to replicate the experiments but to make sure the data presented match the conclusions, that confirmation and other biases have not crept in, that the protocols are ethical, and so on.
Add this to Nature's need to publish provocative papers – thereby increasing its impact factor and justifying its price – and we have a recipe for disaster.
Which appears to have happened here.
Now, scientists are people, and they make mistakes. The key is how we deal with those mistakes. Over the last 400 years, science has determined that open investigation is the best antidote: secrecy opens the way for fraud.
This is what separates science from alchemy.
Yet Nature, in response to this incident, has done very little in the open – telling us to trust its processes even as it hides them from view.
So how do we know it is fixed? How can we trust any other paper in Nature if we do not know what it does to prevent being gamed? Is it really trustworthy?
In truth, we don't, we can't, and probably not.
That is because Nature is a business that needs subscribers. Its incentives for success do not always align with science.
Unfortunately, the corporate nature of a company (in this case NPG) is not about encouraging openness and improving science; it works against it. Consider the recent retractions (six in half a year), closed access (even a decade after publication!), a closed and flawed peer-review process, shameless promotion of its impact factor (even though everybody knows the IF is a joke), and the absence of feedback and dialogue with peers, and you can see how this "frontier of scientific publishing" is losing its credibility and trust.
So looking at the reviewers' comments, to get some idea of how the process failed, would be helpful. But Nature says no: just trust it.
This might not bode well for Nature's future. Open-access approaches tend to align much better with the needs of researchers, and we are seeing a sea change in scientific publishing.
Stonewalling the community may not be the best way to go.