This past week, UC Davis plant pathology professor Pamela Ronald issued a retraction of foundational work her lab did in 1995 and wrote about the mistakes in Scientific American. In her case, there were two errors, one of which was simply a labeling mistake:
In this way, new members of my laboratory uncovered two major errors in our previous research. First, we found that one of the bacterial strains we had relied on for key experiments was mislabeled.

Incidentally, Dr. Ronald cites the lack of reproducibility that Amgen found.
It's gotten to the point where The Economist has two articles out on it:
Statistical Dunderheads
In the second article on unreliable research there's a segment on researchers who lack statistical knowledge, designing experiments whose results do not pass statistical muster because "scientists are not statisticians." Their conclusion comports with my experience: an epidemic of statistical dunderheads in science. Researchers choose N based on the number of slots in the pilot plant or the capacity of the lab, without understanding the risks of Type I/Type II error built into their designs.

The first step is to get some statistical training. After that, it's on-the-job learning and getting on the phone with someone who is qualified. But the way things are, and where we are headed, is simply not acceptable. Data-based decision-making is at the core of science (and, for that matter, of biologics manufacturing). Scientists may not be statisticians, but perhaps they ought to be.
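To make the Type I/Type II trade-off concrete, here is a minimal sketch (not from the articles above) of an a-priori power calculation, assuming a two-sided, two-sample z-test approximation with a standardized effect size (Cohen's d). Choosing N from pilot-plant capacity instead of from a calculation like this is exactly how experiments end up with a large, unexamined Type II error rate.

```python
import math

def normal_cdf(x):
    """Standard normal CDF, computed from the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def normal_ppf(p):
    """Inverse standard normal CDF by bisection (plenty of precision here)."""
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if normal_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def two_sample_power(n_per_group, effect_size, alpha=0.05):
    """Approximate power of a two-sided, two-sample z-test.

    effect_size is Cohen's d (mean difference / pooled SD).
    alpha is the Type I error rate; power = 1 - beta (Type II error rate).
    """
    z_crit = normal_ppf(1.0 - alpha / 2.0)          # two-sided critical value
    ncp = effect_size * math.sqrt(n_per_group / 2)  # noncentrality parameter
    # Probability the test statistic lands beyond either critical value
    return normal_cdf(ncp - z_crit) + normal_cdf(-ncp - z_crit)

# A "medium" effect (d = 0.5) needs ~64 subjects per group for 80% power;
# a lab-capacity choice of n = 10 per group leaves beta around 0.8.
print(round(two_sample_power(64, 0.5), 2))
print(round(two_sample_power(10, 0.5), 2))
```

The point of the sketch is the direction of the calculation: fix alpha and the smallest effect worth detecting, then solve for N, rather than fixing N and hoping the error rates work out.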
p.s. - It's interesting to note that it is publicly funded academic research that cannot be confirmed by private-sector firms, and not vice versa.