We had a discussion on LP last year about a controversial paper in a psychology journal. Respected psychologist Daryl Bem conducted experiments that apparently showed that Cornell students could foretell the future – at least, when the future involved free pr0n. That prompted an extensive discussion of the issues surrounding the scientific method, at least as it is currently practiced.
In that sense, it’s the paper that keeps on giving.
Given the, um, surprising nature of the results, Bem explicitly included in his paper a call for replications. That's just good scientific practice – when you turn up an unexpected and potentially important result, you get others to replicate the experiment to show that the result is real. Or not, as the case may be.
Three British psychologists attempted precisely that, and failed to replicate the effect.
That’s no great surprise. What is interesting is how difficult it was to publish the replication. They tried four different journals and were rejected by all of them, including the journal that originally published the Bem paper, which turned it down on the grounds that it “doesn’t publish replications”.
Which is simply wrong. Replications (or, in this case, failures to replicate) of high-impact, controversial findings deserve high-profile publication.
Sadly, that tallies with my own experience, where an attempt to replicate not only added interesting new data but also exposed a flaw in the data presented in the original study. Yet the journal that originally published the study gave us very short shrift.