Closed Stapels
A lesson for all
The final report of the Diederik Stapel investigation – English version here – makes sobering reading.
Not just for the 55 fraudulent publications, the unsound theses, and the sloppy work of some of his collaborators, but for the generally low level of scientific rigour it reveals in social psychology. Here are some extracts from the report describing practices that were widely tolerated by his collaborators and ignored by journals.
“An experiment fails to yield the expected statistically significant results. The experiment is repeated, often with minor changes in the manipulation or other conditions, and the only experiment subsequently reported is the one that did yield the expected results. It is unclear why in theory the changes made should yield the expected results. The article makes no mention of this exploratory method; the impression created is of a one-off experiment performed to check the a priori expectations. […]
“[…] a given experiment does not yield statistically significant differences between the experimental and control groups. The experimental group is compared with a control group from a different experiment – reasoning that ‘they are all equivalent random groups after all’ – and thus the desired significant differences are found. This fact likewise goes unmentioned in the article.
“The removal of experimental conditions. For example, the experimental manipulation in an experiment has three values. Each of these conditions (e.g. three different colours of the otherwise identical stimulus material) is intended to yield a certain specific difference in the dependent variable relative to the other two. Two of the three conditions perform in accordance with the research hypotheses, but a third does not. With no mention in the article of the omission, the third condition is left out, both in theoretical terms and in the results. […]
“Research findings were based on only some of the experimental subjects, without reporting this in the article. On the one hand ‘outliers’ (extreme scores on usually the dependent variable) were removed from the analysis where no significant results were obtained. This elimination reduces the variance of the dependent variable and makes it more likely that ‘statistically significant’ findings will emerge. There may be sound reasons to eliminate outliers, certainly at an exploratory stage, but the elimination must then be clearly stated.
“Conversely, the Committees also observed that extreme scores of one or two experimental subjects were kept in the analysis where their elimination would have changed significant differences into insignificant ones; there was no mention anywhere of the fact that the significance relied on just one or a few subjects.”
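None of this needs to be taken on trust; the damage is easy to quantify. Below is a minimal simulation sketch in Python – my illustration, not the Committees’ – of three of the practices quoted above, run under a true null so that every “significant” result is a false positive. The group size, the number of reruns, the number of spare control groups and the trimming rule are all assumptions chosen for illustration.

```python
# Three of the practices described in the report extracts, simulated
# under a true null: no real effect exists anywhere, so every
# "significant" finding is a false positive. All parameters are
# illustrative assumptions, not values from the report.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
N_SIM, N, ALPHA = 10_000, 20, 0.05

def random_group():
    """One group of N scores; every group has the same distribution."""
    return rng.normal(size=N)

def p_value(a, b):
    return stats.ttest_ind(a, b).pvalue

def trimmed(x):
    """Drop the lowest and highest score, the handiest 'outliers'."""
    return np.sort(x)[1:-1]

rerun = swap = trim = 0
for _ in range(N_SIM):
    # Practice 1: repeat a failed experiment up to three times and
    # report only the run that "worked".
    if any(p_value(random_group(), random_group()) < ALPHA
           for _ in range(3)):
        rerun += 1
    # Practice 2: compare one experimental group against control
    # groups borrowed from other experiments ("equivalent random
    # groups after all") and keep the best comparison.
    exp = random_group()
    if any(p_value(exp, random_group()) < ALPHA for _ in range(3)):
        swap += 1
    # Practice 3: if the first analysis fails, trim the extremes
    # from each group and test again, reporting whichever succeeded.
    a, b = random_group(), random_group()
    if p_value(a, b) < ALPHA or p_value(trimmed(a), trimmed(b)) < ALPHA:
        trim += 1

print(f"nominal false-positive rate:    {ALPHA:.3f}")
print(f"repeat until significant (x3):  {rerun / N_SIM:.3f}")
print(f"shop among three controls:      {swap / N_SIM:.3f}")
print(f"trim outliers on failure:       {trim / N_SIM:.3f}")
```

The repeat-until-significant rate is fixed by arithmetic at about 1 − 0.95³ ≈ 14%; the other two also land well above the nominal 5%. And these are mild, one-shot versions of each practice.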
I’ve seen all these sins – committed some myself – and they’ve got into print. For a clinical triallist the equivalent sins are altering the sample size or the primary endpoint, failing to define either in advance, and choosing post hoc between “intention to treat” and “per protocol” analyses. Examples here, here, here, and here. The only rigorous defence is adherence to written protocols. Yes, social psychology is largely pseudo-scientific babbling for the glorification of its practitioners. But so too is much clinical research. If social psychology goes wrong, we waste a bit of money. If clinical researchers get the wrong answer, people die.
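The trial-side arithmetic is the same. Here is a sketch of the post hoc choice – again mine, again under a true null – for a trial with two correlated endpoints and some random non-adherence, where the analyst reports whichever of the four combinations of analysis set (“intention to treat” or “per protocol”) and endpoint comes out best. Trial size, dropout rate and endpoint correlation are illustrative assumptions.

```python
# A trial with no true treatment effect, two correlated endpoints,
# and random non-adherence. The "analyst" picks, after the fact,
# whichever of four analyses (ITT or per protocol, endpoint 1 or 2)
# gives the smallest p-value. All parameters are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
N_SIM, N_PER_ARM, DROPOUT, ALPHA = 10_000, 100, 0.2, 0.05
COV = [[1.0, 0.5], [0.5, 1.0]]  # endpoints correlate at r = 0.5

hits = 0
for _ in range(N_SIM):
    treat = rng.multivariate_normal([0, 0], COV, size=N_PER_ARM)
    control = rng.multivariate_normal([0, 0], COV, size=N_PER_ARM)
    # "Per protocol" keeps only the patients who happened to adhere.
    pp_t = treat[rng.random(N_PER_ARM) > DROPOUT]
    pp_c = control[rng.random(N_PER_ARM) > DROPOUT]
    p_values = [stats.ttest_ind(t[:, k], c[:, k]).pvalue
                for t, c in ((treat, control), (pp_t, pp_c))
                for k in (0, 1)]
    if min(p_values) < ALPHA:  # report the "best" analysis
        hits += 1

print(f"nominal rate: {ALPHA:.3f}; with post hoc choice: {hits / N_SIM:.3f}")
```

Because the four analyses reuse the same patients the inflation is less than fourfold, but it still lands well above the nominal rate. A written protocol naming the analysis set and primary endpoint in advance removes the choice entirely.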
Jim Thornton