The debate was fueled by the wonderful piece of “silly research” written by Sam Shuster (“Sex, aggression, and humour: responses to unicycling”), made available to the broader scientific community via the British Medical Journal.
The arguments circle around the reception of Shuster’s work by science journalists, especially in the printed press. Did they miss the point? And why? Why didn’t they recognize the piece as silly and purposely beyond any scientific standard? Ultimately, the debate touches broader questions: What is good science, and what is mock or pseudoscience? And: Is it possible to distinguish one from the other?
A recent study in the BMJ investigated “Financial ties and concordance between results and conclusions in meta-analyses” of antihypertensive drugs. The authors concluded “that financial ties to one drug company are not associated with favourable results but are associated with favourable conclusions”. Journalists love conclusions to be conveyed to the public. The pharmaceutical companies and their writers know that for sure. So they push up the conclusions a bit. How would one recognize whether the interpretation of a single study has been pushed up or not? Would scientists see through this all the time?
Besides that, you can lay out the design of a drug trial to serve your purpose: “Assessing therapeutic efficacy in a progressive disease: a study of donepezil in Alzheimer’s disease”, the AWARE study, is a good example of this. Get a group of Alzheimer’s patients. Let them take donepezil for 24 weeks. Then decide whether the treatment was successful or not. Send the successful patients home and exclude them from the trial. Randomise the remaining (so far unsuccessful) patients into a drug and a placebo group. Get the result: patients in the drug group benefit or remain stable, patients in the placebo group decline. (Remember: all patients had been getting the drug for 24 weeks prior to randomisation, so they were used to it. After randomisation, the placebo patients were deprived of the drug – and performed worse. No wonder, as the study uses a CNS-active agent!) Then have your apologists spread the argument “that you’d miss the late responders if you terminated treatment too early”. Good science? Bad science? Pseudoscience?
Finally, another example. This time I was involved myself, charged as a pseudoscientist by another researcher: a story in DER SPIEGEL and a corresponding review of the treatment evidence on cholinesterase inhibitors in Alzheimer’s disease, carried out by our research group in the Department of Primary Medical Care, sparked a fierce discussion inside the scientific community and beyond. One part of the scientific community hailed and welcomed our work. Others were not amused. Some professors argued: “Why did you target just the old and weak?” Others said: “Why us, the psychiatrists? The cardiologists don’t do any better regarding the quality of their evidence.” The angriest statement condemned our paper as pseudoscience: “The kind of irresponsible pseudoscience demonstrated in this paper only fuels this perverse zeitgeist” (refusing patients available treatment – annotation by the author).
Once again: What is good or bad science? What is mock or pseudoscience? I think it’s very hard to distinguish, even for scientists themselves, let alone science journalists. Reducing the answer to some criteria like objectivity/intersubjectivity, replicability, explicitness, or taxonomy does make it easier (at times). It separates the wheat from the chaff (at times). Sometimes it’s really easy to recognize that you’ve just read a smart paper (as is the case with Shuster’s observations). Sometimes it is impossible to figure out what’s between the lines, because concealment and camouflage are techniques used professionally in the scientific community and in the research papers it publishes. Hence even reviewers and publishers have difficulty separating the good from the bad and the fraudulent.