Bad Science

Chapter 2

One important feature of a good trial is that neither the experimenters nor the patients know if they got the homeopathy sugar pill or the simple placebo sugar pill, because we want to be sure that any difference we measure is the result of the difference between the pills, and not of people's expectations or biases. If the researchers knew which of their beloved patients were having the real and which the placebo pills, they might give the game away-or it might change their assessment of the patient-consciously or unconsciously.

Let"s say I"m doing a study on a medical pill designed to reduce high blood pressure. I know which of my patients are having the expensive new blood pressure pill, and which are having the placebo. One of the people on the sw.a.n.ky new blood pressure pills comes in and has a blood pressure reading that is way off the scale, much higher than I would have expected, especially since they"re on this expensive new drug. So I recheck their blood pressure, "just to make sure I didn"t make a mistake". The next result is more normal, so I write that one down, and ignore the high one.

Blood pressure readings are an inexact technique, like ECG interpretation, X-ray interpretation, pain scores, and many other measurements that are routinely used in clinical trials. I go for lunch, entirely unaware that I am calmly and quietly polluting the data, destroying the study, producing inaccurate evidence, and therefore, ultimately, killing people (because our greatest mistake would be to forget that data is used for serious decisions in the very real world, and bad information causes suffering and death).

There are several good examples from recent medical history where a failure to ensure adequate "blinding", as it is called, has resulted in the entire medical profession being mistaken about which was the better treatment. We had no way of knowing whether keyhole surgery was better than open surgery, for example, until a group of surgeons from Sheffield came along and did a very theatrical trial, in which bandages and decorative fake blood squirts were used, to make sure that nobody could tell which type of operation anyone had received.

Some of the biggest figures in evidence-based medicine got together and did a review of blinding in all kinds of trials of medical drugs, and found that trials with inadequate blinding exaggerated the benefits of the treatments being studied by 17 per cent. Blinding is not some obscure piece of nitpicking, idiosyncratic to pedants like me, used to attack alternative therapies.



Closer to home for homeopathy, a review of trials of acupuncture for back pain showed that the studies which were properly blinded showed a tiny benefit for acupuncture, which was not "statistically significant" (we'll come back to what that means later). Meanwhile, the trials which were not blinded-the ones where the patients knew whether they were in the treatment group or not-showed a massive, statistically significant benefit for acupuncture. (The placebo control for acupuncture, in case you're wondering, is sham acupuncture, with fake needles, or needles in the "wrong" places, although an amusing complication is that sometimes one school of acupuncturists will claim that another school's sham needle locations are actually their genuine ones.)

So, as we can see, blinding is important, and not every trial is necessarily any good. You can't just say, "Here's a trial that shows this treatment works," because there are good trials, or "fair tests", and there are bad trials. When doctors and scientists say that a study was methodologically flawed and unreliable, it's not because they're being mean, or trying to maintain the "hegemony", or to keep the backhanders coming from the pharmaceutical industry: it's because the study was poorly performed-it costs nothing to blind properly-and simply wasn't a fair test.

Randomisation.

Let"s take this out of the theoretical, and look at some of the trials which homeopaths quote to support their practice. I"ve got a bog-standard review of trials for homeopathic arnica by Professor Edward Ernst in front of me, which we can go through for examples. We should be absolutely clear that the inadequacies here are not unique, I do not imply malice, and I am not being mean. What we are doing is simply what medics and academics do when they appraise evidence.

So, Hildebrandt et al. (as they say in academia) looked at forty-two women taking homeopathic arnica for delayed-onset muscle soreness, and found it performed better than placebo. At first glance this seems to be a pretty plausible study, but if you look closer, you can see there was no "randomisation" described. Randomisation is another basic concept in clinical trials. We randomly assign patients to the placebo sugar pill group or the homeopathy sugar pill group, because otherwise there is a risk that the doctor or homeopath-consciously or unconsciously-will put patients who they think might do well into the homeopathy group, and the no-hopers into the placebo group, thus rigging the results.

Randomisation is not a new idea. It was first proposed in the seventeenth century by John Baptista van Helmont, a Belgian radical who challenged the academics of his day to test their treatments like blood-letting and purging (based on "theory") against his own, which he said were based more on clinical experience: "Let us take out of the hospitals, out of the Camps, or from elsewhere, two hundred, or five hundred poor People, that have Fevers, Pleurisies, etc. Let us divide them into half, let us cast lots, that one half of them may fall to my share, and the other to yours...We shall see how many funerals both of us shall have."

It"s rare to find an experimenter so careless that they"ve not randomised the patients at all, even in the world of CAM. But it"s surprisingly common to find trials where the method of randomisation is inadequate: they look plausible at first glance, but on closer examination we can see that the experimenters have simply gone through a kind of theatre, as if they were randomising the patients, but still leaving room for them to influence, consciously or unconsciously, which group each patient goes into.

In some inept trials, in all areas of medicine, patients are "randomised" into the treatment or placebo group by the order in which they are recruited onto the study-the first patient in gets the real treatment, the second gets the placebo, the third the real treatment, the fourth the placebo, and so on. This sounds fair enough, but in fact it's a glaring hole that opens your trial up to possible systematic bias.

Let"s imagine there is a patient who the homeopath believes to be a no-hoper, a heart-sink patient who"ll never really get better, no matter what treatment he or she gets, and the next place available on the study is for someone going into the "homeopathy" arm of the trial. It"s not inconceivable that the homeopath might just decide-again, consciously or unconsciously-that this particular patient "probably wouldn"t really be interested" in the trial. But if, on the other hand, this no-hoper patient had come into clinic at a time when the next place on the trial was for the placebo group, the recruiting clinician might feel a lot more optimistic about signing them up.

The same goes for all the other inadequate methods of randomisation: by last digit of date of birth, by date seen in clinic, and so on. There are even studies which claim to randomise patients by tossing a coin, but forgive me (and the entire evidence-based medicine community) for worrying that tossing a coin leaves itself just a little bit too open to manipulation. Best of three, and all that. Sorry, I meant best of five. Oh, I didn't really see that one, it fell on the floor.

There are plenty of genuinely fair methods of randomisation, and although they require a bit of nous, they come at no extra financial cost. The classic is to make people call a special telephone number, to where someone is sitting with a computerised randomisation programme (and the experimenter doesn't even do that until the patient is fully signed up and committed to the study). This is probably the most popular method amongst meticulous researchers, who are keen to ensure they are doing a "fair test", simply because you'd have to be an out-and-out charlatan to mess it up, and you'd have to work pretty hard at the charlatanry too. We'll get back to laughing at quacks in a minute, but right now you are learning about one of the most important ideas of modern intellectual history.
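To make the idea concrete, here is a minimal sketch, in Python, of what that kind of computerised randomisation programme might look like: the allocation is generated only after the patient is fully enrolled, so the recruiting clinician cannot predict or steer which group comes next. The group names and patient identifier are invented for illustration, not taken from any real trial system.

    # A minimal sketch of a central computerised randomisation programme:
    # the arm is chosen only once the patient is signed up, so the recruiter
    # cannot predict or influence the allocation.
    import secrets

    def allocate(patient_id):
        # secrets.choice is unpredictable, unlike alternation, dates of birth,
        # or a coin toss you can quietly "re-do"
        arm = secrets.choice(["homeopathy pill", "placebo pill"])
        print(f"{patient_id} -> {arm}")
        return arm

    # Called by the person at the end of the telephone line, once per patient:
    allocate("patient-001")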

Does randomisation matter? As with blinding, people have studied the effect of randomisation in huge reviews of large numbers of trials, and found that the ones with dodgy methods of randomisation overestimate treatment effects by 41 per cent. In reality, the biggest problem with poor-quality trials is not that they've used an inadequate method of randomisation, it's that they don't tell you how they randomised the patients at all. This is a classic warning sign, and often means the trial has been performed badly. Again, I do not speak from prejudice: trials with unclear methods of randomisation overstate treatment effects by 30 per cent, almost as much as the trials with openly rubbish methods of randomisation.

In fact, as a general rule it's always worth worrying when people don't give you sufficient details about their methods and results. As it happens (I promise I'll stop this soon), there have been two landmark studies on whether inadequate information in academic articles is associated with dodgy, overly flattering results, and yes, studies which don't report their methods fully do overstate the benefits of the treatments, by around 25 per cent. Transparency and detail are everything in science. Hildebrandt et al., through no fault of their own, happened to be the peg for this discussion on randomisation (and I am grateful to them for it): they might well have randomised their patients. They might well have done so adequately. But they did not report on it.

Let"s go back to the eight studies in Ernst"s review article on homeopathic arnica-which we chose pretty arbitrarily-because they demonstrate a phenomenon which we see over and over again with CAM studies: most of the trials were hopelessly methodologically flawed, and showed positive results for homeopathy; whereas the couple of decent studies-the most "fair tests"-showed homeopathy to perform no better than placebo.*

- So, Pinsent performed a double-blind, placebo-controlled study of fifty-nine people having oral surgery: the group receiving homeopathic arnica experienced significantly less pain than the group getting placebo. What you don't tend to read in the arnica publicity material is that forty-one subjects dropped out of this study. That makes it a fairly rubbish study. It's been shown that patients who drop out of studies are less likely to have taken their tablets properly, more likely to have had side-effects, less likely to have got better, and so on. I am not sceptical about this study because it offends my prejudices, but because of the high drop-out rate. The missing patients might have been lost to follow-up because they are dead, for example. Ignoring drop-outs tends to exaggerate the benefits of the treatment being tested, and a high drop-out rate is always a warning sign. The study by Gibson et al. did not mention randomisation, nor did it deign to mention the dose of the homeopathic remedy, or the frequency with which it was given. It's not easy to take studies very seriously when they are this thin. There was a study by Campbell which had thirteen subjects in it (which means a tiny handful of patients in both the homeopathy and the placebo groups): it found that homeopathy performed better than placebo (in this teeny-tiny sample of subjects), but didn't check whether the results were statistically significant, or merely chance findings. Lastly, Savage et al. did a study with a mere ten patients, finding that homeopathy was better than placebo; but they too did no statistical analysis of their results. These are the kinds of papers that homeopaths claim as evidence to support their case, evidence which they claim is deceitfully ignored by the medical profession. All of these studies favoured homeopathy. All deserve to be ignored, for the simple reason that each was not a "fair test" of homeopathy, simply on account of these methodological flaws. I could go on, through a hundred homeopathy trials, but it's painful enough already.

So now you can see, I would hope, that when doctors say a piece of research is "unreliable", that's not necessarily a stitch-up; when academics deliberately exclude a poorly performed study that flatters homeopathy, or any other kind of paper, from a systematic review of the literature, it's not through a personal or moral bias: it's for the simple reason that if a study is no good, if it is not a "fair test" of the treatments, then it might give unreliable results, and so it should be regarded with great caution.

There is a moral and financial issue here too: randomising your patients properly doesn't cost money. Blinding your patients to whether they had the active treatment or the placebo doesn't cost money. Overall, doing research robustly and fairly does not necessarily require more money, it simply requires that you think before you start. The only people to blame for the flaws in these studies are the people who performed them. In some cases they will be people who turn their backs on the scientific method as a "flawed paradigm"; and yet it seems their great new paradigm is simply "unfair tests".

These patterns are reflected throughout the alternative therapy literature. In general, the studies which are flawed tend to be the ones that favour homeopathy, or any other alternative therapy; and the well-performed studies, where every controllable source of bias and error is excluded, tend to show that the treatments are no better than placebo.

This phenomenon has been carefully studied, and there is an almost linear relationship between the methodological quality of a homeopathy trial and the result it gives. The worse the study-which is to say, the less it is a "fair test"-the more likely it is to find that homeopathy is better than placebo. Academics conventionally measure the quality of a study using standardised tools like the "Jadad score", a seven-point tick list that includes things we've been talking about, like "Did they describe the method of randomisation?" and "Was plenty of numerical information provided?"

This graph, from Ernst"s paper, shows what happens when you plot Jadad score against result in homeopathy trials. Towards the top left, you can see rubbish trials with huge design flaws which triumphantly find that homeopathy is much, much better than placebo. Towards the bottom right, you can see that as the Jadad score tends towards the top mark of 5, as the trials become more of a "fair test", the line tends towards showing that homeopathy performs no better than placebo.

There is, however, a mystery in this graph: an oddity, and the makings of a whodunnit. That little dot on the right-hand edge of the graph, representing the ten best-quality trials, with the highest Jadad scores, stands clearly outside the trend of all the others. This is an anomalous finding: suddenly, only at that end of the graph, there are some good-quality trials bucking the trend and showing that homeopathy is better than placebo.

What"s going on there? I can tell you what I think: some of the papers making up that spot are a st.i.tch-up. I don"t know which ones, how it happened, or who did it, in which of the ten papers, but that"s what I think. Academics often have to couch strong criticism in diplomatic language. Here is Professor Ernst, the man who made that graph, discussing the eyebrow-raising outlier. You might decode his Yes, Minister Yes, Minister diplomacy, and conclude that he thinks there"s been a st.i.tch-up too. diplomacy, and conclude that he thinks there"s been a st.i.tch-up too.

There may be several hypotheses to explain this phenomenon. Scientists who insist that homeopathic remedies are in every way identical to placebos might favour the following. The correlation provided by the four data points (Jadad score 1-4) roughly reflects the truth. Extrapolation of this correlation would lead them to expect that those trials with the least room for bias (Jadad score = 5) show homeopathic remedies are pure placebos. The fact, however, that the average result of the 10 trials scoring 5 points on the Jadad score contradicts this notion, is consistent with the hypothesis that some (by no means all) methodologically astute and highly convinced homeopaths have published results that look convincing but are, in fact, not credible.

But this is a curiosity and an aside. In the bigger picture it doesn't matter, because overall, even including these suspicious studies, the "meta-analyses" still show, overall, that homeopathy is no better than placebo. Meta-analyses?

Meta-analysis.

This will be our last big idea for a while, and this is one that has saved the lives of more people than you will ever meet. A meta-analysis is a very simple thing to do, in some respects: you just collect all the results from all the trials on a given subject, bung them into one big spreadsheet, and do the maths on that, instead of relying on your own gestalt intuition about all the results from each of your little trials. It's particularly useful when there have been lots of trials, each too small to give a conclusive answer, but all looking at the same topic.

So if there are, say, ten randomised, placebo-controlled trials looking at whether asthma symptoms get better with homeopathy, each of which has a paltry forty patients, you could put them all into one meta-analysis and effectively (in some respects) have a four-hundred-person trial to work with.
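As a rough sketch of that pooling step, here are ten invented trials of forty patients each (twenty on homeopathy, twenty on placebo); the counts are made up purely for illustration and are not taken from any real study.

    # Ten hypothetical small trials:
    # (improved on homeopathy, n on homeopathy, improved on placebo, n on placebo)
    trials = [
        (9, 20, 8, 20), (7, 20, 7, 20), (10, 20, 9, 20), (8, 20, 8, 20), (6, 20, 7, 20),
        (9, 20, 9, 20), (8, 20, 7, 20), (7, 20, 8, 20), (9, 20, 8, 20), (8, 20, 9, 20),
    ]

    treated = sum(t[1] for t in trials)           # 200 patients on the remedy
    controls = sum(t[3] for t in trials)          # 200 patients on placebo
    treated_improved = sum(t[0] for t in trials)
    controls_improved = sum(t[2] for t in trials)

    print(f"Homeopathy: {treated_improved}/{treated} improved "
          f"({treated_improved / treated:.0%})")
    print(f"Placebo:    {controls_improved}/{controls} improved "
          f"({controls_improved / controls:.0%})")
    # A real meta-analysis weights each trial by its size and precision rather
    # than simply adding the raw counts, but the underlying idea is the same.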

In some very famous cases-at least, famous in the world of academic medicine-meta-analyses have shown that a treatment previously believed to be ineffective is in fact rather good, but because the trials that had been done were each too small, individually, to detect the real benefit, nobody had been able to spot it.

As I said, information alone can be life-saving, and one of the greatest institutional innovations of the past thirty years is undoubtedly the Cochrane Collaboration, an international not-for-profit organisation of academics, which produces systematic summaries of the research literature on healthcare, including meta-analyses.

The logo of the Cochrane Collaboration features a simplified "blobbogram", a graph of the results from a landmark meta-analysis which looked at an intervention given to pregnant mothers. When people give birth prematurely, as you might expect, the babies are more likely to suffer and die. Some doctors in New Zealand had the idea that giving a short, cheap course of a steroid might help improve outcomes, and seven trials testing this idea were done between 1972 and 1981. Two of them showed some benefit from the steroids, but the remaining five failed to detect any benefit, and because of this, the idea didn't catch on.

Eight years later, in 1989, a meta-analysis was done by pooling all this trial data. If you look at the blobbogram in the logo on the previous page, you can see what happened. Each horizontal line represents a single study: if the line is over to the left, it means the steroids were better than placebo, and if it is over to the right, it means the steroids were worse. If the horizontal line for a trial touches the big vertical "nil effect" line going down the middle, then the trial showed no clear difference either way. One last thing: the longer a horizontal line is, the less certain the outcome of the study was.

Looking at the blobbogram, we can see that there are lots of not-very-certain studies, long horizontal lines, mostly touching the central vertical line of "no effect"; but they're all a bit over to the left, so they all seem to suggest that steroids might be beneficial, even if each study itself is not statistically significant.

The diamond at the bottom shows the pooled answer: that there is, in fact, very strong evidence indeed for steroids reducing the risk-by 30 to 50 per cent-of babies dying from the complications of immaturity. We should always remember the human cost of these abstract numbers: babies died unnecessarily because they were deprived of this life-saving treatment for a decade. They died, even when there was enough information available to know what would save them, because that information had not been synthesised together, and analysed systematically, in a meta-analysis.
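For the curious, this is roughly the arithmetic behind that diamond: each trial's risk ratio is weighted by how precise it is, and the weighted results are combined into one estimate with a much narrower range of uncertainty. The figures below are invented for illustration only; they are not the actual steroid trial results.

    import math

    # Hypothetical trials: (risk ratio for death, standard error of its logarithm)
    trials = [(0.70, 0.35), (0.85, 0.40), (0.60, 0.30), (0.95, 0.45),
              (0.75, 0.38), (0.80, 0.42), (0.65, 0.33)]

    weights = [1 / se ** 2 for _, se in trials]   # more precise trials count for more
    pooled_log = sum(w * math.log(rr) for (rr, _), w in zip(trials, weights)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))

    low = math.exp(pooled_log - 1.96 * pooled_se)
    high = math.exp(pooled_log + 1.96 * pooled_se)
    print(f"Pooled risk ratio {math.exp(pooled_log):.2f} (95% CI {low:.2f} to {high:.2f})")
    # Individually, most of these wide intervals cross 1.0 ("no clear difference");
    # pooled together, the combined estimate sits clearly on the "benefit" side.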

Back to homeopathy (you can see why I find it trivial now). A landmark meta-analysis was published recently in the Lancet. It was accompanied by an editorial titled: "The End of Homeopathy?" Shang et al. did a very thorough meta-analysis of a vast number of homeopathy trials, and they found, overall, adding them all up, that homeopathy performs no better than placebo.

The homeopaths were up in arms. If you mention this meta-analysis, they will try to tell you that it was a stitch-up. What Shang et al. did, essentially, like all the previous negative meta-analyses of homeopathy, was to exclude the poorer-quality trials from their analysis.

Homeopaths like to pick out the trials that give them the answer that they want to hear, and ignore the rest, a practice called "cherry-picking". But you can also cherry-pick your favourite meta-analyses, or misrepresent them. Shang et al. was only the latest in a long string of meta-analyses to show that homeopathy performs no better than placebo. What is truly amazing to me is that despite the negative results of these meta-analyses, homeopaths have continued-right to the top of the profession-to claim that these same meta-analyses support the use of homeopathy. They do this by quoting only the result for all trials included in each meta-analysis. This figure includes all of the poorer-quality trials. The most reliable figure, you now know, is for the restricted pool of the most "fair tests", and when you look at those, homeopathy performs no better than placebo. If this fascinates you (and I would be very surprised), then I am currently producing a summary with some colleagues, and you will soon be able to find it online at badscience.net, in all its glorious detail, explaining the results of the various meta-analyses performed on homeopathy.

Clinicians, pundits and researchers all like to say things like "There is a need for more research," because it sounds forward-thinking and open-minded. In fact that's not always the case, and it's a little-known fact that this very phrase has been effectively banned from the British Medical Journal for many years, on the grounds that it adds nothing: you may say what research is missing, on whom, how, measuring what, and why you want to do it, but the hand-waving, superficially open-minded call for "more research" is meaningless and unhelpful.

There have been over a hundred randomised placebo-controlled trials of homeopathy, and the time has come to stop. Homeopathy pills work no better than placebo pills, we know that much. But there is room for more interesting research.

People do experience that homeopathy is positive for them, but the action is likely to be in the whole process of going to see a homeopath, of being listened to, having some kind of explanation for your symptoms, and all the other collateral benefits of old-fashioned, paternalistic, reassuring medicine. (Oh, and regression to the mean.) So we should measure that; and here is the final superb lesson in evidence-based medicine that homeopathy can teach us: sometimes you need to be imaginative about what kinds of research you do, compromise, and be driven by the questions that need answering, rather than the tools available to you.

It is very common for researchers to research the things which interest them, in all areas of medicine; but they can be interested in quite different things from patients. One study actually thought to ask people with osteoarthritis of the knee what kind of research they wanted to be carried out, and the responses were fascinating: they wanted rigorous real-world evaluations of the benefits from physiotherapy and surgery, from educational and coping strategy interventions, and other pragmatic things. They didn't want yet another trial comparing one pill with another, or with placebo.

In the case of homeopathy, similarly, homeopaths want to believe that the power is in the pill, rather than in the whole process of going to visit a homeopath, having a chat and so on. It is crucially important to their professional identity. But I believe that going to see a homeopath is probably a helpful intervention, in some cases, for some people, even if the pills are just placebos. I think patients would agree, and I think it would be an interesting thing to measure. It would be easy, and you would do something called a pragmatic "waiting-list-controlled trial".

You take two hundred patients, say, all suitable for homeopathic treatment, currently in a GP clinic, and all willing to be referred on for homeopathy, then you split them randomly into two groups of one hundred. One group gets treated by a homeopath as normal, pills, consultation, smoke and voodoo, on top of whatever other treatment they are having, just like in the real world. The other group just sits on the waiting list. They get treatment as usual, whether that is "neglect", "GP treatment" or whatever, but no homeopathy. Then you measure outcomes, and compare who gets better the most.

You could argue that it would be a trivial positive finding, and that it's obvious the homeopathy group would do better; but it's the only piece of research really waiting to be done. This is a "pragmatic trial". The groups aren't blinded, but they couldn't possibly be in this kind of trial, and sometimes we have to accept compromises in experimental methodology. It would be a legitimate use of public money (or perhaps money from Boiron, the homeopathic pill company valued at $500 million), but there's nothing to stop homeopaths from just cracking on and doing it for themselves: because despite the homeopaths' fantasies, born out of a lack of knowledge, that research is difficult, magical and expensive, in fact such a trial would be very cheap to conduct.

In fact, it"s not really money that"s missing from the alternative therapy research community, especially in Britain: it"s knowledge of evidence-based medicine, and expertise in how to do a trial. Their literature and debates drip with ignorance, and vitriolic anger at anyone who dares to appraise the trials. Their university courses, as far as they ever even dare to admit what they teach on them (it"s all suspiciously hidden away), seem to skirt around such explosive and threatening questions. I"ve suggested in various places, including at academic conferences, that the single thing that would most improve the quality of evidence in CAM would be funding for a simple, evidence-based medicine hotline, which anyone thinking about running a trial in their clinic could phone up and get advice on how to do it properly, to avoid wasting effort on an "unfair test" that will rightly be regarded with contempt by all outsiders.

In my pipe dream (I'm completely serious, if you've got the money) you'd need a handout, maybe a short course that people did to cover the basics, so they weren't asking stupid questions, and phone support. In the meantime, if you're a sensible homeopath and you want to do a GP-controlled trial, you could maybe try the badscience website forums, where there are people who might be able to give some pointers (among the childish fighters and trolls...).

But would the homeopaths buy it? I think it would offend their sense of professionalism. You often see homeopaths trying to nuance their way through this tricky area, and they can't quite make their minds up. Here, for example, is a Radio 4 interview, archived in full online, where Dr Elizabeth Thompson (consultant homeopathic physician, and honorary senior lecturer at the Department of Palliative Medicine at the University of Bristol) has a go.

She starts off with some sensible stuff: homeopathy does work, but through non-specific effects, the cultural meaning of the process, the therapeutic relationship, it's not about the pills, and so on. She practically comes out and says that homeopathy is all about cultural meaning and the placebo effect. "People have wanted to say homeopathy is like a pharmaceutical compound," she says, "and it isn't, it is a complex intervention."

Then the interviewer asks: "What would you say to people who go along to their high street pharmacy, where you can buy homeopathic remedies, they have hay fever and they pick out a hay-fever remedy, I mean presumably that's not the way it works?" There is a moment of tension. Forgive me, Dr Thompson, but I felt you didn't want to say that the pills work, as pills, in isolation, when you buy them in a shop: apart from anything else, you'd already said that they don't.

But she doesn"t want to break ranks and say the pills don"t work, either. I"m holding my breath. How will she do it? Is there a linguistic structure complex enough, pa.s.sive enough, to negotiate through this? If there is, Dr Thompson doesn"t find it: "They might flick through and they might just be spot-on...[but] you"ve got to be very lucky to walk in and just get the right remedy." So the power is, and is not, in the pill: "P, and not-P", as philosophers of logic would say.

If they can"t finesse it with the "power is not in the pill" paradox, how else do the homeopaths get around all this negative data? Dr Thompson-from what I have seen-is a fairly clear-thinking and civilised homeopath. She is, in many respects, alone. Homeopaths have been careful to keep themselves outside of the civilising environment of the university, where the influence and questioning of colleagues can help to refine ideas, and weed out the bad ones. In their rare forays, they enter them secretively, walling themselves and their ideas off from criticism or review, refusing to share even what is in their exam papers with outsiders.

It is rare to find a homeopath engaging on the issue of the evidence, but what happens when they do? I can tell you. They get angry, they threaten to sue, they scream and shout at you at meetings, they complain spuriously and with ludicrous misrepresentations-time-consuming to expose, of course, but that's the point of harassment-to the Press Complaints Commission and your editor, they send hate mail, and accuse you repeatedly of somehow being in the pocket of big pharma (falsely, although you start to wonder why you bother having principles when faced with this kind of behaviour). They bully, they smear, to the absolute top of the profession, and they do anything they can in a desperate bid to shut you up, and avoid having a discussion about the evidence. They have even been known to threaten violence (I won't go into it here, but I manage these issues extremely seriously).

I"m not saying I don"t enjoy a bit of banter. I"m just pointing out that you don"t get anything quite like this in most other fields, and homeopaths, among all the people in this book, with the exception of the odd nutritionist, seem to me to be a uniquely angry breed. Experiment for yourself by chatting with them about evidence, and let me know what you find.

By now your head is hurting, because of all those mischievous, confusing homeopaths and their weird, labyrinthine defences: you need a lovely science massage. Why is evidence so complicated? Why do we need all of these clever tricks, these special research paradigms? The answer is simple: the world is much more complicated than simple stories about pills making people get better. We are human, we are irrational, we have foibles, and the power of the mind over the body is greater than anything you have previously imagined.

5 The Placebo Effect

For all the dangers of CAM, to me the greatest disappointment is the way it distorts our understanding of our bodies. Just as the Big Bang theory is far more interesting than the creation story in Genesis, so the story that science can tell us about the natural world is far more interesting than any fable about magic pills concocted by an alternative therapist. To redress that balance, I'm offering you a whirlwind tour of one of the most bizarre and enlightening areas of medical research: the relationship between our bodies and our minds, the role of meaning in healing, and in particular the "placebo effect".

Much like quackery, placebos became unfashionable in medicine once the biomedical model started to produce tangible results. An editorial in 1890 sounded its death knell, describing the case of a doctor who had injected his patient with water instead of morphine: she recovered perfectly well, but then discovered the deception, disputed the bill in court, and won. The editorial was a lament, because doctors have known that reassurance and a good bedside manner can be very effective for as long as medicine has existed. "Shall [the placebo] never again have an opportunity of exerting its wonderful psychological effects as faithfully as one of its more toxic congeners?" asked the Medical Press at the time.

Luckily, its use survived. Throughout history, the placebo effect has been particularly well documented in the field of pain, and some of the stories are striking. Henry Beecher, an American anaesthetist, wrote about operating on a soldier with horrific injuries in a World War II field hospital, using salt water because the morphine was all gone, and to his astonishment the patient was fine. Peter Parker, an American missionary, described performing surgery without anaesthesia on a Chinese patient in the mid-nineteenth century: after the operation, she "jumped upon the floor", bowed, and walked out of the room as if nothing had happened.

Theodor Kocher performed 1,600 thyroidectomies without anaesthesia in Berne in the 1890s, and I take my hat off to a man who can do complicated neck operations on conscious patients. Mitchel in the early twentieth century was performing full amputations and mastectomies, entirely without anaesthesia; and surgeons from before the invention of anaesthesia often described how some patients could tolerate knife cutting through muscle, and saw cutting through bone, perfectly awake, and without even clenching their teeth. You might be tougher than you think.

This is an interesting context in which to remember two televised stunts from 2006. The first was a rather melodramatic operation "under hypnosis" on Channel 4: "We just want to start the debate on this important medical issue," explained the production company Zigzag, known for making shows like Mile High Club and Streak Party. The operation, a trivial hernia repair, was performed with medical drugs but at a reduced dose, and treated as if it was a medical miracle.

The second was in Alternative Medicine: The Evidence, a rather gushing show on BBC2 presented by Kathy Sykes ("Professor of the Public Understanding of Science"). This series was the subject of a successful complaint at the highest level, on account of it misleading the audience. Viewers believed they had seen a patient having chest surgery with only acupuncture as anaesthesia: in fact this was not the case, and once again the patient had received an array of conventional medications to allow the operation to be performed.*

- The series also featured a brain-imaging experiment on acupuncture, funded by the BBC, and one of the scientists involved came out afterwards to complain not only that the results had been overinterpreted (which you would expect from the media, as we will see), but moreover, that the pressure from the funder-that is to say, the BBC-to produce a positive result was overwhelming. This is a perfect example of the things which you do not do in science, and the fact that it was masterminded by a "Professor of the Public Understanding of Science" goes some way towards explaining why we are in such a dismal position today. The programme was defended by the BBC in a letter with ten academic signatories. Several of these signatories have since said they did not sign the letter. The mind really does boggle.

When you consider these misleading episodes alongside the reality-that operations have frequently been performed with no anaesthetics, no placebos, no alternative therapists, no hypnotists and no TV producers-these televised episodes suddenly feel rather less dramatic.

But these are just stories, and the plural of anecdote is not data. Everyone knows about the power of the mind-whether it's stories of mothers enduring biblical pain to avoid dropping a boiling kettle on their baby, or people lifting cars off their girlfriend like the Incredible Hulk-but devising an experiment that teases the psychological and cultural benefits of a treatment away from the biomedical effects is trickier than you might think. After all, what do you compare a placebo against? Another placebo? Or no treatment at all?

The placebo on trial.

In most studies we don"t have a "no treatment" group to compare both the placebo and the drug against, and for a very good ethical reason: if your patients are ill, you shouldn"t be leaving them untreated simply because of your own mawkish interest in the placebo effect. In fact, in most cases today it is considered wrong even to use a placebo in a trial: whenever possible you should compare your new treatment against the best pre-existing, current treatment.

This is not just for ethical reasons (although it is enshrined in the Declaration of Helsinki, the international ethics bible). Placebo-controlled trials are also frowned upon by the evidence-based medicine community, because they know it's an easy way to cook the books and get easy positive trial data to support your company's big new investment. In the real world of clinical practice, patients and doctors aren't so interested in whether a new drug works better than nothing, they're interested in whether it works better than the best treatment they already have.

There have been occasions in medical history where researchers were more cavalier. The Tuskegee Syphilis Study, for example, is one of America's most shaming hours, if it is possible to say such a thing these days: 399 poor, rural African-American men were recruited by the US Public Health Service in 1932 for an observational study to see what happened if syphilis was left, very simply, untreated. Astonishingly, the study ran right through to 1972. In 1949 penicillin was introduced as an effective treatment for syphilis. These men did not receive that drug, nor did they receive Salvarsan, nor indeed did they receive an apology until 1997, from Bill Clinton.

If we don"t want to do unethical scientific experiments with "no treatment" groups on sick people, how else can we determine the size of the placebo effect on modern illnesses? Firstly, and rather ingeniously, we can compare one placebo with another.

The first experiment in this field was a meta-analysis by Daniel Moerman, an anthropologist who has specialised in the placebo effect. He took the trial data from placebo-controlled trials of gastric ulcer medication, which was his first cunning move, because gastric ulcers are an excellent thing to study: their presence or absence is determined very objectively, with a gastroscopy camera passed down into the stomach, to avoid any doubt.

Moerman took only the placebo data from these trials, and then, in his second ingenious move, from all of these studies, of all the different drugs, with their different dosing regimes, he took the ulcer healing rate from the placebo arm of trials where the "placebo" treatment was two sugar pills a day, and compared that with the ulcer healing rate in the placebo arm of trials where the placebo was four sugar pills a day. He found, spectacularly, that four sugar pills are better than two (these findings have also been replicated in a different dataset, for those who are switched on enough to worry about the replicability of important clinical findings).
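As a small sketch of that comparison (with invented counts, since Moerman's actual figures are not reproduced here), the calculation is nothing more exotic than this:

    # Hypothetical pooled counts of healed ulcers in the two kinds of placebo arm
    two_pills_a_day = {"healed": 90, "patients": 200}
    four_pills_a_day = {"healed": 120, "patients": 200}

    rate_two = two_pills_a_day["healed"] / two_pills_a_day["patients"]
    rate_four = four_pills_a_day["healed"] / four_pills_a_day["patients"]

    print(f"Healing rate on two sugar pills a day:  {rate_two:.0%}")
    print(f"Healing rate on four sugar pills a day: {rate_four:.0%}")
    # The pills are identical and inert, so any consistent gap between these two
    # rates can only come from the ritual of taking more of them.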

What the treatment looks like.

So four pills are better than two: but how can this be? Does a placebo sugar pill simply exert an effect like any other pill? Is there a dose-response curve, as pharmacologists would find for any other drug? The answer is that the placebo effect is about far more than just the pill: it is about the cultural meaning of the treatment. Pills don't simply manifest themselves in your stomach: they are given in particular ways, they take varying forms, and they are swallowed with expectations, all of which have an impact on a person's beliefs about their own health, and in turn, on outcome. Homeopathy, for example, is a perfect illustration of the value of ceremony.

I understand this might well seem improbable to you, so I've corralled some of the best data on the placebo effect into one place, and the challenge is this: see if you can come up with a better explanation for what is, I guarantee, a seriously strange set of experimental results.

First up, Blackwell [1972] did a set of experiments on fifty-seven college students to determine the effect of colour-as well as the number of tablets-on the effects elicited. The subjects were sitting through a boring hour-long lecture, and were given either one or two pills, which were either pink or blue. They were told that they could expect to receive either a stimulant or a sedative. Since these were psychologists, and this was back when you could do whatever you wanted to your subjects-even lie to them-the treatment that all the students received consisted simply of sugar pills, but of different colours.

Afterwards, when they measured alertness-as well as any subjective effects-the researchers found that two pills were more effective than one, as we might have expected (and two pills were better at eliciting side-effects too). They also found that colour had an effect on outcome: the pink sugar tablets were better at maintaining concentration than the blue ones. Since colours in themselves have no intrinsic pharmacological properties, the difference in effect could only be due to the cultural meanings of pink and blue: pink is alerting, blue is cool. Another study suggested that Oxazepam, a drug similar to Valium (which was once unsuccessfully prescribed by our GP for me as a hyperactive child) was more effective at treating anxiety in a green tablet, and more effective for depression when yellow.

Drug companies, more than most, know the benefits of good branding: they spend more on PR, after all, than they do on research and development. As you'd expect from men of action with large houses in the country, they put these theoretical ideas into practice: so Prozac, for example, is white and blue; and in case you think I'm cherry-picking here, a survey of the colour of pills currently on the market found that stimulant medication tends to come in red, orange or yellow tablets, while antidepressants and tranquillisers are generally blue, green or purple.

Issues of form go much deeper than colour. In 1970 a sedative-chlordiazepoxide-was found to be more effective in capsule form than pill form, even for the very same drug, in the very same dose: capsules at the time felt newer, somehow, and more sciencey. Maybe you've caught yourself splashing out and paying extra for ibuprofen capsules in the chemist's.

Route of administration has an effect as well: salt-water injections have been shown in three separate experiments to be more effective than sugar pills for blood pressure, for headaches and for postoperative pain, not because of any physical benefit of salt-water injection over sugar pills-there isn't one-but because, as everyone knows, an injection is a much more dramatic intervention than just taking a pill.

Closer to home for the alternative therapists, the BMJ recently published an article comparing two different placebo treatments for arm pain, one of which was a sugar pill, and one of which was a "ritual", a treatment modelled on acupuncture: the trial found that the more elaborate placebo ritual had a greater benefit.

But the ultimate testament to the social construction of the placebo effect must be the bizarre story of packaging. Pain is an area where you might suspect that expectation would have a particularly significant effect. Most people have found that they can take their minds off pain-to at least some extent-with distraction, or have had a toothache which got worse with stress.

Branthwaite and Cooper did a truly extraordinary study in 1981, looking at 835 women with headaches. It was a four-armed study, where the subjects were given either aspirin or placebo pills, and these pills in turn were packaged either in blank, bland, neutral boxes, or in full, flashy, brand-name packaging. They found-as you'd expect-that aspirin had more of an effect on headaches than sugar pills; but more than that, they found that the packaging itself had a beneficial effect, enhancing the benefit of both the placebo and the aspirin.
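A four-armed design like this lets you pull the two effects apart with simple averaging: compare aspirin against placebo regardless of the box, then branded against plain regardless of the pill. The relief scores below are invented for illustration, not Branthwaite and Cooper's data.

    # Hypothetical mean headache-relief scores for the four arms
    relief = {
        ("aspirin", "branded"): 6.2,
        ("aspirin", "plain"):   5.4,
        ("placebo", "branded"): 4.1,
        ("placebo", "plain"):   3.3,
    }

    drug_effect = (
        (relief[("aspirin", "branded")] + relief[("aspirin", "plain")]) / 2
        - (relief[("placebo", "branded")] + relief[("placebo", "plain")]) / 2
    )
    packaging_effect = (
        (relief[("aspirin", "branded")] + relief[("placebo", "branded")]) / 2
        - (relief[("aspirin", "plain")] + relief[("placebo", "plain")]) / 2
    )

    print(f"Aspirin over placebo, averaged across packaging: {drug_effect:.1f} points")
    print(f"Branded over plain, averaged across drug:        {packaging_effect:.1f} points")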

People I know still insist on buying brand-name painkillers. As you can imagine, I've spent half my life trying to explain to them why this is a waste of money: but in fact the paradox of Branthwaite and Cooper's experimental data is that they were right all along. Whatever pharmacology theory tells you, that brand-named version is better, and there's just no getting away from it. Part of that might be the cost: a recent study looking at pain caused by electric shocks showed that a pain relief treatment was stronger when subjects were told it cost $2.50 than when they were told it cost 10c. (And a paper currently in press shows that people are more likely to take advice when they have paid for it.)

It gets better-or worse, depending on how you feel about your world view slipping sideways. Montgomery and Kirsch [1996] told college students they were taking part in a study on a new local anaesthetic called "trivaricaine". Trivaricaine is brown, you paint it on your skin, it smells like a medicine, and it's so potent you have to wear gloves when you handle it: or that's what they implied to the students. In fact it's made of water, iodine and thyme oil (for the smell), and the experimenter (who also wore a white coat) was only using rubber gloves for a sense of theatre. None of these ingredients will affect pain.

The trivaricaine was painted onto one or other of the subjects' index fingers, and the experimenters then applied painful pressure with a vice. One after another, in varying orders, pain was applied, trivaricaine was applied, and as you would expect by now, the subjects reported less pain, and less unpleasantness, for the fingers that were pre-treated with the amazing trivaricaine. This is a placebo effect, but the pills have gone now.

It gets stranger. Sham ultrasound is beneficial for dental pain, placebo operations have been shown to be beneficial in knee pain (the surgeon just makes fake keyhole surgery holes in the side and mucks about for a bit as if he's doing something useful), and placebo operations have even been shown to improve angina.

That"s a pretty big deal. Angina is the pain you get when there"s not enough oxygen getting to your heart muscle for the work it"s doing. That"s why it gets worse with exercise: because you"re demanding more work from the heart muscle. You might get a similar pain in your thighs after bounding up ten flights of stairs, depending on how fit you are.

Treatments that help angina usually work by dilating the blood vessels to the heart, and a group of chemicals called nitrates are used for this purpose very frequently. They relax the smooth muscle in the body, which dilates the arteries so more blood can get through (they also relax other bits of smooth muscle in the body, including your anal sphincter, which is why a variant is sold as "liquid gold" in sex shops).

In the 1950s there was an idea that you could get blood vessels in the heart to grow back, and thicker, if you tied off an artery on the front of the chest wall that wasn't very important, but which branched off the main heart arteries. The idea was that this would send messages back to the main branch of the artery, telling it that more artery growth was needed, so the body would be tricked.

Unfortunately this idea turned out to be nonsense, but only after a fashion. In 1959 a placebo-controlled trial of the operation was performed: in some operations they did the whole thing properly, but in the "placebo" operations they went through the motions but didn't tie off any arteries. It was found that the placebo operation was just as good as the real one-people seemed to get a bit better in both cases, and there was little difference between the groups-but the most strange thing about the whole affair was that nobody made a fuss at the time: the real operation wasn't any better than a sham operation, sure, but how could we explain the fact that people had been sensing an improvement from the operation for a very long time? Nobody thought of the power of placebo. The operation was simply binned.
