Clean up the data

Look at your graphs. There will be some anomalous "outliers", or points which lie a long way from the others. If they are making your drug look bad, just delete them. But if they are helping your drug look good, even if they seem to be spurious results, leave them in.

"The best of five...no...seven...no...nine!"

If the difference between your drug and placebo becomes significant four and a half months into a six-month trial, stop the trial immediately and start writing up the results: things might get less impressive if you carry on. Alternatively, if at six months the results are "nearly significant", extend the trial by another three months.
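To see why this is cheating, here is a minimal simulation, entirely my own sketch rather than anything from a real trial, of a drug that does nothing at all, with the accumulating results "analysed" at six interim looks. It assumes Python with numpy installed; the number of patients, the number of looks and the trial itself are arbitrary illustrations.

```python
import math
import numpy as np

rng = np.random.default_rng(0)

def two_sided_p(z):
    # two-sided p-value for a z statistic
    return math.erfc(abs(z) / math.sqrt(2))

def significant_at_some_look(n_per_arm=300, looks=6, alpha=0.05):
    """One simulated trial of a completely useless drug, peeked at
    'looks' times; returns True if any peek gives p < alpha."""
    drug = rng.normal(0.0, 1.0, n_per_arm)     # outcomes on the drug
    placebo = rng.normal(0.0, 1.0, n_per_arm)  # identical outcomes on placebo
    for n in np.linspace(n_per_arm / looks, n_per_arm, looks).astype(int):
        diff = drug[:n].mean() - placebo[:n].mean()
        se = math.sqrt(drug[:n].var(ddof=1) / n + placebo[:n].var(ddof=1) / n)
        if two_sided_p(diff / se) < alpha:
            return True
    return False

trials = 2000
hits = sum(significant_at_some_look() for _ in range(trials))
print(f"Useless drug declared 'significant' at some point: {hits / trials:.0%}")
# With six peeks this comes out well above the nominal 5 per cent,
# because luck gets six chances instead of one.
```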

Torture the data

If your results are bad, ask the computer to go back and see if any particular subgroups behaved differently. You might find that your drug works very well in Chinese women aged fifty-two to sixty-one. "Torture the data and it will confess to anything," as they say at Guantanamo Bay.

Try every button on the computer

If you're really desperate, and analysing your data the way you planned does not give you the result you wanted, just run the figures through a wide selection of other statistical tests, even if they are entirely inappropriate, at random.
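To give a flavour of how productive this kind of torture is, here is a small sketch of my own, assuming Python with numpy and using made-up subgroups rather than any real trial: a drug with no effect whatsoever, tested separately in twenty arbitrary slices of the same patients.

```python
import math
import numpy as np

rng = np.random.default_rng(1)

def two_sided_p(a, b):
    # crude two-sample z-test between groups a and b
    diff = a.mean() - b.mean()
    se = math.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
    return math.erfc(abs(diff / se) / math.sqrt(2))

n = 400
treated = rng.integers(0, 2, n).astype(bool)   # who got the (useless) drug
outcome = rng.normal(0.0, 1.0, n)              # the outcome is pure noise

# Twenty arbitrary ways of slicing the same patients into subgroups
subgroups = rng.integers(0, 2, (20, n)).astype(bool)

for i, in_group in enumerate(subgroups):
    p = two_sided_p(outcome[treated & in_group], outcome[~treated & in_group])
    if p < 0.05:
        print(f"Eureka! The drug 'works' in subgroup {i} (p = {p:.3f})")
# Slice the data enough ways and something will usually come up 'significant'.
```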



And when you"re finished, the most important thing, of course, is to publish wisely. If you have a good trial, publish it in the biggest journal you can possibly manage. If you have a positive trial, but it was a completely unfair test, which will be obvious to everyone, then put it in an obscure journal (published, written and edited entirely by the industry): remember, the tricks we have just described hide nothing, and will be obvious to anyone who reads your paper, but only if they read it very attentively, so it"s in your interest to make sure it isn"t read beyond the abstract. Finally, if your finding is really embarra.s.sing, hide it away somewhere and cite "data on file". n.o.body will know the methods, and it will only be noticed if someone comes pestering you for the data to do a systematic review. Hopefully, that won"t be for ages.

How can this be possible?

When I explain this abuse of research to friends from outside medicine and academia, they are rightly amazed. "How can this be possible?" they say. Well, firstly, much bad research comes down to incompetence. Many of the methodological errors described above can come about by wishful thinking, as much as mendacity. But is it possible to prove foul play?

On an individual level, it is sometimes quite hard to show that a trial has been deliberately rigged to give the right answer for its sponsors. Overall, however, the picture emerges very clearly. The issue has been studied so frequently that in 2003 a systematic review found thirty separate studies looking at whether funding in various groups of trials affected the findings. Overall, studies funded by a pharmaceutical company were found to be four times more likely to give results that were favourable to the company than independent studies.

One review of bias tells a particularly Alice in Wonderland story. Fifty-six different trials comparing painkillers like ibuprofen, diclofenac and so on were found. People often invent new versions of these drugs in the hope that they might have fewer side-effects, or be stronger (or stay in patent and make money). In every single trial the sponsoring manufacturer's drug came out as better than, or equal to, the others in the trial. On not one occasion did the manufacturer's drug come out worse. Philosophers and mathematicians talk about "transitivity": if A is better than B, and B is better than C, then C cannot be better than A. To put it bluntly, this review of fifty-six trials exposed a singular absurdity: all of these drugs were better than each other.

But there is a surprise waiting around the corner. Astonishingly, when the methodological flaws in studies are examined, it seems that industry-funded trials actually turn out to have better research methods, on average, than independent trials.

The most that could be pinned on the drug companies were some fairly trivial howlers: things like using inadequate doses of the competitor's drug (as we said above), or making claims in the conclusions section of the paper that exaggerated a positive finding. But these, at least, were transparent flaws: you only had to read the trial to see that the researchers had given a miserly dose of a painkiller; and you should always read the methods and results section of a trial to decide what its findings are, because the discussion and conclusion pages at the end are like the comment pages in a newspaper. They're not where you get your news from.

How can we explain, then, the apparent fact that industry-funded trials are so often so glowing? How can all the drugs simultaneously be better than all of the others? The crucial kludge may happen after the trial is finished.

Publication bias and suppressing negative results

"Publication bias" is a very interesting and very human phenomenon. For a number of reasons, positive trials are more likely to get published than negative ones. It's easy enough to understand, if you put yourself in the shoes of the researcher. Firstly, when you get a negative result, it feels as if it's all been a bit of a waste of time. It's easy to convince yourself that you found nothing, when in fact you discovered a very useful piece of information: that the thing you were testing doesn't work.

Rightly or wrongly, finding out that something doesn't work probably isn't going to win you a Nobel Prize-there's no justice in the world-so you might feel demotivated about the project, or prioritise other projects ahead of writing up and submitting your negative finding to an academic journal, and so the data just sits, rotting, in your bottom drawer. Months pass. You get a new grant. The guilt niggles occasionally, but Monday's your day in clinic, so Tuesday's the beginning of the week really, and there's the departmental meeting on Wednesday, so Thursday's the only day you can get any proper work done, because Friday's your teaching day, and before you know it, a year has passed, your supervisor retires, the new guy doesn't even know the experiment ever happened, and the negative trial data is forgotten forever, unpublished. If you are smiling in recognition at this paragraph, then you are a very bad person.

Even if you do get around to writing up your negative finding, it's hardly news. You're probably not going to get it into a big-name journal, unless it was a massive trial on something everybody thought was really whizzbang until your negative trial came along and blew it out of the water, so as well as this being a good reason for you not bothering, it also means the whole process will be heinously delayed: it can take a year for some of the slacker journals to reject a paper. Every time you submit to a different journal you might have to re-format the references (hours of tedium). If you aim too high and get a few rejections, it could be years until your paper comes out, even if you are being diligent: that's years of people not knowing about your study.

Publication bias is common, and in some fields it is more rife than in others. In 1995, only 1 per cent of all articles published in alternative medicine journals gave a negative result. The most recent figure is 5 per cent negative. This is very, very low, although to be fair, it could be worse. A review in 1998 looked at the entire canon of Chinese medical research, and found that not one single negative trial had ever been published. Not one. You can see why I use CAM as a simple teaching tool for evidence-based medicine.

Generally the influence of publication bias is more subtle, and you can get a hint that publication bias exists in a field by doing something very clever called a funnel plot. This requires, only briefly, that you pay attention.

If there are lots of trials on a subject, then quite by chance they will all give slightly different answers, but you would expect them all to cluster fairly equally around the true answer. You would also expect that the bigger studies, with more participants in them, and with better methods, would be more closely clustered around the correct answer than the smaller studies: the smaller studies, meanwhile, will be all over the shop, unusually positive and negative at random, because in a study with, say, twenty patients, you only need three freak results to send the overall conclusions right off.

A funnel plot is a clever way of graphing this. You put the effect (i.e., how effective the treatment is) on the x-axis, from left to right. Then, on the y-axis (top-to-bottom, maths-skivers) you put how big the trial was, or some other measure of how accurate it was. If there is no publication bias, you should see a nice inverted funnel: the big, accurate trials all cluster around each other at the top of the funnel, and then as you go down the funnel, the little, inaccurate trials gradually spread out to the left and right, as they become more and more wildly inaccurate-both positively and negatively.

If there is publication bias, however, the results will be skewed. The smaller, more rubbish negative trials seem to be missing, because they were ignored-nobody had anything to lose by letting these tiny, unimpressive trials sit in their bottom drawer-and so only the positive ones were published. Not only has publication bias been demonstrated in many fields of medicine, but a paper has even found evidence of publication bias in studies of publication bias. Here is the funnel plot for that paper. This is what passes for humour in the world of evidence-based medicine.
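For the graphically inclined, here is a rough sketch of the idea, my own simulation with invented trials rather than data from any real review, assuming Python with numpy and matplotlib: the honest funnel on the left, and on the right the lopsided version you get once the small, unflattering trials stay in the drawer.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)

true_effect = 0.3                        # the "true answer" every trial is estimating
sizes = rng.integers(20, 2000, 200)      # 200 trials of wildly different sizes
se = 2 / np.sqrt(sizes)                  # bigger trial, smaller standard error
observed = rng.normal(true_effect, se)   # each trial's observed effect

# Publication bias: small trials only see print if they flatter the treatment
published = (sizes > 400) | (observed > true_effect)

fig, (left, right) = plt.subplots(1, 2, sharex=True, sharey=True)
left.scatter(observed, sizes, s=8)
left.set_title("All trials: a tidy funnel")
right.scatter(observed[published], sizes[published], s=8)
right.set_title("Published trials only: lopsided")
for ax in (left, right):
    ax.axvline(true_effect, linestyle="--")
    ax.set_xlabel("observed effect")
left.set_ylabel("trial size (participants)")
plt.show()
```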

The most heinous recent case of publication bias has been in the area of SSRI antidepressant drugs, as has been shown in various papers. A group of academics published a paper in the New England Journal of Medicine at the beginning of 2008 which listed all the trials on SSRIs which had ever been formally registered with the FDA, and examined the same trials in the academic literature. Thirty-seven studies were assessed by the FDA as positive: with one exception, every single one of those positive trials was properly written up and published. Meanwhile, twenty-two studies that had negative or iffy results were simply not published at all, and eleven were written up and published in a way that described them as having a positive outcome.

This is more than cheeky. Doctors need reliable information if they are to make helpful and safe decisions about prescribing drugs to their patients. Depriving them of this information, and deceiving them, is a major moral crime. If I wasn't writing a light and humorous book about science right now, I would descend into gales of rage.

Duplicate publication

Drug companies can go one better than neglecting negative studies. Sometimes, when they get positive results, instead of just publishing them once, they publish them several times, in different places, in different forms, so that it looks as if there are lots of different positive trials. This is particularly easy if you've performed a large "multicentre" trial, because you can publish overlapping bits and pieces from each centre separately, or in different permutations. It's also a very clever way of kludging the evidence, because it's almost impossible for the reader to spot.

A classic piece of detective work was performed in this area by a vigilant anaesthetist from Oxford called Martin Tramer, who was looking at the efficacy of a nausea drug called ondansetron. He noticed that lots of the data in a meta-analysis he was doing seemed to be replicated: the results for many individual patients had been written up several times, in slightly different forms, in apparently different studies, in different journals. Crucially, data which showed the drug in a better light were more likely to be duplicated than the data which showed it to be less impressive, and overall this led to a 23 per cent overestimate of the drug's efficacy.

Hiding harm That"s how drug companies dress up the positive results. What about the darker, more headline-grabbing side, where they hide the serious harms?

Side-effects are a fact of life: they need to be accepted, managed in the context of benefits, and carefully monitored, because the unintended consequences of interventions can be extremely serious. The stories that grab the headlines are ones where there is foul play, or a cover-up, but in fact important findings can also be missed for much more innocent reasons, like the normal human processes of accidental neglect in publication bias, or because the worrying findings are buried from view in the noise of the data.

Anti-arrhythmic drugs are an interesting example. People who have heart attacks get irregular heart rhythms fairly commonly (because bits of the timekeeping apparatus in the heart have been damaged), and they also commonly die from them. Anti-arrhythmic drugs are used to treat and prevent irregular rhythms in people who have them. Why not, thought doctors, just give them to everyone who has had a heart attack? It made sense on paper, they seemed safe, and nobody knew at the time that they would actually increase the risk of death in this group-because that didn't make sense from the theory (like with antioxidants). But they do, and at the peak of their use in the 1980s, anti-arrhythmic drugs were causing comparable numbers of deaths to the total number of Americans who died in the Vietnam war. Information that could have helped to avert this disaster was sitting, tragically, in a bottom drawer, as a researcher later explained:

When we carried out our study in 1980 we thought that the increased death rate...was an effect of chance...The development of [the drug] was abandoned for commercial reasons, and so this study was therefore never published; it is now a good example of "publication bias". The results described here...might have provided an early warning of trouble ahead.

That was neglect, and wishful thinking. But sometimes it seems that dangerous effects from drugs can be either deliberately downplayed or, worse than that, simply not published.

There has been a string of major scandals from the pharmaceutical industry recently, in which it seems that evidence of harm for drugs including Vioxx and the SSRI antidepressants has gone missing in action. It didn't take long for the truth to out, and anybody who claims that these issues have been brushed under the medical carpet is simply ignorant. They were dealt with, you'll remember, in the three highest-ranking papers in the BMJ's archive. They are worth looking at again, in more detail.

Vioxx

Vioxx was a painkiller developed by the company Merck and approved by the American FDA in 1999. Many painkillers can cause gut problems-ulcers and more-and the hope was that this new drug might not have such side-effects. This was examined in a trial called VIGOR, comparing Vioxx with an older drug, naproxen: and a lot of money was riding on the outcome. The trial had mixed results. Vioxx was no better at relieving the symptoms of rheumatoid arthritis, but it did halve the risk of gastrointestinal events, which was excellent news. But an increased risk of heart attacks was also found.

When the VIGOR trial was published, however, this cardiovascular risk was hard to see. There was an "interim analysis" for heart attacks and ulcers, where ulcers were counted for longer than heart attacks. It wasn't described in the publication, and it overstated the advantage of Vioxx regarding ulcers, while understating the increased risk of heart attacks. "This untenable feature of trial design," said a swingeing and unusually critical editorial in the New England Journal of Medicine, "which inevitably skewed the results, was not disclosed to the editors or the academic authors of the study." Was it a problem? Yes. For one thing, three additional myocardial infarctions occurred in the Vioxx group in the month after they stopped counting, while none occurred in the naproxen control group.

An internal memo from Edward Scolnick, the company's chief scientist, shows that the company knew about this cardiovascular risk ("It is a shame but it is a low incidence and it is mechanism based as we worried it was"). The New England Journal of Medicine was not impressed, publishing a pair of spectacularly critical editorials.

The worrying excess of heart attacks was only really picked up by people examining the FDA data, something that doctors tend-of course-not to do, as they read academic journal articles at best. In an attempt to explain the moderate extra risk of heart attacks that could be seen in the final paper, the authors proposed something called "the naproxen hypothesis": Vioxx wasn't causing heart attacks, they suggested, but naproxen was preventing them. There is no accepted evidence that naproxen has a strong protective effect against heart attacks.

The internal memo, discussed at length in the coverage of the case, suggested that the company was concerned at the time. Eventually more evidence of harm emerged. Vioxx was taken off the market in 2004; but analysts from the FDA estimated that it caused between 88,000 and 139,000 heart attacks, 30 to 40 per cent of which were probably fatal, in its five years on the market. It's hard to be sure if that figure is reliable, but looking at the pattern of how the information came out, it's certainly felt, very widely, that both Merck and the FDA could have done much more to mitigate the damage done over the many years of this drug's lifespan, after the concerns were apparent to them. Data in medicine is important: it means lives. Merck has not admitted liability, and has proposed a $4.85 billion settlement in the US.

Authors forbidden to publish data

This all seems pretty bad. Which researchers are doing it, and why can't we stop them? Some, of course, are mendacious. But many have been bullied or pressured not to reveal information about the trials they have performed, funded by the pharmaceutical industry.

Here are two extreme examples of what is, tragically, a fairly common phenomenon. In 2000, a US company filed a claim against both the lead investigators and their universities in an attempt to block publication of a study on an HIV vaccine that found the product was no better than placebo. The investigators felt they had to put patients before the product. The company felt otherwise. The results were published in JAMA that year.

In the second example, Nancy Olivieri, director of the Toronto Haemoglobinopathies Programme, was conducting a clinical trial on deferiprone, a drug which removes excess iron from the bodies of patients who become iron-overloaded after many blood transfusions. She was concerned when she saw that iron concentrations in the liver seemed to be poorly controlled in some of the patients, exceeding the safety threshold for increased risk of cardiac disease and early death. More extended studies suggested that deferiprone might accelerate the development of hepatic fibrosis.

The drug company, Apotex, threatened Olivieri, repeatedly and in writing, that if she published her findings and concerns they would take legal action against her. With great courage-and, shamefully, without the support of her university-Olivieri presented her findings at several scientific meetings and in academic journals. She believed she had a duty to disclose her concerns, regardless of the personal consequences. It should never have been necessary for her to make that decision.

The single cheap solution that will solve all of the problems in the entire world

What's truly extraordinary is that almost all of these problems-the suppression of negative results, data dredging, hiding unhelpful data, and more-could largely be solved with one very simple intervention that would cost almost nothing: a clinical trials register, public, open, and properly enforced. This is how it would work. You're a drug company. Before you even start your study, you publish the "protocol" for it, the methods section of the paper, somewhere public. This means that everyone can see what you're going to do in your trial, what you're going to measure, how, in how many people, and so on, before you start.

The problems of publication bias, duplicate publication and hidden data on side-effects-which all cause unnecessary death and suffering-would be eradicated overnight, in one fell swoop. If you registered a trial, and conducted it, but it didn't appear in the literature, it would stick out like a sore thumb. Everyone, basically, would assume you had something to hide, because you probably would. There are trials registers at present, but they are a mess.

How much of a mess is illustrated by this last drug company ruse: "moving the goalposts". In 2002 Merck and Schering-Plough began a trial to look at Ezetimibe, a drug to reduce cholesterol. They started out saying they were going to measure one thing as their test of whether the drug worked, but then announced, after the results were in, that they were going to count something else as the real test instead. This was spotted, and they were publicly rapped. Why? Because if you measure lots of things (as they did), some might be positive simply by chance. You cannot find your starting hypothesis in your final results. It makes the stats go all wonky.
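The arithmetic behind that is simple enough to sketch. Assuming, purely for illustration, ten independent outcomes each tested at the conventional p < 0.05 threshold:

```python
alpha = 0.05      # conventional significance threshold
outcomes = 10     # number of things measured (illustrative, and assumed independent)

p_at_least_one_fluke = 1 - (1 - alpha) ** outcomes
print(f"Chance of at least one 'positive' by luck alone: {p_at_least_one_fluke:.0%}")
# About 40 per cent, even though every individual test looks perfectly respectable,
# which is why you have to name your real outcome before the results are in.
```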

Adverts "Clomicalm tablets are the only medication approved for the treatment of separation anxiety in dogs."

There are currently no direct-to-consumer drug adverts in Britain, which is a shame, because the ones in America are properly bizarre, especially the TV ones. Your life is in disarray, your restless legs/migraine/cholesterol have taken over, all is panic, there is no sense anywhere. Then, when you take the right pill, suddenly the screen brightens up into a warm yellow, granny's laughing, the kids are laughing, the dog's tail is wagging, some nauseating child is playing with the hose on the lawn, spraying a rainbow of water into the sunshine whilst absolutely laughing his head off as all your relationships suddenly become successful again. Life is good.

Patients are so much more easily led than doctors by drug company advertising that the budget for direct-to-consumer advertising in America has risen twice as fast as the budget for addressing doctors directly. These adverts have been closely studied by medical academic researchers, and have been repeatedly shown to increase patients' requests for the advertised drugs, as well as doctors' prescriptions for them. Even adverts "raising awareness of a condition" under tighter Canadian regulations have been shown to double demand for a specific drug to treat that condition.

This is why drug companies are keen to sponsor patient groups, or to exploit the media for their campaigns, as has been seen recently in the news stories singing the praises of the breast cancer drug Herceptin, or Alzheimer's drugs of borderline efficacy.

These advocacy groups demand vociferously in the media that the companies' drugs be funded by the NHS. I know people associated with these patient advocacy groups-academics-who have spoken out and tried to change their stance, without success: because in the case of the British Alzheimer's campaign in particular, it struck many people that the demands were rather one-sided. The National Institute for Clinical Excellence (NICE) concluded that it couldn't justify paying for Alzheimer's drugs, partly because the evidence for their efficacy was weak, and often looked only at soft, surrogate outcomes. The evidence is indeed weak, because the drug companies have failed to subject their medications to sufficiently rigorous testing on real-world outcomes, rigorous testing that would be much less guaranteed to produce a positive result. Does the Alzheimer's Society challenge the manufacturers to do better research? Do its members walk around with large placards campaigning against "surrogate outcomes in drugs research", demanding "More Fair Tests"? No.

Oh G.o.d. Everybody"s bad. How did things get so awful?

12 How the Media Promote the Public Misunderstanding of Science

We need to make some sense of all this, and appreciate just how deep into our culture the misunderstandings and misrepresentations of science go. If I am known at all, it is for dismantling foolish media stories about science: it is the bulk of my work, my oeuvre, and I am slightly ashamed to say that I have over five hundred stories to choose from, in illustrating the points I intend to make here. You may well count this as obsessional.

We have covered many of the themes elsewhere: the seductive march to medicalise everyday life; the fantasies about pills, mainstream and quack; and the ludicrous health claims about food, where journalists are every bit as guilty as nutritionists. But here I want to focus on the stories that can tell us about the way science is perceived, and the repetitive, structural patterns in how we have been misled.

My basic hypothesis is this: the people who run the media are humanities graduates with little understanding of science, who wear their ignorance as a badge of honour. Secretly, deep down, perhaps they resent the fact that they have denied themselves access to the most significant developments in the history of Western thought from the past two hundred years; but there is an attack implicit in all media coverage of science: in their choice of stories, and the way they cover them, the media create a parody of science. On this template, science is portrayed as groundless, incomprehensible, didactic truth statements from scientists, who themselves are socially powerful, arbitrary, unelected authority figures. They are detached from reality; they do work that is either wacky or dangerous, but either way, everything in science is tenuous, contradictory, probably going to change soon and, most ridiculously, "hard to understand". Having created this parody, the commentariat then attack it, as if they were genuinely critiquing what science is all about.

Science stories generally fall into one of three categories: the wacky stories, the "breakthrough" stories, and the "scare" stories. Each undermines and distorts science in its own idiosyncratic way. We'll do them in order.

Wacky stories-money for nothing

If you want to get your research in the media, throw away the autoclave, abandon the pipette, delete your copy of Stata, and sell your soul to a PR company.

At Reading University there is a man called Dr Kevin Warwick, and he has been a fountain of eye-catching stories for some time. He puts a chip from a wireless ID card in his arm, then shows journalists how he can open doors in his department using it. "I am a cyborg," he announces, "a melding of man and machine,"* and the media are duly impressed.

- This is a paraphrase, but it's not entirely inaccurate.

A favourite research story from his lab-although it's never been published in any kind of academic journal, of course-purported to show that watching Richard and Judy improves children's IQ test performance much more effectively than all kinds of other things you might expect to do so, like, say, some exercise, or drinking some coffee.

This was not a peripheral funny: it was a news story, and unlike most genuine science stories, it produced an editorial leader in the Independent. I don't have to scratch around to find more examples: there are five hundred to choose from, as I've said. "Infidelity is genetic," say scientists. "Electricity allergy real," says researcher. "In the future, all men will have big willies," says an evolutionary biologist from LSE.

These stories are empty, wacky filler, masquerading as science, and they reach their purest form in stories where scientists have "found" the formula for something. How wacky those boffins are. Recently you may have enjoyed the perfect way to eat ice cream (AxTpxTmFtxAt+VxLTxSpxWTt=20), the perfect TV sitcom (C=[(RxD)+V]xF/A+S, according to the Telegraph), the perfect boiled egg (Daily Mail), the perfect joke (the Telegraph again), and the most depressing day of the year ([W+(D-d)] x TQ / M x NA, in almost every newspaper in the world). I could go on.

These stories are invariably written up by science correspondents, and hotly followed-to universal approbation-by comment pieces from humanities graduates on how bonkers and irrelevant scientists are, because from the bunker-like mentality of my "parody" hypothesis, that is the appeal of these stories: they play on the public's view of science as irrelevant, peripheral boffinry.

They are also there to make money, to promote products, and to fill pages cheaply, with a minimum of journalistic effort. Let's take some of the most prominent examples. Dr Cliff Arnall is the king of the equation story, and his recent output includes the formulae for the most miserable day of the year, the happiest day of the year, the perfect long weekend and many, many more. According to the BBC he is "Professor Arnall"; usually he is "Dr Cliff Arnall of Cardiff University". In reality he's a private entrepreneur running confidence-building and stress-management courses, who has done a bit of part-time instructing at Cardiff University. The university's press office, however, are keen to put him in their monthly media-monitoring success reports. This is how low we have sunk.

Perhaps you nurture fond hopes for these formulae-perhaps you think they make science "relevant" and "fun", a bit like Christian rock. But you should know that they come from PR companies, often fully formed and ready to have a scientist's name attached to them. In fact PR companies are very open to their customers about this practice: it is referred to as "advertising equivalent exposure", whereby a "news" story is put out which can be attached to a client's name.

Cliff Arnall"s formula to identify the most miserable day of the year has now become an annual media stalwart. It was sponsored by Sky Travel, and appeared in January, the perfect time to book a holiday. His "happiest day of the year" formula appears in June-it received yet another outing in the Telegraph Telegraph and the and the Mail Mail in 2008-and was sponsored by Wall"s ice cream. Professor Cary Cooper"s formula to grade sporting triumphs was sponsored by Tesco. The equation for the beer-goggle effect, whereby ladies become more attractive after some ale, was produced by Dr Nathan Efron, Professor of Clinical Optometry at the University of Manchester, and sponsored by the optical products manufacturer Bausch&Lomb; the formula for the perfect penalty kick, by Dr David Lewis of Liverpool John Moores, was sponsored by Ladbrokes; the formula for the perfect way to pull a Christmas cracker, by Dr Paul Stevenson of the University of Surrey, was commissioned by Tesco; the formula for the perfect beach, by Dr Dimitrios Buhalis of the University of Surrey, sponsored by travel firm Opodo. These are people from proper universities, putting their names to advertising equivalent exposure for PR companies. in 2008-and was sponsored by Wall"s ice cream. Professor Cary Cooper"s formula to grade sporting triumphs was sponsored by Tesco. The equation for the beer-goggle effect, whereby ladies become more attractive after some ale, was produced by Dr Nathan Efron, Professor of Clinical Optometry at the University of Manchester, and sponsored by the optical products manufacturer Bausch&Lomb; the formula for the perfect penalty kick, by Dr David Lewis of Liverpool John Moores, was sponsored by Ladbrokes; the formula for the perfect way to pull a Christmas cracker, by Dr Paul Stevenson of the University of Surrey, was commissioned by Tesco; the formula for the perfect beach, by Dr Dimitrios Buhalis of the University of Surrey, sponsored by travel firm Opodo. These are people from proper universities, putting their names to advertising equivalent exposure for PR companies.

I know how Dr Arnall is paid, because when I wrote critically in the newspaper about his endless equations stories just before Christmas, he sent me this genuinely charming email:

Further to your mentioning my name in conjunction with "Walls" I just received a cheque from them. Cheers and season's greetings, Cliff Arnall.

It"s not a scandal: it"s just stupid. These stories are not informative. They are promotional activity masquerading as news. They play-rather cynically-on the fact that most news editors wouldn"t know a science story if it danced naked in front of them. They play on journalists being short of time but still needing to fill pages, as more words are written by fewer reporters. It is, in fact, a perfect example of what investigative journalist Nick Davies has described as Churnalism Churnalism, the uncritical rehashing of press releases into content, and in some respects this is merely a microcosm of a much wider problem that generalises to all areas of journalism. Research conducted at Cardiff University in 2007 showed that 80 per cent of all broadsheet news stories were "wholly, mainly or partially constructed from second-hand material, provided by news agencies and by the public relations industry".

It strikes me that you can read press releases on the internet, without paying for them in newsagents.

"All men will have big w.i.l.l.i.e.s"

For all that they are foolish PR slop, these stories can have phenomenal penetrance. Those willies can be found in the Sun's headline for a story on a radical new "Evolution Report" by Dr Oliver Curry, "evolution theorist" from the Darwin@LSE research centre. The story is a classic of the genre.

By the year 3000, the average human will be 6ft tall, have coffee-coloured skin and live for 120 years, new research predicts. And the good news does not end there. Blokes will be chuffed to learn their willies will get bigger-and women's boobs will become more pert.

This was presented as important "new research" in almost every British newspaper. In fact it was just a fanciful essay from a political theorist at LSE. Did it hold water, even on its own terms?

No. Firstly, Dr Oliver Curry seems to think that geographical and social mobility are new things, and that they will produce uniformly coffee-coloured humans in 1,000 years. Oliver has perhaps not been to Brazil, where black Africans, white Europeans and Native Americans have been having children together for many centuries. The Brazilians have not gone coffee-coloured: in fact they still show a wide range of skin pigmentation, from black to tan. Studies of skin pigmentation (some specifically performed in Brazil) show that skin pigmentation seems not to be related to the extent of your African heritage, and suggest that colour may be coded for by a fairly small number of genes, and probably doesn"t blend and even out as Oliver suggests.

What about his other ideas? He theorised that ultimately, through extreme socio-economic divisions in society, humans will divide into two species: one tall, thin, symmetrical, clean, healthy, intelligent and creative; the other short, stocky, asymmetrical, grubby, unhealthy and not as bright. Much like the peace-loving Eloi and the cannibalistic Morlocks in H.G. Wells' The Time Machine.

Evolutionary theory is probably one of the top three most important ideas of our time, and it seems a shame to get it wrong. This ridiculous set of claims was covered in every British newspaper as a news story, but none of them thought to mention that dividing into species, as Curry thinks we will do, usually requires some fairly strong pressures, like, say, geographical divisions. The Tasmanian Aboriginals, for example, who had been isolated for 10,000 years, were still able to have children with other humans from outside. "Sympatric speciation", a division into species where the two groups live in the same place, divided only by socioeconomic factors, as Curry is proposing, is even tougher. For a while, many scientists didn't think it happened at all. It would require that these divides were absolute, although history shows that attractive impoverished females and wealthy ugly men can be remarkably resourceful in love.

I could go on-the full press release is at badscience.net for your amusement. But the trivial problems in this trivial essay are not the issue: what's odd is how it became a "boffins today said" science story all over the media, with the BBC, the Telegraph, the Sun, the Scotsman, Metro and many more lapping it up without criticism.

How does this happen? By now you don"t need me to tell you that the "research"-or "essay"-was paid for by Bravo, a bikini-and-fast-car "men"s TV channel" which was celebrating its twenty-first year in operation. (In the week of Dr Curry"s important science essay, just to give you a flavour of the channel, you could catch the movie cla.s.sic Temptations Temptations: "When a group of farm workers find that the bank intends to foreclose on their property, they console each other with a succession of steamy romps." This might go some way to explaining the "pert b.r.e.a.s.t.s" angle of his "new research".) I spoke to friends on various newspapers, proper science reporters who told me they had stand-up rows with their newsdesks, trying to explain that this was not a science news story. But if they refused to write it, some other journalist would-you will often find that the worst science stories are written by consumer correspondents, or news generalists-and if I can borrow a concept from evolutionary theory myself, the selection pressure on employees in national newspapers is for journalists who compliantly and swiftly write up commercial puff nonsense as "science news".

One thing that fascinates me is this: Dr Curry is a proper academic (although a political theorist, not a scientist). I'm not seeking to rubbish his career. I'm sure he's done lots of stimulating work, but in all likelihood nothing he will ever do in his profession as a relatively accomplished academic at a leading Russell Group university will ever generate as much media coverage-or have as much cultural penetrance-as this childish, lucrative, fanciful, wrong essay, which explains nothing to anybody. Isn't life strange?

"Jessica Alba has the perfect wiggle, study says"

That"s a headline from the Daily Telegraph Daily Telegraph, over a story that got picked up by Fox News, no less, and in both cases it was accompanied by compelling imagery of some very hot totty. This is the last wacky story we"ll do, and I"m only including this one because it features some very fearless undercover work.

"Jessica Alba, the film actress, has the ultimate s.e.xy strut, according to a team of Cambridge mathematicians." This important study was the work of a team-apparently-headed by Professor Richard Weber of Cambridge University. I was particularly delighted to see it finally appear in print since, in the name of research, I had discussed prost.i.tuting my own reputation for it with Clarion, the PR company responsible, six months earlier, and there"s nothing like watching flowers bloom.

Here is their opening email:

We are conducting a survey into the celebrity top ten sexiest walks for my client Veet (hair removal cream) and we would like to back up our survey with an equation from an expert to work out which celebrity has the sexiest walk, with theory behind it. We would like help from a doctor of psychology or someone similar who can come up with equations to back up our findings, as we feel that having an expert comment and an equation will give the story more weight.

It got them, as we have seen, onto the news pages of the Daily Telegraph.

I replied immediately. "Are there any factors you would particularly like to have in the equation?" I asked. "Something sexual, perhaps?" "Hi Dr Ben," replied Kiren. "We would really like the factors of the equation to include the thigh to calf ratio, the shape of the leg, the look of the skin and the wiggle (swing) of the hips...There is a fee of £500 which we would pay for your services."

There was survey data too. "We haven't conducted the survey yet," Kiren told me, "but we know what results we want to achieve." That's the spirit! "We want Beyonce to come out on top followed by other celebrities with curvy legs such as J-Lo and Kylie and celebrities like Kate Moss and Amy Winehouse to be at the bottom e.g.-skinny and pale unshapely legs are not as sexy." The survey, it turned out, was an internal email sent around the company. I rejected their kind offer, and waited. Professor Richard Weber did not. He regrets it. When the story came out, I emailed him, and it turned out that things were even more absurd than was necessary. Even after rigging their survey, they had to re-rig it:

The Clarion press release was not approved by me and is factually incorrect and misleading in suggesting there has been any serious attempt to do serious mathematics here. No "team of Cambridge mathematicians" has been involved. Clarion asked me to help by analysing survey data from eight hundred men in which they were asked to rank ten celebrities for "sexiness of walk". And Jessica Alba did not come top. She came seventh.

Are these stories so bad? They are certainly pointless, and reflect a kind of contempt for science. They are merely PR promotional pieces for the companies which plant them, but it's telling that they know exactly where newspapers' weaknesses lie: as we shall see, bogus survey data is a hot ticket in the media.

And did Clarion Communications really get eight hundred respondents to an internal email survey for their research, where they knew the result they wanted beforehand, and where Jessica Alba came seventh, but was mysteriously promoted to first after the analysis? Yes, maybe: Clarion is part of WPP, one of the world's largest "communications services" groups. It does advertising, PR and lobbying, has a turnover of around £6 billion, and employs 100,000 people in a hundred countries.

These corporations run our culture, and they riddle it with bullshit.

Stats, miracle cures and hidden scares

How can we explain the hopelessness of media coverage of science? A lack of expertise is one part of the story, but there are other, more interesting elements. Over half of all the science coverage in a newspaper is concerned with health, because stories of what will kill or cure us are highly motivating, and in this field the pace of research has changed dramatically, as I have already briefly mentioned. This is important background. Before 1935 doctors were basically useless. We had morphine for pain relief-a drug with superficial charm, at least-and we could do operations fairly cleanly, although with huge doses of anaesthetics, because we hadn't yet sorted out well-targeted muscle-relaxant drugs. Then suddenly, between about 1935 and 1975, science poured out an almost constant stream of miracle cures. If you got TB in the 1920s, you died, pale and emaciated, in the style of a romantic poet. If you got TB in the 1970s, then in all likelihood you would live to a ripe old age. You might have to take rifampicin and isoniazid for months on end, and they're not nice drugs, and the side-effects will make your eyeballs and wee go pink, but if all goes well you will live to see inventions unimaginable in your childhood.

It wasn"t just the drugs. Everything we a.s.sociate with modern medicine happened in that time, and it was a barrage of miracles: kidney dialysis machines allowed people to live on despite losing two vital organs. Transplants brought people back from a death sentence. CT scanners could give three-dimensional images of the inside of a living person. Heart surgery rocketed forward. Almost every drug you"ve ever heard of was invented. Cardiopulmonary resuscitation (the business with the chest compressions and the electric shocks to bring you back) began in earnest.

Let"s not forget polio. The disease paralyses your muscles, and if it affects those of your chest wall, you literally cannot pull air in and out: so you die. Well, reasoned the doctors, polio paralysis often retreats spontaneously. Perhaps, if you could just keep these patients breathing somehow, for weeks on end if necessary, with mechanical ventilation, a bag and a mask, then they might, with time, start to breathe independently once more. They were right. People almost literally came back from the dead, and so intensive care units were born.

Alongside these absolute undeniable miracles, we really were finding those simple, direct, hidden killers that the media still pine for so desperately in their headlines. In 1950 Richard Doll and Austin Bradford Hill published a preliminary "case-control study"-where you gather cases of people with a particular disease, and find similar people who don't have it, and compare the lifestyle risk factors between the groups-which showed a strong relationship between lung cancer and smoking. The British Doctors Study in 1954 looked at 40,000 doctors-medics are good to study, because they're on the GMC register, so you can find them again easily to see what happened later in their life-and confirmed the finding. Doll and Bradford Hill had been wondering if lung cancer might be related to tarmac, or petrol; but smoking, to everybody's genuine surprise, turned out to cause it in 97 per cent of cases. You will find a massive distraction on the subject in this footnote.*

- In some ways, perhaps it shouldn"t have been a surprise. The Germans had identified a rise in lung cancer in the 1920s, but suggested-quite reasonably-that it might be related to poison-gas exposure in the Great War. In the 1930s, identifying toxic threats in the environment became an important feature of the n.a.z.i project to build a master race through "racial hygiene". - In some ways, perhaps it shouldn"t have been a surprise. The Germans had identified a rise in lung cancer in the 1920s, but suggested-quite reasonably-that it might be related to poison-gas exposure in the Great War. In the 1930s, identifying toxic threats in the environment became an important feature of the n.a.z.i project to build a master race through "racial hygiene". Two researchers, Schairer and Schoniger, published their own case-control study in 1943, demonstrating a relationship between smoking and lung cancer almost a decade before any researchers elsewhere. Their paper wasn"t mentioned in the cla.s.sic Doll and Bradford Hill paper of 1950, and if you check in the Science Citation Index, it was referred to only four times in the 1960s, once in the 1970s, and then not again until 1988, despite providing valuable information. Some might argue that this shows the danger of dismissing sources you dislike. But n.a.z.i scientific and medical research was bound up with the horrors of cold-blooded ma.s.s murder, and the strange puritanical ideologies of n.a.z.ism. It was almost universally disregarded, and with good reason. Doctors had been active partic.i.p.ants in the n.a.z.i project, and joined Hitler"s National Socialist Party in greater numbers than any other profession (45 per cent of them were party members, compared with 20 per cent of teachers). Two researchers, Schairer and Schoniger, published their own case-control study in 1943, demonstrating a relationship between smoking and lung cancer almost a decade before any researchers elsewhere. Their paper wasn"t mentioned in the cla.s.sic Doll and Bradford Hill paper of 1950, and if you check in the Science Citation Index, it was referred to only four times in the 1960s, once in the 1970s, and then not again until 1988, despite providing valuable information. Some might argue that this shows the danger of dismissing sources you dislike. But n.a.z.i scientific and medical research was bound up with the horrors of cold-blooded ma.s.s murder, and the strange puritanical ideologies of n.a.z.ism. It was almost universally disregarded, and with good reason. Doctors had been active partic.i.p.ants in the n.a.z.i project, and joined Hitler"s National Socialist Party in greater numbers than any other profession (45 per cent of them were party members, compared with 20 per cent of teachers). German scientists involved in the smoking project included racial theorists, but also researchers interested in the heritability of frailties created by tobacco, and the question of whether people could be rendered "degenerate" by their environment. Research on smoking was directed by Karl Astel, who helped to organist- the "euthanasia" operation that murdered 200,000 mentally and physically disabled people, and a.s.sisted in the "final solution of the Jewish question" as head of the Office of Racial Affairs. German scientists involved in the smoking project included racial theorists, but also researchers interested in the heritability of frailties created by tobacco, and the question of whether people could be rendered "degenerate" by their environment. 
Research on smoking was directed by Karl Astel, who helped to organist- the "euthanasia" operation that murdered 200,000 mentally and physically disabled people, and a.s.sisted in the "final solution of the Jewish question" as head of the Office of Racial Affairs.

The golden age-mythical and simplistic though that model may be-ended in the 1970s. But medical research did not grind to a halt. Far from it: your chances of dying as a middle-aged man have probably halved over the past thirty years, but this is not because of any single, dramatic, headline-grabbing breakthrough. Medical academic research today moves forward through the gradual emergence of small incremental improvements, in our understanding of drugs, their dangers and benefits, best practice in their prescription, the nerdy refinement of obscure surgical techniques, identification of modest risk factors, and their avoidance through public health programmes (like "five-a-day") which are themselves hard to validate.

This is the major problem for the media when they try to cover medical academic research these days: you cannot crowbar these small incremental steps-which in the aggregate make a sizeable contribution to health-into the pre-existing "miracle-cure-hidden-scare" template.

I would go further, and argue that science itself works very badly as a news story: it is by its very nature a subject for the "features" section, because it does not generally move ahead by sudden, epoch-making breakthroughs. It moves ahead by gradually emergent themes and theories, supported by a raft of evidence from a number of different disciplines on a number of different explanatory levels. Yet the media remain obsessed with "new breakthroughs".
