How Reporters Screw Up Health and Medical Reporting (and How You Can Catch Them Doing So)

I’ve written before about common mistakes in interpreting medical research in my blog post How to Read Media Coverage of Scientific Research: Sorting out the Stupid Science from Smart Science. I recently read a very interesting post by Gary Schwitzer about the most common mistakes that journalists make when reporting health and medical findings.

The three mistakes that he discusses:

1. Absolute versus relative risk/benefit data

“Many stories use relative risk reduction or benefit estimates without providing the absolute data. So, in other words, a drug is said to reduce the risk of hip fracture by 50% (relative risk reduction), without ever explaining that it’s a reduction from 2 fractures in 100 untreated women down to 1 fracture in 100 treated women. Yes, that’s 50%, but in order to understand the true scope of the potential benefit, people need to know that it’s only a 1% absolute risk reduction (and that all the other 99 who didn’t benefit still had to pay and still ran the risk of side effects).

2. Association does not equal causation

A second key observation is that journalists often fail to explain the inherent limitations in observational studies – especially that they cannot establish cause and effect. They can point to a strong statistical association but they can’t prove that A causes B, or that if you do A you’ll be protected from B. But over and over we see news stories suggesting causal links. They use active verbs in inaccurately suggesting established benefits.

3. How we discuss screening tests

The third recurring problem I see in health news stories involves screening tests. … “Screening,” I believe, should only be used to refer to looking for problems in people who don’t have signs or symptoms or a family history. So it’s like going into Yankee Stadium filled with 50,000 people about whom you know very little and looking for disease in all of them. … I have heard women with breast cancer argue, for example, that mammograms saved their lives because they were found to have cancer just as their mothers did. I think that using “screening” in this context distorts the discussion because such a woman was obviously at higher risk because of her family history. She’s not just one of the 50,000 in the general population in the stadium. There were special reasons to look more closely in her. There may not be reasons to look more closely in the 49,999 others.”

Let’s discuss each of these in a little more depth. The first mistake is probably the most common one: statistically significant findings are not put into clinical perspective. Let me explain. Suppose we are looking at a drug that prevents a rare illness. The base rate of this illness, which we will call Catachexia, is 4 in 10,000 people. The drug reduces the rate to 1 in 10,000, a 75% decrease. Sounds good, right?

Not so fast. Let me add a few facts to this hypothetical case. Let’s imagine that the drug costs $10,000 a year and also has some bad side effects. So in order to reduce the incidence from four people to one person in ten thousand, 9,996 people who would never develop this rare but serious illness must be treated. The cost of doing so would be $99,960,000! Plus those 9,996 people would be unnecessarily exposed to side effects.
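For readers who like to check the arithmetic, here is a minimal sketch in Python using the hypothetical numbers above. The number needed to treat (NNT) is my addition; it is the standard way to express an absolute benefit.

```python
# Relative vs. absolute risk reduction for the hypothetical Catachexia drug.
untreated_rate = 4 / 10_000   # base rate without the drug
treated_rate = 1 / 10_000     # rate with the drug
cost_per_person = 10_000      # assumed annual cost in dollars

relative_risk_reduction = (untreated_rate - treated_rate) / untreated_rate
absolute_risk_reduction = untreated_rate - treated_rate
number_needed_to_treat = 1 / absolute_risk_reduction
cost_per_case_prevented = number_needed_to_treat * cost_per_person

print(f"Relative risk reduction: {relative_risk_reduction:.0%}")    # 75%
print(f"Absolute risk reduction: {absolute_risk_reduction:.2%}")    # 0.03%
print(f"Number needed to treat:  {number_needed_to_treat:,.0f}")    # 3,333
print(f"Cost per case prevented: ${cost_per_case_prevented:,.0f}")  # $33,333,333
```

Framed as roughly $33 million per case prevented, the drug looks rather different from “prevents 75% of cases.”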

So which headline sounds better to you?

New Drug Prevents 75% of Catachexia Cases!

Or

New Drug Lowers the Incidence of Catachexia by Three Cases per 10,000 People, at a Cost of Almost $100 Million

The first headline reports the relative risk reduction, without cost data; the second reports the absolute risk reduction, with the costs. The second is the only one that should be reported, but unfortunately the first is much more typical of science and medical reporting.

The second error, treating association or correlation as causation, is terribly common as well. The best example is the endless run of studies on the health effects of coffee. Almost every week a different study claims either a health benefit or a negative health impact of coffee. These findings are typically reported in the active voice, as in, “drinking coffee makes you smarter.”

So which headline sounds better to you?

Drinking Coffee Makes You Smarter

Or

Smarter People Drink More Coffee

Or

Scientists Find a Relatively Weak Association between Intelligence Levels and Coffee Consumption

Of course the first headline is the one that will get reported, even though the second headline is equally inaccurate. Only the third headline accurately reports the findings.

The fundamental problem with any correlational study of two variables is that we never know, nor can we ever know, what confounding variables might be correlated with each. Let me give you a classic example. There is a high correlation between the consumption of ice cream in Iowa and the death rate in Mumbai, India. This sounds pretty disturbing. Maybe those people in Iowa should stop eating ice cream. But of course the confounding variable here is summer heat. When the temperature goes up in Iowa, people eat more ice cream. And when the temperature goes up in India, people are more likely to die.
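If you want to see how easily a lurking variable manufactures a correlation, here is a toy simulation (all numbers invented, chosen only to mimic the ice cream example):

```python
import numpy as np

rng = np.random.default_rng(0)
n_months = 120

# A hidden "summer heat" variable drives both series; neither causes the other.
heat = rng.normal(size=n_months)
iowa_ice_cream = 2.0 * heat + rng.normal(size=n_months)
mumbai_deaths = 1.5 * heat + rng.normal(size=n_months)

r = np.corrcoef(iowa_ice_cream, mumbai_deaths)[0, 1]
print(f"Correlation with zero causal link: r = {r:.2f}")  # strongly positive
```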

The only way to actually verify a correlational finding is a follow-up study in which you randomly assign people to either consume or not consume the substance or food you wish to test. The problem is that you would have to get coffee drinkers to agree to give up coffee and non-coffee drinkers to agree to drink it, for example, which might be very difficult. But if you can do this with coffee, chocolate, broccoli, exercise, etc., then at least you can demonstrate a real causal effect. (I’ve oversimplified some of the complexity of controlled random-assignment studies, but my point stands.)
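Extending the toy simulation shows why randomization is the fix: once the “treatment” is assigned by coin flip, the hidden variable can no longer drive both sides (again, purely invented numbers):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000

# Observational world: a confounder raises both coffee drinking and test
# scores, so they correlate even though coffee has no effect on scores here.
confounder = rng.normal(size=n)
coffee_observed = confounder + rng.normal(size=n)
test_score = confounder + rng.normal(size=n)
print(f"Observed:   r = {np.corrcoef(coffee_observed, test_score)[0, 1]:.2f}")

# Randomized world: coffee is assigned by coin flip, independent of the
# confounder, so any remaining correlation with scores is just noise.
coffee_assigned = rng.integers(0, 2, size=n)
print(f"Randomized: r = {np.corrcoef(coffee_assigned, test_score)[0, 1]:.2f}")
```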

The final distortion, confusion about screening tests, is also very common and, unfortunately, incredibly complex. The main point that Schwitzer is making here, though, is simple: screening tests are only those tests applied to a general population that is not at high risk for any illness. The usefulness of a screening test must be evaluated in the context of a low-risk population, because that is how most screening tests are used. Most people don’t get colon cancer, breast cancer, or prostate cancer, even after age 50. If you use a test only with high-risk individuals, then it’s not really a screening test.
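One concrete way to see why the population matters so much is Bayes’ rule. The sketch below uses made-up accuracy figures (90% sensitivity, 95% specificity; no real screening test is implied) to show how the meaning of a positive result collapses in a low-risk population:

```python
def positive_predictive_value(prevalence, sensitivity=0.90, specificity=0.95):
    """Probability that a positive screening result reflects real disease."""
    true_positives = prevalence * sensitivity
    false_positives = (1 - prevalence) * (1 - specificity)
    return true_positives / (true_positives + false_positives)

# The same test means very different things depending on who is screened.
for prevalence in (0.001, 0.01, 0.20):  # general public ... high-risk group
    ppv = positive_predictive_value(prevalence)
    print(f"Prevalence {prevalence:.1%}: positive result is real {ppv:.0%} of the time")
```

With these assumed numbers, a positive result in the general stadium crowd is real only about 2% of the time, versus about 82% in a genuinely high-risk group.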

There is a whole other issue with reporting on screening tests that I’m only going to mention briefly, because it’s so complicated and so controversial: many screening tests may do as much harm as good. Recently there has been a lot of discussion of screening for cancer, especially prostate and breast cancer. The dilemma with screening tests is that once you find a cancer, you are almost always obligated to treat it, because of medical malpractice issues and psychological issues (“Get that cancer out of me!”). The problem with this automatic treatment is that current screening doesn’t distinguish between fast-growing dangerous tumors and very slow-growing indolent tumors. Thus we may apply treatments with considerable side effects, or even mortality, to tumors that would never have harmed the person.

Another problem is that screening often misses fast-growing dangerous tumors, because they begin to grow between screening tests. The bottom line is that screening for breast cancer and prostate cancer may have relatively little impact on the only statistic that counts: the cancer death rate. If we had screening tests that could distinguish relatively harmless tumors from dangerous ones, screening might be more helpful, but we are not there yet.

One more headline test. Which headline do you prefer?

Screening for Prostate Cancer Leads to Detection and Cure of Prostate Cancer

Or

Screening for Prostate Cancer Leads to Impotence and Incontinence in Many Men Who Would Never Die from Prostate Cancer

The first headline is the one that will get reported even though the second headline is scientifically more accurate.

I suggest that every time you see a health or medicine headline, you rewrite it in a more accurate way after you read the article. Remember to use absolute differences rather than relative differences, to report association instead of causation, and to add in the side effects and costs of any suggested treatment or screening test. This will give you practice in reading health and medical research accurately.

Also remember the most important rule: one small study does not mean anything. It’s actually quite humorous how the media will seize upon a study, even one based on 20 people that hasn’t been replicated by anybody. They also typically fail to put the results of one study into context against other studies of the same question. A great example is eggs and type II diabetes. The same researcher, Luc Djousse, published a study in 2008 (link) showing a strong relationship between egg consumption and the occurrence of type II diabetes, but then in 2010 published another study finding no correlation whatsoever. Always be very skeptical, and most often you will be right.
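To put a number on how little a 20-person study can tell you, here is a rough confidence-interval calculation (hypothetical data; the crude normal approximation is deliberate, since precision is beside the point):

```python
import math

# A 95% confidence interval for a result observed in 20 people:
# say 12 of 20 (60%) improved. Normal approximation; illustrative only.
n, p_hat = 20, 12 / 20
half_width = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n)
low, high = p_hat - half_width, p_hat + half_width
print(f"Observed 60%; 95% CI roughly {low:.0%} to {high:.0%}")
# ~39% to 81% -- consistent with a big effect, a small one, or not much at all.
```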

I’m off to go make a nice vegetarian omelette…


Copyright © 2011 Andrew Gottlieb, Ph.D. /The Psychology Lounge/TPL Productions

Money and Drugs

We hear a lot these days about the drug wars in Mexico and the US. There’s another drug war being waged: the war over drugs and money. That’s the war in which drug companies pay influential psychiatrists to recommend prescribing potent and dangerous drugs to children, adolescents, and adults. Today the little guy won a small skirmish in that war.

National Public Radio (NPR) reported today that Harvard has punished three well-known psychiatrists for failing to reveal payments from drug companies. The three doctors are accused of accepting more than $4.2 million from drug companies between 2000 and 2007 without reporting the income to Harvard, Massachusetts General Hospital, or the federal government. $4.2 million! That is $1.4 million per doctor!

What did they do for this money? Supposedly they did research, but if that were true, then they probably would have revealed these payments. One can only guess, but it’s useful to look at their positions on prescribing. Dr. Joseph Biederman is well known as a proponent of the off-label use of antipsychotic drugs to treat supposed “bipolar illness” in young children. Much of his work is seen as encouraging the growth in these kinds of prescriptions, and his funding came from the companies that make these drugs. Biederman and the other two accused psychiatrists have also published extensively on the use of drugs to treat attention deficit hyperactivity disorder, also a huge growth industry in the Psycho-Pharma business.

All three doctors have been banned from taking any industry money for one year and will be on probation for two more years after that. Good for Harvard and Massachusetts General Hospital! But it would be better still if they were more open about disclosing what these doctors did, whom they took money from, and whether they accepted federal grant money while not disclosing industry support, which would be a violation of federal guidelines.

(I should add that most psychiatrists I know and refer to don’t get any money from drug companies. In fact, when I ran some of the data on drug companies’ payments to psychiatrists by a friend and colleague, he jokingly said, “Wow, I wish I had known, I could be driving a much nicer car!” It appears that the biggest offenders are psychiatrists in academic or large hospital settings. Maybe we should all be asking our doctors to reveal their non-clinical funding so we know their biases.)

Copyright © 2010, 2011 Andrew Gottlieb, Ph.D. /The Psychology Lounge/TPL Productions

Want to Sleep Better? Get Brief CBT-I Therapy for Sleep Instead of Sleeping Pills

“To sleep, perchance to dream. Ay, there’s the rub”

The New York Times reported on a terrific study at the University of Pittsburgh looking at ultra-short treatment of insomnia in the elderly. According to the article, roughly one-quarter of older adults suffer from insomnia. The researchers streamlined an approach called CBT-I, which stands for cognitive behavioral therapy for insomnia.

There were only two sessions of treatment, totaling about 90 minutes, plus two brief follow-up phone calls over the first month. They tested this brief treatment in 79 seniors with chronic insomnia.

So what were the results of this study? They couldn’t have been very powerful, right?

Wrong. Two-thirds of the CBT-I group reported a clear improvement in sleep, compared with only 25% of the people in the control group. Even better, 55% were cured of their insomnia. And six months later, the results were stronger still.
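As an aside, the absolute-versus-relative arithmetic that health reporters so often botch applies here too, and CBT-I holds up well even on the stricter absolute measure. A quick back-of-the-envelope from the reported improvement rates:

```python
# Back-of-the-envelope from the improvement rates reported above.
cbt_i_improved = 2 / 3       # about two-thirds improved with brief CBT-I
control_improved = 0.25      # 25% improved in the control group

absolute_benefit = cbt_i_improved - control_improved
number_needed_to_treat = 1 / absolute_benefit
print(f"Absolute benefit: {absolute_benefit:.0%}")               # ~42%
print(f"Number needed to treat: {number_needed_to_treat:.1f}")   # ~2.4
```

Treat two or three people with brief CBT-I and you get one extra clear improvement, which is a remarkably strong result for 90 minutes of therapy.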

So what was this magic treatment, and what were the magic rules for curing insomnia? There were only four:

  • Spend only seven or eight hours in bed.
  • Set your alarm and get up at the same time every day.
  • Never go to bed until you actually feel sleepy.
  • If you are tossing and turning and can’t sleep, get out of bed and do something relaxing until you get sleepy again. Then go back to bed.

These are standard cognitive behavioral sleep hygiene rules, and they are very powerful. Although not mentioned in the study, a few other rules are also helpful:

  • Exercise regularly, but no later than midday.
  • Reduce caffeine, nicotine, and alcohol.
  • Avoid all naps.
  • Use your bedroom only for sleep and sex. Don’t watch TV or read in bed.

So why isn’t this treatment widely available? Could it be because there isn’t a powerful drug lobby for sleeping pills pushing this very effective therapy?

What is really tragic is that most seniors end up being prescribed sleeping pills for insomnia. And this is in spite of very clear research data showing that modern sleeping pills such as Ambien, Lunesta, and Sonata have minimal effects. On average, these drugs reduced the time to fall asleep by only 12.8 minutes compared to placebo, and increased total sleeping time by only 11.4 minutes.

Patients who took older sleeping medications like Halcion and Restoril fell asleep 10 minutes faster and slept 32 minutes longer.

How can this be? Why do patients believe that sleeping pills are much more effective? The answer is very simple. All of these drugs produce a condition called anterograde amnesia, meaning you cannot form memories while under their influence. So you don’t remember tossing and turning. And if you can’t remember tossing and turning, even though you may have, you perceive your sleep as better. The drugs also tend to reduce anxiety, so people worry less about having insomnia and thus feel better.

The hazards of sleeping pills in older adults include cognitive impairment, poor balance, and an increased risk of falling. One study in the Journal of the American Geriatrics Society found that even after being awake for two hours in the morning, older adults who had taken Ambien the night before failed a simple balance test at a rate of 57%, compared with 0% in the placebo group. This is pretty serious impairment. Interestingly enough, in the same study even young adults who took Ambien showed impaired balance in the morning.

So what are the key messages here?

1. Even though sleeping pills give people a sense of improved sleep, the actual improvement tends to be almost insignificant, especially with the newer and very expensive medications. The older medications increase sleep time a little more, but have more problems with addiction and tolerance. The side effects of these medications are potentially very worrisome, since they can cause cognitive impairment and falls that lead to injuries, especially in the elderly. Why risk these side effects for such small improvements in sleep quality?

2. Cognitive behavioral therapy for insomnia works better than sleeping pills, has no side effects, is cheaper in the long run, and has a lasting impact on sleep improvement.

3. Most people who suffer from insomnia will see their physician, who will prescribe sleeping pills. This is partly because of the lack of availability of cognitive behavioral treatment for insomnia. There are relatively few cognitive behavioral practitioners, and even fewer who regularly do CBT-I. We need to improve the availability of these treatments and follow in the footsteps of the University of Pittsburgh researchers in learning how to streamline them. Most people don’t have the patience for 6 to 8 weeks of cognitive behavioral therapy for insomnia. Instead we need treatments that can be administered in a week or two, with some brief follow-up.

4. CBT-I availability will always suffer from the fact that there is no powerful corporate interest backing it. There are no CBT-I sales reps going to doctors’ offices offering free samples for doctors to pass out to their patients. I don’t have a solution for this problem, but I would be interested in hearing from my readers about how we might more effectively promote effective and safe treatments such as CBT-I.

Okay, now that I’ve written this, it’s time to trundle off to bed. As Hamlet said, “To sleep, perchance to dream. Ay, there’s the rub!”

Copyright © 2010, 2011 Andrew Gottlieb, Ph.D. /The Psychology Lounge/TPL Productions