How to Read Media Coverage of Scientific Research: Sorting the Stupid Science from the Smart Science

Reading today’s headlines, I saw an interesting title: “New Alzheimer’s Gene Identified.”

I was intrigued. Discovering a gene that causes late-onset Alzheimer’s would be a major scientific breakthrough, perhaps leading to effective new treatments. So I read the article carefully.

To summarize the findings, a United States research team looked at the entire genome of 2,269 people who had late-onset Alzheimer’s and 3,107 people who did not, searching for differences between the two groups.

Among the people who had late-onset Alzheimer’s, 9% had a variation in the gene MTHFD1L, which lives on chromosome 6. Among those who did not have late-onset Alzheimer’s, 5% had this variant.

So is this an important finding? The article suggested it was. But I think this is a prime example of bad science reporting. For instance, the article went on to say that this particular gene is involved in the metabolism of folate, which influences levels of homocysteine, and homocysteine levels are known to affect both heart disease and Alzheimer’s. So is it the gene, or is it the level of homocysteine?

The main reason I consider this an example of stupid science reporting is that the difference is trivial. Let me give you an example of another way to frame the same result. The researchers could have reported that among people with late-onset Alzheimer’s, 91% did not have the variant, while among people without late-onset Alzheimer’s, 95% did not have it. Framed this way, the finding doesn’t sound very impressive, and such a small gap calls into question whether measurement error alone could account for the difference.

So this very expensive genome test yields absolutely no predictive value in terms of who will develop Alzheimer’s and who will not. Compare it to the known risk variant of the APOE gene, which lives on chromosome 19. Forty percent of those who develop late-onset Alzheimer’s carry this variant, while only 25 to 30% of the general population does. Yet even this variant, with its much stronger association with Alzheimer’s, isn’t a particularly useful clinical test.
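
A quick Bayes’ rule calculation shows just how little the new variant would tell any individual person. Here is a minimal sketch of the arithmetic, assuming a purely hypothetical 10% baseline prevalence of late-onset Alzheimer’s (my illustration, not a figure from the article); the 9% and 5% carrier rates are the ones reported in the study:

    # Bayes' rule: how much does carrying the MTHFD1L variant change your risk?
    # ASSUMPTION: the 10% baseline prevalence is hypothetical, for illustration.
    # The 9% (Alzheimer's group) and 5% (control group) carrier rates are from
    # the study as reported in the article.

    def risk_given_variant(p_variant_ad, p_variant_healthy, prevalence):
        """P(Alzheimer's | carries the variant), via Bayes' rule."""
        true_positives = p_variant_ad * prevalence
        false_positives = p_variant_healthy * (1 - prevalence)
        return true_positives / (true_positives + false_positives)

    prevalence = 0.10  # hypothetical baseline risk
    posterior = risk_given_variant(0.09, 0.05, prevalence)
    print(f"Risk without testing: {prevalence:.0%}")          # 10%
    print(f"Risk if you carry the variant: {posterior:.0%}")  # about 17%

Even for a carrier, the estimated risk barely moves off the base rate, which is what “no predictive value” looks like in actual numbers.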

The other reason this is an example of stupid science reporting is that, basically, this is a negative finding. To scan the entire human genome looking for differences between normal elderly people and elderly people with Alzheimer’s, and to discover only a subtle and tiny difference, must have been a huge disappointment for the researchers. If I had been the journal editor reviewing this study, I doubt I would have published it. Imagine a similar study of an antidepressant that found that 9% of people in the antidepressant group got better, compared to 5% in the placebo group. I doubt this would get published.

Interestingly enough, the study hasn’t been published yet; it is being presented as a paper at the April 14 session of the American Academy of Neurology conference in Toronto. This is another clue to reading scientific research: if it hasn’t been published in a peer-reviewed scientific journal, be very skeptical. Good research usually gets published in top journals, while more dubious research is often presented at conferences but never published. It’s much easier to get a paper accepted at a conference than into a scientific journal.

It’s also important when reading media coverage of scientific research to read beyond the headlines and to look at the actual numbers being reported. If they are very small numbers, or very small differences, be very skeptical of whether they mean anything at all.

As Dr. Ron Petersen, director of the Mayo Alzheimer’s Disease Research Center in Rochester, Minnesota, was quoted in the article: “While lots of genetic variants have been singled out as possible contributors to Alzheimer’s, the findings often can’t be replicated or repeated, leaving researchers unsure if the results are a coincidence or actually important.”

So to summarize, to be a savvy consumer of media coverage of scientific research:

1. Be skeptical of media reports of scientific research that hasn’t been published in a top scientific journal. Good research gets published in peer-reviewed journals, which means that other scientists have read the article skeptically before it’s published.

2. Read beyond the headlines and look at the actual numbers being reported, then apply common sense to those numbers. If the differences are very small in absolute terms, the research often has very little clinical usefulness. Even if the differences sound large in percentage terms, that doesn’t necessarily mean they are useful findings.

An example would be a finding that drinking a particular type of bourbon increases a very rare type of brain tumor from one in 2,000,000 to three in 2,000,000. Reported in relative terms, the headline would say that drinking this bourbon triples the risk of brain tumors, which would definitely put me and many other people off drinking bourbon. (By the way, this is a completely fictitious example.) But if you compared the risk to something people do every day, such as driving, and revealed that driving is 1,000 times riskier than drinking this type of bourbon, it would paint the research in a very different light. (See the short calculation after this list.)

3. Be very skeptical of research that has not been replicated by other scientists. Science has a long history of findings that could not be reproduced and therefore did not hold up as valid.

4. On the web, be very skeptical of research presented on sites that sell products. Unfortunately, a common strategy for selling products, particularly vitamin supplements, is to present pseudoscientific research that supports the use of the supplement. In general, any site that sells a product cannot be relied on for objective information about that product. It’s much better to go to primarily informational sites like WebMD or the Mayo Clinic site, or, in some cases, to go directly to the original scientific articles using PubMed.
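
To make the absolute-versus-relative distinction in point 2 concrete, here is a minimal sketch of the arithmetic behind the fictitious bourbon example (all numbers are invented, as in the original example, including the 1,000-times driving comparison):

    # Absolute vs. relative risk, using the fictitious bourbon example above.
    # All numbers here are made up for illustration, as in the text.
    baseline_risk = 1 / 2_000_000   # rare brain tumor, non-drinkers
    bourbon_risk = 3 / 2_000_000    # rare brain tumor, bourbon drinkers

    relative_risk = bourbon_risk / baseline_risk      # 3.0 -- "triples the risk!"
    absolute_increase = bourbon_risk - baseline_risk  # 1 extra case per million
    driving_risk = 1000 * bourbon_risk                # the everyday comparison

    print(f"Relative risk: {relative_risk:.0f}x (the scary headline)")
    print(f"Absolute increase: {absolute_increase * 1_000_000:.0f} extra case per million people")
    print(f"Driving is {driving_risk / bourbon_risk:.0f} times riskier than the bourbon")

The same finding looks alarming as a relative risk and negligible as an absolute one; that gap is exactly what a scary headline exploits.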

So be a smart consumer of science, so that you can tell the difference between smart science and stupid science.

Copyright © 2010 Andrew Gottlieb, Ph.D. /The Psychology Lounge/TPL Productions

New Study Finds the Best Pharmacological Stop-Smoking Solution (Hint: It’s Not What You’d Think)

A new study from the Center for Tobacco Research and Intervention at the University of Wisconsin School of Medicine and Public Health in Madison compared all but one of the current drug treatments that help with quitting smoking. The researchers looked at the following treatments and combined treatments:

  • “bupropion SR (sustained release; Zyban, GlaxoSmithKline), 150 mg twice daily for 1 week before a target quit date and 8 weeks after the quit date;
  • nicotine lozenge (2 or 4 mg) for 12 weeks after the quit date;
  • nicotine patch (24-hour; 21, 14, and 7 mg, titrated down during 8 weeks after quitting);
  • nicotine patch plus nicotine lozenge;
  • bupropion SR plus nicotine lozenge; or
  • placebo (1 matched to each of the 5 treatments).”

Everyone also received six 10- to 20-minute individual counseling sessions, with the first two sessions scheduled before quitting.

What were the results?

Three treatments worked better than placebo during the immediate quit period: the patch, bupropion plus lozenge, and patch plus lozenge.

At six months, only one treatment remained effective: the nicotine patch plus nicotine lozenge. The exact abstinence rates, as confirmed by carbon monoxide tests, were: “40.1% for the patch plus lozenge, 34.4% for the patch alone, 33.5% for the lozenge alone, 33.2% for bupropion plus lozenge, 31.8% for bupropion alone, and 22.2% for placebo.”

So we see that the combined nicotine-substitution therapy worked best, followed closely by either nicotine substitute alone. Zyban, also sold as Wellbutrin (bupropion), was a bust: no more effective than the simple nicotine lozenge. The only advantage of Zyban would be for someone who prefers not to use nicotine substitutes.
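
To put those six-month rates in clinical perspective, here is a small sketch computing the number needed to treat (NNT) for each option relative to placebo, using the rates quoted above. (NNT is a standard measure, not something reported in the study itself: it asks how many smokers must use a treatment for one extra person, beyond placebo, to quit.)

    # Number needed to treat (NNT) versus placebo, from the quoted
    # six-month abstinence rates. NNT = 1 / (treatment rate - placebo rate).
    placebo = 0.222
    treatments = {
        "patch + lozenge": 0.401,
        "patch alone": 0.344,
        "lozenge alone": 0.335,
        "bupropion + lozenge": 0.332,
        "bupropion alone": 0.318,
    }

    for name, rate in treatments.items():
        nnt = 1 / (rate - placebo)
        print(f"{name}: NNT is about {nnt:.0f}")

By this yardstick, roughly six smokers would need to use the patch-plus-lozenge combination for one extra person to be smoke-free at six months, versus roughly ten for bupropion alone.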

Now, I mentioned that they omitted one drug treatment: Chantix (varenicline). This is probably because the drug is a nicotine receptor blocker, so it wouldn’t have made sense to combine it with nicotine substitutes. Also, there have been some disturbing case reports of people having severe depressive reactions to Chantix.

Of course, there was one glaring omission that any card-carrying psychologist would spot in a moment: the lack of a behavior therapy component. Six brief 10- to 20-minute sessions are hardly therapy. I would have liked to see true smoking-cessation behavior therapy combined with the drug treatments.

So, if you’re trying to quit smoking, combine nicotine patches with nicotine lozenges, both sold in any pharmacy. If you do, you have about a 40 percent chance of still being smoke-free at six months.

Now I am off to have a cigarette….just kidding.

Study: http://cme.medscape.com/viewarticle/712074_print

Copyright © 2009/2010 Andrew Gottlieb, Ph.D. /The Psychology Lounge/TPL Productions

Why Do Most Psychologists Ignore Science-Based Therapy? Evidence-Based Psychotherapy and the Failure of Practitioners

A new article in Newsweek magazine titled “Ignoring the Evidence” documents how most psychologists ignore scientific evidence about treatments, such as cognitive behavioral therapy, that have been proven effective.

A two-year study, to be published in November in Psychological Science in the Public Interest, found that most psychologists “give more weight to their personal experiences than to science.”

The Newsweek article has a wonderful quote:

“Thanks to clinical trials as rigorous as those for, say, cardiology, we now know that cognitive and cognitive-behavior therapy (teaching patients to think about their thoughts in new, healthier ways and to act on those new ways of thinking) are effective against depression, panic disorder, bulimia nervosa, obsessive-compulsive disorder, and posttraumatic stress disorder, with multiple trials showing that these treatments, the tools of psychology, bring more durable benefits with lower relapse rates than drugs, which non-M.D. psychologists cannot prescribe. Studies have also shown that behavioral couples therapy helps alcoholics stay on the wagon, and that family therapy can help schizophrenics function.”

The article documents how most psychologists fail to provide empirically proven treatment approaches, instead using methods that are often ineffective. The truth is there is very little evidence for most of the types of therapy commonly performed in private practice by psychologists and master’s-level therapists. If you are shopping for the most effective type of therapy, you need to find a practitioner skilled in cognitive behavioral therapy (CBT), one of the few psychotherapy approaches proven to work on a variety of problems.

Another interesting article in Newsweek about evidence-based treatment discussed bulimia. Here’s the summary:

“On bulimia (which affects about 1 percent of women) and binge eating disorders (2 to 5 percent), the verdict is more optimistic: psychological treatment can help a lot, and cognitive behavioral therapy (CBT) is the most effective talk therapy. That’s based on 48 studies with 3,054 participants. CBT (typically, 15 to 20 sessions over five months) helps patients understand their patterns of binge eating and purging, recognize and anticipate the triggers for it, and summon the strength to resist them; it stops bingeing in just over one third of patients. Interpersonal therapy produced comparable results, but took months longer; other therapies helped no more than 22 percent of patients. If you or someone you love seeks treatment for bulimia, and is offered something other than CBT first, it’s not unreasonable to ask why. Cynthia Bulik, director of the University of North Carolina Eating Disorders Program, summarized it this way: ‘Bulimia nervosa is treatable; some treatment is better than no treatment; CBT is associated with the best outcome.’”

So the bottom line is this:

1. Most psychologists who don’t practice cognitive behavioral therapy (CBT) are just winging it, using treatments that haven’t been shown to work in scientific studies. It’s as if you went to a regular physician and got treated with leeches!

2. Many psychologists claim to use CBT but have never really trained in it, or have taken only a weekend workshop. Unless they prescribe weekly homework that involves writing down your thoughts and learning skills to analyze and change them, they aren’t really doing CBT, and I recommend you find someone else.

3. If you have an anxiety disorder, depression, bulimia, or obsessive-compulsive disorder and haven’t been offered CBT, then you are not receiving state-of-the-art treatment.

Copyright © 2009 Andrew Gottlieb, Ph.D. /The Psychology Lounge/TPL Productions