How to Read Media Coverage of Scientific Research: Sorting Out the Stupid Science from Smart Science

Reading today’s headlines, I saw an interesting title: “New Alzheimer’s Gene Identified.”

I was intrigued. Discovering a gene that causes late-onset Alzheimer’s would be a major scientific breakthrough, perhaps leading to effective new treatments. So I read the article carefully.

To summarize the findings: a United States research team scanned the entire genomes of 2,269 people who had late-onset Alzheimer’s and 3,107 people who did not, looking for differences between the two groups.

Among the people who had late-onset Alzheimer’s, 9% had a variation in the gene MTHFD1L, which lives on chromosome 6. Among those who did not have late-onset Alzheimer’s, 5% had this variant.

So is this an important finding? The article suggested it was. But I think this is a prime example of bad science reporting. For instance, the article went on to say that this particular gene is involved in the metabolism of folate, which influences levels of homocysteine. Homocysteine levels, in turn, are known to be associated with both heart disease and Alzheimer’s. So is it the gene, or is it the level of homocysteine?

The main reason I consider this an example of stupid science reporting is that the difference is trivial. Let me give you an example of a better way to report this. The researchers could instead have reported that among people with late-onset Alzheimer’s, 91% did not carry the variant, while among people without late-onset Alzheimer’s, 95% did not carry it. But this doesn’t sound very impressive, and it raises the question of whether measurement error alone could account for the difference.
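To see why a gap like 9% versus 5% can be statistically “real” and still clinically trivial, here is a minimal back-of-the-envelope sketch in Python. The sample sizes and carrier rates are the ones reported in the article; the carrier counts are simply rounded from those percentages, and the odds ratio and chi-square values are my own rough calculations, not figures from the study itself.

    # Rough arithmetic on the reported figures (not numbers from the study itself).
    cases, controls = 2269, 3107
    carriers_cases = round(0.09 * cases)        # about 204 Alzheimer's patients carry the variant
    carriers_controls = round(0.05 * controls)  # about 155 controls carry the variant

    # Build the 2x2 table: carrier vs. non-carrier, case vs. control.
    a, b = carriers_cases, cases - carriers_cases
    c, d = carriers_controls, controls - carriers_controls

    # Odds ratio and Pearson chi-square (1 degree of freedom) for the table.
    odds_ratio = (a * d) / (b * c)
    n = a + b + c + d
    chi_sq = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

    print(f"Absolute difference in carrier rate: {0.09 - 0.05:.0%}")
    print(f"Odds ratio: {odds_ratio:.2f}")
    print(f"Chi-square: {chi_sq:.1f} (far beyond the 3.84 cutoff for p < 0.05)")

With more than 5,000 participants, even a four-percentage-point gap sails past a significance test (the odds ratio works out to roughly 1.9), which is exactly why “statistically significant” and “clinically meaningful” are not the same thing.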

So this very expensive genome test yields absolutely no predictive value in terms of who will develop Alzheimer’s and who will not. There is a known risk variant, the ε4 allele of the APOE gene, which lives on chromosome 19. Forty percent of those who develop late-onset Alzheimer’s carry it, while only 25 to 30% of the general population does. So even this gene, which has a much stronger association with Alzheimer’s, isn’t a particularly useful clinical test.
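To put a rough number on “not particularly useful,” here is a simple Bayes’ rule sketch. The 40% and 25–30% carrier rates come from the paragraph above; the 10% baseline prevalence of late-onset Alzheimer’s in an elderly population is an assumed, purely illustrative figure, not a number from the article.

    # How much would a positive APOE test shift an individual's risk?
    p_ad = 0.10                    # assumed baseline prevalence (illustrative only)
    p_carrier_given_ad = 0.40      # carrier rate among late-onset Alzheimer's patients (from the post)
    p_carrier_given_no_ad = 0.27   # rough midpoint of the 25-30% general-population rate

    # Bayes' rule: P(Alzheimer's | carrier)
    p_carrier = p_carrier_given_ad * p_ad + p_carrier_given_no_ad * (1 - p_ad)
    risk_if_positive = p_carrier_given_ad * p_ad / p_carrier

    print(f"Risk before testing:          {p_ad:.0%}")
    print(f"Risk if the test is positive: {risk_if_positive:.0%}")

Under those assumptions, a positive test nudges an individual’s risk from about 10% to about 14%. That is a real shift, but hardly one you could build a screening program around, which is the point about clinical usefulness.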

The other reason this is an example of stupid science is that basically, this is a negative finding. To scan the entire human genome looking for differences between normal elderly people and elderly people with Alzheimer’s, and discover only a subtle and tiny difference, must’ve been a huge disappointment for the researchers. If I had been the journal editor reviewing this study, I doubt I would’ve published it. Imagine a similar study of an antidepressant, which found that in the antidepressant group, 9% of people got better, and in the placebo group 5% got better. I doubt this would get published.

Interestingly enough, the study hasn’t been published yet, but is being presented as a paper at the April 14 session of the American Academy of Neurology conference in Toronto. This is another clue to reading scientific research. If it hasn’t been published in a peer-reviewed scientific journal, be very skeptical of the research. Good research usually gets published in top journals, and research that is more dubious often is presented at conferences but never published. It’s much easier to get a paper accepted for a conference than in a science journal.

It’s also important when reading media coverage of scientific research to read beyond the headlines, and to look at the actual numbers that are being reported. If they are very small numbers, or very small differences, be very skeptical of whether they mean anything at all.

As quoted in the article, “While lots of genetic variants have been singled out as possible contributors to Alzheimer’s, the findings often can’t be replicated or repeated, leaving researchers unsure if the results are a coincidence or actually important,” said Dr. Ron Petersen, director of the Mayo Alzheimer’s Disease Research Center in Rochester, Minnesota.

So to summarize, to be a savvy consumer of media coverage of scientific research:

1. Be skeptical of media reports of scientific research that hasn’t been published in top scientific journals. Good research gets published in peer-reviewed journals, which means that other scientists skeptically read the article before it’s published.

2. Read below the headlines and look for actual numbers that are reported, and apply common sense to these numbers. If the differences are very small in absolute numbers, it often means that the research has very little clinical usefulness. Even if the differences are large in terms of percentages, this doesn’t necessarily mean that they are useful findings.

An example would be a finding that drinking a particular type of bourbon increases the incidence of a very rare type of brain tumor from one in 2,000,000 to three in 2,000,000. Reported in relative terms, the headline would say that drinking this bourbon triples the risk of a brain tumor, which would definitely put me and many other people off drinking bourbon. (By the way, this is a completely fictitious example.) But if you compared the risk to something people do every day, such as driving, and revealed that driving is 1,000 times riskier than drinking this type of bourbon, it would paint the research in a very different light. (The arithmetic behind this example is sketched just after this list.)

3. Be very skeptical of research that has not been reproduced or replicated by other scientists. There’s a long history in science of findings that cannot be reproduced or replicated by other scientists, and therefore don’t hold up as valid research findings.

4. On the web, be very skeptical of research that’s presented on sites that sell products. Unfortunately, a common strategy for selling products, particularly vitamin supplements, is to present pseudoscientific research that supports the use of the supplement. In general, any site that sells a product cannot be relied on for objective information about that product. It’s much better to go to primarily informational sites such as WebMD or the Mayo Clinic site, or, in some cases, to go directly to the original scientific articles using PubMed.
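For the curious, here is the arithmetic behind the fictitious bourbon example in point 2, again as a small Python sketch. The one-in-2,000,000 and three-in-2,000,000 figures are the made-up numbers from that example; relative risk, percent increase, and “number needed to harm” are simply standard ways of expressing the same comparison.

    # Relative vs. absolute risk for the (fictitious) bourbon example.
    baseline_risk = 1 / 2_000_000   # risk of the rare brain tumor without the exposure
    exposed_risk = 3 / 2_000_000    # risk with the exposure

    relative_risk = exposed_risk / baseline_risk      # 3.0: "triples the risk"
    percent_increase = (relative_risk - 1) * 100      # the scary headline number
    absolute_increase = exposed_risk - baseline_risk  # the number that actually matters
    drinkers_per_extra_tumor = 1 / absolute_increase  # "number needed to harm"

    print(f"Relative risk: {relative_risk:.1f}x ({percent_increase:.0f}% increase)")
    print(f"Absolute increase: 1 extra case per {drinkers_per_extra_tumor:,.0f} people")

The same made-up data yield both a scary “triples the risk” headline and a reassuring “one extra case per million drinkers,” which is exactly why looking at the absolute numbers matters.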

So be a smart consumer of science, so that you can tell the difference between smart science and stupid science.

Copyright © 2010 Andrew Gottlieb, Ph.D. /The Psychology Lounge/TPL Productions

5 thoughts on “How to Read Media Coverage of Scientific Research: Sorting Out the Stupid Science from Smart Science”

  1. This is an enormously useful post! These days, there’s so much wonderful research going on, so many bits of instant science news, and we are all so hungry for good news, that we’re vulnerable to those who mislead (and are misled) in the name of a scoop. I’d be very curious to know your take on Kirsch’s research on antidepressant studies as reported in Newsweek by Sharon Begley earlier this year.

  2. Agreed. But we owe it to ourselves, as long as we’re going to read pro lit (for what it’s worth), or base our discussions on pro lit, to learn about how it works, and not to trust it automatically. Having read about the many studies that have weak, or tainted, or doctored outcomes; knowing the power of the Rosenthal Effect; and sometimes simply re-reading “How to Lie with Statistics” or some other similar book, I consider myself a reasonably informed, if rather skeptical, consumer: I know what’s going on, I’m probably well-protected against getting hoodwinked, I’m generally able to separate the wheat from the chaff… so I go back and read the literature hoping to dredge up some encouraging possibilities… and I find that the state of the current art in just about everything is so depressing that I have to take a couple of those little yellow pills (the ones that peer-reviewed journals have concluded have little more effect than placebos) in order to feel better. And sometimes I actually do feel better. Seems like there’s something wrong here, but I’m feeling too down today to try to figure out what it is…

  3. Thanks for the interesting comment. I suspect one of the main problems with most research is that it is naturally biased by profit or success motives. As much medical research is funded by drug companies or device manufacturers, there is a natural bias toward amplifying weak results and then hyping them in the media. Government studies have less of a profit motive, but they often have a success bias, as no one wants to be the principal investigator on a $20 million, ten-year study that failed to get results. I don’t have a good solution other than to always read between the lines, look at the actual numbers and effect sizes, apply large doses of common sense and skepticism, and keep taking those little yellow pills! (Although given the latest results on placebos, I like those more, since they have no side effects, and sugar pills are really, really cheap!) The bottom line is that everything we know for sure is probably wrong, and the road to knowledge is paved with much confusion.

  4. Good information for the lay audience.

    I am fortunate that I know how to distinguish good science from bad science. Often, we are fooled by the numbers and statistics. Good science should (1) start with a hypothesis, (2) design key experiments to address the unanswered questions, (3) use appropriate statistical analysis to determine whether the findings differ significantly from the null hypothesis, and (4) be reproducible.

    To ensure that the research work is scientifically sound, one should also check the impact factor (roughly, the average number of citations per article per year) of the journal, even if it is peer-reviewed. In general, a journal within a particular field is considered secondary if the impact factor is below 5, and prestigious if the impact factor is above 10.

    P.S. First-time visitor to The Psychology Lounge.

  5. Good article, and spot on. One correction, regarding where you say, “If I had been the journal editor reviewing this study, I doubt I would’ve published it.” If the study was well structured but the conclusions were problematic, our issue should be only with the unsupported claims and not with the methodology. Negative findings can be very useful and should be published (more often than they are). If this study shows very little difference in genotype and is methodologically sound, that’s of real value and can lead to new avenues of inquiry.
