An essay by Ivan Oransky, “Medical News: Evidence Not a Factor,” published on MedPageToday.com, and additional commentary by Gary Schwitzer of HealthNewsReview.org show that news coverage of health and medicine perversely tends to favor less scientifically rigorous research.
Such poor-quality information makes it difficult for medical consumers to understand and assess their options for medical treatment and, more generally, the healthful behaviors that can prevent the need for medical attention in the first place. We’ve blogged about the situation, particularly as it relates to “disease-mongering,” the habit of making a medical mountain out of a health molehill.
The latest bad news on the news front comes courtesy of an article in the journal PLoS ONE, “Media Coverage of Medical Journals: Do the Best Articles Make the News?”
No, they don’t. The articles that make the news are the ones that report observational studies, not rigorously controlled clinical trials. The former are less scientifically sound, but the media tend to like them because they can turn an “association” into a “cause.” That such a conclusion is wrong and misleading is of less concern than grabbing attention.
According to the study:
Media outlets must make choices when deciding which studies deserve public attention. We sought to examine if there exists a systematic bias favoring certain study design in the choice of articles covered in the press. Our results suggest such a bias; the media [are] more likely to cover observational studies and less likely to report RCTs [randomized controlled trials] than a reference of contemporary articles that appear in high impact journals. When the media [do] cover observational studies, [they] select those with lower sample sizes than observational studies appearing in high impact journals.
…[I]n doing so, they preferentially choose articles lower in the hierarchy of research design, thus favoring studies of lesser scientific credibility. If anything, as top newspapers have their pick of all original articles, not just those selected by high impact general medical journals, newspapers could choose to cover the most credible studies, i.e., large, well-done RCTs. Instead, collectively they appear to make an alternative decision.
As Schwitzer points out, sometimes the medical journals contribute to the misinformation mess. “The next day after I wrote about [the PLoS] study, I pointed out how a news release written by the BMJ group for one of the journals it publishes stated that ‘HRT [hormone replacement therapy] cuts risk of repeat knee/hip replacement surgery by 40%.’ Two words – ‘cuts risk’ – sours the entire message. Because that conveys proof of cause-and-effect, which the observational study in question didn’t prove.”
Observational studies, by design, can’t prove anything so definite. They can reveal possible associations and point the way toward more rigorous studies that might confirm or refute a causal hypothesis.
For example, Oransky refers to an interactive page called Kill or Cure that, as he says, chronicles Britain’s Daily Mail and its “ongoing effort to classify every inanimate object into those that cause cancer and those that prevent it.”
But many of the stories that Kill or Cure features don’t distinguish correlation from causation. Observational studies are designed to show the former, and RCTs are designed to demonstrate (or not) the latter.
“Newspapers,” Oransky writes, “preferentially cover medical research with weaker methodology.”
The PLoS ONE researchers looked at which clinical studies the top five U.S. newspapers by circulation covered, then compared those articles with the trials that appeared in the top five clinical journals, ranked by impact factor.
Even otherwise credible, substantive news sources – The Wall Street Journal, USA Today, The New York Times, the Los Angeles Times and the San Jose Mercury News – fell prey to the more “glamorous,” if less rigorous, studies.
The prestigious clinical journals were The New England Journal of Medicine (NEJM), The Lancet, Journal of the American Medical Association (JAMA), Annals of Internal Medicine and PLoS Medicine.
To measure the quality of the evidence, the researchers relied on the standards of the U.S. Preventive Services Task Force, an independent panel of nonfederal experts in prevention and evidence-based medicine. The task force conducts scientific evidence reviews of a broad range of clinical preventive health-care services and develops recommendations for primary-care clinicians and health systems.
Oransky and Schwitzer continue to advocate for more responsible coverage of health and medicine news, but what are the chances that their reasonable, measured approach to understanding medical science will be heard over the noise that much of the media generate?
Learn how to understand health and medical studies – and therefore become a better consumer of health care – at HealthNewsReview.org.