If you can peel your favorite sports fans away from the latest pro broadcast, whether it’s the basketball playoffs, the hockey championship series, a golf tourney, or the heating-up baseball season, you can spark a conversation by dropping some numbers on them. See what kind of rise you get by telling them that their data-driven obsession with improving athletic performance may be built on shoddy calculation.
In the “Moneyball”-crazy world of contemporary sports statistics and athletic fandom, that statement could be heretical. But the numbers-driven folks at the web site FiveThirtyEight deserve credit for digging into a popular but dubious statistical approach employed by researchers in sports medical science: magnitude-based inference, aka MBI. Their article is worth a read, especially for wonks and the numerically inclined. For those who are less so, here’s a taste of what’s at stake, as FiveThirtyEight reported:
At first blush, the studies look reasonable enough. Low-intensity stretching seems to reduce muscle soreness. Beta-alanine supplements may boost performance in water polo players. Isokinetic strength training could improve swing kinematics in golfers. Foam rollers can reduce muscle soreness after exercise. The problem: All of these studies shared a statistical analysis method unique to sports science. And that method is severely flawed.
MBI, the mathematical progeny of an exercise physiologist from New Zealand, mashes up frequentist and Bayesian statistical approaches, using formulas most of us would never touch but that are embedded in Microsoft’s Excel software. It seeks, FiveThirtyEight explains, to provide a work-around, valid or not, to major challenges in sports studies. As with research on diet and nutrition, sports science investigations can be daunting because they often require live, human subjects, with whom it is nigh impossible to control for all variables and to assemble comparable groups, including some who do not get a given therapy or medication. Instead, FiveThirtyEight reports, too many sports performance studies try to extract valid conclusions from small numbers of highly fit participants willing to endure tedious and uncomfortable tests and measures.
Traditional researchers use statistical methods to guard against errors that can crop up in studies, especially those with few participants and “noise” in their data. Researchers often employ “null hypothesis significance testing” to determine whether a finding is “statistically significant” or a “false positive.” This approach tries to cap the rate of false positives, results that should be chalked up to random chance, at around 5 percent.
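To see where that 5 percent figure comes from, here is a minimal simulation, purely illustrative and with hypothetical numbers, that runs many experiments in which the “treatment” truly does nothing and counts how often a standard two-sided test nonetheless declares the result significant:

```python
import math
import random

random.seed(42)

def z_test_significant(n, sigma, alpha_z=1.96):
    """Draw two groups from the SAME distribution (no real effect) and
    report whether a two-sided z-test calls the difference between them
    'statistically significant' at the 5 percent level."""
    a = [random.gauss(0.0, sigma) for _ in range(n)]
    b = [random.gauss(0.0, sigma) for _ in range(n)]
    mean_diff = sum(a) / n - sum(b) / n
    se = math.sqrt(2.0 * sigma ** 2 / n)  # standard error, known sigma
    return abs(mean_diff / se) > alpha_z

trials = 10_000
false_positives = sum(z_test_significant(n=20, sigma=1.0) for _ in range(trials))
print(f"False-positive rate: {false_positives / trials:.1%}")  # close to 5%
```

Even though no experiment here had any real effect, roughly one in twenty still clears the significance bar, which is exactly the error rate the 5 percent threshold is designed to hold.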
MBI, in comparison, lets researchers set their own statistical parameters for what constitutes a “substantial” finding. Translation: it lets them put not just a finger, or a thumb, but their whole arm on the scale to support theories they hope to advance in their research, though, again, FiveThirtyEight provides the detailed, wonkier explanation, including an example of how divergent study results can be under one statistical approach vs. another.
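That divergence can be sketched in a few lines. The following is a caricature of MBI, not the exact published procedure, and every number in it is hypothetical: treat a noisy effect estimate as normally distributed, then compare the classic two-sided p-value against MBI-style chances that the true effect is “beneficial” or “harmful” relative to a researcher-chosen “smallest worthwhile change”:

```python
import math

def normal_cdf(x):
    """Cumulative distribution function of the standard normal."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

observed_effect = 1.8  # hypothetical improvement, in some performance unit
standard_error = 1.0   # noisy, small-sample uncertainty (hypothetical)
swc = 0.9              # researcher-chosen "smallest worthwhile change"

# Frequentist two-sided p-value against the null of zero effect.
z = observed_effect / standard_error
p_value = 2.0 * (1.0 - normal_cdf(abs(z)))

# MBI-style probabilities, treating the estimate as a normal
# distribution for the true effect centered on the observed value.
p_beneficial = 1.0 - normal_cdf((swc - observed_effect) / standard_error)
p_harmful = normal_cdf((-swc - observed_effect) / standard_error)

print(f"p-value: {p_value:.2f}")  # above 0.05: not "significant"
print(f"chance beneficial: {p_beneficial:.0%}")  # above 75%
print(f"chance harmful: {p_harmful:.1%}")        # below 0.5%
```

Under commonly cited MBI labels (roughly, “likely beneficial” when the chance of benefit tops 75 percent and the chance of harm stays under 0.5 percent), this noisy result would be blessed as likely beneficial even though classic testing would call it non-significant; that is the arm on the scale.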
Why does this matter? In my practice, I see not only the significant harms that patients suffer while seeking medical services but also their confusion and frustration as they try to lock on to safe, accurate, reliable, and actionable information about their medical care. Moderation and common sense should rule when we deal with the health and medical reports we’re inundated with every day. It can be a chore, albeit a crucial one, for patients, much less their doctors, to keep up with changes in medicine, including by actually reading important research studies and scrutinizing how they reach their findings.
It can be a challenge of another kind when patients confront various metrics about their treatment, including information, say, on absolute vs. relative risk. I’ve long advocated for patients, in decision-making about their care, to push to learn an indispensable figure when it’s available: the NNT, or number needed to treat. It asks: How many people need to get this particular drug, test, or treatment for one person to benefit? The lower the number, the better. If the NNT of a treatment is one, everyone treated is helped; one person treated equals one person’s life made better. But that’s true only for imminently life-threatening conditions in which everyone who goes untreated dies: an appendix about to burst, say, or a heart that has stopped beating and needs to be shocked back into rhythm. For every other medical condition, the NNT is higher than one, sometimes a lot higher. Screening tests for early detection of cancer frequently have NNTs in the thousands: one life saved for every few thousand people tested. Seeing data this way can be clarifying and exceedingly useful.
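The arithmetic behind the NNT is simple enough to sketch; the event rates below are hypothetical. The NNT is just the reciprocal of the absolute risk reduction, which is why a dramatic “50 percent risk reduction” headline can coexist with a hundred people treated for every one person helped:

```python
# Hypothetical event rates for an imagined treatment.
control_event_rate = 0.02  # 2% of untreated patients have the bad outcome
treated_event_rate = 0.01  # 1% of treated patients do

relative_risk_reduction = (control_event_rate - treated_event_rate) / control_event_rate
absolute_risk_reduction = control_event_rate - treated_event_rate
nnt = 1.0 / absolute_risk_reduction

print(f"Relative risk reduction: {relative_risk_reduction:.0%}")  # 50%: sounds huge
print(f"Absolute risk reduction: {absolute_risk_reduction:.0%}")  # only 1%
print(f"Number needed to treat:  {nnt:.0f}")                      # 100 treated per person helped
```

Same treatment, same data: the relative framing flatters it, while the NNT tells the sober story.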
Too many of us, though, lose our all-important skepticism when we see nifty reports on studies of diet, fitness, and wellness. Consider that the New York Times has just put up a two-part series on protecting young athletes with such heavy-duty advice as: parents shouldn’t push kids to the point of injury, and protective gear is key to harm prevention. OK, really?
But then, eavesdrop on the routine chatter at restaurants or locker rooms, barber shops or beauty parlors, and you might find that some otherwise sensible people would kiss crow’s feet or snuggle up with poison ivy if, say, they were hyped as keys to wellness by certain hot sports magazines or the periodicals popular in the grocery check-out aisle. Jocks and other celebrities, not to mention the buff or perky gym denizens, are all too ready to push purported performance or health enhancements, including dubious supplements or questionable techniques, all supposedly backed by scientific evidence. It gets dodgy when this veers into medical advice, like treatment of serious conditions. But how to shut down the relentless sports, fitness, and wellness bunk? The next time the brother-in-law starts chattering, look him dead in the eye and ask for the medical or sports science journal citation, and maybe inquire whether that big, big study was based on MBI.