
What’s New in the Patient Safety World

June 2019

More Spin

 

 

These results sound too good to be true! Such pronouncements always raise our “hype radar” or “spin radar” (see our February 16, 2010 Patient Safety Tip of the Week “Spin/Hype…Knowing It When You See It”). In that column we noted how results of one clinical trial were “spun” and published in various forms in not one, not two, but three well-respected peer-reviewed medical journals.

 

Khan and colleagues (Khan 2019) recently published a systematic review of cardiovascular randomized clinical trials (RCTs) reported in 6 high-impact medical journals, comparing the language of the abstracts, main text, discussion, and conclusions against the actual data in each RCT. All were RCTs in which the primary outcome of the study failed to reach statistical significance.

 

Spin was identified in 57% of abstracts and 67% of main texts of published articles that met their inclusion criteria. Spin appeared in 11% of titles, 38% of results sections, and 54% of conclusions. Among the abstracts, spin was observed in 41% of results sections and 48% of conclusions sections.

 

The authors conclude that, in reports of cardiovascular RCTs with statistically nonsignificant primary outcomes, investigators often manipulate the language of the report to distract from the neutral primary outcomes. They caution all who read such literature to be aware that peer review does not always preclude the use of misleading language in scientific articles.

 

There are numerous ways in which the results of a study may be “spun”. Khan and colleagues identified specific spin strategies that were used:

1.     Authors pivoted to statistically significant secondary results, focusing on within-group comparisons, secondary outcomes, subgroups, or per-protocol analyses

2.     Authors interpreted statistically nonsignificant results of the primary outcomes to show treatment equivalence or to rule out an adverse event

3.     Authors emphasized the beneficial effect of the treatment with or without acknowledging the statistically nonsignificant primary outcome

4.     Spin strategies that could not be classified under one of the above three schemes were systematically recorded as “other”

 

Perhaps surprisingly, spin did not correlate with the conflict-of-interest disclosures of the first and last authors. Even more surprisingly, it was not associated with industry funding.

 

The accompanying editorial (Fihn 2019) cites the following examples of “spin” in the various sections of a publication as identified by Boutron and Ravaud (Boutron 2018):

Methods misreporting

·       Changing objectives or hypothesis to conform to the results.

·       Not distinguishing prespecified from post hoc analyses.

·       Failing to report protocol deviations.

Results misreporting

·       Selective reporting or focus on outcomes favorable to the study hypothesis, particularly statistically significant results.

·       Disregarding results that contradict initial hypotheses.

Misinterpretation

·       Misleading interpretation (eg, ignoring regression to the mean, confounding, or small effect size).

·       Misinterpreting a significant P value as a measure of effect, or lack of significance as indicative of equivalence or safety.

·       Unfounded extrapolation to a larger population or different setting.

·       Ignoring limitations.
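The point about misinterpreting a lack of significance as evidence of equivalence or safety can be made concrete with a quick power calculation. As an illustrative sketch (the effect size and sample sizes below are hypothetical, not drawn from any of the cited trials), the following computes the approximate power of a two-sided two-sample z-test for a modest 0.3-standard-deviation treatment effect:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the complementary error function."""
    return 0.5 * math.erfc(-x / math.sqrt(2))

def power_two_sample(delta, n_per_arm, sd=1.0, z_crit=1.96):
    """Approximate power of a two-sided two-sample z-test for a
    true difference in means of `delta` (same units as `sd`)."""
    se = sd * math.sqrt(2.0 / n_per_arm)  # SE of the difference in means
    z = delta / se
    # Power = P(reject | true effect delta), summing both rejection tails
    return norm_cdf(z - z_crit) + norm_cdf(-z - z_crit)

for n in (30, 100, 350):
    print(f"n = {n:3d} per arm -> power = {power_two_sample(0.3, n):.2f}")
# With 30 patients per arm, power is only about 0.21, so a
# nonsignificant result says almost nothing about equivalence.
```

A “negative” trial this small would miss a real, clinically meaningful effect roughly four times out of five; absence of evidence is not evidence of absence.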

 

In our February 16, 2010 Patient Safety Tip of the Week “Spin/Hype…Knowing It When You See It” we noted an excellent review of the limitations of randomized controlled trials (RCTs) published in the Journal of the American College of Cardiology (Kaul and Diamond 2010). That paper helps readers understand some complicated statistical issues, and it emphasizes three points we have often made in the past:

·       Many articles report outcomes that are statistically significant but of little clinical significance.

·       Post-hoc subgroup analyses are prone to error and inappropriate interpretation and should be used only to generate ideas for future studies. Otherwise, they may erroneously lead to adoption of practices that are not evidence-based.

·       Use of composite outcomes is especially likely to give rise to inappropriate conclusions when the outcomes are driven by one component of that composite, especially when that component is not as clinically significant as the others.
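The hazard of post-hoc subgroup analyses is, at bottom, a multiple-comparisons problem, and a small simulation makes it concrete. This is a hypothetical sketch (the arm sizes, subgroup count, and random seed are ours, not taken from Kaul and Diamond): when ten subgroups of a trial are tested for a treatment that truly has zero effect, a large fraction of such trials will still turn up at least one “statistically significant” subgroup at p < 0.05.

```python
import math
import random

def two_sided_p(x, y):
    """Two-sided p-value from a two-sample z-test on means
    (a reasonable approximation with 50 patients per arm)."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    z = (mx - my) / math.sqrt(vx / nx + vy / ny)
    return math.erfc(abs(z) / math.sqrt(2))

random.seed(42)
N_TRIALS, K_SUBGROUPS, N_PER_ARM = 1000, 10, 50
hits = 0
for _ in range(N_TRIALS):
    for _ in range(K_SUBGROUPS):
        treated = [random.gauss(0, 1) for _ in range(N_PER_ARM)]
        control = [random.gauss(0, 1) for _ in range(N_PER_ARM)]
        if two_sided_p(treated, control) < 0.05:
            hits += 1  # at least one spurious "significant" subgroup
            break

rate = hits / N_TRIALS
print(f"Trials with a spurious 'significant' subgroup: {rate:.0%}")
# Analytically, 1 - 0.95**10 is about 40% -- despite zero true effect.
```

Roughly two in five null trials “find” a responsive subgroup by chance alone, which is why such analyses should only generate hypotheses for future studies.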

 

Whenever you read a study published in a journal, even a respected peer-reviewed journal, you must carefully scrutinize the language used and make sure that such language is not at odds with the data presented. And the Khan study, along with the others noted here, puts the publishers of all medical journals on notice that it is their responsibility to ensure that the results of studies they publish accurately reflect the facts and are not “spun”.

 

 

References:

 

 

Khan MS, Lateef N, Siddiqi J, et al. Level and Prevalence of Spin in Published Cardiovascular Randomized Clinical Trial Reports With Statistically Nonsignificant Primary Outcomes: A Systematic Review. JAMA Netw Open 2019; 2(5): e192622

https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2732330

 

 

Fihn SD. Combating Misrepresentation of Research Findings. JAMA Netw Open 2019; 2(5): e192553

https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2732324

 

 

Boutron I, Ravaud P. Misrepresentation and distortion of research in biomedical literature. Proc Natl Acad Sci USA 2018; 115(11): 2613-2619

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5856510/

 

 

Kaul S, Diamond GA. Trial and Error: How to Avoid Commonly Encountered Limitations of Published Clinical Trials. J Am Coll Cardiol 2010; 55: 415-427

http://www.onlinejacc.org/content/55/5/415.abstract
