FEATURE
should be designed and powered to assess that.
Even when presenters note that subgroup effects are only hypothesis-generating and should not be taken as evidence of benefit, that message can get watered down during dissemination.
Dr. Nowakowski pointed to a recent example of appropriate subgroup analysis reporting from the 2018 ASH Annual Meeting. Principal investigator Anas Younes, MD, from Memorial Sloan Kettering Cancer Center, presented findings from a phase III study that evaluated whether the combination of ibrutinib plus R-CHOP (rituximab, cyclophosphamide, doxorubicin, vincristine, prednisone) was associated with greater efficacy than R-CHOP alone in patients with previously untreated diffuse large B-cell lymphoma (DLBCL).4
The overall findings were negative: The addition of ibrutinib did not improve event-free survival in the entire intent-to-treat population. However, a subgroup analysis showed an interaction between age and treatment effect, with patients younger than 65 years experiencing a “clinically meaningful improvement” in survival outcomes with the addition of ibrutinib. This signal of benefit is exciting because it highlights the possibility of finally improving on the R-CHOP backbone regimen in younger patients with DLBCL.
“In his presentation, Dr. Younes shared the data in an unbiased way and stressed that, although there is benefit in this subset, the subgroup finding is hypothesis-generating only and needs to be validated in a future study before we can consider implementing it in the clinic,” said Dr. Nowakowski.
“At the end of the day, you can torture the data until it confesses and there can be a lot of pressure to overstate things, but it is the principal investigator’s responsibility to present the data accurately and fairly,” he stressed.
In the study abstract, however, this subtlety was less clear: The subgroup benefit was noted in the first sentence of the conclusion, and it headlined some of the news coverage aimed at professional audiences.4
Arming Yourself Against Spin
Cases like the retracted Nature paper raise the question of who bears the responsibility for ferreting out data manipulation or misinterpretation – reviewers or readers?
It’s important for clinicians to be able to read research with a critical eye, said Bob Löwenberg, MD, PhD, from Erasmus University Medical Center in Rotterdam, the Netherlands, and the editor-in-chief of Blood. But, he added, recognizing that some lack this experience, the journal will often provide expert comments on papers “to put the data in perspective and discuss the strengths and limitations of the research.”
One simple way to test the veracity of a clinical trial report: Check the protocol. “The analysis should be done according to the plan that was in the registered protocol, which we require to have been registered before the first patient was entered,” he explained.
Despite efforts to present data objectively, spin, defined as any attempt to misrepresent study findings to influence their interpretation, is more prevalent than many realize. Examples include overemphasizing a nonsignificant finding or highlighting a secondary endpoint or post hoc finding in a trial that missed its primary endpoint.
(Editor’s note: In our reporting in ASH Clinical News, we are trying to be part of the spin solution. This includes translating complex statistics into meaningful statements, providing context for the findings presented in clinical abstracts and research papers, and identifying when a paper was prepared with outside editorial assistance or by pharmaceutical sponsors. We also do not repeat statements in print or in abstract presentations about “practice-changing findings,” or those that exaggerate clinical impact or minimize significant adverse events. Have any feedback on our reporting? Let us know at ACNeditor@hematology.org.)
Practitioners who lack in-depth statistical training hope to rely on others – like responsible authors, journals, and editors – to weed out specious findings. But, as a randomized clinical trial has confirmed, readers just aren’t as good at spotting spin as they think they are.
In 2014, Dr. Tannock and colleagues published findings from the SPIIN trial, which showed that abstracts containing spin can fool even experienced readers.5 SPIIN authors randomly assigned 300 clinician-researchers (all of whom were corresponding authors on published oncology trials) to review a sample of published abstracts in their original form with spin or versions that were rewritten to eliminate spin. This included deleting information that could distort the understanding of the trial’s aim, reporting complete results with no wording of judgment, and replacing the author’s conclusion with a standardized conclusion (e.g., “treatment A was not more effective than comparator B in patients with …”).
All abstracts had statistically nonsignificant primary outcomes, but reviewers who read the original abstracts rated the investigational treatment as more beneficial, compared with reviewers who read the rewritten versions (p=0.03; effect size = 0.25).
“To try to minimize the impact of spin and thus biased dissemination of research results, authors should be educated on how to interpret research results,” the SPIIN trialists wrote. “Peer reviewers and journal editors also play an important role; they should systematically check whether the abstract conclusions are consistent with the study results and whether the results reported in the abstract are free from bias.”
“Given that a large percentage of the large trials done today are supported by industry, researchers can feel pressured toward inappropriate reporting, and I think spin remains fairly common across the board,” said Dr. Tannock.
Still, participants who reviewed abstracts written with spin rated the study as less rigorous (p=0.034) and noted an increased interest in reading the full-text article (p=0.029), compared with participants who reviewed the rewritten abstracts. This suggests that, while readers initially can be drawn in by overstated findings and linguistic spin, they are interested in digging deeper into the findings.
“In the end, I don’t think we’re dealing with a whole bunch of people who are trying to game the system,” noted Dr. Mast. “There are a lot of reasons why information might be incorrect or misleading, and that’s why we keep doing science: to keep checking each other and repeating things and seeing what’s really right when it goes out to the real world.”
—By Debra L. Beck ●
REFERENCES
1. Chassé M, Tinmouth A, English SW, et al. Association of blood donor age and sex with recipient survival after red blood cell transfusion. JAMA Intern Med. 2016;176:1307-14.
2. Vasan SK, Chiesa F, Rostgaard K, et al. Lack of association between blood donor age and survival of transfused patients. Blood. 2016;127:658-61.
3. Edgren G, Ullum H, Rostgaard K, et al. Association of donor age and sex with survival of patients receiving transfusion. JAMA Intern Med. 2017;177:854-60.
4. Younes A, Sehn LH, Johnson P, et al. A global, randomized, placebo-controlled, phase 3 study of ibrutinib plus rituximab, cyclophosphamide, doxorubicin, vincristine, and prednisone (RCHOP) in patients with previously untreated non-germinal center B-cell-like (GCB) diffuse large B-cell lymphoma (DLBCL). Abstract #784. Presented at the 2018 ASH Annual Meeting, December 3, 2018; San Diego, CA.
5. Boutron I, Altman DG, Hopewell S, et al. Impact of spin in the abstracts of articles reporting results of randomized controlled trials in the field of cancer: the SPIIN randomized controlled trial. J Clin Oncol. 2014;34:4120-26.
Crowdsourced Peer Review
In case researchers lack intrinsic motivation to avoid deceptive data reporting, social media and crowdsourced post-publication peer review provide extrinsic motivation. The PubPeer Foundation launched its PubPeer website in 2012 to do just that.1 The foundation’s stated goal is “to improve the quality of scientific research by enabling innovative approaches for community interaction.” Through the website, registered users can post anonymously as “peers” and comment on published scientific research.
The system works. On September 5, 2018, Nature published a paper that reported a new technique for delivering chimeric antigen receptor T-cell therapies in patients with brain cancers. The 27 co-authors claimed they had developed a molecule that allowed T cells to cross the blood-brain barrier and “home in” on brain tumors.2 The results were almost immediately called into question.
On October 25, Nature editors acknowledged that “the reliability of data presented in this manuscript has been the subject of criticisms, which we are currently considering.” Many of those criticisms came from PubPeer, where the paper had amassed more than 50 comments about misleading, mislabeled, and duplicative figures.3 On February 20, 2019, the authors retracted their paper “to correct the scientific literature, due to issues with figure presentation and underlying data.”
REFERENCES
1. PubPeer. “About PubPeer.” Accessed March 4, 2019, from https://pubpeer.com/static/about.
2. Samaha H, Pignata A, Fousek K, et al. A homing system targets therapeutic T cells to brain cancer. Nature. 2018;561:331-7.
3. Kwon D. Nature retracts paper on delivery system for CAR T immunotherapy. The Scientist. Accessed March 4, 2019, from https://www.the-scientist.com/news-opinion/natureretracts-paper-on-delivery-system-for-car-t-immunotherapy-65488.