ASH Clinical News | Page 59

FEATURE
publishing research with positive results, and manipulating statistics.
In fact, many retractions issued by journals originate from researchers themselves. For example, in 2012, the Journal of Clinical Oncology issued a retraction of a 2007 paper after the co-authors “identified several instances of misalignment of genomic and clinical outcome data.”6 (For examples of the most common types of research errors, see SIDEBAR 2.)
How Common Is Misconduct ?
With research misconduct taking so many forms, its prevalence is hard to pin down. “Anybody who tells you they know the overall prevalence is being overly enthusiastic,” Dr. Oransky said.
However, some researchers have attempted to get a handle on research misconduct by determining its frequency. In a 2009 meta-analysis, Daniele Fanelli, PhD, compared results of 21 surveys asking scientists directly whether they or a colleague had committed research misconduct, finding that an average of 2 percent of scientists admitted to “fabricating, falsifying, or modifying data or results at least once.”7 Still, more than one-third of respondents admitted to engaging in other questionable research practices, such as “dropping data points based on a gut feeling” and “changing the design, methodology, or results of a study in response to pressures from a funding source.”
“Considering that these surveys ask sensitive questions and have other limitations, it appears likely that this is a conservative estimate of the true prevalence of scientific misconduct,” Dr. Fanelli noted.
In 2012, The BMJ published results of an electronic survey that included replies from about 3,000 U.K.-based authors and reviewers.8 The survey showed that 13 percent of respondents reported having witnessed or having firsthand knowledge of scientists or doctors inappropriately adjusting, excluding, altering, or fabricating data during their research or for purposes of publication.
Looking specifically at the U.S. federal definition of misconduct, the ORI confirmed approximately 200 cases of misconduct over 20 years, according to a report from Nicholas Steneck, PhD, a consultant to the ORI. Divided by the total number of researchers, Dr. Steneck wrote, this yields a rate of about one case of research misconduct per 100,000 researchers per year.9
“Overall, that frequency seems pretty rare,” said Michael Kalichman, PhD, founding director of the University of California San Diego’s Research Ethics Program. “The problem is that you don’t know what you don’t know. Those were only the cases that were discovered, or the cases of people not smart enough to not get caught.”
Truth and Consequences
When a scientist gets caught in the act of research misconduct, the type of punishment depends on the severity of the misconduct and the perpetrator’s track record.
If the misconduct qualifies as fraud under the legal definition, a scientist can be prosecuted for civil or criminal fraud, though it is rare for people to be threatened with jail time for this type of offense, according to David B. Resnik, JD, PhD, a bioethicist and Institutional Review Board Chair at the National Institute of Environmental Health Sciences at the National Institutes of Health (NIH).
In cases of civil fraud, the offending party must pay back the money he or she fraudulently received from the government; a scientist found guilty of criminal fraud may face prison time.
In one of the rare cases of scientific misconduct leading to a jail sentence, Eric Poehlman, PhD, a former research professor at the University of Vermont (UVM) College of Medicine, pled guilty to falsifying and fabricating research data on obesity, menopause, and aging in numerous federal grant applications and academic articles between 1992 and 2002.10 Dr. Poehlman had secured $2.9 million in NIH funding during his tenure at UVM. In addition to paying nearly $200,000 to settle a civil complaint with the institution, he was sentenced to one year and one day in prison, with two years of probation.
Federal agencies also may impose a kind of “sanction” on researchers found guilty of misconduct, which can include banning them from receiving any federal funding for a certain time, Dr. Resnik said.
Lying on a federal grant application is a crime punishable by up to five years in federal prison, but the punishment for committing FFP in a published journal article is at the discretion of the journal. Typically, the journal that published the fraudulent work will issue a retraction.
The consequences of these retractions vary, as well. In addition to “seeing a decline in citations for the author,” Dr. Oransky said, “we could see a decline in citations for the whole specialty.”
One study found that, after a retraction is issued, “ordinary authors [within that specialty] experience large citation losses to their prior work.”11 However, “eminent” authors experienced little loss in citation frequency.
“[These results show that] reputation is important,” he said, adding that “if there is a retraction for an honest error, there is no decline in citations. That is good news.”
Journals also might impose publishing sanctions on researchers who commit scientific misconduct, and the scientists’ home institution or company may impose consequences on the individuals, ranging from increasing supervision of their research practices to placing a letter in a permanent file detailing the misconduct. In more severe cases, researchers can be fired.
Blowing the Whistle
Low rates of scientific misconduct could be attributed to the difficulty of reporting and investigating such allegations. Often, researchers may “self-retract” when they identify honest errors in their own research, or a reader or fellow researcher may contact a journal to allege misconduct. In other cases, issues of scientific misconduct may be raised within the institution where the researcher works, and journals are contacted later.
“Journals have an obligation to try to look into every allegation they receive, whether or not they believe it to be true,” Dr. Kalichman said. “The problem with that ideal is that the journal’s office may be located in Washington, and the institution in Oregon, and it is unlikely they have the resources to investigate fully.”

Some technological advances are making misconduct allegations less challenging to investigate, though, Dr. Byrne said. For example, researchers at Harvard University are working with the scientific publisher Elsevier to develop technology that would detect manipulated or misused images, one of the most common types of misconduct.12 One study estimated that about 4 percent of published papers pulled from 40 scientific journals contained “problematic figures,” defined as figures that were inappropriately duplicated or altered.13 Additional papers from authors found to have used problematic figures were at increased likelihood of also containing problematic images.
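The article does not describe how such image-screening tools work internally, and the Harvard/Elsevier system is certainly far more sophisticated. Purely as an illustration of the general idea, duplicated figures can be caught by comparing compact "perceptual hashes" of images; the sketch below uses a toy average-hash on invented grayscale pixel grids:

```python
# Minimal sketch of duplicate-image detection via an "average hash":
# threshold each pixel against the image's mean brightness, then compare
# hashes by Hamming distance. A small distance suggests a re-used figure.
def average_hash(pixels):
    """pixels: 2D list of grayscale values (hypothetical toy input)."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(p > mean for p in flat)

def hamming(h1, h2):
    # number of positions where the two hashes disagree
    return sum(a != b for a, b in zip(h1, h2))

figure_a = [[10, 200, 30], [220, 15, 240], [25, 210, 20]]
figure_b = [[12, 198, 28], [225, 18, 238], [22, 214, 19]]   # re-used, slightly recompressed
figure_c = [[200, 10, 220], [15, 240, 30], [210, 25, 250]]  # a genuinely different image

print(hamming(average_hash(figure_a), average_hash(figure_b)))  # 0: flagged as duplicate
print(hamming(average_hash(figure_a), average_hash(figure_c)))  # 9: no match
```

Because the hash reflects coarse brightness structure rather than exact bytes, it survives recompression and small edits, which is why approaches in this family can flag figures that were duplicated across papers.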
Other programs are designed to scan article text to detect plagiarism. Plagiarism is not always an exact copy of previously written text, though, so software programs search for word frequencies and the distribution of text across the whole submission. “In particular, journals might look for papers that have a high degree of similarities, outside of things like confidence intervals,” Dr. Byrne said.
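The article does not name a specific algorithm, but one common basis for this kind of frequency-based screening is cosine similarity over word-count vectors: a near-paraphrase scores close to 1 even when the wording is not an exact copy. The passages below are invented for illustration:

```python
from collections import Counter
import math

def word_vector(text):
    # bag-of-words frequency vector (a simple stand-in for the
    # frequency/distribution features the article mentions)
    return Counter(text.lower().split())

def cosine_similarity(a, b):
    # cosine of the angle between two word-frequency vectors (0..1)
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Invented example passages, for illustration only
original = "the effect of treatment on patient outcomes was significant"
suspect = "the effect of treatment on patient outcomes was not significant"
unrelated = "cluster randomized trials require analysis at the cluster level"

high = cosine_similarity(word_vector(original), word_vector(suspect))
low = cosine_similarity(word_vector(original), word_vector(unrelated))
print(round(high, 2), round(low, 2))  # the near-copy scores far higher
```

Production tools add many refinements (n-grams, stemming, and exclusions for boilerplate such as methods phrasing or confidence intervals), but the underlying comparison is of this shape.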
New programs also are recalculating statistical values to identify people who have rounded down p values to achieve statistical significance.
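As a rough illustration of this kind of consistency check (not the workflow of any specific tool), a reported p value can be recomputed from the published test statistic and flagged when it disagrees beyond rounding. The z-test and tolerance below are assumptions for the sketch:

```python
import math

def z_to_p(z):
    # two-sided p value for a z statistic, via the complementary error function
    return math.erfc(abs(z) / math.sqrt(2))

def inconsistent(z, reported_p, tol=0.0005):
    # flag reports where the recomputed p exceeds the published value
    # by more than a rounding tolerance (tol is an assumed cutoff)
    return z_to_p(z) - reported_p > tol

# z = 1.90 actually corresponds to p ≈ 0.057, so reporting "p = 0.049"
# would push a non-significant result under the 0.05 threshold
print(inconsistent(1.90, 0.049))  # True: flagged
print(inconsistent(1.96, 0.050))  # False: consistent within rounding
```

Real checkers handle t, F, and chi-square statistics with their degrees of freedom, but the principle is the same: the reported statistic and the reported p value must agree with each other.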
“Overall though, wide application of these programs could have huge downstream consequences, particularly in the short term,” Dr. Byrne admitted. “It creates a situation where, if a publisher uses this software and finds a large number of questionable articles, what does the publisher do with them? We can use technology to create a high-throughput screening system, but each queried paper then needs to be individually assessed, and that takes time.”
Even if journals investigate every allegation of misconduct, “bad apples” will always be able to get through even the most stringent peer-review processes.
Dr. Resnik compared the situation to looking at a beautiful painting of a mountain. The artist may say that the
SIDEBAR 1
The Office of Research Integrity ’ s Definition of Research Misconduct
According to the U.S. Department of Health and Human Services’ Office of Research Integrity (ORI), “research misconduct means fabrication, falsification, or plagiarism in proposing, performing, or reviewing research, or in reporting research results.”
This definition includes the following types of misconduct:
• Fabrication: making up data or results and recording or reporting them
• Falsification: manipulating research materials, equipment, or processes, or changing or omitting data or results such that the research is not accurately represented in the research record
• Plagiarism: the appropriation of another person’s ideas, processes, results, or words without giving appropriate credit
The ORI also notes that research misconduct does not include honest error or differences of opinion.
Source: The Office of Research Integrity. Definition of research misconduct. Accessed May 22, 2018, from https://ori.hhs.gov/definition-misconduct.
SIDEBAR 2
The Most Common Types of Errors
Mistakes in peer-reviewed papers are “easy to find but hard to fix,” according to David B. Allison, PhD, from the department of biostatistics at the University of Alabama’s School of Public Health. In an article published in Nature, Dr. Allison and co-authors analyzed dozens of peer-reviewed studies published in the field of obesity research to identify errors or miscalculations that sully the scientific record.
While some articles described mathematically or physiologically impossible results, the authors also identified three common “substantial and invalidating” errors:
• Mistaken design or analysis of cluster-randomized trials: In these studies, all participants in a cluster are given the same treatment. The number of clusters (not just the number of individuals) must be incorporated into the analysis. Otherwise, associations or differences reported in results sections may be found, falsely, to be statistically significant. Designs with only one cluster per treatment are not valid as randomized experiments, regardless of how many individuals are included.
• Miscalculation in meta-analyses: Effect sizes are often miscalculated when meta-analysts are confronted with incomplete information and do not adapt appropriately. Different study designs and meta-analyses require different approaches. Incorrect or inconsistent choices can change effect sizes, study weighting, or the overall conclusions.
• Inappropriate baseline comparisons: Rather than comparing “differences in nominal significance” (the DINS error), differences between groups must be compared directly. For studies comparing two equal-sized groups, the DINS error can inflate the false-positive rate from 5 percent to as much as 50 percent.
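The near-50-percent inflation can be demonstrated with a small simulation. The setup below is an illustrative assumption, not taken from the Nature article: both groups share the same true improvement (so there is no real between-group difference), and the sample size is chosen so each within-group test has roughly 50 percent power, the worst case for the DINS error:

```python
import random
import statistics

random.seed(0)

def within_group_significant(sample, z_crit=1.96):
    # one-sample z-test of the group's mean change against zero
    # (population SD assumed known and equal to 1, a simplifying assumption)
    n = len(sample)
    z = statistics.mean(sample) * (n ** 0.5)
    return abs(z) > z_crit

def dins_false_positive(n, true_effect):
    # both groups improve by the same true amount: NO between-group difference
    a = [random.gauss(true_effect, 1) for _ in range(n)]
    b = [random.gauss(true_effect, 1) for _ in range(n)]
    # DINS reasoning: "one group's change was significant, the other's was not,
    # therefore the groups differ" -- an invalid comparison
    return within_group_significant(a) != within_group_significant(b)

n = 25
true_effect = 1.96 / (n ** 0.5)   # gives each within-group test ~50% power
trials = 5000
rate = sum(dins_false_positive(n, true_effect) for _ in range(trials)) / trials
print(round(rate, 2))  # close to 0.5: half the trials falsely suggest a group difference
```

With each within-group test significant about half the time, "exactly one significant" happens with probability about 2 × 0.5 × 0.5 = 0.5, which is where the 50 percent figure comes from; a direct two-sample comparison of the groups would hold the false-positive rate at the nominal 5 percent.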
Source: Allison DB, Brown AW, George BJ, Kaiser KA. Reproducibility: a tragedy of errors. Nature. 2016;530:27-9.