Best practice science in the age of irreplicability

Written by Thomas Wallis

Replication, the ability to find the same effect repeatedly in independent experiments, is a cornerstone of experimental science. However, the published literature is likely full of irreplicable results [1, 2]. There are many reasons for this problem, but the root cause is arguably that the incentive structure of science has selected for flashiness and surprise rather than for truth and rigour. Authors who publish in high-impact journals tend to be rewarded with jobs, grants, and career success, whether or not the result turns out to actually be replicable [3]. These incentives can facilitate poor experimental and statistical practices that make faulty conclusions more likely. Jens Klinzing wrote a nice overview of these issues for the last edition of Neuromag [4]. In this article, I'm going to take his lead and discuss in more detail a few practices that you can incorporate into your work, now and in the future, to help ensure the quality of scientific output.

I believe that the vast majority of scientists are honestly trying to do the best and most accurate science they can. One of the most startling realisations I have had over the past few years, however, is how easy it is for even well-intentioned researchers to unconsciously mislead themselves (and thus also the larger scientific community) [5]. In practice, it is almost always the case that numerous decisions about how to test the research question of interest are made during or after the process of data collection. By allowing our analyses to depend on the particular data we observe in an experiment, we invite the possibility that we are shining a spotlight on noise: random fluctuations that are a property of this dataset and not a replicable property of the world at large.

The garden of forking paths

In an article you should definitely read, Gelman and Loken characterise this as walking through a "garden of forking paths" [6]. The point this article makes, which should give us all pause, is that even if you do not sit and try a bunch of different analyses until you find the one that "worked" (i.e. gave the result you wanted), you might still be on thin inferential ice. Given a different dataset (but the same experiment), you might have done a different analysis and possibly drawn a different conclusion.
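A toy simulation can make this concrete. The sketch below is my own illustration, not from Gelman and Loken's article: thousands of experiments are simulated in a world where there is no true effect, and a single pre-specified test is compared against an analyst who may settle on any of a few common, plausible analysis paths after seeing the data. The particular paths (a log transform, outlier removal, collecting more data) and all numbers are assumptions chosen for illustration; the code requires NumPy and SciPy.

```python
# Toy "garden of forking paths" simulation (illustrative assumptions:
# two groups of 20, no true effect, four plausible analysis paths).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_experiments = 5000
alpha = 0.05

def trim_outliers(x):
    """Drop points more than 2 SD from the mean (a common ad hoc choice)."""
    return x[np.abs(x - x.mean()) < 2 * x.std()]

fixed_hits = 0   # significant results from the single pre-specified test
forked_hits = 0  # significant results when any path may be chosen post hoc

for _ in range(n_experiments):
    # Null world: both groups are drawn from the same distribution.
    a = rng.normal(0.0, 1.0, 20)
    b = rng.normal(0.0, 1.0, 20)

    # Path 1: the pre-specified analysis, a two-sample t-test.
    p1 = stats.ttest_ind(a, b).pvalue

    # Path 2: "the data looked skewed, so we log-transformed"
    # (shifting so all values are positive before taking the log).
    shift = 1.0 - min(a.min(), b.min())
    p2 = stats.ttest_ind(np.log(a + shift), np.log(b + shift)).pvalue

    # Path 3: "we removed outliers, as is standard in our lab".
    p3 = stats.ttest_ind(trim_outliers(a), trim_outliers(b)).pvalue

    # Path 4: "the effect emerged once we collected 10 more per group".
    p4 = stats.ttest_ind(np.r_[a, rng.normal(0.0, 1.0, 10)],
                         np.r_[b, rng.normal(0.0, 1.0, 10)]).pvalue

    fixed_hits += p1 < alpha
    # If the analysis choice can depend on the data, the relevant error
    # rate is that of the *best* path, not of any single test.
    forked_hits += min(p1, p2, p3, p4) < alpha

print(f"False-positive rate, pre-specified test: {fixed_hits / n_experiments:.3f}")
print(f"False-positive rate, forking paths:      {forked_hits / n_experiments:.3f}")
```

Each individual path is a perfectly valid test on its own, yet because the choice among them can depend on the data, the effective false-positive rate climbs well above the nominal 5%. That is the statistical consequence of the garden of forking paths: the paths you might have taken count against you, even if you only ever walked one of them.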