Editorial

Instead of “playing the game” it is time to change the rules: Registered Reports at AIMS Neuroscience and beyond

  • Received: 01 May 2014 Accepted: 04 May 2014 Published: 08 May 2014
  • Citation: Christopher D. Chambers, Eva Feredoes, Suresh D. Muthukumaraswamy, Peter J. Etchells. Instead of “playing the game” it is time to change the rules: Registered Reports at AIMS Neuroscience and beyond[J]. AIMS Neuroscience, 2014, 1(1): 4-17. doi: 10.3934/Neuroscience.2014.1.4

    The last ten years have witnessed increasing awareness of questionable research practices (QRPs) in the life sciences [1,2], including p-hacking [3], HARKing [4], lack of replication [5], publication bias [6], low statistical power [7] and lack of data sharing ([8]; see Figure 1). Concerns about such behaviours have been raised repeatedly for over half a century [9,10,11] but the incentive structure of academia has not changed to address them.

    Figure 1. The hypothetico-deductive model of the scientific method is compromised by a range of questionable research practices (QRPs; red). Lack of replication impedes the elimination of false discoveries and weakens the evidence base underpinning theory. Low statistical power increases the chances of missing true discoveries and reduces the likelihood that obtained positive effects are real. Exploiting researcher degrees of freedom (p-hacking) manifests in two general forms: collecting data until analyses return statistically significant effects, and selectively reporting analyses that reveal desirable outcomes. HARKing, or hypothesizing after results are known, involves generating a hypothesis from the data and then presenting it as a priori. Publication bias occurs when journals reject manuscripts on the basis that they report negative or undesirable findings. Finally, lack of data sharing prevents detailed meta-analysis and hinders the detection of data fabrication.

    Despite the complex motivations that drive academia, many QRPs stem from the simple fact that the incentives which offer success to individual scientists conflict with what is best for science [12]. On the one hand are a set of gold standards that centuries of the scientific method have proven to be crucial for discovery: rigour, reproducibility, and transparency. On the other hand are a set of opposing principles born out of the academic career model: the drive to produce novel and striking results, the importance of confirming prior expectations, and the need to protect research interests from competitors. Within a culture that pressures scientists to produce rather than discover, the outcome is a biased and impoverished science in which most published results are either unconfirmed genuine discoveries or unchallenged fallacies [13]. This observation implies no moral judgement of scientists, who are as much victims of this system as they are perpetrators.

    While there is no single answer to QRPs, we believe that a key part of the solution is to reform academic publishing. For too long the life sciences have been dominated by an incentive structure in which success depends on presenting results that are impressive, novel, clear, and groundbreaking. Many journals warn authors that falling short of these benchmarks will lead to manuscripts being rejected, often before peer review even takes place. Such policies are born from the hubristic ambition—and commercial imperative—of each journal to be regarded as the outlet of choice for the most important discoveries in its particular field, with the highest-impact journals setting the standard for such policies.

    These incentives do great harm to both science and scientists. By selecting which papers to publish based on the results of experiments, we motivate scientists to engage in dishonesty and delusion, burying negative findings in the file drawer and spinning cherry picked data into “publishable” packages.

    Reforming this system will require us to apply the rigour of the scientific method to the editorial process itself. Fortunately we already have a precedent for such a mechanism. In clinical drug trials it is standard practice for researchers to be initially blinded as to whether individual patients are receiving the treatment or the placebo—this prevents the desires of the researchers from biasing the study outcome. We believe this logic should also apply to scientific publishing: in reaching editorial decisions, journal editors should remain blind to the results in order to minimize the effect of their own bias on the scientific record.

    Registered Reports

    The most secure way to prevent publication bias, p-hacking, and HARKing is for authors to pre-register their hypotheses and planned analyses before the collection of data. In return, if the rationale and methods are sound then the journal should agree to publish the final paper regardless of the specific outcome. This format of publishing has become known as the Registered Reports (RR) model and was first introduced in 2013 by the journal Cortex [14] together with a related format at Perspectives on Psychological Science.1 Since then, RRs have been taken up and launched by several other journals, including Attention, Perception & Psychophysics [15], Experimental Psychology [16], Drug and Alcohol Dependence [17], Social Psychology [18], and now AIMS Neuroscience.

    1 https://www.psychologicalscience.org/index.php/replication

    The RR model integrates pre-registration into scientific publishing through a two-stage peer review process, summarised in Figure 2. Authors initially submit a Stage 1 manuscript that includes an Introduction, Methods, and the results of any pilot experiments that motivate the research proposal.

    Figure 2. The submission pipeline and review criteria for Registered Reports at AIMS Neuroscience. Further details can be found at http://www.aimspress.com/reviewers.pdf.

    Following assessment of the protocol by editors and reviewers, the manuscript can then be offered in principle acceptance (IPA), which means that the journal virtually guarantees publication if the authors conduct the experiment in accordance with their approved protocol. With IPA in hand, the researchers can then implement the experiment. Following data collection, they resubmit a Stage 2 manuscript that includes the Introduction and Methods from the original submission plus the Results and Discussion. The Results section includes the outcome of the pre-registered analyses together with any additional unregistered analyses in a separate section titled “Exploratory Analyses”. Authors must also share their data on a public and freely accessible archive such as Figshare and are encouraged to release data analysis scripts. The final article is published only after this process is complete. A published Registered Report will thus appear very similar to a standard research report but will give readers confidence that the hypotheses and main analyses are free of QRPs. Detailed author and reviewer guidelines to RRs are available on the AIMS Neuroscience website at http://www.aimspress.com/reviewers.pdf.

    The RR model at AIMS Neuroscience will also incorporate the Badges initiative maintained by the Center for Open Science, which recognises adherence to transparent research practices. All RRs will automatically qualify for the Preregistration and Open Data badges. Authors who additionally share data analysis scripts will be awarded an Open Materials badge. The badges will be printed in the title bar of published articles. For more information on badges, authors are referred to https://osf.io/tvyxz/wiki/home/ or invited to contact the AIMS Neuroscience editorial board at Neuroscience@aimspress.com.

    Responses to questions and concerns about Registered Reports

    In June 2013, a group of more than 80 scientists signed an open letter to the Guardian newspaper in the United Kingdom calling for RRs to be introduced across the life sciences [19]. The signatories to this letter included researchers and members of journal editorial boards spanning epidemiology, genetics, clinical and experimental medicine, neuroscience, physiology, pharmacology, psychiatry, and psychology. In response, a number of neuroscientists voiced concerns about the RR model and possible unintended consequences of this new format [20,21]. We respond below to 25 of their most pressing concerns and related FAQs.

    1. Can’t authors cheat the Registered Reports model by ‘pre-registering’ a protocol for a study they have already completed?

    Under the current RR model this is not possible without committing fraud. When authors submit a Stage 2 manuscript it must be accompanied by a basic laboratory log indicating the range of dates during which data collection took place together with a certification from all authors that no data was collected prior to the date of IPA (other than pilot data included in the Stage 1 submission). Time-stamped raw data files generated by the pre-registered study must also be shared publicly, with the time-stamps post-dating IPA. Submitting a Stage 1 protocol for a completed study would therefore constitute an act of deliberate misconduct.

    Beyond these measures, fraudulent pre-registration would backfire for authors because editors are likely to require revisions to the proposed experimental procedures following Stage 1 review. Even minor changes to the protocol would of course be impossible if the experiment had already been conducted, and would therefore defeat the purpose of pre-registration. Unless authors were willing to engage in further dishonesty about what their experimental procedures involved, “pre-registering” a completed study would be a highly ineffective publication strategy.

    It bears mention that no publishing mechanism, least of all the status quo, can protect science against complex and premeditated acts of fraud. By requiring certification and data sharing, the RR model closes an obvious loophole that opportunistic researchers could otherwise exploit without committing outright fraud. But what RRs achieve, above all, is to incentivise adherence to the scientific method by eliminating the pressure to massage data, reinvent hypotheses, or behave dishonestly in the first place.

    2. Won’t Registered Reports become a dumping ground for negative or ambiguous findings that have little impact?

    For studies that employ null hypothesis significance testing (NHST), adequate statistical power is crucial for interpreting all findings, whether positive or negative. Low power not only increases the chances of missing genuine effects; it also reduces the likelihood that statistically significant effects are genuine [7]. To address both concerns, RRs that include NHST-based analyses must include a priori power of ≥ 90% for all tests of the proposed hypotheses. Ensuring high statistical power increases the credibility of all findings, regardless of whether they are clearly positive, clearly negative or inconclusive.
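
    To give a sense of what the ≥ 90% power requirement means in practice, the following minimal sketch computes a required sample size for a simple two-group design in Python with statsmodels; the assumed effect size (Cohen's d = 0.5) and the two-group t-test design are purely illustrative choices, not values prescribed by the RR guidelines.

        import math
        from statsmodels.stats.power import TTestIndPower

        # Sample size per group needed for 90% power to detect d = 0.5 in a
        # two-sided independent-samples t-test at alpha = .05 (illustrative values).
        n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05,
                                                  power=0.90, alternative='two-sided')
        print(math.ceil(n_per_group))  # roughly 86 participants per group for these inputs

    Because the required sample size scales roughly with 1/d², halving the assumed effect size approximately quadruples the number of participants needed, which is why the power requirement bites hardest on studies of small effects.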

    It is of course the case that statistical non-significance, regardless of power, can never be taken as direct support for the null hypothesis. However this reflects a fundamental limitation of NHST rather than a shortcoming of the RR model. Authors wishing to estimate the likelihood of any given hypothesis being true, whether H0 or H1, are welcome to adopt alternative Bayesian inferential methods as part of their RR submissions [22,23,24].

    3. Won’t Registered Reports limit data exploration and the reporting of unexpected results?

    This is one of the most commonly voiced concerns about RRs and would be a legitimate worry if the RR model limited the reporting of study outcomes to pre-registered analyses only. However, we stress that no such constraint applies at AIMS Neuroscience or for RRs launched at any other journal. To be clear, the RR model places no restrictions on the reporting of unregistered exploratory analyses — it simply requires that the Results section of the final article distinguishes those analyses that were pre-registered and confirmatory from those that were post hoc and exploratory. Ensuring a clear separation between confirmatory hypothesis testing and exploratory analysis is vital for preserving the evidential value of both forms of enquiry [11].

    In contrast to what several critics have suggested [20,21], RRs will not hinder the reporting of unexpected or serendipitous findings. On the contrary, the RR model will protect such observations from publication bias. Editorial decisions for RRs are made independently of whether the results support the pre-registered hypotheses; therefore Stage 2 manuscripts cannot be rejected because editors or reviewers find the outcomes of hypothesis testing to be surprising, counterintuitive, or unappealing. This stands in contrast to conventional peer review, where editorial decisions are routinely based on the extent to which results conform to prior expectations or desires.

    4. How are Registered Reports different from clinical trial registration?

    Some scientists have criticised the RR model on the grounds that it apparently reinvents the existing pre-registration mechanism used in medical research and attempts to apply it to basic science. This argument overlooks the three key features that advance RRs beyond clinical trial registration. First, RRs enjoy continuity of the review process from the initial Stage 1 submission to the final publication, thus ensuring that authors remain true to their registered protocol. This is particularly salient given that only 1 in 3 peer reviewers of clinical research compare authors’ protocols to their final submitted manuscripts [25]. Second, in contrast to RRs, most forms of clinical trial registration (e.g. clinicaltrials.gov) do not peer review study protocols, which provides the opportunity for authors to (even unconsciously) include sufficient “wiggle room” in the methods or proposed analyses to allow later p-hacking or HARKing [2,4,26]. Third, even in the limited cases where journals do review and publish trial protocols (e.g. Lancet Protocol Reviews, BMC Protocols, Trials), none of these formats provides any guarantee that the journal will publish the final outcome. These features of the RR model ensure that it represents a substantial innovation over and above existing systems of study pre-registration.

    5. Why are Registered Reports needed for grant-funded research? Isn’t the process of grant assessment in itself a form of pre-registration?

    There are many differences between these types of review. The level of detail assessed in RRs far exceeds that of grants — a funding application typically includes only a general or approximate description of the methods to be employed, whereas a Stage 1 RR includes a step-by-step account of the experimental procedures and analysis plan. Furthermore, since researchers frequently deviate from their funded protocols, which themselves are rarely published, the suitability of successful funding applications as registered protocols is extremely limited. Finally, RRs are intended to provide an option that is suitable for all applicable research, not only studies that are supported by grant funding.

    6. Where authors are unable to predict the likely effect size for an experiment, how can they report a power analysis as part of a Stage 1 submission?

    Statistical power analysis requires prior estimation of expected effect sizes. Because our research culture emphasizes the need for novelty of both methods and results, it is understandable that researchers may sometimes feel there is no appropriate precedent for their particular choice of methodology. In such cases, however, a plausible effect size can usually be gleaned from related prior work. After all, very few experimental designs are truly unique and exist in isolation. Even when the expected effect size is inestimable, the RR model welcomes the inclusion of pilot results in Stage 1 submissions to establish probable effect sizes for subsequent pre-registered experimentation.

    Where expected effect sizes cannot be estimated and authors have no pilot data, a minimal effect size of theoretical interest can still be used to determine a priori power. Authors can also adopt NHST approaches with corrected peeking (e.g. [27]) or Bayesian methods that specify a prior distribution of possible effect sizes rather than a single point estimate [24]. Interestingly, in informal discussions, some researchers — particularly in neuroimaging — have responded to concerns about power on the grounds that they do not care about the size of an effect, only whether or not an effect is present. The problem with this argument is that if the effect under scrutiny has no lower bound of theoretical importance then the experimental hypothesis (H1) becomes unfalsifiable, regardless of power.
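
    For authors weighing the Bayesian route mentioned above, the sketch below shows one simple way a prior distribution over effect sizes can replace a single point estimate, in the spirit of Dienes [24]: the likelihood of the observed effect is averaged over a normal prior under H1 and compared against a point-null H0. The function name, the normal prior, and the example numbers are our own illustrative assumptions, not a procedure mandated by AIMS Neuroscience.

        import numpy as np
        from scipy import stats, integrate

        def bayes_factor_h1_h0(observed_effect, se, prior_sd, prior_mean=0.0):
            # Likelihood of the observed effect for a candidate true effect delta,
            # approximated as normal with the observed standard error.
            likelihood = lambda delta: stats.norm.pdf(observed_effect, loc=delta, scale=se)
            # H1: average the likelihood over a normal prior on plausible effect sizes.
            prior = lambda delta: stats.norm.pdf(delta, loc=prior_mean, scale=prior_sd)
            marginal_h1, _ = integrate.quad(lambda d: likelihood(d) * prior(d),
                                            -np.inf, np.inf)
            # H0: likelihood of the data under a true effect of exactly zero.
            return marginal_h1 / likelihood(0.0)

        # e.g. an observed difference of 4 units (SE = 2), where H1 predicts effects
        # somewhere in the region of 0 to 10 units (prior SD = 5):
        print(bayes_factor_h1_h0(observed_effect=4.0, se=2.0, prior_sd=5.0))

    Bayes factors well above 1 favour H1, values well below 1 favour H0, and values near 1 indicate that the data are insensitive either way, a verdict that NHST cannot deliver.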

    7. Setting a requirement of 90% for statistical power is unrealistic for expensive methods and would require impossibly large sample sizes. The Registered Reports format therefore disadvantages researchers who work with expensive techniques or who have limited resources.

    It is true that RRs are not suitable for underpowered experiments. But underpowered experiments themselves are detrimental to science, dragging entire fields down blind alleys and limiting the potential for reproducibility [28]. We would argue that if a particular research field systematically fails to reach the standards of statistical power set by RRs then this is not “unfair” but rather a deep problem within that field that needs to be addressed. One solution is to combine resources across research teams to increase power, such as the highly successful IMAGEN fMRI consortium [29].

    8. If reviewers have only the proposed design and methods to assess, won’t they rely more on the reputation of the authors in judging study protocols? This could make life harder for scientists who are more junior or less influential.

    Structured review criteria mean that reviewers must find concrete reasons for arguing that a Stage 1 submission is inappropriate or flawed. Author reputation is not among them. To provide further assurance, the RR format at AIMS Neuroscience will employ masked review in which the reviewers are blinded as much as possible to the identity of the authors.

    9. Sometimes a design is sound, but the data are uninterpretable because researchers run the experiment poorly. How will Registered Reports distinguish negative findings and unexpected results arising from poor practice from those that are genuine?

    As noted in the Stage 1 publication criteria, authors must include outcome-neutral conditions for ensuring that the proposed methods are capable of testing the stated hypotheses. These might include positive control conditions, manipulation checks, and standard benchmarks such as the absence of floor and ceiling effects. Manuscripts that fail to specify these criteria will not be offered IPA, and Stage 2 submissions that fail any critical outcome-neutral tests will not be accepted for publication.

    10. If publication is guaranteed in advance, why would researchers bother running their experiments carefully? This scheme could incentivize false negatives arising from sloppy research practices.

    For this criticism to be valid, scientists would need to be motivated solely by the act of publishing, with no desire to make true discoveries or to build a coherent body of research findings across multiple publications. We are more optimistic about the motivations of the scientific community, but nevertheless, it is important to note that running a pre-registered study carelessly would also sabotage the outcome-neutral tests (see #9) that are necessary for final acceptance of the Stage 2 submission.

    11. Stage 1 submissions must have institutional ethical approval to be considered for IPA, and such ethical approval can be highly specific. This means that if a researcher has to change anything about their study design to obtain IPA, the ethics application would need to be amended and resubmitted to the ethics committee. This back-and-forth will be too time-consuming and bureaucratic for many researchers.

    This is a legitimate concern with no easy solution. An ideal strategy, where possible, is to build in minor procedural flexibility when applying for ethics approval. The RR editorial team at AIMS Neuroscience is happy to provide letters of support for authors seeking to amend ethics approval following Stage 1 peer review.

    12. How will RRs prevent pre-registrations for studies that have no funding or approvals and will never actually happen?

    Some scientists have argued that the RR model could increase the workload for reviewers if authors were to deliberately submit more protocols than they could carry out. As one critic put it: “Pre-registration sets up a strong incentive to submit as many ideas/experiments as possible to as many high impact factor journals as possible” [20]. Armed with IPA, the researcher could then prepare grant applications to support only the successful protocols, discarding the rejected ones. Were such a strategy to be widely adopted it could indeed overburden the peer review system.

    Again, however, this problem does not apply to the RR model at AIMS Neuroscience or at any of the other journals where the format has been launched. All Stage 1 submissions must include a cover letter stating that all necessary support (e.g. funding, facilities) and approvals (e.g. ethics) are already in place and that the researchers could start immediately following IPA. Since these guarantees could not be made for unsupported proposals, this concern is moot.

    13. Pre-registration of hypotheses and analysis plans is too arduous to be feasible for authors.

    The amount of work required to prepare an RR is similar to conventional manuscript preparation; the key difference is that much of the work is done before, rather than after, data collection. The fact that researchers often decide their hypotheses and analysis strategies after an experiment is complete [2] doesn’t change the fact that these decisions still need to be made. And the reward for thinking through these decisions in advance, rather than at the end, is that IPA virtually guarantees a publication.

    14. The peer review process for Registered Reports includes two phases. Won’t this create too much additional work for reviewers?

    It is true that peer review under the RR model is more thorough than conventional manuscript review. However, critics who raise this point overlook a major shortcoming of the conventional review process: the fact that manuscripts are often rejected sequentially by multiple journals, passing through many reviewers before finding a home. Under the RR model, at least two of the problems that lead to such systematic rejection, and thus additional load on reviewers, are circumvented. First, reviewers of Stage 1 submissions have the opportunity to help authors correct methodological flaws before they occur by assessing the experimental design prior to data collection. Second, because RRs cannot be rejected based on the perceived importance of the results, the RR model avoids a common reason for conventional rejection: that the results are not considered sufficiently novel or groundbreaking.

    We believe the overall reviewer workload under the RR model will be similar to conventional publishing. Consider a case where a conventional manuscript is submitted sequentially to four journals, and the first three journals reject it following 3 reviews each. The fourth journal accepts the manuscript after 3 reviews and 3 re-reviews. In total the manuscript will have been seen by up to 12 reviewers and received 15 individual reviews. Now consider what might have happened if the study had been submitted prior to data collection as a Stage 1 RR, assessed by 3 reviewers. Even if it passes through three rounds of Stage 1 review plus two rounds of Stage 2 review, the overall reviewer burden (15 reviews) is the same as under the conventional model (15 reviews).
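
    For readers who want to check the tally, and assuming three reviewers per round throughout (our assumption, matching the example above), the arithmetic works out as follows:

        # Hypothetical reviewer-load tally for the example above (3 reviewers per round).
        conventional_reviews = 3 * 3 + (3 + 3)  # 3 journals x 3 reviews, then 3 reviews + 3 re-reviews = 15
        registered_reviews = 3 * (3 + 2)        # 3 reviewers x (3 Stage 1 rounds + 2 Stage 2 rounds) = 15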

    15. Reviewers could steal my ideas at the pre-registration stage and scoop me.

    This is an understandable concern but highly unlikely. Only a small group of individuals will know about Stage 1 submissions, including the editors plus a small set of reviewers; and the information in Stage 1 submissions is not published until the study is completed. It is also noteworthy that once IPA is awarded, the journal cannot reject the final Stage 2 submission because similar work was published elsewhere in the meantime. Therefore, even in the unlikely event of a reviewer rushing to complete a pre-registered design ahead of the authors, such a strategy would confer little career advantage for the perpetrator (especially because the ‘manuscript received’ date in the final published RR refers to the initial Stage 1 submission date and so will predate the ‘manuscript received’ date of any standard submission published by a competitor). Concerns about being scooped do not stop researchers applying for grant funding or presenting ideas at conferences, both of which involve releasing ideas to a larger group of potential competitors than would typically see a Stage 1 RR.

    16. What is to stop authors with IPA withdrawing their manuscript after getting striking results and resubmitting it to a higher impact journal?

    Nothing. Contrary to some stated concerns [20], authors are free to withdraw their manuscript at any time and are not “locked” into publishing with the journal that reviews the Stage 1 submission. If the withdrawal happens after IPA has been awarded, the journal will simply publish a Withdrawn Registration that includes the abstract from the Stage 1 submission plus a brief explanation for the withdrawal.

    17. Some of my analyses will depend on the results, so how can I pre-register each step in detail?

    Pre-registration does not require every step of an analysis to be specified or “hardwired”; where an analysis decision is contingent on some aspect of the data itself, the pre-registration need only specify the decision tree (e.g. “If A is observed then we will adopt analysis A1 but if B is observed then we will adopt analysis B1”). Authors can thus pre-register the contingencies and rules that underpin future analysis decisions.
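
    As a purely hypothetical illustration of such a contingency (the tests, threshold, and function name below are invented for this example and are not part of any RR requirement), a Stage 1 analysis plan could pin down the decision rule in code before any data exist:

        from scipy import stats

        def preregistered_primary_analysis(group_a, group_b, normality_alpha=0.05):
            # Pre-registered rule: which branch runs depends on the data,
            # but the decision tree itself was fixed before data collection.
            _, p_a = stats.shapiro(group_a)
            _, p_b = stats.shapiro(group_b)
            if p_a > normality_alpha and p_b > normality_alpha:
                # Analysis A1: Welch's t-test if both groups look normally distributed.
                return "A1", stats.ttest_ind(group_a, group_b, equal_var=False)
            # Analysis B1: Mann-Whitney U test otherwise.
            return "B1", stats.mannwhitneyu(group_a, group_b, alternative="two-sided")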

    It bears reiterating that not all analyses need to be pre-registered — authors are welcome to report the outcome of exploratory tests for which specific steps or contingencies could not be determined in advance of data collection.

    18. I have access to an existing data set that I have not yet analysed. Can I submit this proposed analysis as a Registered Report?

    No. The current RR model applies only to original research. However, we agree that a pre-registered article type for existing data sets would be a useful addition to the RR format.

    19. My aim is to publish a series of experiments but the design of the later experiments is contingent upon the outcomes of the earlier ones. Isn’t Registered Reports limited to single experiments?

    No. The RR model welcomes sequential registrations in which authors add experiments at Stage 1 via an iterative mechanism and complete them at Stage 2. With each completed cycle, the previous accepted version of the paper is guaranteed to be published, regardless of the outcome of the next round of experimentation. Authors are also welcome to submit a Stage 1 manuscript that includes multiple parallel experiments.

    20. How will RRs incentivise replications?

    Replications can be expensive and carry little career benefit for authors. Ensuring that a direct replication is convincing often requires a much larger sample than the original study. Once completed, a replication can be difficult to publish because many journals refuse to publish them, regardless of the outcome. Unless there is an assurance of publication in advance, it can make little sense from a strategic point of view for life scientists to pursue direct replications. IPA provides this assurance and thus offers the strongest possible incentive for scientists to directly replicate previous work.

    21. Registered Reports is based on a naïve conceptualisation of the scientific method.

    We believe this criticism is misplaced. Some scientists may well believe that the hypothetico-deductive model is the wrong way to frame science, but if so, why do they routinely publish articles that test hypotheses and report p values? Those who oppose the hypothetico-deductive model are not raising a specific argument against RRs — they are criticising the fundamental way research is taught and published in the life sciences. We are agnostic about such concerns and simply note that the RR model aligns the way science is taught with the way it is published.

    22. As a junior researcher, I need to publish in high-impact journals. Until Nature / Science / PNAS offer Registered Reports, why would I settle for publishing in a specialist journal?

    This is a legitimate concern that will not be settled until life scientists either dispel the myth that journal hierarchy reflects quality [30] or the most prestigious journals offer RRs. The RR model is spreading quickly to many journals, and we believe it is only a matter of time before high-impact outlets come on board. In the meantime there are several rewards for junior scientists who choose the RR model where it is available. First, because RRs are immune to publication bias they ensure that high quality science is published regardless of the outcome. This means that a PhD student could publish every high-quality study from their PhD rather than selectively publishing the studies that “worked”. Second, a PhD student who submits RRs has the opportunity to gain IPA for several papers before even submitting their PhD, which in the stiff competition for post-doctoral jobs may provide an edge over graduates with fewer deliverables to show. Third, because RRs neutralise various QRPs, such as p-hacking, HARKing and low statistical power, it is likely that the findings they contain will be more reproducible, on average, than those in comparable unregistered articles.

    23. Much of my research stems from student projects, which operate over too short a time scale to be suitable for Registered Reports.

    This is a legitimate concern that cannot be solved by the RR model. However, one way authors can address this is to design and pre-register student projects several months before students commence. Although the cover letter for RRs requires certification that the study could commence immediately, it is possible to negotiate a delayed commencement date with the editors.

    24. Registered Reports may not apply to my specific field therefore it is not a good solution.

    Contrary to what some critics have suggested [21], the RR model has never been proposed as a “panacea” for all fields of science or all sub-disciplines within fields. On the contrary we have emphasised that “pre-registration doesn't fit all forms of science, and it isn't a cure-all for scientific publishing” [19]. Furthermore, to suggest that RRs are invalid because they don't solve all problems is to fall prey to the perfect solution fallacy in which a useful partial solution is discarded in favour of a non-existent complete solution.

    Some scientists have further prompted us to explain which types of research the RR model applies to and which it does not [20]. Ultimately such decisions are for the scientific community to reach as a whole, but we believe that the RR model is appropriate for any area of hypothesis-driven science that suffers from publication bias, p-hacking, HARKing, low statistical power, or a lack of direct replication. If none of these problems exist or the approach taken isn’t hypothesis-driven then the RR model need not apply because nothing is gained by pre-registration.

    25. Registered Reports may become seen as the gold standard for scientific publishing, which would unfairly disadvantage exploratory or observational studies that cannot be pre-registered.

    This need not be the case. It bears reiterating that the RR model does not prevent or hinder exploration — it simply enables readers to distinguish confirmatory hypothesis testing from exploratory analysis. Under the conventional publishing system, scientists are pressured to engage in QRPs in order to present exploration as confirmation (e.g. HARKing). Some researchers may even apply NHST in situations where it is not appropriate because there is no a priori hypothesis to be tested. Distinguishing confirmation from exploration can only disadvantage scientists who rely on exploratory approaches but, at the same time, feel entitled to present them as confirmatory.

    We believe this concern reflects a deeper problem that the life sciences do not adequately value exploratory, non-hypothesis driven research. Rather than threatening to devalue exploratory research, the RR model is the first step toward liberating it from this hegemony and increasing its traction. Once the boundaries between confirmation and exploration are made clear we can be free to develop a format of publication that is dedicated solely to reporting exploratory and observational studies. We know of at least one journal that, having launched RRs, is now poised to trial a complementary “Exploratory Reports” format. Rather than criticizing the RR model for devaluing exploration, our community would do better to push reforms that highlight the benefits of purely exploratory research.

    Conclusions

    The unique selling point of Registered Reports is its prevention of publication bias and QRPs in hypothesis testing. This article has introduced the format at AIMS Neuroscience and outlined our response to a number of the most pressing concerns. As we noted at the outset, our purpose is not to preach or to admonish — it is to inspire researchers to consider an alternative pathway and to provide a clear incentive for engaging in transparent practices. Pre-registration carries the reward of assuring readers that a scientist’s practices adhered to the hypothetico-deductive method, which benefits individual practitioners as much as it benefits science. As Leif Nelson has put it:

    Note that this line of thinking is not even vaguely self-righteous. It isn’t pushy. I am not saying, “you have to preregister or else!” Heck, I am not even saying that you should; I am saying that I should. In a world of transparent reporting, I choose preregistration as a way to selfishly show off that I predicted the outcome of my study. [31]

    Over the coming years we look forward to seeing Registered Reports expand across more journals and scientific fields, challenging traditional hegemonies and altering incentive structures. In parallel with this initiative, pre-registration of basic science is growing in prominence at the Open Science Framework (https://osf.io/), and the 2013 revision of the Declaration of Helsinki (DoH) now requires some form of study pre-registration for all research involving humans [32]. Although this requirement technically applies only to clinical research, many of the major journals that publish basic neuroscience also request or require adherence to the DoH, such as the Journal of Neuroscience2 and Cerebral Cortex3. The Registered Reports model complements these advances by incentivising rigour and reproducibility.

    2 http://www.sfn.org/Advocacy/Policy-Positions/Policies-on-the-Use-of-Animals-and-Humans-in-Research

    3 http://www.oxfordjournals.org/our_journals/cercor/for_authors/general.html

    The feedback we have received from the academic community has been invaluable in helping us shape this initiative. In return we hope this editorial responds usefully to some of the most frequent questions and concerns.

    Acknowledgements

    We are grateful to many of our colleagues and critics for helpful discussions, including but not limited to: Sven Bestmann, Ananyo Bhattacharya, Dorothy Bishop, Jon Brock, Kate Button, Molly Crockett, Sergio Della Sala, Andrew Gelman, Alex Holcombe, Alok Jha, James Kilner, Daniël Lakens, Natalia Lawrence, Jason Mattingley, Marcus Munafò, Kevin Murphy, Neuroskeptic, Bas Neggers, Brian Nosek, Hal Pashler, Elena Rusconi, Chris Said, Sophie Scott, Sam Schwarzkopf, Dan Simons, Uri Simonsohn, Mark Stokes, Petroc Sumner, Frederick Verbruggen, EJ Wagenmakers, Matt Wall, and Ed Yong.

    [1] Ioannidis JPA. (2005) Why Most Published Research Findings Are False. PLoS Med 2: e124. doi: 10.1371/journal.pmed.0020124
    [2] John LK, Loewenstein G, Prelec D. (2012) Measuring the prevalence of questionable research practices with incentives for truth telling. Psychol Sci 23: 524-532. doi: 10.1177/0956797611430953
    [3] Simmons JP, Nelson LD, Simonsohn U. (2011) False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychol Sci 22: 359-366.
    [4] Kerr NL. (1998) HARKing: hypothesizing after the results are known. Pers Soc Psychol Rev 2: 196-217. doi: 10.1207/s15327957pspr0203_4
    [5] Makel MC, Plucker JA, Hegarty B. (2012) Replications in Psychology Research: How Often Do They Really Occur? Perspect Psychol Sci 7: 537-542. doi: 10.1177/1745691612460688
    [6] Fanelli D. (2010) “Positive” Results Increase Down the Hierarchy of the Sciences. PLoS One 5: e10068. doi: 10.1371/journal.pone.0010068
    [7] Button KS, Ioannidis JP, Mokrysz C, Nosek BA, Flint J, et al. (2013) Power failure: why small sample size undermines the reliability of neuroscience. Nat Rev Neurosci 14: 365-376. doi: 10.1038/nrn3475
    [8] Wicherts JM, Bakker M, Molenaar D. (2011) Willingness to share research data is related to the strength of the evidence and the quality of reporting of statistical results. PLoS One 6: e26828. doi: 10.1371/journal.pone.0026828
    [9] Cohen J. (1962) The statistical power of abnormal-social psychological research: a review. J Abnorm Soc Psychol 65: 145-153. doi: 10.1037/h0045186
    [10] Sterling TD. (1959) Publication Decisions and their Possible Effects on Inferences Drawn from Tests of Significance—or Vice Versa. J Am Stat Assoc 54: 30-34.
    [11] de Groot AD. (2014) The meaning of "significance" for different types of research [translated and annotated by Eric-Jan Wagenmakers, Denny Borsboom, Josine Verhagen, Rogier Kievit, Marjan Bakker, Angelique Cramer, Dora Matzke, Don Mellenbergh, and Han L. J. van der Maas]. Acta Psychol (Amst) 148: 188-194. doi: 10.1016/j.actpsy.2014.02.001
    [12] Nosek BA, Spies JR, Motyl M. (2012) Scientific Utopia: II. Restructuring Incentives and Practices to Promote Truth Over Publishability. Perspect Psychol Sci 7: 615-631.
    [13] Ioannidis JPA. (2012) Why Science Is Not Necessarily Self-Correcting. Perspect Psychol Sci 7: 645-654. doi: 10.1177/1745691612464056
    [14] Chambers CD. (2013) Registered reports: a new publishing initiative at Cortex. Cortex 49: 609-610. doi: 10.1016/j.cortex.2012.12.016
    [15] Wolfe J. (2013) Registered Reports and Replications in Attention, Perception, & Psychophysics. Atten Percept Psychophys 75: 781-783. doi: 10.3758/s13414-013-0502-5
    [16] Stahl C. (2014) Experimental psychology: toward reproducible research. Exp Psychol 61: 1-2. doi: 10.1027/1618-3169/a000257
    [17] Munafo MR, Strain E. (2014) Registered Reports: A new submission format at Drug and Alcohol Dependence. Drug Alcohol Depend 137: 1-2. doi: 10.1016/j.drugalcdep.2014.02.699
    [18] Nosek BA, Lakens D. (in press) Registered reports: A method to increase the credibility of published results. Soc Psychol.
    [19] Chambers CD, Munafo MR. (2013) Trust in science would be improved by study pre-registration. The Guardian: http://www.theguardian.com/science/blog/2013/jun/2005/trust-in-science-study-pre-registration.
    [20] Scott SK. (2013) Will pre-registration of studies be good for psychology? https://sites.google.com/site/speechskscott/SpeakingOut/willpre-registrationofstudiesbegoodforpsychology.
    [21] Scott SK. (2013) Pre-registration would put science in chains. Times Higher Education: http://www.timeshighereducation.co.uk/comment/opinion/science-in-chains/2005954.article.
    [22] Rouder J, Speckman P, Sun D, Morey R, Iverson G. (2009) Bayesian t tests for accepting and rejecting the null hypothesis. Psychon Bull Rev 16: 225-237. doi: 10.3758/PBR.16.2.225
    [23] Wagenmakers EJ. (2007) A practical solution to the pervasive problems of p values. Psychon Bull Rev 14: 779-804. doi: 10.3758/BF03194105
    [24] Dienes Z. (2011) Bayesian Versus Orthodox Statistics: Which Side Are You On? Perspect Psychol Sci 6: 274-290. doi: 10.1177/1745691611406920
    [25] Mathieu S, Chan AW, Ravaud P. (2013) Use of trial register information during the peer review process. PLoS One 8: e59910. doi: 10.1371/journal.pone.0059910
    [26] Gelman A, Loken E. (2014) The garden of forking paths: Why multiple comparisons can be a problem, even when there is no "fishing expedition" or "p-hacking" and the research hypothesis was posited ahead of time. Unpublished manuscript: http://www.stat.columbia.edu/~gelman/research/unpublished/p_hacking.pdf.
    [27] Strube MJ. (2006) SNOOP: a program for demonstrating the consequences of premature and repeated null hypothesis testing. Behav Res Methods 38: 24-27. doi: 10.3758/BF03192746
    [28] Fiedler K, Kutzner F, Krueger JI. (2012) The Long Way From α-Error Control to Validity Proper: Problems With a Short-Sighted False-Positive Debate. Perspect Psychol Sci 7: 661-669. doi: 10.1177/1745691612462587
    [29] Whelan R, Conrod PJ, Poline JB, Lourdusamy A, Banaschewski T, et al. (2012) Adolescent impulsivity phenotypes characterized by distinct brain networks. Nat Neurosci 15: 920-925. doi: 10.1038/nn.3092
    [30] Brembs B, Button K, Munafo M. (2013) Deep impact: unintended consequences of journal rank. Front Hum Neurosci 7: 291.
    [31] Nelson LD. (2014) Preregistration: Not just for the Empiro-zealots. http://datacolada.org/2014/01/07/12-preregistration-not-just-for-the-empiro-zealots/.
    [32] World Medical Association (2013) World Medical Association Declaration of Helsinki: Ethical principles for medical research involving human subjects. JAMA 310: 2191-2194. doi: 10.1001/jama.2013.281053
    64. Karen Niven, Luke Boorman, Assumptions beyond the science: encouraging cautious conclusions about functional magnetic resonance imaging research on organizational behavior, 2016, 37, 08943796, 1150, 10.1002/job.2097
    65. Daniël Lakens, Joe Hilgard, Janneke Staaks, On the reproducibility of meta-analyses: six practical recommendations, 2016, 4, 2050-7283, 10.1186/s40359-016-0126-3
    66. Sarah J. Charles, James E. Bartlett, Kyle J. Messick, Thomas J. Coleman, Alex Uzdavines, Researcher Degrees of Freedom in the Psychology of Religion, 2019, 29, 1050-8619, 230, 10.1080/10508619.2019.1660573
    67. S. P. J. M. Horbach, W. ( Willem) Halffman, The changing forms and expectations of peer review, 2018, 3, 2058-8615, 10.1186/s41073-018-0051-5
    68. Barbara A. Spellman, Elizabeth A. Gilbert, Katherine S. Corker, 2018, 9781119170167, 1, 10.1002/9781119170174.epcn519
    69. Winston Lin, Donald P. Green, Standard Operating Procedures: A Safety Net for Pre-Analysis Plans, 2016, 49, 1049-0965, 495, 10.1017/S1049096516000810
    70. Kara Moranski, Nicole Ziegler, A Case for Multisite Second Language Acquisition Research: Challenges, Risks, and Rewards, 2021, 71, 0023-8333, 204, 10.1111/lang.12434
    71. Samuel Greiff, Lindie van der Westhuizen, Marcus Mund, John F. Rauthmann, Eunike Wetzel, Introducing New Open Science Practices at EJPA, 2020, 36, 1015-5759, 717, 10.1027/1015-5759/a000628
    72. Angela S. Attwood, Marcus R. Munafò, Navigating an open road, 2016, 70, 08954356, 264, 10.1016/j.jclinepi.2015.04.016
    73. Christian M. Stracke, 2020, Chapter 2, 978-981-15-4275-6, 17, 10.1007/978-981-15-4276-3_2
    74. Bruce Knuteson, The Solution to Science's Replication Crisis, 2016, 1556-5068, 10.2139/ssrn.2835131
    75. James A. Grand, Steven G. Rogelberg, George C. Banks, Ronald S. Landis, Scott Tonidandel, From Outcome to Process Focus: Fostering a More Robust Psychological Science Through Registered Reports and Results-Blind Reviewing, 2018, 13, 1745-6916, 448, 10.1177/1745691618767883
    76. Martin S. Hagger, Embracing Open Science and Transparency in Health Psychology, 2019, 13, 1743-7199, 131, 10.1080/17437199.2019.1605614
    77. Susan Reh, Niels Van Quaquebeke, Steffen R. Giessner, The aura of charisma: A review on the embodiment perspective as signaling, 2017, 28, 10489843, 486, 10.1016/j.leaqua.2017.01.001
    78. Gökhan Ocakoğlu, Aslı Ceren Macunluoğlu, Fatma Ezgi Can, Büşra Kaymak, Zeynep Yivlik, The opinion of sports science professionals for the benefit of statistics: an international web-based survey, 2019, 2149-3189, 10.18621/eurj.468686
    79. Haley M. Woznyj, Kelcie Grenier, Roxanne Ross, George C. Banks, Steven G. Rogelberg, Results-blind review: a masked crusader for science, 2018, 27, 1359-432X, 561, 10.1080/1359432X.2018.1496081
    80. Christof Weinhardt, Wil M. P. van der Aalst, Oliver Hinz, Introducing Registered Reports to the Information Systems Community, 2019, 61, 2363-7005, 381, 10.1007/s12599-019-00602-6
    81. Felix D. Schönbrodt, Markus Maier, Moritz Heene, Markus Bühner, Forschungstransparenz als hohes wissenschaftliches Gut stärken, 2018, 69, 0033-3042, 37, 10.1026/0033-3042/a000386
    82. Dominique Makowski, Mattan S. Ben-Shachar, S. H. Annabel Chen, Daniel Lüdecke, Indices of Effect Existence and Significance in the Bayesian Framework, 2019, 10, 1664-1078, 10.3389/fpsyg.2019.02767
    83. Eric M. Prager, Karen E. Chambers, Joshua L. Plotkin, David L. McArthur, Anita E. Bandrowski, Nidhi Bansal, Maryann E. Martone, Hadley C. Bergstrom, Anton Bespalov, Chris Graf, Improving transparency and scientific rigor in academic publishing, 2019, 97, 0360-4012, 377, 10.1002/jnr.24340
    84. Thomas Schultze, Jürgen Huber, Michael Kirchler, Andreas Mojzisch, Replications in economic psychology and behavioral economics, 2019, 75, 01674870, 102199, 10.1016/j.joep.2019.102199
    85. Herman Aguinis, George C. Banks, Steven G. Rogelberg, Wayne F. Cascio, Actionable recommendations for narrowing the science-practice gap in open science, 2020, 158, 07495978, 27, 10.1016/j.obhdp.2020.02.007
    86. Howard Bowman, Joseph L. Brooks, Omid Hajilou, Alexia Zoumpoulaki, Vladimir Litvak, Leyla Isik, Breaking the circularity in circular analyses: Simulations and formal treatment of the flattened average approach, 2020, 16, 1553-7358, e1008286, 10.1371/journal.pcbi.1008286
    87. Andreas Keil, Lisa M. Gatzke‐Kopp, János Horváth, J. Richard Jennings, Monica Fabiani, A registered report format for Psychophysiology , 2020, 57, 0048-5772, 10.1111/psyp.13663
    88. Zoltan Dienes, How Bayes factors change scientific practice, 2016, 72, 00222496, 78, 10.1016/j.jmp.2015.10.003
    89. Promoting reproducibility with registered reports, 2017, 1, 2397-3374, 10.1038/s41562-016-0034
    90. Israel Halperin, Andrew D. Vigotsky, Carl Foster, David B. Pyne, Strengthening the Practice of Exercise and Sport-Science Research, 2018, 13, 1555-0265, 127, 10.1123/ijspp.2017-0322
    91. ROBERT BLOOMFIELD, KRISTINA RENNEKAMP, BLAKE STEENHOVEN, No System Is Perfect: Understanding How Registration-Based Editorial Processes Affect Reproducibility and Investment in Research Quality, 2018, 56, 00218456, 313, 10.1111/1475-679X.12208
    92. Eric M. Prager, Karen E. Chambers, Joshua L. Plotkin, David L. McArthur, Anita E. Bandrowski, Nidhi Bansal, Maryann E. Martone, Hadley C. Bergstrom, Anton Bespalov, Chris Graf, Improving transparency and scientific rigor in academic publishing, 2019, 2, 25738348, e1150, 10.1002/cnr2.1150
    93. Anastasia Kiyonaga, Jason M. Scimeca, Practical Considerations for Navigating Registered Reports, 2019, 42, 01662236, 568, 10.1016/j.tins.2019.07.003
    94. Maestro Bayarri Jorge, Daniel J. Benjamin, James Berger, Thomas M. Sellke, Rejection Odds and Rejection Ratios: A Proposal for Statistical Practice in Testing Hypotheses, 2015, 1556-5068, 10.2139/ssrn.2714185
    95. Erika A. Patall, Implications of the open science era for educational psychology research syntheses, 2021, 0046-1520, 1, 10.1080/00461520.2021.1897009
    96. Daniela Mertzen, Sol Lago, Shravan Vasishth, The benefits of preregistration for hypothesis-driven bilingualism research, 2021, 1366-7289, 1, 10.1017/S1366728921000031
    97. Justin Reich, Preregistration and registered reports, 2021, 0046-1520, 1, 10.1080/00461520.2021.1900851
    98. Jessica Kay Flake, Strengthening the foundation of educational psychology by integrating construct validation into open science reform, 2021, 0046-1520, 1, 10.1080/00461520.2021.1898962
    99. David Mellor, Improving norms in research culture to incentivize transparency and rigor, 2021, 0046-1520, 1, 10.1080/00461520.2021.1902329
    100. Michael I. Demidenko, Ka I. Ip, Dominic P. Kelly, Kevin Constante, Leigh G. Goetschius, Daniel P. Keating, Ecological stress, amygdala reactivity, and internalizing symptoms in preadolescence: Is parenting a buffer?, 2021, 00109452, 10.1016/j.cortex.2021.02.032
    101. Eric Petit, A planned experiment on local adaptation in a host-parasite system: is adaptation to the host linked to its recent domestication?, 2021, 26064979, 10.24072/pci.ecology.100079
    102. Paul Whaley, Nicolas Roth, How we promote rigour in systematic reviews and evidence maps at Environment International, 2022, 170, 01604120, 107543, 10.1016/j.envint.2022.107543
    103. Emil Uffelmann, Qin Qin Huang, Nchangwi Syntia Munung, Jantina de Vries, Yukinori Okada, Alicia R. Martin, Hilary C. Martin, Tuuli Lappalainen, Danielle Posthuma, Genome-wide association studies, 2021, 1, 2662-8449, 10.1038/s43586-021-00056-9
    104. Jennifer A Byrne, Yasunori Park, Reese A K Richardson, Pranujan Pathmendra, Mengyi Sun, Thomas Stoeger, Protection of the human gene research literature from contract cheating organizations known as research paper mills, 2022, 50, 0305-1048, 12058, 10.1093/nar/gkac1139
    105. Randall J. Ellis, Questionable Research Practices, Low Statistical Power, and Other Obstacles to Replicability: Why Preclinical Neuroscience Research Would Benefit from Registered Reports, 2022, 9, 2373-2822, ENEURO.0017-22.2022, 10.1523/ENEURO.0017-22.2022
    106. Bartłomiej Kucharzyk, 2021, 20, 9781108623056, 421, 10.1017/9781108623056.020
    107. Sophia Crüwell, Nathan J. Evans, Preregistration in diverse contexts: a preregistration template for the application of cognitive models, 2021, 8, 2054-5703, 10.1098/rsos.210155
    108. Natasha L. Burke, Guido K. W. Frank, Anja Hilbert, Thomas Hildebrandt, Kelly L. Klump, Jennifer J. Thomas, Tracey D. Wade, B. Timothy Walsh, Shirley B. Wang, Ruth Striegel Weissman, Open science practices for eating disorders research, 2021, 54, 0276-3478, 1719, 10.1002/eat.23607
    109. Alexander Frankel, Maximilian Kasy, Which Findings Should Be Published?, 2022, 14, 1945-7669, 1, 10.1257/mic.20190133
    110. Jerzy Marian Brzeziński, Towards a comprehensive model of scientific research and professional practice in psychology, 2016, 4, 2353-4192, 1, 10.5114/cipp.2016.58442
    111. Thibaut Arpinon, Romain Espinosa, A practical guide to Registered Reports for economists, 2023, 2199-6784, 10.1007/s40881-022-00123-1
    112. Christoph Stahl, Experimental Psychology, 2015, 62, 1618-3169, 1, 10.1027/1618-3169/a000289
    113. Tom E. Hardwicke, Eric-Jan Wagenmakers, Reducing bias, increasing transparency and calibrating confidence with preregistration, 2023, 7, 2397-3374, 15, 10.1038/s41562-022-01497-2
    114. Felix Putze, Susanne Putze, Merle Sagehorn, Christopher Micek, Erin T. Solovey, Understanding HCI Practices and Challenges of Experiment Reporting with Brain Signals: Towards Reproducibility and Reuse, 2022, 29, 1073-0516, 1, 10.1145/3490554
    115. Alexa M. Tullett, The Limitations of Social Science as the Arbiter of Blame: An Argument for Abandoning Retribution, 2022, 17, 1745-6916, 995, 10.1177/17456916211033284
    116. Iratxe Puebla, Preprints: a tool and a vehicle towards greater reproducibility in the life sciences, 2020, 2, 2670-3815, 1465, 10.31885/jrn.2.2021.1465
    117. Larry J. Williams, George C. Banks, Robert J. Vandenberg, ORM-CARMA Virtual Feature Topics for Advanced Reviewer Development, 2021, 24, 1094-4281, 675, 10.1177/10944281211030648
    118. Hannah Bucher, Anne-Kathrin Stroppe, Axel M. Burger, Thorsten Faas, Harald Schoen, Marc Debus, Sigrid Roßteutscher, Special Issue Introduction, 2023, 64, 0032-3470, 1, 10.1007/s11615-022-00436-0
    119. ES Sena, GL Currie, How our approaches to assessing benefits and harms can be improved, 2019, 28, 0962-7286, 107, 10.7120/09627286.28.1.107
    120. Michel Tuan Pham, Travis Tae Oh, Preregistration Is Neither Sufficient nor Necessary for Good Science, 2020, 1556-5068, 10.2139/ssrn.3747616
    121. Juan A. Marin-Garcia, Three-stage publishing to support evidence-based management practice, 2021, 12, 1989-9068, 56, 10.4995/wpom.11755
    122. Michael Bradbury, Bryan Howieson, Research on Application and Impact of IFRS 9 Financial Instruments , 2022, 32, 1035-6908, 409, 10.1111/auar.12391
    123. Holly L. Storkel, Frederick J. Gallun, Announcing a New Registered Report Article Type at the Journal of Speech, Language, and Hearing Research , 2022, 65, 1092-4388, 1, 10.1044/2021_JSLHR-21-00513
    124. Robbie Clark, Katie Drax, Christopher D. Chambers, Marcus Munafò, Jacqueline Thompson, Evaluating Registered Reports Funding Partnerships: a feasibility study, 2021, 6, 2398-502X, 231, 10.12688/wellcomeopenres.17028.1
    125. Christopher Kavanagh, Rohan Kapitany, Promoting the Benefits and Clarifying Misconceptions about Preregistration, Preprints, and Open Science for the Cognitive Science of Religion, 2021, 6, 2049-7563, 10.1558/jcsr.38713
    126. Dorota Reis, Malte Friese, 2022, Chapter 5, 978-3-031-04967-5, 101, 10.1007/978-3-031-04968-2_5
    127. Rony Hirschhorn, Tom Schonberg, 2024, 9780128093245, 10.1016/B978-0-12-820480-1.00014-0
    128. Yuren Pang, Katharina Reinecke, René Just, 2022, Apéritif: Scaffolding Preregistrations to Automatically Generate Analysis Code and Methods Descriptions, 9781450391573, 1, 10.1145/3491102.3517707
    129. Lonni Besançon, Nathan Peiffer-Smadja, Corentin Segalas, Haiting Jiang, Paola Masuzzo, Cooper Smout, Eric Billy, Maxime Deforet, Clémence Leyrat, Open science saves lives: lessons from the COVID-19 pandemic, 2021, 21, 1471-2288, 10.1186/s12874-021-01304-y
    130. Erick H. Turner, Andrea Cipriani, Toshi A. Furukawa, Georgia Salanti, Ymkje Anna de Vries, Aaron S. Kesselheim, Selective publication of antidepressant trials and its influence on apparent efficacy: Updated comparisons and meta-analyses of newer versus older trials, 2022, 19, 1549-1676, e1003886, 10.1371/journal.pmed.1003886
    131. Nathalie Noret, Simon C. Hunter, Sofia Pimenta, Rachel Taylor, Rebecca Johnson, Open Science: Recommendations for Research on School Bullying, 2022, 2523-3653, 10.1007/s42380-022-00130-0
    132. Jaume F. Lalanza, Sonia Lorente, Raimon Bullich, Carlos García, Josep-Maria Losilla, Lluis Capdevila, Methods for Heart Rate Variability Biofeedback (HRVB): A Systematic Review and Guidelines, 2023, 1090-0586, 10.1007/s10484-023-09582-6
    133. Thibaut Arpinon, Romain Espinosa, A Practical Guide to Registered Reports for Economists, 2022, 1556-5068, 10.2139/ssrn.4110803
    134. Alexander A. Aarts, Psychological Science Replicates Just Fine, Thanks, 2022, 1556-5068, 10.2139/ssrn.4176922
    135. Ragnhild Gya, Kristine Birkeli, Ingrid J. Dahle, Christopher G. Foote, Sonya R. Geange, Joshua S. Lynn, Joachim P. Töpper, Vigdis Vandvik, Camilla Zernichow, Gareth B. Jenkins, Registered Reports: A new chapter at Ecology & Evolution , 2023, 13, 2045-7758, 10.1002/ece3.10023
    136. Philippe GORRY, Léo MIGNOT, Antoine SABOURAUD, 2023, Analysis of the Pubmed Commons Post-Publication Peer Review Plateform. , 10.55835/6442f02464eb99f94fe5a307
    137. Charlotte Olivia Brand, James Patrick Ounsley, Daniel Job Van der Post, Thomas Joshua Henry Morgan, Cumulative Science via Bayesian Posterior Passing, 2019, 3, 2003-2714, 10.15626/MP.2017.840
    138. Melissa Carsten, Rachel Clapp-Smith, S. Alexander Haslam, Nicolas Bastardoz, Janaki Gooty, Shane Connelly, Seth Spain, Doing better leadership science via replications and registered reports, 2023, 10489843, 101712, 10.1016/j.leaqua.2023.101712
    139. Heidi A. Baumgartner, Nicolás Alessandroni, Krista Byers-Heinlein, Michael C. Frank, J. Kiley Hamlin, Melanie Soderstrom, Jan G. Voelkel, Robb Willer, Francis Yuen, Nicholas A. Coles, How to build up big team science: a practical guide for large-scale collaborations, 2023, 10, 2054-5703, 10.1098/rsos.230235
    140. Ian D. Gow, The Elephant in the Room: P-Hacking and Accounting Research, 2023, 1556-5068, 10.2139/ssrn.4460192
    141. Peter Holtz, Two Questions to Foster Critical Thinking in the Field of Psychology, 2020, 4, 2003-2714, 10.15626/MP.2018.984
    142. Nicholas Fox, Nathan Honeycutt, Lee Jussim, Better Understanding the Population Size and Stigmatization of Psychologists Using Questionable Research Practices, 2022, 6, 2003-2714, 10.15626/MP.2020.2601
    143. John K. Sakaluk, Carm De Santis, Robyn Kilshaw, Merle-Marie Pittelkow, Cassandra M. Brandes, Cassandra L. Boness, Yevgeny Botanov, Alexander J. Williams, Dennis C. Wendt, Lorenzo Lorenzo-Luaces, Jessica Schleider, Don van Ravenzwaaij, Reconsidering what makes syntheses of psychological intervention studies useful, 2023, 2731-0574, 10.1038/s44159-023-00213-9
    144. Ergün KARA, Tekrarlanabilirlik Krizi ve Geçerlilik Krizi Kıskacındaki Psikoloji ve Sosyal Bilimlerde Krizden Çıkış İçin Öne Çıkan İki Trend: “Yeni İstatistik” ve “Bayesyen İstatistik”, 2023, 13, 2146-4014, 599, 10.18039/ajesi.1240655
    145. Ian D. Gow, The Elephant in the Room: p-hacking and Accounting Research, 2023, 0, 2152-2820, 10.1515/ael-2022-0111
    146. 2023, 9781119799979, 47, 10.1002/9781119800002.ch5
    147. Roman Briker, Fabiola H. Gerpott, Publishing Registered Reports in Management and Applied Psychology: Common Beliefs and Best Practices, 2023, 1094-4281, 10.1177/10944281231210309
    148. David Cropley, Arthur Cropley, Creativity and the Cyber Shock: The Ultimate Paradox, 2023, 0022-0175, 10.1002/jocb.625
    149. Michael I. Demidenko, Jeanette A. Mumford, Nilam Ram, Russell A. Poldrack, A multi-sample evaluation of the measurement structure and function of the modified monetary incentive delay task in adolescents, 2024, 65, 18789293, 101337, 10.1016/j.dcn.2023.101337
    150. Monika H.M. Schmidt, Douglas F. Dluzen, 2024, 9780128172186, 3, 10.1016/B978-0-12-817218-6.00012-7
    151. Peter E. Clayson, Beyond single paradigms, pipelines, and outcomes: Embracing multiverse analyses in psychophysiology, 2024, 197, 01678760, 112311, 10.1016/j.ijpsycho.2024.112311
    152. Fabiola H. Gerpott, Roman Briker, George Banks, New ways of seeing: Four ways you have not thought about Registered Reports yet, 2024, 10489843, 101783, 10.1016/j.leaqua.2024.101783
    153. Maria Pfeiffer, Andrea Kübler, Kirsten Hilger, Modulation of Human Frontal Midline Theta by Neurofeedback: A Systematic Review and Quantitative Meta-Analysis, 2024, 01497634, 105696, 10.1016/j.neubiorev.2024.105696
    154. Marija Purgar, Paul Glasziou, Tin Klanjscek, Shinichi Nakagawa, Antica Culina, Supporting study registration to reduce research waste, 2024, 2397-334X, 10.1038/s41559-024-02433-5
    155. Paul Riesthuis, Henry Otgaar, An overview of the replicability, generalizability and practical relevance of eyewitness testimony research in the Journal of Criminal Psychology, 2024, 2009-3829, 10.1108/JCP-04-2024-0031
    156. Steven R. Shaw, Sierra Pecsi, Erika Infantino, Yeon Hee Kang, Neha Verma, Alexa von Hagen, Registered Reports in School Psychology Research: Initial Experiences, Analyses, and Future, 2024, 0829-5735, 10.1177/08295735241263912
    157. Daniel M. Maggin, Rachel E. Robertson, Bryan G. Cook, Introduction to the Special Series on Results-Blind Peer Review: An Experimental Analysis on Editorial Recommendations and Manuscript Evaluations, 2020, 45, 0198-7429, 195, 10.1177/0198742920936619
    158. Matthew C. Makel, Jaret Hodges, Bryan G. Cook, Jonathan A. Plucker, Both Questionable and Open Research Practices Are Prevalent in Education Research, 2021, 50, 0013-189X, 493, 10.3102/0013189X211001356
    159. Lonni Besançon, Brian A. Nosek, Tamarinde Haven, Miriah Meyer, Cody Dunne, Mohammad Ghoniem, 2024, Merits and Limits of Preregistration for Visualization Research, 979-8-3315-2846-1, 89, 10.1109/BELIV64461.2024.00015
    160. Sarah Ariel Lamer, Haley B. Beck, Leanne ten Brinke, Gillian Michelle Preston, How Culturally Prevalent Patterns of Nonverbal Behavior Can Influence Discrimination Against Women Leaders, 2025, 0361-6843, 10.1177/03616843251318964
    161. Peter E. Clayson, Kaylie A. Carbine, John L. Shuford, Julia B. McDonald, Michael J. Larson, A Registered Report of Preregistration Practices in Studies of Electroencephalogram (EEG) and Event-Related Potentials (ERPs): A First-Look at Accessibility, Adherence, Transparency, and Selection Bias, 2025, 00109452, 10.1016/j.cortex.2025.02.008
  • © 2014 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)