The Pipeline Project: Pre-publication independent replications of a single laboratory's research pipeline

Published on Sep 1, 2016 in Journal of Experimental Social Psychology · DOI: 10.1016/j.jesp.2015.10.001
#1 Martin Schweinsberg · Estimated H-Index: 7
#2 Nikhil Madan · Estimated H-Index: 3
+ 79 authors, including Eric Luis Uhlmann · Estimated H-Index: 33
This crowdsourced project introduces a collaborative approach to improving the reproducibility of scientific research, in which findings are replicated in qualified independent laboratories before (rather than after) they are published. Our goal is to establish a non-adversarial replication process with highly informative final results. To illustrate the Pre-Publication Independent Replication (PPIR) approach, 25 research groups conducted replications of all ten moral judgment effects that the last author and his collaborators had "in the pipeline" as of August 2014. Six findings replicated according to all replication criteria, one finding replicated but with a significantly smaller effect size than the original, one finding replicated consistently in the original culture but not outside of it, and two findings failed to replicate. In total, 40% of the original findings failed at least one major replication criterion. Potential ways to implement and incentivize pre-publication independent replication on a large scale are discussed.
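The abstract's 40% figure follows from the four reported outcome counts: of ten effects, only the six that replicated on all criteria passed every criterion, so the other four (smaller effect size, culture-specific, and two failures) each failed at least one. A minimal tally, with category labels paraphrased from the abstract:

```python
# Outcome counts for the ten "pipeline" moral judgment effects,
# as reported in the abstract (labels paraphrased).
outcomes = {
    "replicated on all criteria": 6,
    "replicated with smaller effect size": 1,
    "replicated only in the original culture": 1,
    "failed to replicate": 2,
}

total = sum(outcomes.values())  # 10 original effects
failed_some_criterion = total - outcomes["replicated on all criteria"]

print(total)                          # 10
print(failed_some_criterion / total)  # 0.4 -> the 40% figure
```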
#1 Joseph Henrich (UBC: University of British Columbia) · H-Index: 87
#2 Steven J. Heine (UBC: University of British Columbia) · H-Index: 60
Last: Ara Norenzayan (UBC: University of British Columbia) · H-Index: 53
Behavioral scientists routinely publish broad claims about human psychology and behavior in the world's top journals based on samples drawn entirely from Western, Educated, Industrialized, Rich, and Democratic (WEIRD) societies. Researchers - often implicitly - assume that either there is little variation across human populations, or that these "standard subjects" are as representative of the species as any other population. Are these assumptions justified? Here, our review of the comparative da...
7,246 Citations
#1 Charles R. Ebersole (UVA: University of Virginia) · H-Index: 12
#2 Olivia E. Atherton (UC Davis: University of California, Davis) · H-Index: 10
Last: Brian A. Nosek (Center for Open Science) · H-Index: 93
(64 authors in total)
Abstract The university participant pool is a key resource for behavioral research, and data quality is believed to vary over the course of the academic semester. This crowdsourced project examined time of semester variation in 10 known effects, 10 individual differences, and 3 data quality indicators over the course of the academic semester in 20 participant pools (N = 2696) and with an online sample (N = 737). Weak time of semester effects were observed on data quality indicators, participan...
141 Citations
#1 Joachim Hüffmeier (Technical University of Dortmund) · H-Index: 17
#2 Jens Mazei (Technical University of Dortmund) · H-Index: 4
Last: Thomas Schultze (GAU: University of Göttingen) · H-Index: 9
Abstract In contrast to the truncated view that replications have little to offer beyond what is already known, we suggest a broader understanding of replications: We argue that replications are better conceptualized as a process of conducting consecutive studies that increasingly consider alternative explanations, critical contingencies, and real-world relevance. To reflect this understanding, we collected and summarized the existing literature on replications and combined it into a comp...
53 Citations
#1 Mark Schaller (UBC: University of British Columbia) · H-Index: 67
Abstract Most discussions of rigor and replication focus on empirical practices (methods used to collect and analyze data). Typically overlooked is the role of conceptual practices: the methods scientists use to arrive at and articulate research hypotheses in the first place. This article discusses how the conceptualization of research hypotheses has implications for methodological decision-making and, consequently, for the replicability of results. The article identifies three ways in which emp...
19 Citations
#1 Adam J. Berinsky (MIT: Massachusetts Institute of Technology) · H-Index: 24
#2 Michele F. Margolis (UPenn: University of Pennsylvania) · H-Index: 10
Last: Michael W. Sances (U of M: University of Memphis) · H-Index: 11
Abstract Survey researchers increasingly employ attention checks to identify inattentive respondents and reduce noise. Once inattentive respondents are identified, however, researchers must decide whether to drop such respondents, thus threatening external validity, or keep such respondents, thus threatening internal validity. In this article, we ask whether there is a third way: can inattentive respondents be induced to pay attention? Using three different strategies across three studies, we sh...
31 Citations
#1 Leandre R. Fabrigar (Queen's University) · H-Index: 38
#2 Duane T. Wegener (OSU: Ohio State University) · H-Index: 48
Abstract Many recent discussions have focused on the role of replication in psychological science. In this article, we examine three key issues in evaluating the conclusions that follow from results of studies at least partly aimed at replicating previous results: the evaluation and status of exact versus conceptual replications, the statistical evaluation of replications, and the robustness of research findings to potential existing or future "non-replications." In the first section of the arti...
82 Citations
Abstract While outlining his vision of The New Statistics, Cumming (2014) proposes that a more rigorous and cumulative psychological science will be built, in part, by having psychologists abandon traditional null-hypothesis significance testing (NHST) approaches and conduct small-scale meta-analyses on their data whenever possible. In the present paper, I propose an alternative system for conducting rigorous and replicable psychological investigations, which I describe as Exploring Small, C...
38 Citations
Abstract Field research has the potential to substantially increase both the replicability and the impact of psychological science. Field methods sometimes are characterized by features – relatively high levels of participant diversity, relative lack of control over extraneous variables, greater focus on behavioral dependent variables, less room for researcher degrees of freedom, and lower likelihood of publication bias – that can increase the veracity and robustness of published research. Moreo...
33 Citations
#1 Wolfgang Stroebe (UG: University of Groningen) · H-Index: 94
Abstract Based on Bayesian reasoning, Ioannidis (2005) made the bold claim that most published research findings are false. His claim has been widely cited. It also seems consistent with the findings of the Open Science Collaboration Project that a majority of psychological studies could not be replicated. In this article, I argue (1) that Ioannidis' claim has limited relevance for social psychology and (2) that mass replication does not allow general conclusions about the validity of social psy...
38 Citations
Abstract Recent criticisms of social psychological research are considered in relation to an earlier crisis in social psychology. The current replication crisis is particularly severe because (1) psychologists are questioning the accuracy of findings rather than the meaning of findings, and (2) researchers are responding to real scientific failures, rather than hypothetical scientific failures. I present an expanded model of statistical decision making that can be used to help researchers draw m...
13 Citations
Cited By: 54
#1 Larry V. Hedges (NU: Northwestern University) · H-Index: 97
#2 Jacob M. Schauer (NU: Northwestern University) · H-Index: 4
#1 Michael Gordon (Massey University) · H-Index: 3
#2 Domenico Viganola · H-Index: 3
Last: Thomas Pfeiffer · H-Index: 29
(5 authors in total)
The reproducibility of published research has become an important topic in science policy. A number of large-scale replication projects have been conducted to gauge the overall reproducibility in specific academic fields. Here, we present an analysis of data from four studies which sought to forecast the outcomes of replication projects in the social and behavioural sciences, using human experts who participated in prediction markets and answered surveys. Because the number of findings replicate...
#1 Arvid Erlandsson (Linköping University) · H-Index: 12
#2 Mattias Wingren (Linköping University)
Last: Per A. Andersson (Linköping University) · H-Index: 5
Impressions of helpers can vary as a function of the magnitude of helping (amount of help) and of situational and motivational aspects (type of help). Over three studies conducted in Sweden and the US, we manipulated both the amount and the type of help in ten diverse vignettes and measured participants' impressions of the described helpers. Impressions were almost unaffected when the amount of help was increased by 500%, but were clearly affected by several type-of-help manipulations. Particularly, help...
#1 Zachary G. Baker (UH: University of Houston) · H-Index: 7
#2 Ersie‐Anastasia Gentzis (UH: University of Houston)
Last: C. Raymond Knee (UH: University of Houston) · H-Index: 29
(11 authors in total)
#1 Warren Tierney (INSEAD) · H-Index: 4
#2 Jay H. Hardy (OSU: Oregon State University) · H-Index: 10
Last: Eric Luis Uhlmann (INSEAD) · H-Index: 33
(11 authors in total)
Abstract Drawing on the concept of a gale of creative destruction in a capitalistic economy, we argue that initiatives to assess the robustness of findings in the organizational literature should aim to simultaneously test competing ideas operating in the same theoretical space. In other words, replication efforts should seek not just to support or question the original findings, but also to replace them with revised, stronger theories with greater explanatory power. Achieving this will typicall...
4 Citations
#1 Diego A. Reinero (NYU: New York University) · H-Index: 6
#2 Julian Wills (NYU: New York University) · H-Index: 6
Last: Jay J. Van Bavel (Center for Neural Science) · H-Index: 39
(6 authors in total)
Social science researchers are predominantly liberal, and critics have argued this representation may reduce the robustness of research by embedding liberal values into the research process. In an ...
3 Citations
Increasingly, researchers are attempting to replicate published original studies by using large, multisite replication projects, at least 134 of which have been completed or are ongoing. These designs are promising for assessing whether the original study is statistically consistent with the replications and for reassessing the strength of evidence for the scientific effect of interest. However, existing analyses generally focus on single replications; when applied to multisite designs, they provide an...
4 Citations
#1 Masatoshi Sato (Andrés Bello National University) · H-Index: 13
#2 Neomy Storch (University of Melbourne) · H-Index: 36
Researchers and teachers often invoke context to explain their particular research/teaching issues. However, definitions of context vary widely and the direct impact of the context is often unexpla...
5 Citations
#1 Jacob M. Schauer (NU: Northwestern University) · H-Index: 4
#2 Larry V. Hedges (NU: Northwestern University) · H-Index: 97
In this study, we reanalyze recent empirical research on replication from a meta-analytic perspective. We argue that there are different ways to define "replication failure," and that analyses can focus on exploring variation among replication studies or assess whether their results contradict the findings of the original study. We apply this framework to a set of psychological findings that have been replicated and assess the sensitivity of these analyses. We find that tests for replication tha...
3 Citations