
# Investigating the replicability of the social and behavioural sciences | Nature

AI Legal Analyst · April 1, 2026

## Summary

Data, materials and code associated with this research that can be shared without restriction are publicly available in a living OSF repository (https://doi.org/10.17605/OSF.IO/G5SNY). Code for individual replication projects is available alongside the data and materials for each project in the same repository.


## Article Content
Subjects: Interdisciplinary studies, Scientific community

### Abstract
Pursuing replicability — independent evidence for previous claims — is important for creating generalizable knowledge [1,2]. Here we attempted replications of 274 claims of positive results from 164 quantitative papers published from 2009 to 2018 in 54 journals in the social and behavioural sciences. Replications were high powered on average to detect the original effect size (median of 99.6%), used original materials when relevant and available, and were peer reviewed in advance through a standardized internal protocol. Replications showed statistically significant results in the original pattern for 151 of 274 claims (55.1%, 95% confidence interval (CI) 49.2–60.9%) and for 80.8 of 164 papers (49.3%, 95% CI 43.8–54.7%), weighted for replicating multiple claims per paper. We observed modest variation in replication rates across disciplines (42.5–63.1%), although some estimates had high uncertainty. The median Pearson's r effect size was 0.25 (95% CI 0.21–0.27) for original studies and 0.10 (95% CI 0.09–0.13) for replication studies, an 82.4% (95% CI 67.8–88.2%) reduction in shared variance. Thirteen methods for evaluating replication success provided estimates ranging from 28.6% to 74.8% (median of 49.3%). Some decline in effect size and significance is expected based on power to detect original effects and regression to the mean, because we replicated only positive results. We observe that challenges for replicability extend across the social and behavioural sciences, illustrating the importance of identifying conditions that promote or inhibit replicability [3,4].
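The "reduction in shared variance" in the abstract follows from Pearson's r: shared variance is r squared, and the reduction compares replication to original. A minimal sketch, using the reported median effect sizes (0.25 original, 0.10 replication) as point estimates — note the paper's 82.4% figure is computed over the full distribution of studies, so the point-estimate value differs slightly:

```python
def shared_variance_reduction(r_original: float, r_replication: float) -> float:
    """Fractional drop in shared variance (r**2) from original to replication."""
    return 1 - (r_replication ** 2) / (r_original ** 2)

# Median effect sizes reported in the abstract, used here as point estimates.
reduction = shared_variance_reduction(0.25, 0.10)
print(f"{reduction:.1%}")  # 84.0% from the medians; the paper reports 82.4%
```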
Fig. 1: Replication success rates across 13 binary assessments for papers.
Fig. 2: Correlation matrix among binary assessments of replication success across papers.
Fig. 3: Scatterplot of Pearson's r effect sizes for original and replication studies.
Fig. 4: Scatterplot of Pearson's r effect sizes for original and replication outcomes for new data and secondary data replication attempts.
### Data availability
Data, materials and code associated with this research that can be shared without restriction are publicly available in a living OSF repository (https://doi.org/10.17605/OSF.IO/G5SNY) [48]. The living OSF repository represents improvements, fixes and additions that occur post-publication. Readers can also access a registered, archived version of this repository containing precisely the data, code and documentation as they existed upon publication of this paper (https://doi.org/10.17605/OSF.IO/BZFGY). The repository includes all available documentation for replication attempts regardless of whether they were completed. This includes most of the data and code from the individual replication attempts, except for data that is proprietary or protected, or for which analyst teams were uncertain or unable to confirm that they were allowed to share secondary data. It is possible that some data, materials or code that could be shared openly is not available at the time of publication. Readers are encouraged to contact the corresponding author or the authors of the relevant sub-project (Supplementary Table 3) to see whether more research content can be shared in the living repository. This paper is part of a collection of papers reporting on the SCORE program. Documentation, data and code for the entire program are available at https://doi.org/10.17605/OSF.IO/DTZX4.
### Code availability
Code for individual replication projects is available alongside data and materials for each project in the OSF repository (https://doi.org/10.17605/OSF.IO/G5SNY). This includes a push-button package with all code and data used to produce all statistics, figures and tables, and code that populates them directly into the manuscript from a template. Also available is a registered, archived version of the repository containing precisely the data, code and documentation used to generate the outcomes reported in this paper (https://doi.org/10.17605/OSF.IO/BZFGY).
### References
1. Nosek, B. A. & Errington, T. M. What is replication? PLoS Biol. 18, e3000691 (2020).
2. Reproducibility and Replicability in Science (National Academies of Sciences, Engineering and Medicine, 2019).
3. Nosek, B. A. et al. Replicabil

---

## Expert Analysis

### Merits
- Replications of 274 claims from 164 papers were high powered (median of 99.6% to detect the original effect size), used original materials where available, and were peer reviewed in advance under a standardized protocol.
- Replications showed statistically significant results in the original pattern for 151 of 274 claims (55.1%, 95% CI 49.2–60.9%) and at a paper-level rate of 49.3% (95% CI 43.8–54.7%), weighted for replicating multiple claims per paper.
- Thirteen methods for evaluating replication success provided estimates ranging from 28.6% to 74.8% (median of 49.3%).
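The paper-level replication rate is weighted so that a paper with several claims counts once: each paper contributes the fraction of its claims that replicated (which is how 80.8 "successful papers" out of 164 can be a non-integer). A minimal sketch of this aggregation, with hypothetical per-paper outcomes rather than the study's data:

```python
def paper_level_rate(claims_by_paper: dict[str, list[bool]]) -> float:
    """Mean over papers of each paper's fraction of successfully replicated claims."""
    fractions = [sum(claims) / len(claims) for claims in claims_by_paper.values()]
    return sum(fractions) / len(fractions)

# Hypothetical outcomes: True = claim replicated in the original pattern.
example = {
    "paper_A": [True],                # 1 claim, replicated -> contributes 1.0
    "paper_B": [True, False],         # 2 claims, 1 replicated -> contributes 0.5
    "paper_C": [False, False, True],  # 3 claims, 1 replicated -> contributes 1/3
}
print(paper_level_rate(example))  # (1.0 + 0.5 + 1/3) / 3, about 0.611
```

Without this weighting, papers contributing many claims would dominate the paper-level estimate.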

### Areas for Consideration
- Power failure: small sample sizes undermine the reliability of published findings.
- The file-drawer problem: unpublished null results bias the literature toward positive findings.

### Implications
- Replication effect sizes were markedly smaller than the originals (median Pearson's r of 0.10 versus 0.25), a substantial reduction in shared variance.
- Some proprietary or protected data from individual replication attempts cannot be shared, and some shareable materials may not yet be available at the time of publication.
- Researchers should calibrate expectations when replicating studies: some decline in effect size and significance is expected from limited power and regression to the mean.

### Expert Commentary
This large-scale effort replicated 274 claims of positive results from 164 social and behavioural science papers and found that roughly half replicated, with replication effect sizes markedly smaller than the originals. The challenges extend across disciplines, underscoring the importance of identifying the conditions that promote or inhibit replicability.
