Decision Markets Predict Replicable Social Science Studies, With 83% of Top-Rated Experiments Replicating

A new study finds that decision markets effectively predict which social science experiments will replicate: 83% of the top-rated studies replicated successfully, compared with 33% of the lowest-rated.

Enhancing Scientific Reliability

A new study published in Nature Human Behaviour introduces a method for gauging, and ultimately improving, the reproducibility of findings in social science research.

Conducted by an international team of social scientists, the research tests whether decision markets can predict which social science experiments are likely to replicate.

Methodology of the Study

The study examined 41 experiments conducted with participants sourced from Amazon Mechanical Turk (MTurk), a popular platform connecting researchers with people willing to complete tasks for pay.

These experiments were originally published in the Proceedings of the National Academy of Sciences between 2015 and 2018. The new study’s authors include Felix Holzmeister of the University of Innsbruck’s Department of Economics.

At the heart of the investigation was a decision market in which 162 social scientists bought and sold shares whose prices reflected the perceived likelihood that each study’s results would hold up under replication.

Under the market’s rules, the 12 highest-priced and 12 lowest-priced experiments were earmarked for replication, along with two additional studies selected at random.
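
As a minimal sketch of how such a selection rule might look in code (hypothetical prices and study names; the paper’s actual trading mechanism is more involved than a single price list), the final share prices can be read as crowd estimates of each study’s replication probability:

    import random

    # Hypothetical final share prices for the 41 studies, scaled 0-1 so a
    # price of 0.70 reads as a crowd estimate of a 70% replication chance.
    # (Illustrative values only, not the study's actual data.)
    prices = {f"study_{i:02d}": random.random() for i in range(1, 42)}

    # Rank all 41 studies from highest to lowest final share price.
    ranked = sorted(prices, key=prices.get, reverse=True)

    top_12 = ranked[:12]      # highest-priced studies, earmarked for replication
    bottom_12 = ranked[-12:]  # lowest-priced studies, earmarked for replication

    # Two further studies chosen at random; drawing them from the middle of
    # the ranking is an assumption, since the article does not specify.
    extra_2 = random.sample(ranked[12:-12], 2)

    to_replicate = top_12 + bottom_12 + extra_2  # 26 studies in total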

Key Findings and Implications

The findings revealed a striking effectiveness of the decision market in distinguishing between replicable and non-replicable studies.

Of the 12 studies with the highest share prices, 83% yielded statistically significant results consistent with the original findings.

Conversely, only 33% of the 12 studies with the lowest share prices replicated successfully.

This suggests a promising potential for decision markets to help identify reliable research while casting doubt on less trustworthy findings.

Overall, 54% of the 26 studies chosen for replication replicated successfully.
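
One consistent reading of the rounded percentages (an inference from the reported figures, not counts taken from the paper) is 10 successes among the top 12, 4 among the bottom 12, and 14 among all 26:

    # Back-of-the-envelope check on the rounded percentages; the counts
    # below are inferred from the rounding, not reported in the paper.
    top_hits, bottom_hits, total_hits = 10, 4, 14
    print(round(top_hits / 12 * 100))     # -> 83
    print(round(bottom_hits / 12 * 100))  # -> 33
    print(round(total_hits / 26 * 100))   # -> 54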

Notably, the average effect size observed in these replications was about 45% of that originally reported, a degree of shrinkage consistent with earlier large-scale replication efforts in the social sciences.
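
To make that shrinkage concrete with made-up numbers: an original study reporting an effect size of 0.50 would, at the average 45% relative effect size, replicate at around 0.225:

    # Illustrative arithmetic only: 0.45 is the paper's average ratio of
    # replication effect to original effect; 0.50 is a made-up original.
    original_effect = 0.50
    relative_effect = 0.45
    print(relative_effect * original_effect)  # -> 0.225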

These findings carry significant implications for the future of scientific research.

Holzmeister argues that decision markets, or similar tools that harness the collective judgment of researchers, could systematically guide research priorities and improve resource allocation.

This, in turn, could bolster the efficiency and credibility of scientific discoveries, paving the way for a more trustworthy body of knowledge.

Study Details: