In the world of research, getting published is crucial to remaining in good standing with facilities, labs, and even the universities that front some of the cost for their faculty. But as several historic, high-profile cases have shown, publishing flawed data can end a career. In the now-famous case of the doctor who blamed the MMR vaccine for rising autism rates despite knowing his research was inaccurate, the consequences of his alleged data falsification went far beyond losing a job.
Unfortunately, whether accidental or intentional, publishing incorrect statistics in journal articles can have such dire consequences that action is often taken against the papers' authors, at least in the form of reprimand or sanctions, if not outright prosecution. But a new tool is ready to take some of the fact-checking burden off researchers in order to prevent this kind of error.
Statcheck, a downloadable software package created by Sacha Epskamp and Michele Nuijten, aims to fix that issue in the publication of psychology research. According to Nuijten’s site, “Conclusions in experimental psychology often are the result of null hypothesis significance testing. Unfortunately, there is evidence that roughly half of all published empirical psychology articles contain at least one inconsistent p-value, and around one in seven articles contain a grossly inconsistent p-value that makes a non-significant result seem significant, or vice versa. Often these reporting errors are in line with the researchers’ expectations, which means these errors introduce systematic bias. To get an idea of the prevalence of reporting errors and as a tool to check your own work before submitting, we created the R package statcheck (Epskamp & Nuijten, 2015). This package can be used to automatically extract statistics from articles and recompute p values.”
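Statcheck itself is an R package, but the core idea it describes, recomputing a p-value from a reported test statistic and comparing it with the p-value the authors printed, can be sketched in a few lines. The snippet below is a minimal illustration for a two-tailed z-test only (statcheck handles several test types); the function names, the tolerance, and the example numbers are hypothetical, not taken from statcheck's actual implementation.

```python
import math

def z_to_p(z):
    """Two-tailed p-value for a standard-normal (z) test statistic,
    computed from the normal CDF via math.erf."""
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def check_p(reported_z, reported_p, alpha=0.05, tol=0.0005):
    """Recompute p from the reported z and compare it with the reported p.

    Returns (consistent, grossly_inconsistent, recomputed_p).
    'Grossly inconsistent' mirrors the article's meaning: the discrepancy
    flips the result across the significance threshold alpha.
    """
    recomputed = z_to_p(reported_z)
    consistent = abs(recomputed - reported_p) <= tol
    gross = (not consistent) and ((recomputed < alpha) != (reported_p < alpha))
    return consistent, gross, recomputed

# Hypothetical example: a paper reports "z = 1.80, p = .04".
# Recomputation gives p ~ .072, so the reported result is not only
# inconsistent but grossly so: it turns a non-significant finding
# into a significant one.
ok, gross, p = check_p(1.80, 0.04)
```

In this sketch, a plain inconsistency (say, a rounding slip that leaves both values on the same side of .05) would set `ok` to `False` but `gross` to `False` as well, matching the article's distinction between the roughly-half of papers with some inconsistency and the smaller fraction whose conclusions actually change.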
For the purposes of demonstration, Nature.com reported that the software scanned more than 30,000 publications in under two hours, with worrisome results: “The software found that 16,700 of the papers included tests of statistical significance in a format that it could check. Of these, it discovered that 50% had reported at least one P value — which reflects the likelihood of getting observed results if a null hypothesis is true — that was inconsistent with related statistical parameters in the test. And 13% of the papers contained an error that changed their conclusion: non-significant results were reported as significant, or vice versa.”
Statcheck is available for download from the creators’ website, and requires a PDF-to-text converter in order to scan articles.