ABSTRACT
A focus on novel, confirmatory, and statistically significant results by journals that publish experimental audit research may produce substantial bias in the literature. We explore one source of such bias, known as p-hacking: a practice in which researchers, whether knowingly or unknowingly, adjust their collection, analysis, and reporting of data and results until nonsignificant results become significant. Examining the experimental audit literature published in eight accounting and auditing journals over the last three decades, we find an overabundance of p-values at or just below the conventional thresholds for statistical significance. This excess of “just significant” results indicates that some findings published in the experimental audit literature are potentially a consequence of p-hacking. We discuss potential remedies that, if adopted, may help alleviate concerns about p-hacking and the publication of false-positive results.
JEL Classifications: M40.
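The detection logic described in the abstract can be illustrated with a caliper-style test: within a narrow window around a significance threshold, reported p-values should fall on either side at roughly comparable rates absent p-hacking, so a marked excess just below the threshold is suspicious. The following is a minimal sketch of that idea, not the authors' actual specification; the window width, the equal-probability null, and the example counts are illustrative assumptions.

```python
# Illustrative caliper test for bunching of p-values just below a
# significance threshold. The 0.005 window and the 50/50 null are
# simplifying assumptions for demonstration purposes only.
from scipy.stats import binomtest

def caliper_test(p_values, threshold=0.05, width=0.005):
    """Compare counts of p-values just below vs. just above a threshold.

    Under a no-p-hacking null, p-values within a narrow caliper should
    land on either side of the threshold with roughly equal frequency;
    an excess just below is consistent with p-hacking.
    """
    just_below = sum(1 for p in p_values if threshold - width < p <= threshold)
    just_above = sum(1 for p in p_values if threshold < p <= threshold + width)
    result = binomtest(just_below, n=just_below + just_above, p=0.5,
                       alternative="greater")
    return just_below, just_above, result.pvalue

# Hypothetical example: 41 reported p-values land just below 0.05
# and 19 just above; the one-sided binomial test flags the asymmetry.
below, above, pval = caliper_test([0.048] * 41 + [0.052] * 19)
print(f"just below: {below}, just above: {above}, binomial p = {pval:.4f}")
```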