We propose and model how a practitioner-based framework aimed at reducing the likelihood of unidentified misstatements in financial statements—the Audit Risk Model—might be adapted for use in the accounting research peer review process to help address the increasing number of retractions of academic studies. This proposal is intended to engage both practitioners and academics. Practitioners need to know the importance the academy places on producing high-quality research; and just as practitioners can learn from the results of high-quality research studies, academics can learn from the activities of practitioners. The discussion that follows is intended to create debate, not to provide definitive answers.

Nobody has a monopoly on the truth and there are all kinds of vested interests and egos that come into play. And while we kind of worship at the altar of, you know, the peer reviewed literature, the fact is that that peer review literature is not always correct. What people have to realize is that the referees, the experts who review a paper that has been sent in for publication, do not redo the work. I mean they have to assume of course that the work was done in the way the paper specifies and the data is [sic] indeed correct. So, I think it is appropriate to be skeptical.

The retraction of academic studies in both the hard and social sciences is skyrocketing. For example, USA Today reported on the scientific journal Nature's retraction of two papers on the production of stem cells, co-authored by a Harvard University professor (Weintraub 2014). Specific to accounting, The Boston Globe reported that data underlying the work of one of the discipline's most prolific researchers, James E. Hunton, Ph.D., of Bentley University, had been falsified. An investigation conducted by Bentley University, likely prompted by the American Accounting Association's (AAA) retraction of one of the professor's co-authored papers (Hunton and Gold 2013), has led to the retraction of 31 papers co-authored by Hunton.1 The University's investigation report concluded that the entire body of Hunton's work—approximately 50 papers published while at Bentley—should be considered suspect (Healy 2014).

Grieneisen and Zhang (2012) conducted a review of the 42 largest bibliographic databases for major scholarly fields and publisher websites and found a total of 4,449 retractions during the years 1928 to 2011. Of those, 4,428 (99.5 percent) occurred since 1980. Such retractions were reported as being attributed to a combination of alleged publishing misconduct (e.g., plagiarism of others' or one's own previously published work—47 percent), questionable data or interpretations (42 percent), and to a lesser extent, alleged research misconduct (20 percent). Of those classified as research misconduct, over one-half were attributed to 15 authors.2 Although retractions comprise a very small percentage of the total body of research published each year, even one retraction attributed to misconduct taints the entire body of scholarly investigation.3 Further, it is likely that identified instances of research misconduct are merely a small portion of the total population of misconduct. As described by Uri Simonsohn, “Outright fraud is somewhat impossible to estimate, because if you're really good at it you wouldn't be [sic] detectable” (Jha 2012). In his meta-analysis of survey data, Fanelli (2009) reports that nearly one in 50 scientists admits to fabricating or falsifying his/her work, and seven in 50 report observing research misconduct by their peers.

At one extreme, if the goal is to eliminate all research errors and misconduct, then the obvious response is to have independent peer reviewers re-perform the data collection and statistical analyses of all studies submitted for publication. Validation at this level would obviously be costly. Publishers would have to employ full-time reviewers, and growth of the body of knowledge would likely slow to a trickle. Further, re-collection of experimentally based data for replication would, at best, be difficult given the wide variety of experimental settings under which such data are typically gathered.

At the other extreme, since identified retractions comprise only a small proportion of total studies published, the academy could stick with the status quo. We believe that ignoring the problem is untenable.

Reviews of research must be both effective, in terms of preventing unsupported studies from being published, and efficient, in terms of expanding the base of knowledge in a timely fashion. Said another way, the amount of effort expended during the review process must balance the need to relay knowledge (the reward) with the likelihood of misconduct (the risk).

We liken the peer review process to the auditing process and propose that a practitioner model, the Audit Risk Model (ARM), might be used as a framework in the accounting research peer review process.4 Auditors (peer reviewers) give a “seal of approval” on managers' (authors') financial statements (research studies). Like managers, researchers have biased interests; and like auditors, peer reviewers have a duty (at least implied) to protect the interests of those who rely on the results of research.5 The ARM is intended to balance the opposing goals of risk (misstatement in financial statements or published research) and reward (timely publication of financial statements or research).6

In the sections that follow, we describe the typical review process of AAA journals. In the next two sections, we describe the ARM and model how it might be used as a framework to assess the risk of research misconduct and to determine the depth of review to be conducted to reduce such risk to an acceptable level. In the article's conclusion, we describe how use of a risk-based framework in the academic research peer review process might have prevented the Hunton retractions.

The AAA provides authors with guidelines on the preparation and submission of manuscripts for consideration for publication (AAA 2014). Instructions and guidelines of The Accounting Review, the premier AAA journal, primarily focus on the style and format of manuscripts and preparation of the files intended for submission. In addition, instructions to authors indicate that it is expected that manuscripts have been critiqued by colleagues before submission, and authors are encouraged, but not required, to make their data available upon request. While these instructions, especially the expectation of prior review, provide some scrutiny of a manuscript, they do not include specific risk management tests to ensure the integrity of the data or the statistical results. Further, there is no explicit obligation that authors make their data and statistical analyses available to other parties; and the sheer number of articles submitted for review makes it impractical to perform tests of the data and results of all, or even a substantial portion of, articles sent out for review.

The AAA has embarked on the “AAA Publications Ethics Policy” (Ethics Policy) to address concerns regarding the integrity and quality of manuscripts published in AAA journals. Specifically, the Ethics Policy “is a framework developed to inform authors, editors, and reviewers of their responsibilities to ensure the quality and integrity of manuscripts published in our journals and presented at AAA conferences” (American Accounting Association Publications Ethics Task Force [AAA-PETF] 2014).7

Currently, steps taken or proposed to minimize research misconduct are primarily aimed at increasing the onus on researchers. For example, the AAA's Publications Ethics Task Force's most recent (February 18, 2015) draft policy suggests requiring “positive assurance from the author(s) of the integrity of the data underlying the research, including whether all authors accept joint responsibility for the integrity of the data, and if not, which authors are taking responsibility for the data.”8

While the current manuscript review process and actions taken or planned are valuable in reducing the possibility that unsupported results may be published and go undetected, they may not be optimally effective or efficient as they are not based on an evaluation of risk. The ARM, which is familiar to most accounting academic researchers, could serve as the basis for incorporating the evaluation of risk of falsification and fabrication into the manuscript review process.

Like producers of financial statements, researchers have biased interests in their studies, suggesting that, in addition to mistakes (i.e., errors), there is a possibility of misconduct (i.e., irregularities). As depicted in Figure 1, the ARM proposes that the likelihood of errors and irregularities is a function of inherent risk and control risk. A high level of inherent risk increases the possibility of errors and irregularities. In an audit, inherent risk is expressed as the risk of misstatement, independent of controls intended to reduce or manage such risk.

FIGURE 1

Audit Risk Model

Source: AU Section 312, Audit Risk and Materiality in Conducting an Audit.

a Inherent risk (IR) is the susceptibility of a relevant assertion to a misstatement that could be material, either individually or when aggregated with other misstatements, assuming that there are no related controls.

b Control risk (CR) is the risk that a misstatement that could occur in a relevant assertion and that could be material, either individually or when aggregated with other misstatements, will not be prevented or detected on a timely basis by the entity's internal control.

c Detection risk is the risk that the auditor will not detect a misstatement that exists in a relevant assertion that could be material, either individually or when aggregated with other misstatements.

d Risk of material misstatement (RMM) is the product of inherent risk (IR) and control risk (CR).
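
In equation form (this is the standard algebraic statement of the model, where AR denotes the overall level of audit risk the auditor is willing to accept):

\[
AR = IR \times CR \times DR = RMM \times DR,
\qquad\text{so}\qquad
DR = \frac{AR}{IR \times CR}.
\]

That is, the higher the combined inherent and control risk, the lower the detection risk the auditor can tolerate, and the more persuasive the audit evidence must be.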


Inherent risk increases with change and complexity. In a financial statement audit, newly installed computer systems may not be error-free or operate in the manner intended, derivative transactions may not be properly reflected in the financial statements, and the audit of a publicly traded company's financial statements carries higher risk than that of a privately held company's. Inherent risk is never zero; it is assessed as high, medium, or low relative to other companies' risks.

In the manuscript review process, inherent risk may be seen as increasing with the complexity of the research method, the novelty of the findings (i.e., results suggestive of a paradigm shift versus results implying errors in prior analyses), the reputation of the journal, and the likelihood of reliance on the study's results. Presence of the “conditions necessary for fraud” also increases inherent risk. The “fraud triangle” holds that for irregularities to occur, individuals must feel pressure to commit misconduct, must be able to rationalize their actions, and, importantly, must have the opportunity to act in a fraudulent manner (i.e., operate in an environment with few, or no, controls).

In an audit, controls are identified as those processes and procedures intended to address inherent risks. Adequate testing of newly installed systems (e.g., parallel processing, was-is reports, audit trails) reduces information technology risk. Adequate training on new accounting pronouncements and regulations, and independent reviews, reduce the risk of improper recording, disclosure, and non-compliance.

In the manuscript review process, the presence and relationship of co-authors, and the quality of education and experience of the researchers, may be seen as controls to address the inherent risks. Sole-authored studies may be viewed as having a greater risk of errors, as may studies conducted by authors affiliated with non-research institutions. In terms of the possibility of misconduct, pressure to publish is elevated when tenure is at stake, or when there is a significant gap in years between publications. Researchers may rationalize misconduct as being necessary to secure a livelihood to support their families, or to maintain their standard of living by retaining endowments.

Somewhat ironically, and analogous to the financial statement preparation process, this suggests that only the most trusted researchers—those with a reputation of top-tier publications—will be able to commit research misconduct, as they are the most likely to be unencumbered by “controls.” Those who have already published in, for example, The Accounting Review may be more trusted by their co-authors and may be less likely to be second-guessed in terms of their results. This also suggests, in terms of an individual's career, that the possibility of misconduct increases as tenure approaches, then decreases for a period of time, then rises again due to the pressures of full professorship and endowments.

In the ARM, based on the auditor's assessment of the combination of inherent and control risk (the risk of material misstatement—RMM), the auditor determines the audit tests necessary to reduce the risk of failing to detect financial statement errors and irregularities (detection risk) to an acceptable level. If RMM is high, then the auditor performs more, and higher-reliability, tests of transactions and balances.

In the manuscript review process, if the risk of errors or misconduct (REM, the analog of RMM in auditing) is high, then the reviewer should employ more, higher-reliability tests of the data underlying the manuscript. For example, a sole-authored study by an untenured author should be subjected to more extensive tests of the dataset. Of course, the ability to employ the ARM framework in the peer review process depends on the reviewers' (or at least the editor's) ability to appropriately assess REM. As currently conducted, the review process is intended to be blind, although how blind it actually is remains debatable.
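
Stated in the same notation, with REM taking the place of RMM (the “review” subscript is ours, added for clarity):

\[
REM = IR \times CR,
\qquad
DR_{\text{review}} = \frac{\text{acceptable review risk}}{REM}.
\]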

Figure 2 presents an example decision flowchart intended to depict how a risk-based manuscript review process might be adopted by incorporating key concepts of the ARM. Consistent with current practice, the process begins when a manuscript is submitted for consideration. Step one is an evaluation of inherent risk, performed by the publication's editor. As depicted, questions appropriate to the assessment of inherent risk include (but are not necessarily limited to): How important is the journal's reputation? Most editors will likely evaluate journal reputation risk as high; still, some journals are considered “A” journals and are therefore likelier than others to suffer the effects of unsubstantiated results (e.g., a reduction in submissions, disassociation by the publisher, and loss of indexing and impact factors). Further, the nature of some journals permits addressing research questions with a narrower focus than others, or allows for broader study limitations.

FIGURE 2

Research Risk Decision Flowchart


Is it likely that individuals or institutions will rely on the study's results? Not all research is equal in terms of impact. Some studies are highly cited, some influence practice, and others may be interesting but have a low likelihood of ever influencing future research, regulation, or practice. Erik Lie's (2005) “On the Timing of CEO Stock Option Awards” is an example of a highly influential study that contributed to changes in regulations aimed at insider equity award-granting. As such, the collective inherent risk associated with Lie's manuscript would have been assessed as higher due to its potential influence. Admittedly, ex ante prediction of a study's impact is difficult; however, some studies are more practice-related, have more “controversial” findings, and/or are published in journals more commonly read by individuals other than academics.

Is the research method complex? The more complex the method and analyses, the higher the likelihood of error. In behavioral research, complex questions, collection methods, and analyses may contribute to misinterpretation by subjects, use of inappropriate data, and erroneous analyses. At worst, complexity can facilitate manipulation of the data in support of the study's hypotheses. In archival research, complex datasets and analyses can lead to errors in merging different databases, misapplied statistical analyses, theoretically unsupported transformations, and improper elimination of deemed outliers.

Are the findings novel, or nearly unbelievably perfect? Unexpected, paradigm-shifting results increase inherent risk. In behavioral research, it is extremely difficult to design an experiment with perfectly composed condition cells (e.g., a 2 × 3 between-subjects experiment with exactly 25 participants in each cell, and with each small-sample comparison achieving statistical significance). Others have found this “red flag” to be indicative of falsified results (Simonsohn 2013; Carlisle 2012; Sternberg and Roberts 2006; E. Gaffan and D. Gaffan 1992; Roberts 1987).9 While novel or nearly unbelievable findings should not be discarded as necessarily untrue, they suggest that a heightened review effort may be warranted.

How significant are pressures on the author(s) to publish (e.g., years to tenure, time since last publication, employment by a “Research One” versus a teaching institution, years as an associate professor)? An untenured professor in his or her fifth year at a Research One institution with only one A-rated publication may have significant pressure to obtain another A-level acceptance. Desperate people may do desperate things. In the absence of adequate controls, increased pressure may help individuals rationalize improper actions.
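
To make this step concrete, the questions above could be collapsed into a simple scoring aid. The Python sketch below is illustrative only; the factor names, weights, and 0-to-1 rating scale are our hypothetical choices, not part of the ARM or of any AAA policy:

    # Hypothetical inherent-risk scoring aid for a submitted manuscript.
    # Each rating is the editor's judgment on a 0.0 (low risk) to 1.0
    # (high risk) scale; the factors mirror the questions discussed above.
    INHERENT_RISK_WEIGHTS = {
        "journal_reputation_at_stake": 0.25,
        "likelihood_of_reliance": 0.20,
        "method_complexity": 0.20,
        "novel_or_too_perfect_findings": 0.20,
        "pressure_to_publish": 0.15,
    }

    def score_inherent_risk(ratings):
        """Weighted average of the editor's factor ratings (0.0 to 1.0)."""
        return sum(w * ratings[f] for f, w in INHERENT_RISK_WEIGHTS.items())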

Once inherent risk has been assessed, generally on a continuum from high to low, the editor should attempt to determine whether controls sufficient to mitigate the identified risks exist. As depicted, example questions that may help editors assess control risk include (but are not necessarily limited to): Are there co-authors? If so, then what is the relationship among the co-authors? In general, co-authors—assuming they play an active role in the study and development of the manuscript—act as a control over errors and irregularities that may arise. However, it may be difficult for subordinate co-authors to act as a control in addressing inherent risk. An exception to the control provided by co-authors may be research generated from a student's dissertation: although such work is frequently published under a single author's name, the rigor typically applied by a dissertation committee obviously helps to mitigate REM.

What is the quality of the author(s)' institution? Research One institutions tend to have access to better, more complete databases. Such institutions also typically host workshops where authors' work may be vetted prior to submission to a journal for publication. Researchers at larger, better-known institutions may also have access to funds to attend conferences and to purchased (versus hand-collected) data sources. Just as a lack of controls does not a priori mean that financial statements contain errors or irregularities, author affiliation with a smaller institution is not an a priori reason to question a study's results. It does, however, suggest that controls intended to reduce the assessed level of inherent risk (e.g., access to quality data, likelihood of sufficient vetting) cannot be relied upon.

What is the quality of the education of the author(s)? While there are obvious exceptions, the quality of the author(s)' Ph.D.-granting institution is likely highly correlated with the appropriate application of statistical analyses, particularly when such analyses are complex.

What are the quality and number of the author(s)' prior publications? High-quality prior publications suggest the author(s)' work has previously been subjected to rigorous review, reducing the risk of subsequent errors. An exception to this expectation may be “overproduction” of high-quality research. Production and publication of high-quality research is a lengthy process. The credibility of an author's record may be questionable when, given the length of his or her career, the sheer number of publications in high-quality academic journals is beyond believability.

Has the manuscript been presented to, or reviewed by, others? As currently suggested by the submission guidelines of The Accounting Review, critiques prior to journal submission help to reduce the likelihood of errors and may serve as a control over the possibility of misconduct.
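
The control-risk questions lend themselves to the same hypothetical treatment; again, the factor names and weights are our assumptions, with higher ratings indicating weaker or absent controls:

    # Hypothetical control-risk scoring aid mirroring the questions above.
    # Ratings: 0.0 = strong control present, 1.0 = control absent.
    CONTROL_RISK_WEIGHTS = {
        "no_active_coauthors": 0.25,
        "limited_institutional_vetting": 0.20,
        "doctoral_training_concerns": 0.20,
        "implausible_publication_record": 0.20,
        "not_previously_presented_or_reviewed": 0.15,
    }

    def score_control_risk(ratings):
        """Weighted average of the editor's control-risk ratings (0.0 to 1.0)."""
        return sum(w * ratings[f] for f, w in CONTROL_RISK_WEIGHTS.items())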

The ARM requires that the auditor (editor) then judge whether the controls in place are adequate to address the perceived inherent risk. As an example, if an untenured professor submits a manuscript to The Accounting Review, the results of which are contrary to the results of most prior research and are likely to have high impact, then inherent risk would be viewed as high. Controls might be viewed as adequate to address this high inherent risk if the paper is a derivative of the author's dissertation from a high-quality institution; ergo, REM would be viewed as low.

In auditing, RMM is not a binary decision; it is a scaled judgment. Applied to the peer review process, this means the editor would make a scaled assessment of the likelihood that a manuscript contains errors or irregularities (REM) and then develop a plan to address such risk. In other words, the editor sets detection risk, the risk that a study contains errors or irregularities that go undetected during the review process, by establishing a response plan.
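
Continuing the hypothetical sketch, the editor's scaled judgment and the resulting detection-risk target could be computed directly from the ARM algebra; the 5 percent acceptable review risk below is an assumed constant for illustration, not a prescription:

    def required_detection_risk(inherent_risk, control_risk,
                                acceptable_review_risk=0.05):
        """ARM algebra applied to peer review: REM = IR x CR plays the
        role of RMM, and the detection risk the review process may
        tolerate is acceptable risk / REM. A lower result calls for
        more, and more reliable, review procedures."""
        rem = inherent_risk * control_risk
        if rem <= acceptable_review_risk:
            return 1.0  # ordinary review effort already suffices
        return acceptable_review_risk / rem

For example, ratings of 0.6 (inherent) and 0.7 (control) yield an REM of 0.42 and a detection-risk target of roughly 0.12 under these assumptions.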

As described in Figure 2, the editor's response plan could include one or more of the following: assign multiple, appropriately experienced reviewers; employ plagiarism software; request the dataset and/or code; and attempt to replicate the study's results. An alternative to requesting an author's dataset and/or code post-submission based on judged REM might be to require that all submissions include datasets/code—to be maintained on a confidential basis unless REM is judged to be high. With today's technologies, supplying and securely storing very large datasets is possible.
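
The response options named above could then be keyed to that target. The thresholds in this sketch are hypothetical cut-offs, chosen only to illustrate the mapping from a lower tolerable detection risk to more, higher-reliability procedures:

    def response_plan(detection_risk_target):
        """Map the detection-risk target to procedures named in the text."""
        plan = ["standard review by an appropriately experienced reviewer"]
        if detection_risk_target < 0.50:
            plan.append("assign multiple, appropriately experienced reviewers")
            plan.append("screen the manuscript with plagiarism software")
        if detection_risk_target < 0.20:
            plan.append("request the dataset and/or analysis code")
        if detection_risk_target < 0.10:
            plan.append("attempt to replicate the study's results")
        return plan

Continuing the earlier example, a target of roughly 0.12 would prompt multiple reviewers, plagiarism screening, and a dataset request, but not full replication.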

Some have gone as far as suggesting that authors be required to make the raw data supporting their published results publicly available (Simonsohn 2013). Doing so might deter submission of falsified results, but it also may reduce the number of submissions to an undesirable level due to concerns over intellectual property rights and confidentiality.10

Admittedly, many of these procedures are not new or unique. The novelty we are suggesting is that the nature and level of review performed should be a response to a formalized evaluation of risk. It is likely that, in some cases, just as in audits, the required review procedures would be fewer than those currently conducted.

To assess inherent and control risk, authors would have to complete and submit, along with their manuscripts, more extensive questionnaires, certifications, and, possibly, datasets. A form documenting the editor's evaluation of REM and risk mitigation plan would also be necessary.

Beyond these mechanics, there are other hurdles that would need to be considered in connection with adopting a risk-based manuscript review process. We next discuss several of the most important: the obligation of reviewer time, the “blind” review process, and the necessity of training editors and reviewers.

To the extent that, under a risk-based manuscript review process, at least some manuscripts are assessed as having high REM and therefore require expanded review procedures (e.g., review of the dataset, replication of results), the review of manuscripts will require a greater time commitment. Currently, most reviewers and many editors volunteer their time. The volunteer nature of the peer review process may itself hinder, or at least not aid, the identification of manuscript errors and irregularities.

Facilitating expanded reviews may require that publishers employ some individuals on at least a part-time basis. It may be possible to use Ph.D. students when extensive reviews are necessary (e.g., replications). The advantage of this type of system is that dedication and repetition would likely improve the effectiveness and efficiency of the review process. Temporary reviewer assignments would also benefit the individuals employed by exposing them to a breadth of current research, possibly helping to generate ideas for their own research.

The expense associated with this type of system may need to be borne by publishers, in which case manuscript submission fees and/or journal subscription fees would have to be increased. Alternatively, it may be possible to implement a “visiting reviewer” system whereby reviewers are temporarily relieved of at least a portion of their teaching loads while their institutions continue to pay them. Economically, this type of arrangement would be no different than increasing submission fees because, most commonly, institutions bear the cost of submission fees and subscriptions.

The review process of most manuscripts submitted to academic journals is blind. That is, the assigned reviewers are not provided the name(s) of the author(s), and likewise, authors are not provided the names of reviewers. The intent of the blind referee process is to protect confidentiality and to reduce the possibility of bias. If a risk-based manuscript review process is implemented, then, depending on the editor's response plan, and particularly when REM is assessed as high, authors and reviewers may be identified to one another. While this may seem a significant change from current practice, we posit, consistent with the views of many researchers (Bailey, Hermanson, and Louwers 2008), that the existing peer review process is likely less than strictly blind. The broad availability of manuscript drafts on websites, at workshop and conference presentations, and through the Social Science Research Network makes anonymity nearly impossible.

In terms of reviewers, most academic journals publish the names of their editor, senior editors, or associate editors, and many also list those serving on their review boards.

Adoption of the ARM as a tool for assessing the risk of manuscript errors or researcher misconduct will necessitate training of those serving as editors and reviewers. While somewhat burdensome, training is probably one of the lowest hurdles, at least for reviews of academic accounting research. First, the ARM is familiar to many accounting academics. Second, academics are experts in course development. Third, academics generally value training.

Online courses could readily be adapted from existing auditing course material. Important points of emphasis to be included in training materials would include:

  • visual of the model (i.e., Figure 2);

  • description of the interaction of inherent risk, control risk, and detection risk;

  • description of the conditions necessary for misconduct (i.e., pressure, rationalization, opportunity);

  • examples of red flags that suggest misconduct may have occurred;

  • example procedures to reduce the risk of possible errors or misconduct (REM);

  • review of form(s) documenting the editor's assessment of REM and risk management response plan;

  • description of a reporting hierarchy (notification procedures) in the event that misconduct is suspected; and

  • reminders about the obligation of confidentiality—particularly in cases where reviewers are required to examine data or replicate results.

An argument could be made that everyone participating in the manuscript review process could benefit from training covering how to review, or best practices in reviewing, and that Ph.D. programs could better prepare students in the production of research audit trails and logs (see Oler and Pasewark 2014). The likely benefits of such training, and its possible impact on manuscript errors and misconduct, are beyond the scope of this discussion.

If users of accounting research are to have confidence in the results of published studies, then the risk of errors and misconduct must be minimized. We propose that the ARM may be an appropriate framework for application by academic accounting journals in the peer review process to help achieve this goal. While use of the ARM, or any risk-based model, may not be a panacea, it has the potential to balance the competing goals of peer review effectiveness and efficiency.

There are certain benefits to applying the ARM rather than other risk-based frameworks. The ARM explicitly takes into consideration both unintentional errors and misconduct, including the concepts embedded in the fraud risk triangle. It is intended to be applied with professional skepticism, not under an assumption of guilty until proven innocent, or vice versa. Further, the ARM is familiar to many accounting researchers, reducing start-up time and costs. It is parsimonious, and therefore fairly simple to communicate; and it is intended to balance risk against the scarce resource, time.

To apply the ARM in the context of the manuscript review process, a number of changes would be required. These include more expansive author submission forms that capture educational background, employment history, and a detailed list of prior publications. In this sense, submission forms would have to include information commonly found on an individual's curriculum vitae. Forms documenting editors' assessments of risk and plans to address identified risks would also be required. To ease the burden of documentation, largely standardized forms could be adopted that list, checklist style, factors that may contribute to increased inherent or control risk, along with an array of steps to be taken in response. It would also be necessary to design and implement training for editors and reviewers, and to address (or reconcile) the notion of a blind review process. Perhaps most importantly, application of the ARM to the manuscript review process would necessitate an increased time commitment from editors and at least some reviewers. To reduce REM, it may be necessary for publishers to employ some individuals on at least a part-time basis.

The obvious question is, would a risk-based peer review process, such as one built on the ARM, have prevented the Hunton retractions? As in auditing, due to the possibility of collusion among authors, override by editors and reviewers, and simple mistakes, no system of internal control can prevent all errors and misconduct. However, in the Hunton matter, we believe the ARM would have significantly increased the likelihood of detecting misconduct prior to publication. Specifically, factors contributing to heightened inherent risk in most of Hunton's publications included: most of the affected academic journals were considered “top-tier”; many of the conclusions were based on data collected from participants considered very difficult to access; in some cases, the presented analyses were nearly perfect in terms of the number of observations and statistical significance; and many of the studies' results were novel—not merely a “turning of the screw” in terms of knowledge gained.

In terms of control risk, although many of the co-authors at the time of publication were tenured professors, many were also less well published. Over a period of 20 years, Hunton published more than 120 articles,11 at least 11 of which appeared in journals generally considered among the “top five” in accounting, and another 26 in journals generally considered the top of their subdiscipline. Given the time necessary to develop a research study, gather data, perform analyses, and properly vet results through both informal and formal channels—typically years, not months—this level of productivity is incredible. Had consideration of Hunton's research productivity been required as part of the peer review process, at least some of the editors involved might have suspected the possibility of errors or misconduct, ergo, assessed REM as above average, and modified the peer review process to include a request for, and review of, at least some datasets. The AAA's investigation of Hunton's work found that co-authors of many of the retracted papers were unable to provide supporting data or confirm how the studies were conducted.

Beyond adoption of an ARM-like manuscript review process for managing REM in research studies, there is at least one other mechanism that publishers should consider implementing as a way of detecting problems early: establishing a hotline for reporting suspected misconduct. Similar to the programs established by issuers as required by the Sarbanes-Oxley Act, and by the SEC as mandated by the Dodd-Frank Act, guidelines could be established requiring that whistleblowers have a verifiable basis for any reported, suspected misconduct.

Some will say that implementation of a risk-based manuscript review process would require a cost-prohibitive investment of time and resources. Further, as theorized by Tomkins (2001), adopting some of the suggested procedures—particularly those related to dataset production—may erode trust among members of the academy, which may lead to stagnation in the production of knowledge. We suggest that if the academy is to maintain its reputation for quality and independence, then improving the manuscript review process is necessary. To that end, we recommend that, as part of the continuing activities of the Task Force, consideration be given to adopting a risk-based manuscript review process for all AAA publications. Adoption of such a process will not prevent all errors and irregularities, but does the fact that smoke detectors do not prevent all fires mean that no one should buy and install them?

American Accounting Association (AAA). 2014. Editorial policy and style information. The Accounting Review.

American Accounting Association Publications Ethics Task Force (AAA-PETF). 2014. AAA Publications Ethics Policy, Part A: Authorship. PUB-004 (April 21).

Bailey, C. D., D. R. Hermanson, and T. J. Louwers. 2008. An examination of the peer review process in accounting journals. Journal of Accounting Education 26 (2): 55–72. 10.1016/j.jaccedu.2008.04.001

Carlisle, J. 2012. The law of anomalous numbers. Proceedings of the American Philosophical Society: 551–572.

Chen, C., Z. Hu, J. Milbank, and T. Schultz. 2013. A visual analytic study of retracted articles in scientific literature. Journal of the American Society for Information Science and Technology 64 (2): 234–253. 10.1002/asi.22755

Fanelli, D. 2009. How many scientists fabricate and falsify research? A systematic review and meta-analysis of survey data. PLOS One.

Gaffan, E., and D. Gaffan. 1992. Less-than-expected variability in evidence for primacy and Von Restorff effects in rats' nonspatial memory. Journal of Experimental Psychology: Animal Behavior Processes 18 (3): 298–301. 10.1037/0097-7403.18.3.298

Grieneisen, M. L., and M. Zhang. 2012. A comprehensive survey of retracted articles from the scholarly literature. PLOS One 7 (10). 10.1371/journal.pone.0044118

Healy, B. 2014. Bentley professor James Hunton's “entire body of work” called into question after investigation. The Boston Globe.

Hunton, J. E., and A. Gold. 2013. RETRACTION: A field experiment comparing the outcomes of three fraud brainstorming procedures: Nominal group, round robin, and open discussion. The Accounting Review 88 (1): 357. 10.2308/accr-10326. Originally published in 2010, The Accounting Review 85 (3): 911–935.

Jha, A. 2012. False positives: Fraud and misconduct are threatening scientific research. The Guardian.

Lie, E. 2005. On the timing of CEO stock option awards. Management Science 51 (5): 802–812. 10.1287/mnsc.1050.0365

Nelson, L. D., J. P. Simmons, and U. Simonsohn. 2012. Let's publish fewer papers. Psychological Inquiry 23: 291–293. 10.1080/1047840X.2012.705245

Oler, D. K., and W. R. Pasewark. 2014. How to Review a Paper.

RetractionWatch.com. 2015. Accounting Professor Notches 30 (!) Retractions after Misconduct Finding.

Roberts, S. 1987. Less-than-expected variability in evidence for three stages in memory formation. Behavioral Neuroscience 101 (1): 120. 10.1037/0735-7044.101.1.120

Schwarcz, J. 2014. Skeptic check: Check the skeptics. Big Picture Science.

Simonsohn, U. 2013. Just Post It: The Lesson from Two Cases of Fabricated Data Detected by Statistics Alone.

Sternberg, S., and S. Roberts. 2006. Nutritional supplements and infection in the elderly: Why do the findings conflict? Nutrition Journal 5 (1): 30. 10.1186/1475-2891-5-30

Tomkins, C. 2001. Interdependencies, trust and information in relationships, alliances and networks. Accounting, Organizations and Society 26 (2): 161–191. 10.1016/S0361-3682(00)00018-0

Weintraub, K. 2014. Science journal retracts paper on stem cell discovery. USA Today.

The number of related retractions is as of June 29, 2015 (RetractionWatch.com 2015).

2

As reported by Jha (2012), authors discovered to have falsified data recently include Dirk Smeesters, who studied various aspects of consumer behavior; Diederik Stapel, who studied the impact of environmental factors on behavior; and Naoki Mori, who studied infections and immunity; and, as reported by Healy (2014), James E. Hunton, who studied, among other things, the behavior of independent auditors.

3

Chen, Hu, Milbank, and Schultz (2013) describe how, in spite of the retraction, published studies may have a lasting impact on society (e.g., reports of the association between certain vaccines and autism—retracted by The Lancet in 2010). Obviously, the risk to society of studies in some disciplines is greater than others.

4

The proposed framework may be generalizable to the peer review processes of other disciplines.

5

Most notably, reviewers are not generally sued for poorly conducted manuscript reviews, like auditors may be sued for poorly performed audits. However, it may be possible for reviewers to suffer similar reputational losses for poor work.

6

While there are other risk-based models for assessing and managing risk, chief among them, the Committee of Sponsoring Organizations of the Treadway Commission's (COSO) Enterprise Risk Management—Integrated Framework, and its recently updated, COSO 2013 Framework, these models are arguably more complex and directed at entity-level assessments, and therefore, are less easily adapted to a single process-level assessment. Another practitioner-based framework candidate is Statement on Auditing Standard No. 99, Consideration of Fraud in a Financial Statement Audit (AU 316), designed to detect financial statement misstatement. We believe the ARM is a more holistic framework, contemplating the auditor's response to the risk of both errors and irregularities (misconduct).

7

Copies of the most recent versions of these policies are available in the Publications Ethics Resources section of the AAA Commons.

9

See Simonsohn (2013) for examples of how analysis of raw data may help to identify research misconduct.

10

At least some see the publication of less research as non-problematic, or perhaps even ideal (Nelson, Simmons, and Simonsohn 2012).