SUMMARY
Global stakeholders have expressed interest in increasing the use of data analytics throughout the audit process. While data analytics offer great promise in identifying audit-relevant information, auditors may not use this information to its full potential, resulting in a missed opportunity for possible improvements to audit quality. This article summarizes a study by Koreff (2022) that examines whether conclusions from different types of data analytical models (anomaly versus predictive) and different types of data analyzed (financial versus non-financial) result in different auditor decisions. Findings suggest that when predictive models are used and identify a risk of misstatement, auditors increase budgeted audit hours more when financial data are analyzed than when non-financial data are analyzed. However, when anomaly models are used and identify a risk of misstatement, auditors' budgeted hours do not differ based on the type of data analyzed. These findings provide evidence that different data analytics do not uniformly impact auditors' decisions.
I. INTRODUCTION
In this article, we summarize a study that attempts to explain how auditors' prior experience using different types of analyses impacts how they respond to conclusions drawn from different data analytical models, depending on the type of data analyzed (Koreff 2022).
Despite advances in technology that enable accounting firms to develop more sophisticated data analytics to identify audit-relevant information and potentially improve audit quality, auditors' use of these tools is often inconsistent for a variety of reasons, including concerns over inspection risk (Eilifsen, Kinserdal, Messier, and McKee 2020), the Public Company Accounting Oversight Board (PCAOB) not explicitly requiring the use of these tools (PCAOB 2021b), and the restrictive nature of the technology (Dowling and Leech 2014). Koreff (2022) shows that even when the same output is presented, auditors' experience (familiarity) with the combination of the type of model and the type of data used to arrive at the same conclusion can result in inconsistent decision making.
The interview data in Koreff (2022) show that auditors report a comparable amount of experience analyzing financial and non-financial data when using anomaly models, which explains why decisions do not differ when auditors use anomaly models that analyze different data. Thus, when firms develop more advanced anomaly-based analytics, the type of data analyzed is not expected to result in inconsistent auditor decision making. However, the same cannot be said for predictive analytics, as interviewees reported that predictive analytics tend to focus on financial rather than non-financial data. Accordingly, Koreff (2022) demonstrates that when predictive analytics are used, auditors are more likely to incorporate the findings into their decisions when financial data are analyzed than when non-financial data are analyzed. These findings are in line with the PCAOB's data and technology research project, which expresses a concern that auditor experience with, and understanding of, analytics are important factors in the effective use of these tools (PCAOB 2021a) and, ultimately, in the improvement of audit quality.
Taken together, Koreff (2022) observes that neither of two attributes of analytics, the model used and the data analyzed, impacts auditors' decisions individually. However, the combination of these two attributes does impact auditors' decisions.
II. MOTIVATION AND EXPECTATIONS
Advances in technology have resulted in the development of data analytical tools that can perform a range of analyses, such as population testing, identifying outliers based on specified criteria, predictive modeling, and analysis of non-traditional unstructured data. In fact, the American Institute of Certified Public Accountants (AICPA) Assurance Services Executive Committee (ASEC) has developed an “Audit Data Analytics Guide” that suggests that data analytics are an outgrowth and expansion of analytical procedures (AICPA 2015, 2017; Appelbaum, Kogan, and Vasarhelyi 2017). Furthermore, Statement on Auditing Standards (SAS) 142 (titled “Audit Evidence”) permits auditors to use automated tools and techniques to enhance the evaluation of audit evidence, including the analysis of non-financial data.1
Although data analytics can be seen as an extension of analytical procedures (Appelbaum et al. 2017), auditors do not always use analytical procedures effectively (PCAOB 2007, 2008, 2013, 2014; Barr-Pulliam, Brazel, McCallen, and Walker 2020; Brazel, Leiby, and Schaefer 2022a; Cao, Duh, Tan, and Xu 2022). As an additional barrier to consistent implementation, PCAOB standards do not require the use of data analytics (PCAOB 2021b). Shortcomings in the use of analytics include users not considering risks beyond those the analytics identified (Seow 2011) and not properly evaluating false positives (Koreff, Weisner, and Sutton 2021). Auditors prefer simpler analytics, such as comparing current-year balances to prior-year balances, and thus may be reluctant to use more sophisticated analytics (Ameen and Strawser 1994; Trompeter and Wright 2010; Schmidt, Church, and Riley 2020a; Schmidt, Riley, and Church 2020b; Brazel et al. 2022b). Yet the PCAOB encourages the use of these tools to improve the audit process and audit quality (PCAOB 2016, 2018). One way to promote auditors' use of analytics may be to provide auditors with analytics that employ familiar analyses.
Familiar analyses are expected to induce cognitive fit. Cognitive fit refers to the congruence between the process used by a decision maker and the decision-aiding tool (Vessey and Galletta 1991; Al-Natour, Benbasat, and Cenfetelli 2008). Because cognitive fit is correlated with experience (Dunn and Grabski 2001; Goodhue and Thompson 1995),2 auditors will experience greater cognitive fit with data analytics that use combinations of analytical models and data types with which they are more familiar. Data analytics can be used to analyze a multitude of data types, but auditors will experience different levels of cognitive fit depending on their experience with the analyses the analytics utilize (i.e., the combination of model and data). Thus, when auditors view the results of an analytic that uses familiar analyses, they will experience greater cognitive fit with the analytic and therefore be more likely to incorporate its results into their decision-making process.
Koreff (2022) examined two analytical models: anomaly models and predictive models. Anomaly models perform a distributional (bell curve) analysis to identify outliers (SAS Institute 2014). Predictive models analyze patterns associated with previously identified issues and compare them with current patterns (Kuenkaikaew and Vasarhelyi 2013). Koreff (2022) illustrates that auditors' experience using these two types of models does not differ substantially. As a result, Koreff (2022) predicts that auditors' cognitive fit will depend not only on the analytical model used, but also on the data analyzed by the model.
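To make the distinction between the two models concrete, the sketch below contrasts them in minimal form. It is purely illustrative: it assumes a z-score outlier check for the anomaly model and a similarity-to-prior-issues scorer for the predictive model, and the function names, threshold, and data are hypothetical; it does not represent the analytics examined in Koreff (2022).

```python
import numpy as np

# Anomaly model sketch: a distributional (bell curve) analysis that flags
# current-year values lying more than `threshold` standard deviations from
# the mean. Note that it needs only current-year data.
def anomaly_flags(values, threshold=2.0):
    values = np.asarray(values, dtype=float)
    z = (values - values.mean()) / values.std(ddof=1)
    return np.abs(z) > threshold

# Predictive model sketch: scores current observations by their similarity
# to the average feature pattern of previously identified issues, so it
# cannot run without historical, labeled data.
def predictive_scores(current, prior_issue_patterns):
    current = np.asarray(current, dtype=float)
    prototype = np.asarray(prior_issue_patterns, dtype=float).mean(axis=0)
    numerator = current @ prototype
    denominator = np.linalg.norm(current, axis=1) * np.linalg.norm(prototype)
    return numerator / denominator  # cosine similarity; higher = more alike

# Hypothetical usage with made-up gross-margin ratios and feature vectors.
ratios = [0.41, 0.39, 0.40, 0.38, 0.42, 0.40, 0.39, 0.41, 0.72, 0.38]
print(anomaly_flags(ratios))              # only the 0.72 ratio is flagged

history = [[1.0, 0.2, 0.9], [0.8, 0.1, 1.1]]   # patterns from past issues
today = [[0.9, 0.15, 1.0], [0.1, 0.9, 0.05]]   # current patterns
print(predictive_scores(today, history))       # first pattern scores ~1.0
```

The contrast mirrors the descriptions above: the anomaly sketch uses only current-year data, while the predictive sketch depends entirely on a history of previously identified issues.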
Koreff (2022) assessed two types of data: financial data and non-financial data. Predictive models focus primarily on analyzing financial data (Dechow, Ge, Larson, and Sloan 2011; Sinclair 2015; Perols, Bowen, Zimmermann, and Samba 2017), whereas anomaly models are more capable of analyzing both types of data (Glover, Prawitt, and Wilks 2005; Hobson, Mayew, and Venkatachalam 2012; Brazel, Jones, and Prawitt 2014). See Figure 1 for a graphical depiction of auditors' experiences using the four combinations of the different types of analyses. The depiction in Figure 1 suggests that auditors have comparable experience using predictive and anomaly models (hence the two bars rising to the same level), yet they overwhelmingly use predictive analytics to analyze financial data rather than non-financial data. The lack of experience using predictive analytics to analyze non-financial data is expected to result in auditors resisting the incorporation of results from this combination of model and data into their decisions. The same cannot be said for anomaly models, as auditors' experience using financial and non-financial data is approximately the same (hence the more balanced split in the bar on the right side of the graph).
As a result, considering only the type of model or the type of data used by an analytic, rather than the combination of the two, could paint an incomplete picture of auditors' willingness to use the findings of analytics in their decisions. This difference in experience is expected to impact auditors' cognitive fit and, in turn, their decision making. When predictive models identify a risk of misstatement, auditors are expected to increase budgeted audit hours more (and presumably see a greater improvement in audit quality) when financial data are analyzed than when non-financial data are analyzed. Yet, when anomaly models are used and identify a risk, no such difference is expected.
III. THE EXPERIMENT
Participants
Koreff (2022) employed an experiment to test the aforementioned expectations. The participants consisted of 98 auditors of all ranks employed by firms of various sizes.3 Follow-up interviews were conducted with 26 of the auditors who completed the experiment to obtain insights on their experiences using different types of analytics (described in Figure 1).
Description of Experimental Context
Participants were provided with background information related to their role as an in-charge auditor of a privately held, mid-sized sporting equipment manufacturer. Participants were told that their firm's Central Data Analytics Group had identified a potential misstatement with an estimated range whose upper bound just exceeded performance materiality of $304,000. The conclusion stated that the use of predictive/anomaly models to analyze journal entries/emails indicated a 56 percent risk that revenue was overstated by an amount between $270,000 and $310,000. As such, the risk identified was held constant; however, the process used to arrive at that risk varied.4
Variables
The experiment manipulated the type of analytical model used (predictive or anomaly) and the type of data analyzed (financial or non-financial).5 See Appendix B for specific descriptions of these manipulations. The participants were asked: “Assume 30 hours were initially budgeted to audit revenue. How would you adjust the budgeted hours for the revenue account in percentages (every 5 percent change results in a change of 1.5 hours)?”
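As a worked illustration of this response scale (a minimal sketch; the function name is ours, and only the 30-hour baseline and the 5-percent-to-1.5-hour mapping come from the instrument):

```python
BUDGETED_HOURS = 30  # baseline hours given in the experimental instrument

def adjusted_hours(percent_change):
    """Convert a participant's percentage response into budgeted hours.

    Every 5 percent change corresponds to 1.5 hours because 5 percent of
    the 30-hour baseline is 1.5 hours, so the conversion reduces to
    baseline * (1 + percent_change / 100).
    """
    return BUDGETED_HOURS * (1 + percent_change / 100)

print(adjusted_hours(5))   # 31.5 hours: a +5 percent response adds 1.5 hours
print(adjusted_hours(10))  # 33.0 hours: a +10 percent response adds 3.0 hours
```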
Results
Koreff (2022) illustrated that auditors with experience using analytics report comparable experience with anomaly models and predictive models when answering, “How experienced are you in using data analytics that identify statistical outliers such as unusually high/low fluctuations or ratios (anomaly models) as part of your job function?” and “How experienced are you in using data analytics that compare current data against previously identified issues/occurrences to identify similarities (predictive models) as part of your job function?” Both questions were measured on five-point Likert scales with endpoints of 1 = “Not at all experienced” and 5 = “Extremely experienced.” No significant difference was identified between these measures, with means of 2.590 (for anomaly models) and 2.559 (for predictive models).
Results in Koreff (2022) also showed that the type of model used and the type of data analyzed did not individually impact auditors' determination of budgeted audit hours; however, budgeted audit hours were impacted by the combination of these two factors. See Figure 2 for a graphical depiction of the results. The results demonstrated that, when employing predictive analytics, auditors increased their budgeted hours more when financial data were used than when non-financial data were used (a 19.48 percent increase versus an 11.38 percent increase, p = 0.01). However, when anomaly models were used, Koreff (2022) observed no statistically significant difference in auditors' responses to the two data types (an 18.42 percent versus a 14.16 percent increase, p > 0.10).6 Additionally, when financial data were analyzed, auditors increased budgeted audit hours more when predictive models were used (a 19.48 percent versus a 14.16 percent increase, p = 0.09). On the other hand, when the analytics used non-financial data, auditors increased budgeted audit hours more when anomaly models were used (an 18.42 percent versus an 11.38 percent increase, p = 0.07).
To provide additional insights, we conducted supplemental analyses replicating the primary results presented in Koreff (2022) while adding control variables for auditor age, years of audit experience, years of professional experience, title, and prior experience using data analytics. In all cases, the primary results of Koreff (2022) hold. We also considered the possibility that industry expertise impacted auditors' use of the analytics by controlling for the percentage of time auditors spend auditing manufacturing clients and for whether the auditor audits any manufacturing clients. These variables did not significantly impact results, and the results remain consistent with the main results of Koreff (2022). Finally, we conducted an analysis including only auditors employed by national and international firms. The primary results remained supported, consistent with the results reported by Koreff (2022).
IV. FOLLOW-UP INTERVIEWS
Koreff (2022) conducted interviews with auditors who completed the experiment to provide additional insights into auditors' varying levels of experience using different types of analytics. When asked about prior experience using predictive analytics (specifically, “How would you describe the amount of experience you had using predictive analytics that analyzed financial vs. non-financial data?”), interviewees generally reported greater experience analyzing financial data than non-financial data. When asked about prior experience using anomaly analytics (specifically, “How would you describe the amount of experience you had using anomaly analytics that analyzed financial vs. non-financial data?”), auditors generally reported comparable experience analyzing financial and non-financial data.
V. IMPLICATIONS FOR PRACTICE
Despite the promise that data analytics hold for improving the audit process, simply providing these tools to decision makers is insufficient to induce adoption (Messier 1995; Venkatesh et al. 2003; Schmidt et al. 2020a; Schmidt et al. 2020b). Although firms are developing more advanced analytics, the results of Koreff (2022) suggest that auditors may not use these tools consistently. Effective implementation of these data analytics should account for auditors' prior experiences with combinations of analytical models and the data processed by these models.
The findings of Koreff (2022) suggest that even if a firm deems new analytics effective, it still needs to be cognizant of auditors' lack of experience using the underlying analysis as a barrier to adoption (and, potentially, to improving audit quality). Although auditors have comparable experience using the two types of analytics examined by Koreff (2022), consideration of the type of data these models tend to analyze revealed a disparity in the amount of time auditors spend analyzing different data with these models. While predictive analytics tend to focus on analyzing financial data, auditors reported that anomaly models incorporate a more balanced mix of financial and non-financial data. This disparity ultimately impacts auditors' decisions. Therefore, public accounting firms should train their employees on how predictive models can be effective using both financial and non-financial data to encourage consistent decision making. Firms should consider appropriately matching analytical models to the data being analyzed, or determine ways to ensure that auditors' experiences with the different model/data combinations employed in practice do not vary substantially (e.g., through training sessions illustrating the use of analytic tools).
REFERENCES
APPENDIX A
Quotes From Interviewees Discussing Prior Experience and Process Familiarity Impacting Cognitive Fit
Appendix A shows quotes about interviewees' prior experience using a technology-enabled tool (e.g., data analytics) and about familiarity with an analysis process inducing use of that tool. The first column (“Interviewee”) presents the interviewee number. The second column (“Prior experience”) includes the quote that best depicts the interviewee's discussion of how prior experience induces use of a tool. The third column (“Process familiarity”) includes the quote that best depicts the interviewee's discussion of how familiarity with a certain analysis induces use of a tool.
APPENDIX B
Manipulation Descriptions
In the predictive models condition, participants were provided a description of the predictive models used that read:
The Central Data Analytics Group employs predictive analytical models to identify patterns that are similar to previously identified issues. Predictive models rely on prior historical data to identify patterns and predict future events. Predictive models compare information in the data collected from clients associated with previously identified events/occurrences to current information. Predictive models may be used in the audit process to identify a pattern over several years associated with a previously identified material misstatement that may be indicative of a current material misstatement.
In the anomaly models condition, participants were provided a description of anomaly models used that read:
The Central Data Analytics Group employs anomaly analytical models to identify statistical outliers. Anomaly models rely only on current year (non-historical) data to identify statistical outliers. Anomaly models compare information in the data collected from your firm's client base to identify very high or low amounts or ratios. Anomaly models may be used in the audit process to identify very high or low ratios (i.e., gross margin, debt to equity, current ratio) that may be indicative of a current material misstatement.
The second variable manipulated between participants was the type of data analyzed. In the financial data condition, participants were told:
The Central Data Analytics Group is capable of identifying journal entries that affect revenue. For the Madison audit, the Central Data Analytics Group used this financial information to identify the number of journal entries that include revenue and were made just below the performance materiality threshold. Although the Central Data Analytics Group has explained what criteria they use for “just below the performance materiality” for the journal entries, this explanation contained substantial statistical jargon and was not well understood by your audit team. Several of your colleagues have reported similar issues with explanations received from the Central Data Analytics Group.
In the non-financial data condition, participants were told:
The Central Data Analytics Group is capable of identifying sentences in the e-mails that discuss revenue. For the Madison audit, the Central Data Analytics Group used this non-financial information to identify optimistic language used in internal and external e-mails for sentences that discuss revenue. Although the Central Data Analytics Group has explained what criteria they use for “optimistic language” in the e-mails, this explanation contained substantial statistical jargon and was not well understood by your audit team. Several of your colleagues have reported similar issues with explanations received from the Central Data Analytics Group.
APPENDIX C
Quotes From Interviewees Discussing Experience Using Anomaly and Predictive Models
Appendix C shows the interviewees' descriptions of their experience using different inputs for different analytics. The first column (“Interviewee”) presents the interviewee number. The second column (“Predictive”) includes the quote that best depicts the interviewee's experience using predictive analytics to analyze financial versus non-financial data. The third column (“Anomaly”) includes the quote that best depicts the interviewee's experience using anomaly analytics to analyze financial versus non-financial data. The fourth column (“Difference”) includes the quote that best depicts the interviewee's comparison of the proportion of time predictive versus anomaly analytics are used to analyze financial versus non-financial data.
1. For examples of SAS 142 permitting the use of automated tools, see paragraphs A3, A4, A43, A45, A46, A47, and A61. See paragraph A59 for the permitted analysis of non-financial data.
2. While we acknowledge that these studies are not from recent years, their findings are echoed by interviewees in Koreff (2022), who indicated that experience using an analysis increases the likelihood of using it, stating “it all comes down to experience using it … so I'd say those are probably the largest one [resistance to using analytics] is the lack of experience” and “anytime there's new data, I'm a little bit nervous … If the auditor has experience with the process or with the client I think there can probably be higher willingness to use certain analytics.” See Appendix A for a complete list of quotes from interviewees in Koreff (2022) discussing cognitive fit and prior experience impacting the use of analytics.
3. On average, participants had 9.0 years of audit experience. Sixty of the auditors were employed by national or international firms.
4. The Central Data Analytics Group was described as consisting of non-CPAs without an accounting background. The likelihood of someone without an accounting background identifying an accounting misstatement is low. To make for a more realistic case, a risk of misstatement (as opposed to an actual misstatement) was said to have been identified by the Central Data Analytics Group.
5. In both manipulations, the background information provided was limited in an effort to keep the case short. Future research may seek to examine the impact of providing additional detailed information.
6. Statistical analyses (i.e., ANCOVA results) documented in Koreff (2022) confirm that the evidence supports these conclusions.