SUMMARY
This article summarizes “Managing Group Audit Risk in a Multicomponent Audit Setting” (Graham, Bedard, and Dutta 2018), who address a long-standing issue in audit practice that remains a concern of regulators and auditors in the current environment: how can auditors demonstrate that sufficient evidence has been gathered to support the audit opinion when the entity comprises multiple components and sufficient evidence cannot be attained by selecting all, or just a few large, components for examination? The model presented in the original paper conforms to the general principles of multicomponent audits in the auditing standards literature. By explicitly documenting the assumptions used in applying the model, the auditor articulates the elements of judgment underlying scoping decisions: the number of components selected and the extent of procedures to be performed at those locations. Here, we present a simple example illustrating the model; the original paper contains computational details and more complex scenarios.
INTRODUCTION AND BACKGROUND
Professional groups and regulators, such as the American Institute of Certified Public Accountants (AICPA) and the Public Company Accounting Oversight Board (PCAOB), are paying increasing attention to the adequacy of evidence gathered to support the audit opinion. A long-standing issue is how to determine the extent of testing at remote components when it is impractical to investigate all of them, and it is not practical to select enough components so that the remaining components are not significant.1 The globalization and expansion of businesses have created more situations in which, to be valid, the auditor's evidential plan needs to consider risks associated with material amounts not examined because certain components were not selected for more detailed audit procedures. These risks might not be addressed through high reliance on analytical procedures alone (Sunderland and Trompeter 2017, 170). This issue is not limited to large international entities; it also has implications for entities of moderate and smaller size (Downey and Bedard 2017).
Calls for standard setting in this area (PCAOB Panel on Audit Effectiveness 2000, 21–22) have not resulted in sufficient guidance on how to achieve the desired low-risk objective in multiple component entities. With the issuance of ISA 600 (International Auditing and Assurance Standards Board [IAASB] 2009) and AU-C Section 600 (AICPA 2012) on group audits, the need for the auditor to demonstrate the performance of sufficient procedures on these audits has been clarified, but without a practical model of how to do so in many circumstances. Prior research provides several approaches for allocating audit effort (e.g., materiality or some related measure) to components when all or mostly all of the significant components are examined (Elliott and Rogers 1972; Dutta and Graham 1998; Glover, Prawitt, Liljegren, and Messier 2008; Stewart and Kinney 2013). However, as noted by Sunderland and Trompeter (2017), these approaches do not consider the residual risk resulting from unexamined components. Similarly, Asare et al. (2013, 149) state that “there is currently no generally accepted scoping model” for multilocation (or multinational) audits, and call for research “on model development as well as how materiality and risk relate to the multilocation multicomponent audit environment.”
For multiple component entities, if all records and all supporting documentation for transactions are centrally available, the component auditing issue can be simplified, as some audit procedures can be applied to the aggregate entity without consulting the records at any individual component.2 However, centralization of all entity records and supporting documents is uncommon, and the problem still remains that some physical assets and inventory may need to be independently observed in order to verify their existence. Thus, the model illustrated in the Graham et al. (2018) paper fills an important gap in audit practice. It can be applied at the component or line item level to assess the sufficiency of components selected and of audit procedures performed in that unit toward meeting a desired low audit risk objective. Below, we briefly summarize the nature and value of that model, provide a step-by-step explanation of how the model determines a solution, and conclude with a discussion of key points.
The Graham et al. (2018) Multiple Component Audit Planning Model
As noted above, the goal of this model is to mitigate risk in auditing the residual amount after significant components have been considered. The model has two stages, following the guidance in the AICPA Audit Guide: Audit Sampling (AICPA 2014, Appendix E).3 The auditor first considers selection risk, i.e., the risk that the selected components will not reveal a pattern of possible misstatements that could aggregate to a material misstatement. Next, the auditor considers detection risk, i.e., the risk that the procedures performed at the selected components will not be sufficient to reveal material misstatements. The combined assurance from the selection and detection decisions can be expressed as a single risk/assurance metric by multiplying the complements of the two assessed risks.4 For planning, the assurance available from inherent risk, control risk, and analytical review assessments and procedures can be considered when determining the desired level of combined selection/detection substantive risk needed to achieve a low level of audit risk.5
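The arithmetic of combining the two stages can be sketched in a few lines of Python. This is an illustration of the complement multiplication described above, assuming (as the guidance does) that the two risks are statistically independent; the function name is ours, not the paper's:

```python
def combined_assurance(selection_risk: float, detection_risk: float) -> float:
    """Combined assurance from the two stages: multiply the complements
    of the two assessed risks, assuming the stages are independent."""
    return (1.0 - selection_risk) * (1.0 - detection_risk)

# Example: 10 percent selection risk and 10 percent detection risk yield
# about 81 percent combined assurance, i.e., about 19 percent combined risk.
assurance = combined_assurance(0.10, 0.10)
combined_risk = 1.0 - assurance
print(round(assurance, 2), round(combined_risk, 2))  # 0.81 0.19
```

This reproduces the 81 percent figure worked out in note 4 below; tighter per-stage risks would be needed if a higher combined assurance is desired.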
The model determines the minimum number of components needed to achieve the desired selection risk. It accommodates variations in the number and dollar value of components, in tolerable misstatement (a required sampling parameter),6 and in possible patterns of misstatement.7 Once the components are selected, detection risk is considered to determine the nature and extent of the audit procedures at the selected components that are necessary to detect a specific pattern of misstatement. If a component exhibits a misstatement condition that could be problematic, the auditor should generally expand testing for that pattern to additional components not initially selected, to gain assurance that a problematic pattern is not likely to exist in the overall population.
In this summary, we use a simple example involving similar-sized components for illustration purposes. In the published paper, we also illustrate the application of the model to unequal-sized components, as well as a sensitivity analysis of model results to different “worst-case” misstatement assumptions.
Example: Similar-Sized Components
Our example assumes a retail chain with about 250 stores and warehouse facilities. The auditor should first complete an assessment of the inherent and control risks and analytical procedures risks associated with the aggregate components, and determine a tolerable misstatement for the testing at the component level.8 Assume that controls over the existence and valuation objectives for this decentralized company are considered effective, and prior audits have not revealed any significant misstatements or particularly risky audit areas. The model is then used to plan testing for the residual balance, using the following steps:
Identify and separate for individual testing any large or high-risk components.
For the remaining components, which may be somewhat homogeneous, identify their number and values (e.g., based on a variable of focus such as sales, assets, or income).
Identify a “critical event” condition that may exist in the remaining population of components. For example, suppose it were possible that some components were 100 percent “bogus” (a conservative assumption).9
With this critical event in mind, identify the number of components that would have to exhibit this level of misstatement in order to breach the tolerable misstatement threshold for the components.
Using the remaining population of components and the number of components identified in the prior step, compute the number of components that would have to be audited to identify one instance of the critical event.10
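The steps above can be sketched computationally. The following Python sketch uses the hypergeometric distribution (the distribution that, per note 10, common sampling tools use for this purpose). The specific inputs, i.e., 250 equal-sized remaining components and a tolerable misstatement threshold that 25 wholly misstated components would breach, are our illustrative assumptions, chosen to be consistent with the example:

```python
from math import comb

def min_components(population: int, critical_events: int, selection_risk: float) -> int:
    """Smallest discovery-sample size n such that the hypergeometric
    probability of observing zero critical events is at most the
    acceptable selection risk."""
    for n in range(1, population + 1):
        p_zero = comb(population - critical_events, n) / comb(population, n)
        if p_zero <= selection_risk:
            return n
    return population

# Illustrative inputs: 250 similar-sized components; 25 wholly "bogus"
# components would breach tolerable misstatement; 20 percent selection risk.
print(min_components(250, 25, 0.20))  # 15
```

Under these assumed inputs the computation returns 15 components, consistent with the solution reported below; tightening selection risk, or assuming a partial rather than 100 percent misstatement, changes the result.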
Given the inputs in the full paper's example, the model's solution is that the auditor should perform audit procedures at 15 components using a selection risk of 20 percent (an 80 percent confidence level). Of course, the minimum 15-component result is an estimate based on the auditor's inputs. Diligence in reviewing the results of procedures is necessary to confirm the adequacy of the 15-component selection. For example, indications in the results that there may be patterns of misstatement not considered, violations of auditor assumptions, or a possible extrapolation exceeding tolerable limits (with an allowance for sampling risk) may indicate that more audit evidence is necessary. The following table shows the impact of varying some of the assumptions.11
DISCUSSION
The procedure summarized above uses two stages to control selection risk and detection risk, respectively. The first stage uses “discovery sampling” to detect a totally (or partially) misstated component or to find an instance perhaps indicating a pattern of misstatement, not to set a value for the population. At this stage, if even one instance of this condition is found in the sample, auditors have to reevaluate risks, potentially requiring audit action. The auditor should investigate reasons for all discrepancies identified, even if corrected by the client. The key statistical inference at this first stage of the model is that if a discrepancy is not found, the auditor has examined enough components to support the conclusion of the absence of a misstatement at the desired level of selection risk. Designing this phase to detect a possible pattern of 100 percent misstatement that might exist in some components would also likely detect many other possible misstatement patterns, which would also need to be considered by the auditor before concluding that enough components have been identified.12
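The inference from a clean sample can also be quantified. A hypothetical sketch (the parameter values are our illustrative assumptions, not figures from the paper): given zero findings, an exact hypergeometric calculation bounds the number of wholly misstated components that remains consistent with the sample at the stated selection risk.

```python
from math import comb

def max_consistent_critical_events(population: int, sample: int, selection_risk: float) -> int:
    """Largest count of critical events D that zero sample findings cannot
    yet rule out, i.e., the greatest D with P(zero findings | D) > selection risk."""
    d = 0
    while d < population:
        p_zero = comb(population - (d + 1), sample) / comb(population, sample)
        if p_zero <= selection_risk:
            break
        d += 1
    return d

# Zero findings in 15 of 250 components at 20 percent selection risk:
# 25 or more wholly misstated components can be ruled out at 80 percent
# confidence, so at most 24 remain consistent with the clean sample.
print(max_consistent_critical_events(250, 15, 0.20))  # 24
```

If the bound still exceeds the number of fully misstated components that would breach tolerable misstatement, the sample was too small for the stated risk and more components must be examined.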
Separate sample size (number of components) tables can be developed for different population sizes and different risks. Such tables can also illustrate the interactions between population size and other sampling parameters, such as selection risk and the established threshold. The model can also be expanded to include risk factors that could change the likelihood of a component's selection, such as time since the component was last examined, changes in controls or management, new product risks, etc.
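Such tables are straightforward to generate. A minimal sketch (the hypergeometric helper is repeated so the snippet stands alone; the assumption that critical events equal 10 percent of each population is ours, for illustration only):

```python
from math import comb

def min_components(population: int, critical_events: int, selection_risk: float) -> int:
    """Smallest sample size n with hypergeometric P(zero findings) <= selection risk."""
    for n in range(1, population + 1):
        if comb(population - critical_events, n) / comb(population, n) <= selection_risk:
            return n
    return population

# Rows: population sizes; columns: selection risks. Critical events are
# assumed, for illustration only, to be 10 percent of each population.
risks = (0.30, 0.20, 0.10, 0.05)
print("N     " + "  ".join(f"{r:>4.0%}" for r in risks))
for pop in (100, 250, 500, 1000):
    row = [min_components(pop, pop // 10, r) for r in risks]
    print(f"{pop:<6}" + "  ".join(f"{n:>4}" for n in row))
```

Each cell is the minimum number of components to examine. Under these assumptions, the required sample sizes are driven mainly by the assumed proportion of critical events and the selection risk rather than by the population size itself.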
CONCLUSION
The model illustrated here addresses the issue of how many components in a multicomponent entity should be subject to audit procedures and how extensive those procedures must be to reveal a problematic misstatement pattern. Existing methods of allocating audit effort to components do not address the risks that arise when many of the non-trivial components are not examined. Professional auditor judgments are critical in applying any audit approach, including the application illustrated here. However, the nature of those judgments and their effects are more clearly understood when the model is applied in real situations. A structured audit approach helps quantify risks that cannot otherwise be estimated. The published paper includes a conceptual mathematical model of the approach, as well as some suggestions about adding other factors to match characteristics that may be present in an application.
NOTES
1. The term “significant” is defined as in common usage, meaning important.
2. However, when segment data are reported, some evidence gathered at the component level may be necessary to support the required disclosures.
3. The approach suggested in this paper has been applied by several auditing firms over the years and has been useful in clarifying decisions regarding component selection and the depth of audit procedures at the components selected. In earlier versions of the Audit Guide, this appendix was classified as Appendix L. (Note: Appendix E was originally published in the 2008 version of the Audit Guide and has been carried forward to subsequent editions.)
4. Risk and assurance are complementary: 5 percent risk implies 95 percent assurance. For example, when fairly high assurance is desired (e.g., no more than 10 percent for each risk), the overall assurance would be approximately 81 percent: (1.0 − 0.10 selection risk) × (1.0 − 0.10 detection risk) = 0.81. This relationship is described in the AICPA Audit Guide: Audit Sampling (AICPA 2014, Appendix E). This calculation assumes that selection and detection risks are statistically independent.
5. Audit risk is the risk of a misstatement remaining after applying audit judgments and all procedures.
6. The 2012 clarity revision to the AICPA standards introduced the concept of performance materiality. For simplicity, this summary article uses the term tolerable misstatement, which is a sampling concept related to materiality, performance materiality, and the group audit measures of these concepts.
7. The model can be adapted for application to an account, group of accounts, or components as a whole. Specifications of component materiality, component performance materiality, and tolerable misstatement should be consistent with the approach adopted for the analysis.
8. AU-C Section 320, Materiality in Planning and Performing an Audit, introduced the concept of performance materiality, and AU-C Section 600, Audits of Group Financial Statements (AICPA 2012), introduced the concepts of group and component materiality and performance materiality. Tolerable misstatement for testing purposes can be set at the level of performance materiality or at a lesser amount.
9. In situations where such an assumption is too conservative, one might support an assumption that a 50 percent or even a 25 percent misstatement would be detectable by other procedures.
10. CaseWare IDEA (Audimation Services, Inc.) is a commonly used auditing and sampling tool that uses the hypergeometric distribution to compute sample sizes. See Stewart (2012) for an explanation of the use of the hypergeometric distribution using an Excel function. The confidence level (the complement of selection risk) would be set based on the remaining audit risk after assessing inherent, control, and analytical procedures risk for the aggregate components for possible selection.
11. Weighting selection probabilities for qualitative issues would be an extension of this formulation.
12. The use of the “worst case” assumption here reduces the number of “critical events” in the population and increases the number of components that would need to be audited in order to detect one example. If one can support an assumption that only lesser misstatements (less than 100 percent) could be present, more critical events would be necessary to breach the threshold, and fewer components would need to be selected.