Researchers have anecdotally noted a decline over time in survey response rates from accounting professionals (particularly auditors). We document the extent of this decline and analyze the trend along with correlated situational and demographic variables. We analyze articles published in Auditing: A Journal of Practice & Theory and Current Issues in Auditing from 1981 to 2016 and gather data on the 75 articles that report response rates. The analysis shows a noticeable and statistically significant decrease in response rates from auditors, a decline that should concern both the academic and auditing practitioner communities. We examine key drivers of the response rate and offer recommendations for improving the survey response of auditors.

Accounting researchers have observed a decline in survey response rates over time. A rate of 15 percent appears acceptable today, whereas in the 1970s and 1980s the expectation was about 60 percent. The purpose of this paper is to investigate the following research question: How have response rates from auditors declined across time? We seek to alert auditing practitioners to the threat of survey nonresponse and to start a dialogue about aligning academic and practitioner research interests. This investigation is timely because Current Issues in Auditing encourages submissions from practitioner and academic author teams and the editor of Auditing: A Journal of Practice & Theory has called for more survey research. In fact, one recommendation from The Pathways Commission on Accounting Higher Education (2012, 51) is to “focus more academic research on relevant practice issues.” Thus, we examine all of the articles reporting survey response information in Auditing: A Journal of Practice & Theory and Current Issues in Auditing from 1981 through 2016. As expected, we find a decline in auditors' response rates across time. The main contributions of this study are to document this negative trend, assess its importance, and suggest strategies for increasing future response rates.

According to Dillman (2007, 3), “[i]n the late 1970s, a well-done mail survey was likely to exhibit a series of four carefully timed mailings, laboriously personalized by typing individual names and addresses atop preprinted letters. In combination with other meticulous details, including real signatures and a replacement questionnaire sent by certified mail, this procedure, the Total Design Method (TDM), demonstrated the ability to achieve high response rates.” Dillman goes on to discuss how the TDM takes social exchange theory into consideration. Social exchange theory asserts that social behavior is based on an exchange process in which people weigh how to maximize benefits and minimize costs in social relationships.1 Blau (1964) notes that a reciprocal relationship between two individuals underlies the concept of social exchange. Another view asserts that this theory is instead a frame of reference that assumes “that a resource will continue to flow only if there is a valued return contingent upon it” (Emerson 1976, 359).

Dillman (2007, 5) elaborates on how the TDM considers “how to increase perceived rewards for responding, decrease perceived costs, and promote trust in beneficial outcomes from the survey.” He acknowledges that the TDM has evolved over time and calls this evolution the Tailored Design, defined as “the development of survey procedures that create respondent trust and perceptions of increased rewards and reduced costs for being a respondent, that take into account features of the survey situation, and that have as their goal the overall reduction of survey error” (Dillman 2007, 4). Dillman (2007) identifies four sources of survey error: sampling error, coverage error, measurement error, and nonresponse error.2 Reducing survey error matters because it affects the quality of responses and hence the reliability of the findings. Nonresponse error is a frequent source of concern because low response rates can produce poor external validity (in the form of bias) and low statistical power.

Recent Pew Research Center and Forbes articles illustrate how nonresponse bias can manifest in research polls. Keeter, Hatley, Kennedy, and Lau (2017) report that the mean response rate for telephone surveys declined from 36 percent in 1997 to 9 percent in 2012, where it plateaued through 2016. The authors observe that civic and political engagement are over-represented among telephone survey participants due to nonresponse bias, which skews the results of the polls (Keeter et al. 2017). Nonresponse bias is also evident in customer satisfaction surveys. Zucker (2017) admits that when responding to airline customer surveys, “I only answer when I have a terrible or great experience—which, admittedly, is a few times a year.” Such selective responding biases survey results in a negative or positive direction.

To explore whether low response rates are prevalent in auditing research, we examined all of the articles reporting surveys of auditor populations, for which a response rate could be determined, in Auditing: A Journal of Practice & Theory and Current Issues in Auditing from 1981 through 2016. We collected the citation; year of publication; type of auditor (CPA, chartered accountant, chief audit executive, or internal auditor); number of responses; response percentage; and notes about other factors that may contribute to higher response rates, particularly whether a study had firm/organization sponsorship or the researchers sent follow-up requests (Dillman 2007; Newman 2009). Examples of firm/organization sponsorship include researchers sending surveys to auditing firm partners or accounting organizations to distribute to auditors and members. Surveys were classified as sponsored if they were distributed or emailed to participants by: (1) a professional organization (such as the AICPA, Boards of Accountancy, CPA societies, Institute of Chartered Accountants, and the Institute of Internal Auditors); (2) firm contacts (managing partners, partners, HR departments, senior managers, managers, liaison officers); or (3) company contacts (department heads).

Table 1 provides descriptive data. The mean response rate for the entire sample of 75 articles is about 48 percent, while the mean response rate for studies with firm sponsorship is higher than the mean response rate for studies with no sponsorship (54 percent versus 34 percent). Also, the highest response rate for the sample was documented in a 1984 article at 94.0 percent and the lowest response rate of 6.7 percent was noted in a 2009 article.
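To make the Table 1 comparison concrete, a minimal pandas sketch follows. The file name and column names are hypothetical stand-ins for the authors' article-level dataset, not part of the original study.

```python
# A minimal sketch of the Table 1 comparison, assuming the 75
# article-level observations sit in a CSV (hypothetical file/columns).
import pandas as pd

df = pd.read_csv("response_rates.csv")  # one row per article

print(df["ResponseRate"].mean())  # full-sample mean, ~48 percent
# Mean by sponsorship: ~34 percent (no sponsorship) vs. ~54 percent (sponsored)
print(df.groupby("Sponsorship")["ResponseRate"].mean())
```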

TABLE 1

Descriptive Statistics

Figure 1 shows the decline in response rates over time for auditors. The negative relationship between year of publication and response rates is statistically significant (p = 0.01), though year explains only a modest share of the variance (R² = 0.17). This analysis is based on 75 articles, with response rates descending from a high of 94.0 percent in 1984 to a low of 6.7 percent in 2009.
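The Figure 1 trend test amounts to a simple bivariate regression of response rate on year. The snippet below is a sketch only, using the same hypothetical file and column names as above.

```python
# Bivariate trend: regress response rate on year of publication.
import pandas as pd
from scipy import stats

df = pd.read_csv("response_rates.csv")  # hypothetical file, one row per article
fit = stats.linregress(df["Year"], df["ResponseRate"])

# The article reports p = 0.01 and R-squared = 0.17 for this relationship.
print(f"slope = {fit.slope:.2f} points/year, p = {fit.pvalue:.3f}, R2 = {fit.rvalue**2:.2f}")
```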

FIGURE 1

Response Rates

Studies with Firm Sponsorship

Figure 2, Panel A shows the declining response rates across time for studies of auditors with CPA firm or organization sponsorship. This analysis is based on 50 articles, with response rates declining from a peak of 94.0 percent in 1984 to a low of 6.7 percent in 2009. The negative relationship between year of publication and response rates is statistically significant (p = 0.01).

FIGURE 2

Sponsorship Rates

Panel A: Response Rates with Sponsorship
Panel B: Response Rates with No Sponsorship

Studies with No Firm Sponsorship

Figure 2, Panel B displays the decrease in response rates over time from auditors for the subset of studies with no CPA firm or organization sponsorship. The negative relationship between year of publication and response rates is statistically significant (p = 0.01). This analysis is based on 25 articles, with response rates ranging from a high of 73.3 percent in 1989 to a low of 13.4 percent in 2012.

Since many factors, such as sponsorship, response burden, follow-up, and question format, are known to affect response rates3 (Dillman 2007; Newman 2009), we use the following model to test the relationship between response rates and these possible predictor variables:

ResponseRate = β0 + β1Year + β2Sponsorship + β3NumberofItems + β4FollowUp + β5QuestionFormat + ε

where:

  • ResponseRate = response percentage;

  • Year = year of publication;

  • Sponsorship = dummy variable for firm sponsorship (1 = firm sponsorship, 0 = no firm sponsorship);

  • NumberofItems = number of survey items/questions;

  • FollowUp = 1 if a follow-up request is sent, 0 if a follow-up request is not sent; and

  • QuestionFormat = 1 if the survey questions are in a scale format, 0 if the question format is open response.4
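As an illustration, this model can be estimated with ordinary least squares, as in the minimal statsmodels sketch below. The CSV layout and column names are hypothetical, chosen only to match the variable definitions above.

```python
# OLS estimation of the response-rate model defined above.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("response_rates.csv")  # hypothetical file: one row per article

ols = smf.ols(
    "ResponseRate ~ Year + Sponsorship + NumberofItems + FollowUp + QuestionFormat",
    data=df,
).fit()

print(ols.summary())  # coefficients, p-values, and R-squared (cf. Table 3)
```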

Table 2 shows the pairwise Pearson correlation matrix for the variables of the regression model. None of the independent variables are highly correlated with each other, so multicollinearity does not appear to be a concern. Table 3 presents the regression results. The significant negative coefficient on Year suggests that response rates decrease by approximately 1.07 percentage points per year. The significant positive coefficient on Sponsorship indicates that firm sponsorship increases response rates by about 21 percentage points. The significant negative coefficient on QuestionFormat indicates that the response rate for a scale format is about 9 percentage points lower, on average, than for an open-response format. The coefficients for NumberofItems5 and FollowUp are not statistically significant. It is surprising that the coefficient on FollowUp is not significant, because sending follow-up requests is an accepted way to increase response rates (Dillman 2007). However, since a majority of the studies with follow-up requests (12 of 19, or 63 percent) do not have firm sponsorship, we suspect that follow-up requests typically were sent in reaction to weak initial response rates. Such a phenomenon would create a statistical artifact whereby follow-ups are associated with lower response rates.
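A sketch of the multicollinearity check follows, again using the hypothetical file above. The variance inflation factors are not reported in the article; they are a common complement to a pairwise correlation matrix and are included here only for illustration.

```python
# Pairwise Pearson correlations (cf. Table 2) and variance inflation factors.
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

df = pd.read_csv("response_rates.csv")  # hypothetical file, as above
X = df[["Year", "Sponsorship", "NumberofItems", "FollowUp", "QuestionFormat"]]

print(X.corr(method="pearson").round(2))

# VIFs near 1 would reinforce the conclusion that multicollinearity is
# not a concern (the article relies on the pairwise correlations alone).
Xc = sm.add_constant(X)
for i, col in enumerate(Xc.columns[1:], start=1):
    print(col, round(variance_inflation_factor(Xc.values, i), 2))
```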

TABLE 2

Correlation Matrix

TABLE 3

Regression Analysis of Response Rates and Contributing Factors


In conclusion, the results of our analysis confirm accounting researchers' observation that auditors' response rates have declined over time. We analyzed the response rates for 75 articles published in Auditing: A Journal of Practice & Theory and Current Issues in Auditing from 1981 through 2016. The observations range from a high of 94.0 percent in 1984 to a low of 6.7 percent in 2009. The overall decline in response rates, documented in our bivariate analyses of response rates and year of publication and illustrated in Figures 1 and 2, is confirmed by the multivariate analysis in Table 3, which adds the contributing factors of firm sponsorship, follow-up requests, number of survey items/questions, and question format.

A similar decline in survey response rates is documented in other literature. For example, de Leeuw and de Heer (2002) document the overall decline in response rates for social surveys internationally. Van der Stede, Young, and Chen (2005) mention that response rates in management accounting research have declined. Curtin, Presser, and Singer (2005) report that the response rate to the University of Michigan's Survey of Consumer Attitudes has declined by approximately one percentage point per year. Biemer and Peytchev (2012) note that from 1995 to 2008 the median state response rate for the Behavioral Risk Factor Surveillance System has declined from 68.5 percent to 45.5 percent, while the response rate for the National Immunization Survey fell from 87.1 percent to 63.2 percent.

Oppenheim (1966) notes that the most important point about poor response rates is the possibility of bias, as the returned surveys may not be representative of the originally drawn sample. Cochran (1977) describes one source of survey error as failing to measure some of the units in a chosen sample, which may occur if some individuals in a population refuse to answer survey questions. Dillman (2007) similarly notes that nonresponse error is relevant when the people who respond to a survey differ from sampled individuals who do not respond. Jessen (1978) also discusses how missing data can hinder inferences about samples if respondents and nonrespondents differ. In addition, studies with small samples (as in some auditing research) are especially vulnerable to nonresponse problems, which affect the quality and accuracy of research (Bartlett, Kotrlik, and Higgins 2001). Consequently, survey nonresponse in the auditing profession creates nonresponse bias in auditing studies that threatens the validity of research findings and has implications for practice.

One reason for survey nonresponse is low topic salience, that is, respondents' lack of interest in the topic. Sociology research provides evidence that the topic of a survey is a major determinant of respondent participation (Heberlein and Baumgartner 1978). Furthermore, marketing research shows that personal interest in the survey topic not only influences respondent behavior but also affects data quality (Keusch 2013). To increase survey response and improve data quality, researchers have to ensure that practitioners are interested in the survey topic. One way for researchers to gain practitioners' interest is to involve practitioners, specifically auditors, in the research process. Currently, auditors assist researchers with survey construction and data collection, but researchers should also involve auditors in formulating research questions. A more collaborative approach between researchers and auditors at the beginning of the research process should result in research that truly piques auditors' interest and improves their survey response.

A second reason for survey nonresponse is confidentiality and privacy concerns (Tourangeau, Rips, and Rasinski 2000). Psychology research shows that respondents' concerns about the confidentiality of their identity and disclosure to third parties negatively affect survey response (Tourangeau et al. 2000). Given the decrease in response rates over time in the auditing literature and the increase in litigation risk and companies' data privacy risk, external and internal auditors may have heightened confidentiality and privacy concerns. These concerns can also make firms reluctant to sponsor studies. Some firms do offer research assistance through sponsorship but may exercise control over survey content in light of client confidentiality and litigation protection. Confidentiality and privacy concerns adversely affect survey response in two ways: (1) respondents may not answer survey questions, or (2) they may misreport (over-report or under-report) their behaviors (Tourangeau et al. 2000). Researchers must allay these concerns because they contribute to nonresponse and measurement error, both of which threaten the validity of research findings. Singer, Von Thurn, and Miller (1995) find that confidentiality assurances improve survey response. To increase survey response, researchers should offer strong confidentiality assurances and detailed information about whether results will be disclosed to outside parties. Two ways for researchers to ensure anonymity are to have third parties collect the data or to have respondents complete surveys on laptops provided at conferences and meetings.

Last, auditing researchers have to do a better job of marketing their research to auditing practitioners. We can look to education and medical research for guidance, as numerous studies assess practitioner use of research in these two fields (Hemsley-Brown and Sharp 2003). The medical literature notes that medical practitioners use research depending on whether it is relevant and accessible (Hemsley-Brown and Sharp 2003). Researchers have to promote research findings in outlets besides academic journals to gain auditors' attention. A few suggestions include presenting research at local CPA firm offices, hosting research forums at local CPA and Institute of Internal Auditors (IIA) conferences, and publishing research findings in local CPA and IIA chapter newsletters. Accounting departments can support researchers publishing in these practitioner outlets by allowing such research dissemination to count toward tenure requirements (which could qualify as intellectual contributions under the AACSB's Academically Qualified and Professionally Qualified faculty classifications [AACSB 2009]). In addition to relevance and accessibility, education research finds that the ambiguity of research is another barrier to its use in practice (Hemsley-Brown and Sharp 2003). If auditing researchers want auditors to use research in practice, they must reduce academic jargon, beginning when researchers draft survey requests and ending when they communicate results to the auditing practice community. By working to improve the relevance of research to practice, researchers can build bridges with auditors, increase access to research findings, and encourage survey response.

One way that auditors can help researchers is by remembering their pivotal role in the behavioral research process. Receiving a survey request from an unfamiliar researcher may not be enticing, but auditors should humanize survey requests and be mindful that researchers depend on each survey response. As discussed earlier, nonresponse bias is a threat to research, and that threat lessens with each response received. Auditors must not underestimate their power as part of the research process. Auditors' survey responses contribute to research findings that inform and advance accounting knowledge.

Another way that auditors can assist researchers is by continuing to sponsor research. This study's findings illustrate (in Table 1 and Figure 2, Panels A and B) that sponsorship makes a difference in survey response. Based on our analysis, the average response rate for studies with sponsorship is around 54 percent, while the average response rate for studies with no sponsorship is 34 percent. This 20 percentage point difference confirms that the sponsorship of surveys matters. The AICPA, Boards of Accountancy, CPA firms (U.S. and international), local CPA chapters, the Institute of Chartered Accountants, and the IIA sponsored some of the studies examined in this paper. Such collaboration and mutual support between the auditing practice and research communities will continue to foster the use of research in professional practice.

Finally, auditors can hold researchers more accountable (Hemsley-Brown and Sharp 2003). As research findings become available to the auditing practice community, auditors can offer researchers insight into how the results apply to practice. Conversely, auditors can question research findings that are counterintuitive to practice and offer suggestions for follow-up studies. Medical research highlights how understanding the relationship between clinicians (i.e., physicians who work directly with patients) and nonclinicians (e.g., professors who do not work directly with patients) can shape health care delivery and quality (Lanham et al. 2009). Similarly, closer relationships between auditing practitioners and auditing researchers can shape auditing practice and research. We contend that forming research relationships between auditors and auditing researchers will help improve survey nonresponse and auditing research.

REFERENCES

Association to Advance Collegiate Schools of Business International (AACSB). 2009. AQ/PQ Status: Establishing Criteria for Attainment and Maintenance of Faculty Qualifications—An Interpretation of AACSB Accreditation Standards. Tampa, FL: AACSB.
Bartlett, J. E., J. W. Kotrlik, and C. C. Higgins. 2001. Organizational research: Determining appropriate sample size in survey research. Information Technology, Learning and Performance Journal 19 (1): 43–50.
Biemer, P. P., and A. Peytchev. 2012. Census geocoding for nonresponse bias evaluation in telephone surveys: An assessment of the error properties. Public Opinion Quarterly 76 (3): 432–452.
Blau, P. M. 1964. Exchange & Power in Social Life. New York, NY: John Wiley & Sons, Inc.
Cochran, W. G. 1977. Sampling Techniques. 3rd edition. New York, NY: John Wiley & Sons, Inc.
Curtin, R., S. Presser, and E. Singer. 2005. Changes in telephone survey nonresponse over the past quarter century. Public Opinion Quarterly 69 (1): 87–98.
de Leeuw, E., and W. de Heer. 2002. Trends in household survey nonresponse: A longitudinal and international comparison. In Survey Nonresponse, edited by R. M. Groves, D. A. Dillman, J. L. Eltinge, and R. J. A. Little, 41–54. New York, NY: Wiley.
Dillman, D. A. 2007. Mail and Internet Surveys: The Tailored Design Method. 2nd edition. Hoboken, NJ: John Wiley & Sons, Inc.
Emerson, R. M. 1976. Social exchange theory. Annual Review of Sociology 2 (1): 335–362.
Heberlein, T. A., and R. Baumgartner. 1978. Factors affecting response rates to mailed questionnaires: A quantitative analysis of the published literature. American Sociological Review 43 (4): 447–462.
Hemsley-Brown, J., and C. Sharp. 2003. The use of research to improve professional practice: A systematic review of the literature. Oxford Review of Education 29 (4): 449–471.
Jessen, R. J. 1978. Statistical Survey Techniques. New York, NY: John Wiley & Sons, Inc.
Keeter, S., N. Hatley, C. Kennedy, and A. Lau. 2017. What Low Response Rates Mean for Telephone Surveys. Pew Research Center Online. Available at: http://www.pewresearch.org/2017/05/15/what-low-response-rates-mean-for-telephone-surveys/
Keusch, F. 2013. The role of topic interest and topic salience in online panel web surveys. International Journal of Market Research 55 (1): 59–80.
Lanham, H. J., R. R. McDaniel, Jr., B. F. Crabtree, W. L. Miller, K. C. Stange, A. F. Tallia, and P. A. Nutting. 2009. How improving practice relationships among clinicians and nonclinicians can improve quality in primary care. Joint Commission Journal on Quality and Patient Safety 35 (9): 457–466.
Newman, D. A. 2009. Missing data techniques and low response rates: The role of systematic nonresponse parameters. In Statistical and Methodological Myths and Urban Legends: Doctrine, Verity and Fable in the Organizational and Social Sciences, edited by C. E. Lance and R. J. Vandenberg, 7–36. New York, NY: Psychology Press.
Oppenheim, A. N. 1966. Questionnaire Design and Attitude Measurement. New York, NY: Basic Books, Inc.
Pathways Commission on Accounting Higher Education (The Pathways Commission). 2012. The Pathways Commission: Charting a National Strategy for the Next Generation of Accountants. Lakewood Ranch, FL and Durham, NC: The American Accounting Association and the American Institute of CPAs.
Singer, E., D. Von Thurn, and E. Miller. 1995. Confidentiality assurances and response: A quantitative review of the experimental literature. Public Opinion Quarterly 59 (1): 66–77.
Tourangeau, R., L. J. Rips, and K. Rasinski. 2000. The Psychology of Survey Response. Cambridge, U.K.: Cambridge University Press.
Van der Stede, W. A., S. M. Young, and C. X. Chen. 2005. Assessing the quality of evidence in empirical management accounting research: The case of survey studies. Accounting, Organizations and Society 30 (7-8): 655–684.
Zucker, M. 2017. Customer Surveys, Reviews and Polls: Feast or Fatigue? Available at: https://www.forbes.com/sites/matzucker/2017/07/17/surveys-feast-or-fatigue/#7bf1d4304338
2. Briefly, sampling error is the result of attempting to survey only some of the units in the survey population; coverage error happens when the list from which the sample is drawn does not include all elements of the population; and measurement error occurs when a respondent's answer to a survey question is inaccurate, imprecise, or cannot be compared to other respondents' answers.

3. We considered including dummy variables for auditor type, delivery method, and incentives in the model, but their p-values are not significant. Regarding incentives, only three surveys offer a financial incentive (charitable donation, raffle drawing, or gift certificate); a couple of studies mention a nonfinancial incentive of providing a summary of study results to increase survey participation.

4. We thank an anonymous reviewer for suggesting the inclusion of this variable.

5. The mean number of survey items is 22 and the range is 1–102.