The question of how to measure nonprofit performance has vexed financial statement users and accounting researchers for some time. Due to data limitations, evaluation of nonprofit success has been constrained to financial measures, primarily focused on inputs rather than outputs and outcomes. While there has been much effort to improve nonprofit reporting by both the IRS and the FASB (Smith and Shaver 2009; Reck, Lowensohn, and Neely 2019), the search for informative nonprofit performance measures continues.

The primary challenge in identifying useful nonprofit performance measures is determining what stakeholders value in terms of information from charitable organizations.1 Donors want assurance that their contributions are being put to good use; lenders want confidence that organizations are financially stable and will survive economic challenges; watchdog agencies (e.g., the Better Business Bureau's Wise Giving Alliance) focus on whether resources are directed to beneficiaries rather than executives; and oversight bodies (e.g., the IRS, the FASB) work to mitigate opportunities for fraud.

The primary assessment measure currently recommended by charity watchdogs and widely used by donors to judge whether donated resources are used to accomplish the mission is the program ratio. The program ratio is calculated as the amount of expenses directed toward mission-related activities (rather than fundraising or administrative costs) divided by total expenses. Similar spending ratios focus separately on non-program spending, either administrative or fundraising costs, as a percentage of total expenses, or on fundraising costs as a percentage of the contributions received. Nonprofit watchdog agencies such as the Better Business Bureau Wise Giving Alliance and Charity Navigator have established minimum program ratios, 65 percent and 75 percent, respectively, required to receive the highest ratings from their sites.2 The BBB Wise Giving Alliance also recommends that fundraising costs be no more than 35 percent of related contributions.
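To make these definitions concrete, the sketch below computes the spending ratios for a hypothetical charity and compares them to the watchdog benchmarks cited above. The dollar figures and the function itself are illustrative only; just the 65, 75, and 35 percent thresholds come from the text.

```python
# A minimal sketch of the spending ratios described above; all dollar amounts
# are hypothetical, and only the 65/75/35 percent benchmarks come from the text.

def spending_ratios(program_exp, admin_exp, fundraising_exp, contributions):
    """Compute common nonprofit spending ratios from functional expense totals."""
    total_exp = program_exp + admin_exp + fundraising_exp
    return {
        "program_ratio": program_exp / total_exp,
        "admin_ratio": admin_exp / total_exp,
        "fundraising_ratio": fundraising_exp / total_exp,
        "fundraising_to_contributions": fundraising_exp / contributions,
    }

ratios = spending_ratios(program_exp=800_000, admin_exp=120_000,
                         fundraising_exp=80_000, contributions=950_000)

print(ratios["program_ratio"] >= 0.65)                 # BBB Wise Giving Alliance program benchmark
print(ratios["program_ratio"] >= 0.75)                 # Charity Navigator program benchmark
print(ratios["fundraising_to_contributions"] <= 0.35)  # BBB fundraising-to-contributions benchmark
```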

In terms of donors' use of spending ratios, Weisbrod and Dominguez's (1986) donation demand model was the first in a long line of literature to document a positive relationship between the program ratio and future donations (Parsons 2003). Accounting studies have expanded the donation demand model (e.g., Tinkelman 1999; M. Yetman and R. Yetman 2013) and tested donor response to program ratios in experimental settings (Parsons 2007; Li, McDowell, and Hu 2012), confirming that donor giving is associated with higher program ratios. Additionally, Frumkin and Kim (2001) find evidence that lower administrative ratios are associated with higher donations, and Tinkelman and Mankaney (2007) examine factors that influence the size and significance of the association between donations and the administrative ratio. Finally, many grantors limit the amount of grant awards that can be used for administrative purposes (Eckhart-Queenan, Etzel, and Prasad 2016).
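For readers unfamiliar with this literature, the sketch below shows the general shape of such a donation demand test: future donations regressed on the program ratio plus standard controls. The data file, variable names, and control set are hypothetical, and the exact specifications in the studies cited above differ.

```python
# A hedged sketch of a donation demand regression in the spirit of this
# literature. The panel file, column names, and controls are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("nonprofit_panel.csv")  # hypothetical Form 990 panel

df["program_ratio"] = df["program_expenses"] / df["total_expenses"]
df["log_future_donations"] = np.log1p(df["donations_next_year"])
df["log_fundraising"] = np.log1p(df["fundraising_expenses"])
df["log_assets"] = np.log1p(df["total_assets"])

# Future donations on the program ratio and controls, with year effects and
# standard errors clustered by organization (EIN).
model = smf.ols(
    "log_future_donations ~ program_ratio + log_fundraising + log_assets + C(year)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["ein"]})
print(model.summary())
```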

Despite the established role of spending ratios in giving decisions, there are several criticisms of these measures. First, literature has identified the program ratio as a relatively poor measure of nonprofit efficiency. Coupet and Berrett (2019) note that utilizing the program ratio as a measure of nonprofit efficiency has serious deficiencies, arguing that the ratio of program expenses to total expenses (or other overhead ratio constructs) focuses solely on inputs and that true nonprofit efficiency is measured by comparing the ratio of inputs to outputs. In support of this assertion, their study finds little correlation between overhead ratios and measures that compare how efficiently inputs convert into positive program outcomes. Put another way, the program ratio only considers inputs (money spent) rather than outputs (goods and services provided) or outcomes (improved condition), and research indicates that how much an organization spends does not necessarily predict mission fulfillment (Coupet and Berrett 2019).
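The distinction is easy to see with a hypothetical comparison: two organizations can report identical program ratios while converting spending into outputs at very different rates. The figures and the "meals delivered" output measure below are invented for illustration and are not drawn from Coupet and Berrett (2019).

```python
# Two hypothetical charities with the same program ratio but different
# input-to-output efficiency; "meals delivered" stands in for any program output.
charities = {
    "Charity A": {"program_exp": 900_000, "total_exp": 1_000_000, "meals": 300_000},
    "Charity B": {"program_exp": 900_000, "total_exp": 1_000_000, "meals": 150_000},
}

for name, c in charities.items():
    program_ratio = c["program_exp"] / c["total_exp"]  # input-based measure
    cost_per_meal = c["program_exp"] / c["meals"]      # input-to-output measure
    print(f"{name}: program ratio {program_ratio:.0%}, cost per meal ${cost_per_meal:.2f}")

# Both report a 90% program ratio, yet one delivers a meal for $3.00 and the
# other for $6.00; the ratio alone cannot distinguish between them.
```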

Second, a focus on program spending has led nonprofit managers to emphasize short-term goals (program spending) to the long-term detriment of the organization and its mission (Overhead Myth 2019). Managers may forgo deploying resources to much needed administrative efforts (e.g., upgrading information systems, implementing governance mechanisms) because doing so would increase administrative costs and decrease the program ratio (Lecy and Searing 2015; Parsons, Pryor, and Roberts 2017). Moreover, emphasis on the program and administrative ratios incentivizes “ratio management” through misreporting the classification of certain non-program expenses as program-related (Jones and Roberts 2006; Parsons et al. 2017).

Third, eschewing fundraising activity in order to keep the program ratio high means sacrificing donation opportunities, resulting in fewer available dollars to spend providing services to those in need (Pallotta 2008). Spending a larger portion of a smaller pool of donations on programs may hurt the cause in the long run. Steinberg (1986) proposes that donors should ignore the fundraising ratio, as it represents the average (rather than marginal) spending on fundraising versus programs. He argues that the relevant measure is the marginal return on fundraising, that is, the amount raised by the last dollar spent on fundraising.
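Steinberg's distinction can be illustrated with a hypothetical fundraising schedule: the average return per fundraising dollar can remain high even as the return on the marginal dollar falls toward break-even. All figures below are invented for illustration.

```python
# Hypothetical fundraising schedule: cumulative dollars raised at each
# cumulative level of fundraising spending (illustrative figures only).
spend  = [0, 50_000, 100_000, 150_000, 200_000]
raised = [0, 400_000, 650_000, 800_000, 880_000]

for i in range(1, len(spend)):
    average  = raised[i] / spend[i]                                 # what the fundraising ratio reflects
    marginal = (raised[i] - raised[i-1]) / (spend[i] - spend[i-1])  # Steinberg's relevant measure
    print(f"spend ${spend[i]:>7,}: average return {average:.2f}, marginal return {marginal:.2f}")

# The average return stays well above break-even even as the marginal return
# falls toward the point where the last fundraising dollar barely pays for itself.
```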

Not only is the program ratio a relatively poor measure of nonprofit efficiency, it also fails to capture key components of nonprofit performance. Willems, Boenigk, and Jegers (2014) note that nonprofit performance encompasses four key areas: (1) financial performance, (2) stakeholder performance, (3) market performance, and (4) mission performance. To date, despite evidence that donors want to know how their contributions affect the lives of those they seek to help (Hyndman 1991; Khumawala and Gordon 1997), much of the accounting literature has focused narrowly on financial performance and utilized relatively incomplete proxies.

Consequently, we advocate for additional performance metrics defined in terms of nonprofit outputs and outcomes, while seeking a more contextual and nuanced interpretation of the important and widely reported spending ratios. We look forward and recommend possible avenues for regulatory disclosure (e.g., Form 990) and research related to our proposed metrics.

Rather than lament the shortcomings of spending ratios and the incentives they create to manage reported figures, perhaps we can identify ways to influence donor responses to spending ratios. In an experimental setting, van der Heijden (2013) shows that relative, rather than absolute, program spending levels may influence donor assessment of the program ratio. In another experiment, Mercado (2020) suggests that providing an explanation for a relatively lower program ratio may result in increased donations, especially when the explanation uses language (e.g., concrete details versus abstract descriptions) that is congruent with the donor's distance from the nonprofit organization (local versus global). Providing context and appropriate comparisons could reduce the emphasis on arbitrary benchmarks and give nonprofit organizations more flexibility to direct resources to fundraising and overhead without fear of retribution from donors (Parsons et al. 2017). Emphasizing that spending ratios are input measures may provide context to stakeholders and prevent organizations, evaluators, and researchers from overselling the program ratio as a measure of performance. Research, especially experiments, surveys, and interviews, should explore factors that influence whether and how donors respond to information that may not meet their expectations of an appropriate level of program spending.

In an effort to expand measures of nonprofit performance beyond program spending metrics, the Panel on the Nonprofit Sector (2005) and the Overhead Myth (2019) suggest providing measures of program successes. In response to this call, in 2016 GuideStar launched its Platinum Program,3 promoted as allowing "organizations to push past financial metrics to share their actual progress and results with millions—for free." The Platinum Program captures data on programmatic outcomes such as the number of homeless people fed or animals rescued. While similar performance measures may be available to stakeholders on an organization's website or via electronic or print newsletters and year-end reports, GuideStar's site allows stakeholders to view outcome measures over multiple years for thousands of organizations on a single platform. Moreover, the Platinum Program allows organizations to choose from a list of scripted or "common" performance metrics as well as to report completely unique, self-defined measures. For example, common metrics include items such as how many clients the organization serves, while self-defined metrics include detailed measures such as how many HIV patients were provided medication. Whatever the format, these measures of performance capture true programmatic outcomes rather than spending patterns. A recent study by Harris, Neely, and Parsons (2021a) provides some evidence that these newly available measures, which they dub "mission metrics," provide useful information above and beyond that provided by the program ratio. Additional research using measures of actual performance outcomes is needed to understand how donors and other stakeholders use programmatic performance information.

In its 2005 report to Congress, the Panel on the Nonprofit Sector, a consortium of nonprofit experts created at the request of Senator Charles Grassley, made a series of recommendations for “best practices,” including suggestions for reporting and transparency. In response to this report, the IRS significantly revised its Form 990 informational return to provide additional data (Smith and Shaver 2009). Two areas that saw the most change were governance and compensation disclosures.

The revised Form 990 requires nonprofit organizations to indicate the presence or absence of 17 governance practices. These include items such as an audit committee, whistleblower protections, an independent board, and policies governing CEO salary and conflicts of interest involving management and board members (Boland, Harris, Petrovits, and Yetman 2020). The revised Form 990 also includes a series of supplemental schedules, including Schedule J, which requires details of executive compensation (i.e., base pay, bonuses, and deferred compensation), and Schedule O, which provides detailed descriptions of unusual circumstances such as asset diversions, related parties, and board relationships, among others. These new disclosure requirements recognize the need for expanded reporting beyond purely financial information.

Recent research provides evidence that these disclosures provide useful information to stakeholders. For example, Boland et al. (2020) find the new governance measures are associated with contributions and thus useful to donors. Also, studies examining nonprofit executive compensation provide evidence of an association with future donations (Balsam and Harris 2014; Corradino and Parsons 2021) and firm performance (Balsam and Harris 2018; Corradino and Parsons 2021), indicating executive compensation disclosures are useful to a variety of stakeholders such as donors and board members. Finally, Harris, Petrovits, and Yetman (2017) find that when nonprofit fraud occurs and is disclosed in the Form 990, future donations vary with the quality of these fraud disclosures (e.g., less transparent disclosures result in greater declines in future donations).

While more information about oversight and governance of nonprofits helps assure that monies are directed toward efforts that benefit the mission, it still neglects the issue of outputs and outcomes. Currently, Form 990 provides very limited information about an organization's activities beyond financial data: a statement of the organization's mission and total expenses related to the three largest program services (Part III, Line 4). That is, Form 990 does not provide the public with information about organizations' operations in terms of program outcomes. In thinking about the future of nonprofit research and the evolution of the sector, we believe this important information, which is frequently available to grantors, should be required of all filers on an annual basis so that other stakeholders can easily find and use outcome data to aid in their resource allocation decisions.

Results from early work using program data accumulated by GuideStar provide support for the uniform dissemination of this information. Harris et al. (2021a) find that program outcomes are uncorrelated with the program ratio, indicating that these metrics offer information distinct from the most commonly used financial ratio. Harris et al. (2021a) also document a positive association between program outcomes and both donations and bonus compensation, even after controlling for the program ratio. We believe these findings are an indication that program outcome information has the ability to provide stakeholders with information useful to their donation and managerial decisions.

If the 2008 revised Form 990, which incorporated additional compensation and governance data, is version 2.0, we advocate for a version 2.1 that incorporates additional qualitative and quantitative disclosures on program outcomes. This could take a form similar to the GuideStar Platinum Program, asking nonprofits to report the primary performance metrics they track and to provide quantitative data for those measures. This information could be provided in a new schedule or as a revision to Part III of the main Form 990.

Additionally, similar to required financial reports from corporations and state and local governments, we propose a section of Form 990 equivalent to the Management's Discussion and Analysis in order to improve readers' understanding of organizational activity. In other words, nonprofit managers could use plain language to “tell their organization's story” regarding nonprofit performance.

Another avenue for potential regulatory intervention that may be fruitful to the sector is the addition of a schedule, similar to the current Schedule O, that provides an explanation, as necessary, for reported figures. This would allow organizations to explain why the program (fundraising) ratio appears low (high) relative to a benchmark (e.g., recommendations by Charity Navigator or the BBB) or to prior years. Recent research provides evidence that donors react differently when there is an explanation for a "low" number (Mercado 2020). Allowing organizations to provide a rationale for reported figures could alleviate ratio management (Jones, Kitching, Roberts, and Smith 2013) or nonprofit starvation (Lecy and Searing 2015; Coupet and Berrett 2019).

An alternative to a separate schedule for contextualizing overhead spending is to disaggregate reported administrative costs in order to emphasize valuable activities (such as staff training, internal controls, and information technology), similar to the information currently provided in the Statement of Functional Expenses required by the FASB. Providing organizations with the opportunity to highlight beneficial overhead costs or explain abnormal expenses in a particular period may reduce the incentive to manipulate figures in order to manage ratios.

If managing ratios is the nonprofit analog of accrual earnings management in the for-profit sector, then nonprofit starvation is the analog of real earnings management. Giving organizations the opportunity to explain reported figures to Form 990 users would provide stakeholders with better, more useful information with which to make decisions.

Next, as a follow-up to the addition of mission metric information and management explanations to the Form 990, we propose that the IRS disseminate this information (and all Form 990 data, for that matter) in a user-friendly, digitized format for researchers and sophisticated donors/grantors alike. The current Amazon Web Services (AWS) format utilized by the IRS requires researchers to use programming tools to access the data, which can then take weeks to download and format for research purposes. Indeed, this process for accessing data that is meant to be available to the public is restrictive and likely renders the data unusable for many donors and granting agencies.
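As a rough illustration of the effort involved, the sketch below extracts two fields from a single e-filed return that has already been downloaded from the IRS repository. The local file name is hypothetical, and the XML element names are assumptions that vary across e-file schema versions; scaling this process to millions of filings is what puts the data out of reach for most non-programmers.

```python
# A hedged sketch of parsing one downloaded Form 990 e-file XML return.
# The element names (TotalExpensesAmt, TotalProgramServiceExpensesAmt) and the
# file name are assumptions for illustration; actual schema versions differ.
import xml.etree.ElementTree as ET

NS = {"efile": "http://www.irs.gov/efile"}  # assumed IRS e-file namespace

def extract_totals(path):
    """Pull total and program service expenses from a single 990 XML return."""
    root = ET.parse(path).getroot()

    def amount(tag):
        node = root.find(f".//efile:{tag}", NS)
        return float(node.text) if node is not None else None

    return {
        "total_expenses": amount("TotalExpensesAmt"),
        "program_expenses": amount("TotalProgramServiceExpensesAmt"),
    }

totals = extract_totals("downloaded_return.xml")  # hypothetical local file
if totals["total_expenses"]:
    print(totals["program_expenses"] / totals["total_expenses"])  # program ratio
```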

In addition to regulatory changes for U.S. nonprofit disclosure practices, we are also interested in expanding and promoting new research avenues related to nonprofit performance measures and disclosures. We conclude by making several recommendations for avenues for future research in this area.

Just as current reporting and related research assume performance measures are universally beneficial across all organizations, they also assume performance measures are equally valued by all users. This is unlikely to be true. Gordon and Khumawala (1999) suggest that the demand for nonprofit reporting varies based on donors' distance from an organization. They note that if donors are (or have been) beneficiaries of the organization (e.g., hospital patients, university alumni, public radio listeners) or volunteer at an organization, they can directly assess output. They label nonprofit activity where the resource providers are also the beneficiaries "consumptive philanthropy" and suggest this type of nonprofit activity is judged by direct experience rather than financial reports. Gordon and Khumawala (1999) then suggest that when nonprofits act as a conduit between donors and beneficiaries, there is greater emphasis on nonprofit reporting to reduce information asymmetry. Their Model of Individual Giving hints that there may be no one-size-fits-all performance measure. Future research could seek to identify whether the value of performance measures varies across organizational types: commercial versus charitable, religious versus nonreligious, or by sector. Researchers should consider organization type defined in various ways in order to better understand the usefulness of performance measures. Scholars investigating nonprofit performance could also consider whether different types of donors focus on or react to different performance measures. Do Millennials and Gen Zers have the same priorities as Baby Boomers and Gen Xers? Does religiosity influence preferences for different performance information? Could organizations provide a "choose-your-own" information option through websites or social media?

While most research to date relies on financial information reported on Form 990 or in audited financial reports (e.g., Baber, Roberts, and Visvanathan 2001; Tinkelman and Mankaney 2007; Feng 2014), there are alternative information delivery options that expand choices available to donors seeking information. Harris et al.'s (2021a) examination of GuideStar's mission metrics provides evidence that alternative reporting mechanisms provide valuable information about nonprofit performance and can benefit donors. Other studies consider information provided outside regulatory channels, such as on organizations' websites (Saxton and Guo 2011; Saxton, Neely, and Guo 2014; Saxton and Neely 2019) and social media outlets (Harris, Neely, and Saxton 2021b).

Gordon, Knock, and Neely (2009) investigate the role of charity ratings and show that analyses provided indirectly through an intermediary (e.g., Charity Navigator) may provide performance information in an understandable format to donors who are not familiar with accounting information included on tax forms. Consistent with these findings, Charity Navigator has developed a new "Impact & Results" score, which responds to users' feedback identifying impact as the most important factor in donating. Specifically, Charity Navigator4 reports: "To assign impact ratings, we use publicly available information to estimate the actual impact the nonprofit's program has on people's lives." Future research could study these new types of third-party assessment tools and their ability to provide donors and grantors with information important to their giving decisions.

Building on these studies, academic researchers could also assess the usefulness of entirely new communication avenues available to allow nonprofit organizations to reach current and potential donors. As technology evolves and stakeholders become more sophisticated in their ability to gather and process nonprofit information, better reporting and tracking systems may be of vital importance to the sector. For example, how effective are social media tools for reaching potential donors? Can social media companies use AI to target donors with performance information based on their online profiles, similar to the way they direct advertising to users?

Overall, we feel there are numerous opportunities to improve nonprofit performance measurement and reporting going forward. We have highlighted two paths for improvement: better measurement with a focus on nonprofit outcomes and improved reporting with an enhanced Form 990. We believe both opportunities will move the sector toward providing stakeholders with much improved measures of nonprofit performance.

Baber, W. R., Roberts, A. A., and Visvanathan, G. 2001. Charitable organizations' strategies and program-spending ratios. Accounting Horizons 15 (4): 329–343.
Balsam, S., and Harris, E. E. 2014. The impact of CEO compensation on nonprofit donations. The Accounting Review 89 (2): 425–450.
Balsam, S., and Harris, E. E. 2018. Nonprofit executive incentive pay. Review of Accounting Studies 23 (4): 1665–1714.
Boland, C. M., Harris, E. E., Petrovits, C. M., and Yetman, M. H. 2020. Controlling for corporate governance in nonprofit research. Journal of Governmental & Nonprofit Accounting 9 (1): 1–44.
Corradino, L., and Parsons, L. M. 2021. Deferred compensation in nonprofit organizations: A path to financial stability. Working paper, Colorado State University-Pueblo and University of Alabama.
Coupet, J., and Berrett, J. L. 2019. Toward a valid approach to nonprofit efficiency measurement. Nonprofit Management & Leadership 29 (3): 299–320.
Eckhart-Queenan, J., Etzel, M., and Prasad, S. 2016. Pay-what-it-takes philanthropy. Stanford Social Innovation Review 14 (3): 37–41.
Feng, N. C. 2014. Economic consequences of going concern audit opinions in nonprofit charitable organizations. Journal of Governmental & Nonprofit Accounting 3 (1): 20–34.
Frumkin, P., and Kim, M. T. 2001. Strategic positioning and the financing of nonprofit organizations: Is efficiency rewarded in the contributions marketplace? Public Administration Review 61 (3): 266–275.
Gordon, T. P., and Khumawala, S. B. 1999. The demand for not-for-profit financial statements: A model for individual giving. Journal of Accounting Literature 18: 31–56.
Gordon, T. P., Knock, C. L., and Neely, D. G. 2009. The role of rating agencies in the market for charitable contributions: An empirical test. Journal of Accounting and Public Policy 28 (6): 469–484.
Harris, E., Petrovits, C., and Yetman, M. H. 2017. Why bad things happen to good organizations: The link between governance and asset diversions in public charities. Journal of Business Ethics 146 (1): 149–166.
Harris, E. E., Neely, D., and Parsons, L. M. 2021a. Missions metrics: Reexamining the relation between performance, contributions and executive compensation. Working paper, Florida International University, University of Wisconsin-Milwaukee, and University of Alabama.
Harris, E. E., Neely, D., and Saxton, G. 2021b. Social media, signaling, and donations: Testing the financial returns on nonprofits' social media investment. Working paper, Florida International University, University of Wisconsin-Milwaukee, and York University.
Hyndman, N. 1991. Contributions to charities—A comparison of their information needs and the perceptions of such by the providers of information. Financial Accountability & Management 7 (2): 69–82.
Jones, C. L., and Roberts, A. A. 2006. Management of financial information in charitable organizations. The Accounting Review 81 (1): 159–178.
Jones, C. L., Kitching, K. A., Roberts, A. A., and Smith, P. C. 2013. The spend-save decision: An analysis of how charities respond to revenue changes. Accounting Horizons 27 (1): 75–89.
Khumawala, S. B., and Gordon, T. P. 1997. Bridging the credibility of GAAP: Individual donors and the new accounting standards for nonprofit organizations. Accounting Horizons 11 (3): 45–68.
Lecy, J. D., and Searing, E. A. 2015. Anatomy of the nonprofit starvation cycle: An analysis of falling overhead ratios in the nonprofit sector. Nonprofit and Voluntary Sector Quarterly 44 (3): 539–563.
Li, W. E., McDowell, E., and Hu, M. 2012. Effects of financial efficiency and choice to restrict contributions on individual donations. Accounting Horizons 26 (1): 111–123.
Mercado, J. M. 2020. Donors, distance, and the influence of accounting information. Doctoral dissertation, University of Alabama.
Overhead Myth. 2019. Moving toward an overhead solution. Available at: http://overheadmyth.com/
Pallotta, D. 2008. Uncharitable: How Restraints on Nonprofits Undermine Their Potential. Medford, MA: Tufts University Press.
Panel on the Nonprofit Sector. 2005. Strengthening transparency, governance, accountability of charitable organizations: A final report to Congress and the nonprofit sector. Available at: http://philanthropy.org/documents/Panel_Final_Report.pdf
Parsons, L. M. 2003. Is accounting information from nonprofit organizations useful to donors? A review of charitable giving and value-relevance. Journal of Accounting Literature 22: 104–129.
Parsons, L. M. 2007. The impact of financial information and voluntary disclosures on contributions to not-for-profit organizations. Behavioral Research in Accounting 19 (1): 179–196.
Parsons, L. M., Pryor, C., and Roberts, A. A. 2017. Pressure to manage ratios and willingness to do so: Evidence from nonprofit managers. Nonprofit and Voluntary Sector Quarterly 46 (4): 705–724.
Reck, J. L., Lowensohn, S. L., and Neely, D. G. 2019. Accounting for Governmental & Nonprofit Entities. 18th edition. New York, NY: McGraw Hill.
Saxton, G. D., and Guo, C. 2011. Accountability online: Understanding web-based accountability practices of nonprofit organizations. Nonprofit and Voluntary Sector Quarterly 40 (2): 270–295.
Saxton, G. D., and Neely, D. G. 2019. The relationship between Sarbanes-Oxley policies and donor advisories in nonprofit organizations. Journal of Business Ethics 158 (2): 333–351.
Saxton, G. D., Neely, D. G., and Guo, C. 2014. Web disclosure and the market for charitable contributions. Journal of Accounting and Public Policy 33 (2): 127–144.
Smith, P. C., and Shaver, D. J. 2009. Transparency, compliance, and filing burden: Principles for the revised Form 990. The ATA Journal of Legal Tax Research 7 (1): 133–151.
Steinberg, R. 1986. The revealed objective functions of nonprofit firms. The RAND Journal of Economics 17 (4): 508–526.
Tinkelman, D. 1999. Factors affecting the relation between donations to not-for-profit organizations and an efficiency ratio. Research in Government and Nonprofit Accounting 10 (1): 135–161.
Tinkelman, D., and Mankaney, K. 2007. When is administrative efficiency associated with charitable donations? Nonprofit and Voluntary Sector Quarterly 36 (1): 41–64.
van der Heijden, H. 2013. Charities in competition: Effects of accounting information on donating adjustments. Behavioral Research in Accounting 25 (1): 1–13.
Weisbrod, B. A., and Dominguez, N. D. 1986. Demand for collective goods in private nonprofit markets: Can fundraising expenditures help overcome free-rider behavior? Journal of Public Economics 30 (1): 83–96.
Willems, J., Boenigk, S., and Jegers, M. 2014. Seven trade-offs in measuring nonprofit performance and effectiveness. Voluntas 25 (6): 1648–1670.
Yetman, M. H., and Yetman, R. J. 2013. Do donors discount low-quality accounting information? The Accounting Review 88 (3): 1041–1067.
1 While volunteers, especially those who go through training and commit to long-term service, are important stakeholders, they are more likely to use their direct experience in the organization, rather than financial and other reports, to evaluate organization performance (Gordon and Khumawala 1999). Therefore, we do not focus on the use of performance information by volunteers.