ASU Lodestar Center
Welcome to Research Friday! As part of a continuing series, we invite a nonprofit scholar, student, or professional to highlight current research reports or studies and discuss how they can inform and improve day-to-day nonprofit practice.
Amid the recent scarcity of grant funding, strong program evaluation practices have become a distinguishing mark of effective, grant-competitive nonprofit organizations. However, studies reveal that most nonprofits still view evaluation as exclusively about program outputs, and they often perceive data gathering as a resource drain and a distraction.1 Even those who are eager to conduct evaluations often lack the funding and knowledge to evaluate their programs appropriately.
There are different approaches to evaluating a program. Some organizations gather data such as program expenditures, customer satisfaction, or program outputs (e.g., the number of individuals receiving services, the number of trainings provided, the number of animals rescued). Increasingly, however, nonprofits are moving to the practice of outcome evaluation. Program outcomes are defined as the expected and/or actual benefits that program participants will receive. Outcomes usually imply changes in behavior, condition, skills, attitudes, or knowledge in the individual, community, or other target population. Outcome evaluation therefore involves verifying that program goals are being met. Other forms of evaluation include process evaluation, which reviews specific program activities and procedures to help explain why a program worked or did not work, and program monitoring, which uses in-process measures to track whether a program is being executed as designed. More sophisticated methods include randomized controlled trials (RCTs) and the use of a logic model.6
Grant funding and the knowledge needed to conduct program evaluation are common issues:
The United Way is one grantmaking organization that promotes an outcome-driven culture in nonprofits. Other foundations provide resources for nonprofits to develop their evaluation knowledge, such as the W.K. Kellogg Foundation Evaluation Handbook. Funders are increasingly conditioning grants upon the delivery of specific measures. Even so, outcome evaluation remains an area for improvement across the sector, and nonprofits face a number of challenges in adopting it.
Individual donors are increasingly demanding outcomes from the nonprofits they support:
Over 80 percent of charitable donations come from individuals.7 With the large number of nonprofits that offer similar services, individual donors are striving to make more informed decisions before investing their charitable dollars.
For many years, nonprofit watchdogs such as Charity Navigator have rated nonprofit organizations based on financial accountability and transparency. As ASU Professor Mark Hager discussed in his blog post, this led to the popularization of one particular metric – the ratio of overhead to program costs – as a way of judging the effectiveness and donation-worthiness of a nonprofit organization. As many suspected, and research ultimately confirmed, how much an organization spends on overhead does not tell us how well it achieves its mission. What resulted was a sector-wide effort to shift the focus from financial metrics to outcome evaluation. In December 2012, Charity Navigator released the concept note for CN 3.0, the next version of its rating system. In addition to financial measures, charities will now be rated on the results of their work. While it may take years to align measurement criteria across the sector, Charity Navigator is looking at five evaluation elements: (1) alignment of mission, solicitations, and resources; (2) results logic and measures; (3) validators; (4) constituent voice; and (5) published evaluation reports.
Given this trend, individual donors may soon demand information on a nonprofit's outcomes before investing in it. How can nonprofits prepare to respond to more demanding and better-informed individual donors if they lack the knowledge and resources to engage in proper evaluation?
The ASU Lodestar Center would like to learn more about how nonprofits are facing these challenges and what support they need. If you work or have worked for a nonprofit organization, share your initial thoughts by answering this three-minute survey. And if you would like to participate in a forthcoming study, please provide your contact information and we will make certain to include you.
Karina Lungo is a graduate student in the Master of Nonprofit Studies program at Arizona State University. Her focus of study is program evaluation and research for international nonprofit organizations. She currently works for the ASU Lodestar Center as a Research Assistant for Knowledge Resources.
1. Carman, J. G., & Fredericks, K. A. (2008). Nonprofits and evaluation: Empirical evidence from the field. New Directions for Evaluation, 119, 51-71.
2. United Way of Greater Richmond & Petersburg. A Guide to Developing an Outcome Logic Model and Measurement Plan.
3. Brock, A., Buteau, E., & Herring, A. (2012). Room for Improvement: Foundations' Support of Nonprofit Performance Assessment. The Center for Effective Philanthropy.
4. Hoefer, R. (2000). Accountability in action? Program evaluation in nonprofit human service agencies. Nonprofit Management and Leadership, 11(2).
5. CN 3.0 Concept Note: The Evaluation of Reporting Results. Charity Navigator. http://www.charitynavigator.org/__asset__/_articles_/2012/CN3_Concept_Note_FINAL%2012-15-2012.pdf
6. McCawley, P. (2001). The Logic Model for Program Planning and Evaluation. University of Idaho, CIS 1097.
7. Arizona Scope of the Sector. ASU Lodestar Center. http://www.asu.edu/copp/nonprofit/scope/giving.html
8. Measuring the Impact of Your Charitable Donations: An interview with Ken Berger, President and CEO, Charity Navigator. NPR. http://www.npr.org/2012/12/26/168084977/measuring-the-impact-of-your-charitable-donations
9. Hager, M. (2011). Can you teach a watchdog new tricks? Blog post, ASU Lodestar Center. http://blog.lodestar.asu.edu/2011/05/can-you-teach-watchdog-new-tricks.html
Like this article? Get another!
Read B. J. Tatro's "When Documenting Your Best Efforts Is No Longer Enough: Stakeholder Involvement in Results-Oriented Program Evaluation."