Hui Wen Chan

Re-evaluating Impact Evaluation: Why solely focusing on financials is flawed and other key points from a recent workshop

The post-financial crisis world has been one of increasing austerity, with households, businesses, and governments all practicing belt-tightening. In such a resource-constrained environment, it is not surprising that funders and investors are demanding evidence of impact to guide investment decisions and help them allocate limited capital to the most effective programs. A welcome change, however, is that impact evaluations are increasingly recognized as critical to business success. Rather than treating them as a burden imposed by donors and investors, forward-thinking organizations are using impact evaluations to learn, understand those they serve, and guide business strategy.

Many organizations struggle with the challenge of developing robust impact evaluations. If poorly designed and executed, an evaluation can produce misleading conclusions and waste precious resources. To help its members, the Aspen Network of Development Entrepreneurs (ANDE) awarded a Capacity Development Fund grant to the William Davidson Institute (WDI) to provide two interactive impact evaluation workshops focused on outcomes data collection and analysis. The first of those workshops was hosted by the Citi Foundation and took place in April in New York City. The next workshop will take place this weekend in Johannesburg.

A diverse group of participants representing organizations such as BPeace, TechnoServe, International Finance Corporation, Dalberg Global Development Advisors, and World Wide Hearing gathered to develop customized impact evaluation plans. Ted London and Heather Esper of the William Davidson Institute, along with Andy Grogan-Kaylor, associate professor of social work at the University of Michigan, guided participants in developing a plan to measure their impact, from identifying potential impacts all the way through data analysis. Attendees left the workshop with a customized action plan for impact evaluation. Participants also benefited from sharing best practices with one another, providing feedback based on their own data collection experiences, and discussing solutions to common challenges. Here are some key points from the workshop's discussions:

  • Think beyond the financial. A narrow focus on economic metrics, such as income, might not tell the whole story. If you measure only changes in income and later discover your organization did not affect it, you may conclude you had no impact and miss other important aspects of your value proposition. Had you also explored non-financial metrics, you might have learned that people gained free time as a result of buying your product, which in turn increased their productivity. Knowledge of the impacts and benefits of your products or services enables you to market them more effectively to customers, so be sure to consider both the economic and non-economic impacts of your products or services.
  • More is NOT always better.
  1. The sample size you need to calculate impact may be smaller than you think. While it is true that the larger your sample, the more confident you can be about your results, a larger sample also adds to costs. At the same time, a sample that is too small may fail to provide results that you can be confident are not attributable to chance. Before embarking on an evaluation, think through the statistical significance, statistical power, and effect size you want; the short power calculation sketched after this list shows how these quantities determine the sample size. Under reasonable assumptions about all of them, an adequate sample size for a test and control group is typically somewhere in the neighborhood of 250 individuals per group.
  2. Limiting your survey or report to the questions that matter most may actually provide you with more useful information. Data collection can be challenging, time-consuming, and expensive, and excessive data gathering in search of impact adds unnecessary burden and cost. Focus on what matters most and keep it simple; this makes it more likely that people will complete a survey or report and provide the data you need. It also increases the likelihood that you will actually have the resources to analyze the data you collect and glean meaningful insights. So prioritize, and collect only what is necessary to understand your core impacts and make informed strategic decisions.
  • The integrity of the data depends both on the questions asked and on the manner in which they are asked. To draw meaningful conclusions, you must be able to trust both the questions you are asking and the data you have collected. The data collection process, how you manage the administration of a survey and the data after it is collected, is just as important as the questions you ask. You can develop a perfect survey, but if an interviewer pays his nephew to fill out 100 surveys, the results will not be very useful. Think carefully through when, where, and how to deliver a survey, because these choices affect the honesty, reliability, and quality of responses and people's willingness to participate.
  • Be wary of proxies. They may be easy to track, but they are not always sufficient. For example, the simple metric "number of bed nets distributed" is often used as a proxy for the number of people sleeping under bed nets, which in turn is expected to reduce malaria. Ultimately, donors care about the outcome (reduced malaria), not the number of bed nets distributed. When donors realized that bed nets were not having the impact on malaria they sought because people did not use them, they moved away from distributing bed nets and explored indoor residual spraying. If you must use a proxy to measure impact, recognize the assumptions you are making and select a proxy that you are reasonably confident will move in lockstep with the impact you are trying to measure.
  • Standardization of outcomes and comparability is the next step. Standardized metrics would enable apples-to-apples comparisons across organizations. IRIS provides a standardized set of output metrics; however, a standardized set of outcome metrics does not yet exist. WikiVois is taking a first step toward this by capturing development-related outcome metrics in one place. You can contribute to the field by adding to the growing database of outcome metrics on their website.
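To make the sample-size point above concrete, here is a minimal power calculation in Python using the statsmodels library. The specific inputs (a significance level of 0.05, power of 0.80, and a small standardized effect size of 0.25) are illustrative assumptions chosen to show how a figure in the neighborhood of 250 per group can arise; they are not numbers from the workshop, and your own evaluation should use an effect size grounded in your program's context.

```python
# A minimal sketch of the sample-size reasoning above, using statsmodels.
# The inputs (alpha = 0.05, power = 0.80, Cohen's d = 0.25) are
# illustrative assumptions, not figures from the workshop.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Solve for the number of observations per group needed to detect a
# small standardized effect (d = 0.25) in a two-sided, two-sample t-test.
n_per_group = analysis.solve_power(
    effect_size=0.25,        # assumed effect size (Cohen's d)
    alpha=0.05,              # significance level (false-positive rate)
    power=0.80,              # probability of detecting a true effect
    alternative="two-sided",
)
print(f"Required sample size per group: {n_per_group:.0f}")
# Under these assumptions this comes out to roughly 250 per group.
```

Note how sensitive the answer is to the assumed effect size: expecting a larger effect, or accepting lower power, shrinks the required sample considerably, which is why thinking through these quantities before collecting data matters.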

We will be sharing more impact evaluation learnings and best practices through this blog and at the upcoming annual ANDE Metrics Conference on June 12-13 in Washington, D.C.

Hui Wen Chan is the impact analytics and planning officer at the Citi Foundation.

The author would like to thank Heather Esper of WDI and Genevieve Edens of ANDE for their contributions to this post.
