The field of impact investing turns 12 this year. As any parent knows, 12-year-olds can be precocious, and while they’re still young, they’re also maturing and ready to take on more responsibility and accountability. The same is true of the impact investing field. In particular, now is the time for more maturity and accountability in measuring social impact, and to achieve it, we must bridge the gap between evaluation theory and practice.
The COVID-19 pandemic, the resulting economic recession, and the need to support the growing racial justice movement all make it more urgent than ever that impact investing be an effective vehicle for social change. To that end, the field must keep high-quality evidence and measurement front of mind as the world endures and eventually recovers. In this article, we explore some of the factors contributing to the mismatch between traditional evaluation methodologies and impact investing, as well as the limitations of current practices. We also present two strategies for using evidence deliberately throughout the investment lifecycle.
Between Theory and Practice
What counts as high-quality evidence? How does an investor predict impact before making an investment, and what should they use to measure impact post facto? These questions highlight the gaps between theory and practice when it comes to assessing evidence for impact measurement.
The most common measurement practices in impact investing rely on accounting principles, common sense, and easily quantifiable outputs. Many investors use information straight from enterprise balance sheets to roll up basic descriptive data about outputs. For example, two of the most commonly reported outputs are the estimated number of jobs created and the potential increase in revenue; both are easy to understand and relatively easy to collect. Investors also rely on gut-based assessments to drive decision making. If an investor is thinking about investing in a recycling company, they may assume that the company has an environmental impact without attempting to quantify it—because, of course, recycling is good.
Unfortunately, current practices focused on easily quantifiable outputs can sometimes mask the true story of impact (or lack thereof). If an impact investor is reporting only on jobs created, for example, we don’t know who is employed, what their socioeconomic background is, what the job means to them, or whether the job is better than their last. This practice feels a bit like the early comments about COVID-19 being a “great equalizer” that didn’t discriminate among different strata of people when in reality people have experienced the pandemic in dangerously different ways along lines of socioeconomic status, race, and type of employment.
In academia, randomized controlled trials (RCTs) have long been used by leading researchers and journals to assess evidence. This is because the randomization in RCTs is designed to eliminate the bias ingrained in other types of study designs. Some journals have even gone so far as to reject the term “predictive” to describe a variable unless the evidence is reinforced by an RCT.
Very few impact investors use RCTs to project their impact, and even fewer use them to measure impact after the fact. There are many reasons for this, including the fact that RCTs require significant resources, as well as careful control and manipulation of the study design. RCTs are designed to prove causality of the thing being studied in the trial’s specific context. In social impact and impact investing, so many factors bleed into an intervention. For example, in a curriculum-focused intervention in education, teacher training, leadership buy-in, policy shifts, awareness, and child health may all affect results. How do you isolate them? The more complex the investment, the harder it is to manipulate or control.
Common sense metrics and RCTs can be useful tools in the due diligence and measurement phases of an investment, but both have drawbacks that limit their use among investors.
Emerging Best Practices
Given the current measurement capacity of impact investors and the lack of alignment with traditional evaluation methodologies, what would it take to bridge the gaps between theory and practice? Investors with the most-advanced measurement schema have adopted frameworks that help them weigh different standards of evidence alongside opportunities to improve strategy and advance equity.
Frameworks that consider equity are emerging as common practice, and the best of these use both qualitative and quantitative data. Measurement that strives to make equity central will always prioritize context, including the relationships, identity, and power dynamics surrounding the measurement. By their very design, RCTs remove or limit contextual factors from the evidence. Many underrepresented groups have decried quantitative data without context, insisting that to understand equity, we must incorporate qualitative methods.
As part of our work with investors large and small, we have been developing two promising frameworks to help advance measurement sophistication: horizontal layering and vertical layering. Each of these approaches to using and collecting evidence assumes the investor is making direct investments, as opposed to investing in impact funds. Both enable a better focus on context, and emphasize the relationship between investment strategy and measurement.
Read the rest of Kendall Rathunde, Daniel Hadley, and Gwendolyn Reynolds’s article here at Stanford Social Innovation Review