I would not give a fig for the simplicity this side of complexity
but I would give my life for the simplicity on the other side of complexity.
—attributed to Oliver Wendell Holmes, Jr.
Over the last two decades, social entrepreneurs have become darlings of the social
sector. From the pioneering work of Ashoka to the global stage presented by the
Skoll World Forum and the inspiring work of their Skoll Fellows, a new breed of
innovative people and enterprising institutions is promising to change
the world. From non-profits with highly scalable models looking to transform national education policy to for-profit businesses looking to serve the “base of the pyramid” through affordable health clinics, this diverse range of social enterprises
offers alternatives to traditional charity or development assistance. Even big busi-
nesses are taking social enterprise seriously as they explore new ways to tap into
underserved markets in emerging economies.
But for all the innovation, the question remains: So what? Have these new
approaches led to enduring change? Are people drinking cleaner water, living
healthier lives, or moving out of poverty because of the new products designed to
be affordable and accessible to the poor? We think so, but the evidence is nascent
and mostly anecdotal. Our objective at Acumen Fund has been to push hard on
these questions of measurement and impact: if what we are doing is a real innova-
tion in philanthropy and development assistance, then we should have evidence
that what we are doing matters for the lives of millions of people.
This is not a simple problem. This article describes our efforts to get an answer
to those questions within Acumen Fund and dives into the complexity of measur-
ing social change. The article reviews our work to develop a manageable and some-
what simple approach to this complex challenge, and then looks at how the field of
measurement might evolve in the coming years. My aim is to provide some very
practical advice about how to produce or consume claims of social impact.

Brian Trelstad is Chief Investment Officer of Acumen Fund. Before joining Acumen Fund, he spent four years at McKinsey & Company as a consultant in the healthcare and non-profit practices and as an editor of the McKinsey Quarterly.
Acumen Fund is a seven-year-old social investment fund with roughly $35 million
of approved investments in ventures that deliver health, water, housing, and ener-
gy products and services to the poor. Our investors, philanthropic donors, expect
us to use their charitable contributions to invest money—often on slightly less
than commercial terms—to build financially sustainable and growing businesses
that provide measurable social or environmental value. Any of the “patient capi-
tal” returned to Acumen Fund is recycled for new investments. We believe that the
market-based delivery of goods can complement traditional charitable efforts and
often serves as a listening device: if our enterprises are not delivering value—social
or economic—their customers will not return and their businesses will not grow.1
Measuring the expected and actual financial returns of our investments is relative-
ly straightforward for Acumen Fund and for our peers. Like most investors, we
project net income, free cash flows, terminal values, and likely exit multiples; we
then discount these cash flows back to the present day to establish financial value.
Or, if we make a loan, we can expect a future stream of interest and principal
repayments to come back to us over the five to seven years of our average loan.
And once we exit the investments, it is very easy to look retrospectively at our net
financial return.
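To make the mechanics concrete, here is a minimal discounted-cash-flow sketch in Python. The cash flows, holding period, and discount rate are hypothetical figures chosen for illustration, not numbers from our portfolio.

```python
# Hypothetical discounted-cash-flow sketch; all figures are illustrative,
# not actual Acumen Fund numbers.

def net_present_value(rate, cash_flows):
    """Discount a series of annual cash flows (year 0, 1, 2, ...) to today."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Year 0: $500,000 invested; years 1-4: no distributions;
# year 5: hypothetical exit proceeds of $700,000.
flows = [-500_000, 0, 0, 0, 0, 700_000]
print(round(net_present_value(0.08, flows)))  # NPV at an assumed 8% rate
```

A slightly negative NPV at a commercial discount rate is consistent with the "patient capital" posture described above: returns that are real but often below what a purely commercial investor would demand.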
Measuring the social or environmental returns of our investments is not so
straightforward. The first challenge is defining what specifically we mean by
“social impact.” This can range from a proof of concept of our model—that we
can invest risk capital in social enterprises and see it returned—to knowing that
our malaria-related investments are leading to reductions in the incidence of malaria or
that our investments in drip irrigation are moving smallholder farmers out of
poverty. Second, once it is clear what threshold of outcomes we are aiming for, it
turns out that it is actually quite hard and expensive to “prove” anything. Later I’ll
discuss why we count outputs (e.g., bednets sold) instead of seeking to demonstrate outcomes (e.g., reduction in the incidence of malaria), but we are familiar with
the challenge of moving beyond anecdotes and data towards making a rigorous
case for impact.
Finally, we would love to understand, but don’t have the capacity to measure,
the economic multipliers or unintended consequences of our work. If the textile
mill creates 5,000 jobs in Tanzania, what sort of impact does this have on the local
or regional economy or national tax receipts? Conversely, does the change in pat-
terns of water collection in rural India change the social dynamics in a way that
harms rather than helps? Ted London at the University of Michigan writes coher-
ently about the need to account for the positive and negative impact of social ven-
tures working at the base of the pyramid, and has developed a framework that
“drives improvements in a venture’s poverty alleviation performance by enhancing
positive outcomes and mitigating potential negative ones.”2 This kind of compre-
hensive framework is extremely valuable in pushing us to think through the antic-
ipated and unexpected outcomes of our investments, but at this point it is tricky
to implement at the business level.
Metrics and evaluation are to development programs as autopsies are to health care: too late to help, intrusive, and often inconclusive.

Given these challenges, we start from the premise that to get better results and improve our work on a continuous basis, we need to measure what we manage.3 We also owe it to our donors to ask whether philanthropic investments into Acumen Fund have made a difference relative to their other charitable options. We owe it to ourselves as professionals trying to effect change to understand what is working and what is not—and why. We owe it to the entrepreneurs that we invest in to not impose burdensome reporting on them but to include them in a system that helps them think about and anticipate key performance challenges as they grow their business and serve the poor. And, finally, we owe it to the end users of these services to think clearly about how the businesses we are building will make a difference in their lives, in their children’s lives, and in their communities.
When we consider a new investment, our due diligence examines many dimensions of the opportunity. Most of the diligence is focused on the business model, the
unit economics, the customer need, the quality of the organization, the integrity of
the leadership team, and the financial plan. A significant part of the diligence,
however, seeks to understand whether or not the business creates meaningful
social value.
To do this, we rely on three separate steps. First we review the literature on the
state of practice to understand if the investment’s main activity “matters.” This
includes discussions with our internal portfolio team, advisors, and experts in
the field. Second, we estimate the number of people in the “base of the pyramid”4 who will be served by the business over the life of the investment. Third, we assess
whether or not the delivery of those “outputs” to our target constituency will com-
pare more or less favorably to the “best alternative charitable option” available to
our donors. Let me take each of these steps in turn.
We start each of our investment discussions with a focus on the customer:
How does this product help the poor? Do people need this product or service? Are
they willing to pay for it? And if they do get it, what evidence exists that their lives
will be measurably better for having used the service? For example, take anti-
malarial bednets. There was considerable debate about whether or not people will
(or should) pay for the long-lasting anti-malarial bednets,5 but there is little debate
that their proper use, particularly throughout an entire village, can lead to signifi-
cant reductions in the incidence of malaria.6
With community clean water systems, the evidence is a bit more ambiguous.
Of course it makes intuitive sense that the proper treatment of surface water for
drinking should lead to improved health outcomes, but a lot of things happen after
customers collect water from a central village water-treatment system: they trans-
port the water, store it for a few days, and drink it out of potentially dirty cups. To
us, the empirical evidence that selling clean drinking water from centralized distri-
bution points in rural India will lead to reductions in waterborne illnesses is not as
robust as the evidence that delivery of bednets to Kenyans can reduce the incidence
of malaria.
Once we have established whether or not an intervention matters—or more
exciting, whether our investment can contribute to the world’s understanding of
what works—we know what outputs to start counting. It is important to pause on
the word outputs for a minute. In evaluation parlance, some combination of
inputs (investment money, technology, staff) helps generate a certain set of out-
puts (bednets delivered, liters of clean water filtered), which might lead to an out-
come (reduction of incidence of malaria, fewer people getting sick from drinking
bad water) that translates into the impact: knowing for certain what would not
have happened were it not for our investment or invention. Defining the outputs
is critical. For us, it is not just “bednets,” but some dimension of “bednets” proper-
ly deployed and used. Moving from narrow definitions of outputs as “products”
toward definitions with some dimension of product service and quality acts as an
important check on a race to least-cost delivery models.
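To illustrate the difference a quality-adjusted output definition makes, here is a small sketch; the usage rate and net lifespan below are invented assumptions, not measured values.

```python
# Illustrative only: moving from a raw output ("bednets sold") toward a
# quality-adjusted output ("years of effective protection delivered").
# All rates and lifespans below are hypothetical assumptions.

nets_delivered = 100_000
proper_use_rate = 0.70      # assumed share of nets actually hung and used
effective_life_years = 4    # assumed useful life of a long-lasting net

protection_years = nets_delivered * proper_use_rate * effective_life_years
print(f"{protection_years:,.0f} net-years of effective protection")
# 280,000 net-years, versus the naive count of 100,000 "bednets"
```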
Moving from outputs, however sophisticated the definition, to understanding
outcomes and proving impact is extremely complicated and seems to require ran-
domized control trials that demonstrate the counter-factual.7 We had discussions
with researchers affiliated with MIT’s Poverty Action Lab on how this might be
possible to do with some of our investments and have conducted two rigorous
studies on retail distribution strategies for the anti-malarial bednets and on the
link between delivering clean water and community health. But these studies are
expensive, and it is impractical to spend $250,000 researching the impact of a
$500,000 investment—unless such a study could be used to understand the impact
of similar investments in our portfolio and others for years to come. So our strat-
egy has been to review the literature and consult the experts, to establish the clar-
ity and certainty of a specific output’s links to impacts, and to focus on counting
those outputs.
As an investment moves forward in due diligence, another critical step in the
process is to compare the projected outputs delivered per unit of philanthropic
input (in this case capital) compared with the “best available charitable option” (or
BACO, as we call it). More simply, if a donor who cares about malaria gives
Acumen Fund a million dollars, we want to be able to compare how many bednets
our factory makes over five years (or better, the years of malaria protection that the
nets provide) with what the donor could buy on the “charitable marketplace.” The
charitable marketplace is fragmented and inefficient, but it does exist. For exam-
ple, a moderately curious donor could learn that Malaria No More offers to deliv-
er one long-lasting bednet for $10, so any investment from Acumen Fund would
need to deliver at least one bednet for every $10 our fund receives. We have writ-
ten more extensively about BACO, the methodology, and its limitations on our
website,8 so I won’t go into too much detail here, but let me reflect on how this
“back of the envelope” approach helps us.
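As a sketch of the arithmetic, the comparison above reduces to a few lines; the $10-per-net benchmark comes from the example in the text, while the factory's projected output is a hypothetical figure.

```python
# Back-of-the-envelope BACO comparison, following the example in the text.
# The charitable benchmark ($10 per delivered net) is from the article;
# the factory's projected output is a hypothetical assumption.

donation = 1_000_000                # donor's gift to Acumen Fund
baco_cost_per_net = 10              # e.g., the Malaria No More benchmark
nets_via_charity = donation / baco_cost_per_net

projected_factory_nets = 150_000    # hypothetical five-year projection
cost_per_net_via_investment = donation / projected_factory_nets

print(f"Charitable option: {nets_via_charity:,.0f} nets "
      f"(${baco_cost_per_net}/net)")
print(f"Investment option: {projected_factory_nets:,} nets "
      f"(${cost_per_net_via_investment:.2f}/net)")
# The investment clears the BACO bar if its cost per net (here $6.67)
# is at or below the best charitable alternative's $10.
```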
The BACO methodology is important for our process in a few ways. First, it
forces the team to think about who is doing the best or prevailing work on solving
the same or similar problems. Second, it forces us to think about the marginal cost
decision a hypothetical donor is making. In the absence of absolute standards of
performance in the social sector, we need to think about how else the donor could
have invested their money. Finally, it is a very practical tool that is easy to use. A
portfolio associate can conduct a BACO exercise using a simple Excel workbook,
some web research, and an expert interview or two. The steps are straightforward,
the assumptions clear, but the analysis doesn’t dictate our final decision. We need
to be more disciplined about repeating the BACO analysis annually for each invest-
ment to see how our forecasts compare against reality, but at the points when we
start and then close out an investment, the BACO methodology offers a useful
benchmark for comparing our work to one prevailing approach in the field.
Once an investment closes, our portfolio teams, under the guidance of Raman Nanda, who leads portfolio management at Acumen Fund, sit with the entrepreneurs to discuss what financial, operating, social, and environmental metrics we plan to col-
lect. Our mantra during these meetings has three parts:
• We don’t want to collect anything that is not fundamentally important for the
company to manage their business (which includes serving the poor).
• We don’t want to collect any information that cannot be generated by the com-
pany’s existing management information systems.
• If the current information systems cannot capture the kind of data that is
important to managing the business, we will help the business think about medi-
um-term improvements to their systems that strengthen their ability to man-
age (and in at least two cases we have helped build new management informa-
tion systems for our investments).
We don’t want these conversations to be about what “reports” the entrepreneur
must send us on a quarterly basis; instead, we are laying the groundwork for how
we will think about performance management over the life of our investment. For
businesses with hybrid or cross-subsidy models like the 1298 ambulances in
Mumbai, the most difficult metrics to collect are the socioeconomic status of their
customers (20% of whom we expect will be from the base of the pyramid). Rather
than conduct a “wallet biopsy,” the management team of 1298 simply uses the des-
tination hospital as a proxy for income level. If they take you to Breach Candy
Hospital, a private, relatively expensive hospital in Mumbai, you can afford to pay
for the ambulance ride and you are considered middle or upper income. If they
take you to the free government hospital, the assumption is that you can’t afford
the ambulance ride and that you fall into the less than $4/day income segment that
matters to us.
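The destination-hospital rule is simple enough to express as a few lines of illustrative code; the hospital list and segment labels here are placeholders rather than 1298's actual classification logic.

```python
# A sketch of the destination-hospital proxy described above. The rule is
# real in spirit; the hospital list and labels are illustrative.

PRIVATE_HOSPITALS = {"Breach Candy Hospital"}  # relatively expensive, private

def income_segment(destination_hospital: str) -> str:
    """Classify a 1298 ambulance customer by where they ask to be taken."""
    if destination_hospital in PRIVATE_HOSPITALS:
        return "middle/upper income (full fare)"
    # A free government hospital implies the <$4/day segment that matters to us.
    return "base of the pyramid (subsidized fare)"

print(income_segment("Breach Candy Hospital"))
print(income_segment("Government General Hospital"))  # hypothetical name
```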
These discussions serve as an important marker to show that we are serious
about performance management, that we are committed to help solve the prob-
lems of data collection and integrity, and that we are serious about serving the
poor as a significant portion of the company’s business. Our term sheets include
the metrics that we want reported quarterly (as mutually agreed on) and some
minimum threshold of clients served from this target population. If an entrepre-
neur fails to report or fails to serve the poor, we have the right to walk away from
the deal.
Early on, we realized that we needed a software solution that would allow us to track performance over time, performance
against the initial projections, and performance across the portfolio. After an
exhaustive search failed to find an off-the-shelf system that would allow us to track
this blend of financial and social metrics, we decided to try to build it ourselves.
With pro bono engineering support from Google engineers in New York and
Mountain View, California, and under the leadership of portfolio associate Marc
Manara, we built a software tool that allows our portfolio team to track a range of
measures at each investment. The resulting product, the Portfolio Data
Management System (PDMS), allows us to store quarterly data (financial, oper-
ating, social, environmental) for each investment, qualitative reports from the
portfolio team on highlights (or key risks) of the company’s performance, and an
annual capabilities assessment that rates each company on a range of purely qual-
itative dimensions (quality of their governance, strength of management team).
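For readers who think in data structures, a rough sketch of the kinds of records described above might look like the following; the field names and rating scales are guesses for illustration, not the actual PDMS schema.

```python
# A rough sketch of the record types the text says the PDMS stores.
# Field names and types are illustrative guesses, not the real schema.

from dataclasses import dataclass, field

@dataclass
class QuarterlyReport:
    investment: str
    quarter: str                                      # e.g., "2007-Q4"
    financial: dict = field(default_factory=dict)     # revenue, opex, ...
    operating: dict = field(default_factory=dict)     # units sold, staff, ...
    social: dict = field(default_factory=dict)        # BoP customers served, ...
    environmental: dict = field(default_factory=dict)
    highlights: str = ""                              # qualitative notes, key risks

@dataclass
class CapabilitiesAssessment:
    investment: str
    year: int
    governance_quality: int       # purely qualitative ratings, e.g., 1-5
    management_strength: int
```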
The software tool, which we have been using since January 2007, has trans-
formed how we think about investment performance. Every quarter, the portfolio
team reaches out to each investment and receives the agreed-upon reports (some
are more responsive than others). We also ask each portfolio manager to look at
the data and think about whether the company is building the capability to grow—
its customers, its team, its financial resources. We then use a simple diagnostic
both to identify major weaknesses and to recommend how we provide our man-
agement assistance: our team’s time, an Acumen Fund Fellow, or a pro bono part-
ner. We enter the data, compare actual performance versus targets, and come back
to the management team with any questions. For example, in fall 2007, in our
India office, Vikram Raman—who is notorious for generating very thoughtful
questions from these reports—noticed a significant rise in the operational expen-
ditures over the prior quarter at VisionSpring (formerly known as Scojo).
Working with the Acumen management team in India, VisionSpring explored
potential causes and solutions. These kinds of questions have stimulated the very
performance dialogues that we are hoping the system will facilitate.
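The kind of check that surfaced the VisionSpring question can be sketched in a few lines; the 20 percent threshold and the figures are hypothetical choices, not our actual review rule.

```python
# Illustrative quarter-over-quarter check of the kind that flagged the
# VisionSpring opex rise. Threshold and numbers are hypothetical.

def flag_variance(metric: str, prior: float, current: float, threshold=0.20):
    """Print a flag when a metric moves more than `threshold` vs. last quarter."""
    change = (current - prior) / prior
    if abs(change) > threshold:
        print(f"FLAG: {metric} moved {change:+.0%} quarter over quarter")

flag_variance("operational expenditures", prior=80_000, current=115_000)
# FLAG: operational expenditures moved +44% quarter over quarter
```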
Twice a year, in November and April, the entire portfolio team sits down to
conduct a forced ranking across the portfolio. Against our investment criteria—
financial sustainability, social impact at scale, breakthrough insights, and high-
quality leadership—as well as both actual performance to date and the invest-
ment’s potential for impact in the future, we ask the team to discuss each invest-
ment and rank them from first to last. No ties are allowed. The exercise, which we
have been conducting since 2003, helps us take a step back to identify patterns
across the portfolio, and forces us to admit where things are working and where
they are not. The forced ranking helps us stay aware of which investments we
would literally “drop everything for” and which ones might not get more of our
limited time and energy. Unlike a traditional venture capital fund that might “drop
the losers,” we support all of our investments until we exit, but we have to priori-
tize our scarce resources somehow.
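A toy version of the forced ranking might look like the following; the ventures, criterion scores, and scoring arithmetic are invented, since in practice the ranking emerges from discussion rather than a formula.

```python
# Toy version of the biannual forced ranking: score each investment
# against the four stated criteria and sort first to last. Scores are
# invented; real ties are resolved by debate, not arithmetic.

criteria = ("financial_sustainability", "social_impact_at_scale",
            "breakthrough_insights", "leadership_quality")

portfolio = {
    "Venture A": (4, 5, 3, 4),   # hypothetical 1-5 scores per criterion
    "Venture B": (3, 3, 5, 4),
    "Venture C": (2, 4, 2, 3),
}

ranking = sorted(portfolio, key=lambda v: sum(portfolio[v]), reverse=True)
for rank, venture in enumerate(ranking, start=1):
    print(rank, venture, sum(portfolio[venture]))
```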
Seven years into building these intertwined social impact and performance man-
agement processes, we are proud of what we have done but daunted by the chal-
lenges ahead. The granular description of what we do should offer a sense of the
very practical nature of the work. It is not particularly complex. Rather, it is a
combination of simple exercises, repeated with discipline over time and in the con-
text of a coherent framework that helps us make real decisions: what to support,
how to adapt the investment model. Taking a step back from the minutiae of our
process, let me offer some overarching observations for anyone interested in build-
ing metrics and performance management systems in the social sector. These are
observations and assumptions, not conclusions or verities. They build on the work of many people who preceded us, and they have been reinforced through our own experience.
The first observation is that organizational culture matters: our board and leadership have set an ambitious vision of what might be possible. This has given the portfolio team the flexibility to explore
new metrics systems while being held accountable for annual results. Tolerance for
failure within the organization is another essential cultural dimension to this kind
of work. In the venture world, failure is a badge of honor; in the corporate world
it is an unfortunate fact of life. In the social sector, it is usually not an option. At
Acumen Fund, we know that we are taking some very big risks, but we often say
that the only failure will be if something doesn’t work and we didn’t learn anything
from it.
A second problem is patience: organizations are often not patient enough to wait for the system to be developed before reporting on cur-
rent program performance. Or, once a system is built, organizations skimp on new
investments in adaptations or further development. Donors, in our experience, do
care about metrics, but they want simple, clear, and meaningful metrics and at the
end of the day still prefer stories about the impact of our work (preferably stories
informed by data).
The final problem is one of collective action. There are now enough organiza-
tions across the social sector that have taken the time to build measurement sys-
tems, with staff committed to solving this problem, with patient boards and lead-
ership supportive of the work, and with donors prepared to reward proof of
impact. The challenge facing those leading organizations is how to take the first
step towards transparency, and how to invest the time in solutions that might ben-
efit the sector and not just any one organization. Acumen Fund is pushing ahead to
develop a Portfolio Data Management System in collaboration with Google, the
Skoll Foundation, the Lodestar Foundation, and Salesforce.com, but we are wary
of investing too much time without seeing reciprocal interest and commitment
from our peers.
I remain hopeful, however, that enough practitioners have built partial systems
that we can work together to build a sector-wide solution. By working collabora-
tively, sharing what works and what doesn’t, and defining collective solutions to
our common problems, we might just answer questions about social impact.
Technology and communications innovations in the last decade, coupled with les-
sons learned from early experiments in this arena, can all contribute to the design
of a new sector-wide system. The tricky part will be to build the institutional
arrangements that encourage collaboration and transparency, maintain quality
standards in a world filled with messy data, allow for continuous improvement and
learning, and listen to the feedback of stakeholders, from donors to end users of
the goods and services being provided.
If we can solve the problems of collaboration, I am confident that we can build
tools that are easy to use, inform real decisions, provide meaningful performance
information to donors, shape public policy, and make a real difference in the lives
of the people we are trying to serve. And once we have such a simple system, we
will have finally reached the other side of complexity.
APPENDIX:
PRECEDENTS IN MEASURING SOCIAL RETURNS ON INVESTMENT
As we set about tackling the metrics challenge at Acumen Fund, we had some very
helpful historical precedents, a handful of admirable peers, and some insightful
advisors who helped during our first couple of years of mucking around. The field
of social measurement took a great leap forward in the late 1990s with the work
of Jed Emerson and Melinda Tuan of the Roberts Enterprise Development
Foundation and Fay Twersky, then at BTW Consulting (now at the Bill & Melinda
Gates Foundation). Their pioneering “social return on investment” (SROI) frame-
work contributed a compelling metaphor for social investors. Much as one can
calculate the financial return (ROI) of any investment, careful analysis of a few
comparable programs might enable a social investor to calculate an SROI as well.
As we tried to apply this thinking to our portfolio, we realized that SROI might be
better as metaphor than as methodology. We had too many diverse investments
(water projects in India, housing projects in Pakistan, health clinics in Kenya) to
quantify in dollar terms the value of the social services being provided in a com-
parable way.
Another methodology that has gained currency in the social sector is balanced
scorecards.12 With the support of the W. K. Kellogg Foundation and the Cisco
Foundation, we engaged McKinsey & Company to think about how to develop
impact scorecards for our investments. Several of our peer social investors, notably New Profit, used some form of balanced scorecards to drive performance man-
agement within their portfolio companies. The scorecards are helpful in clarifying
the links between inputs and outputs, outputs and outcomes, and presumably the
measures of impact.
Two peer social investors—Robin Hood and Venture Philanthropy Partners—
shared with us how they looked at impact. The Robin Hood Foundation, with its
focus on programs that seek to end poverty in New York City, shared its rigorous
system with us, looking at changes in family income per unit cost of the various
programs that it funds. Venture Philanthropy Partners shared its capacity assess-
ment framework, which seeks to rank institutions on a range of organizational
indicators from quality of governance to integrity of financial systems. These two
examples offered practical reminders for blending quantitative and qualitative
assessments to create a complete picture of enterprise performance. Mark
Kramer’s 2005 essay “Measuring Innovation: Evaluation in the Field of Social
Entrepreneurship”13 synthesized a range of ideas about ways to develop practical and balanced measures of impact: the need to collect timely and relevant information, to use the data collected to inform real decisions, and to anticipate continuous improvement of the data structure and metrics.
Finally, some of Acumen Fund’s partners from the private equity and technol-
ogy worlds offered sage advice to keep things simple and focused on the decisions
we were making. Hunter Boll, the chair of our investment committee, encouraged
us not to get too carried away: to think about a handful of metrics that really drive
company performance, and to look at them quarterly. David Keller, formerly at
Cisco Systems, stressed that any data we collect needed to inform a decision. If we
were not using the information to make a decision—to exit an investment, to make
a new investment in the same type of business—then we should rethink the value
of collecting the data.
1. For more on Acumen Fund, see “Meeting Urgent Needs with Patient Capital,” by our founder
Jacqueline Novogratz, in the Winter/Spring 2007 issue of Innovations.
2. Ted London, The Base of the Pyramid Impact Assessment Framework: Enhancing Mutual Value
Creation. Working Paper, the William Davidson Institute, University of Michigan, January 2008.