
Brian Trelstad

Simple Measures for Social Enterprise

I would not give a fig for the simplicity this side of complexity
but I would give my life for the simplicity on the other side of complexity.
—attributed to Oliver Wendell Holmes, Jr.

Over the last two decades, social entrepreneurs have become darlings of the social
sector. From the pioneering work of Ashoka to the global stage presented by the
Skoll World Forum and the inspiring work of their Skoll Fellows, a new breed of
innovative people and enterprising institutions are literally promising to change
the world. Ranging from non-profits with highly scalable models looking to trans-
form national education policy to for-profit businesses looking to serve the “base
of the pyramid” at affordable health clinics, this diverse range of social enterprises
offers alternatives to traditional charity or development assistance. Even big busi-
nesses are taking social enterprise seriously as they explore new ways to tap into
underserved markets in emerging economies.
But for all the innovation, the question remains: So what? Have these new
approaches led to enduring change? Are people drinking cleaner water, living
healthier lives, or moving out of poverty because of the new products designed to
be affordable and accessible to the poor? We think so, but the evidence is nascent
and mostly anecdotal. Our objective at Acumen Fund has been to push hard on
these questions of measurement and impact: if what we are doing is a real innova-
tion in philanthropy and development assistance, then we should have evidence
that what we are doing matters for the lives of millions of people.
This is not a simple problem. This article describes our efforts to get an answer
to those questions within Acumen Fund and dives into the complexity of measur-
ing social change. The article reviews our work to develop a manageable and some-
what simple approach to this complex challenge, and then looks at how the field of
measurement might evolve in the coming years. My aim is to provide some very
practical advice about how to produce or consume claims of social impact (or,
more appropriately, evidence of social outputs), and to offer some observations
about the barriers to adopting these practices.

Brian Trelstad is Chief Investment Officer with the Acumen Fund. Before joining
Acumen Fund, he spent four years at McKinsey & Company as a consultant in the
healthcare and non-profit practices and as an editor of the McKinsey Quarterly.

© 2008 Brian Trelstad

THE CHALLENGE OF MEASUREMENT

Acumen Fund is a seven-year-old social investment fund with roughly $35 million
of approved investments in ventures that deliver health, water, housing, and ener-
gy products and services to the poor. Our investors, philanthropic donors, expect
us to use their charitable contributions to invest money—often on slightly less
than commercial terms—to build financially sustainable and growing businesses
that provide measurable social or environmental value. Any of the “patient capi-
tal” returned to Acumen Fund is recycled for new investments. We believe that the
market-based delivery of goods can complement traditional charitable efforts and
often serves as a listening device: if our enterprises are not delivering value—social
or economic—their customers will not return and their businesses will not grow.1
Measuring the expected and actual financial returns of our investments is relative-
ly straightforward for Acumen Fund and for our peers. Like most investors, we
project net income, free cash flows, terminal values, and likely exit multiples; we
then discount these cash flows back to the present day to establish financial value.
Or, if we make a loan, we can expect a future stream of interest and principal
repayments to come back to us over the five to seven years of our average loan.
And once we exit the investments, it is very easy to look retrospectively at our net
financial return.
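
For readers who want the mechanics spelled out, here is a minimal sketch of that
discounting step; the cash flows, terminal value, and the 12% rate are invented for
illustration and are not Acumen Fund figures.

```python
# Minimal sketch: discount a projected stream of free cash flows back to the
# present to establish financial value. All numbers and the 12% discount rate
# are hypothetical illustrations, not Acumen Fund figures.

def net_present_value(cash_flows, rate):
    """Sum each year's cash flow discounted by (1 + rate)**t, t = 1, 2, ..."""
    return sum(cf / (1 + rate) ** t
               for t, cf in enumerate(cash_flows, start=1))

# Five years of projected free cash flow; year 5 includes a terminal value.
projected = [-50_000, 20_000, 60_000, 90_000, 120_000 + 400_000]
print(f"NPV at 12%: ${net_present_value(projected, 0.12):,.0f}")
```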
Measuring the social or environmental returns of our investments is not so
straightforward. The first challenge is defining what specifically we mean by
“social impact.” This can range from a proof of concept of our model—that we
can invest risk capital in social enterprises and see it returned—to knowing that
our investments in malaria are leading to reductions in the incidence of malaria or
that our investments in drip irrigation are moving smallholder farmers out of
poverty. Second, once it is clear what threshold of outcomes we are aiming for, it
turns out that it is actually quite hard and expensive to “prove” anything. Later I’ll
discuss why we count outputs (e.g., bednets sold) instead of seeking to demon-
strate outcomes (e.g., reduction in incidence in malaria), but we are familiar with
the challenge of moving beyond anecdotes and data towards making a rigorous
case for impact.
Finally, we would love to understand, but don’t have the capacity to measure,
the economic multipliers or unintended consequences of our work. If the textile
mill creates 5,000 jobs in Tanzania, what sort of impact does this have on the local
or regional economy or national tax receipts? Conversely, does the change in pat-
terns of water collection in rural India change the social dynamics in a way that
harms rather than helps? Ted London at the University of Michigan writes coher-
ently about the need to account for the positive and negative impact of social ven-
tures working at the base of the pyramid, and has developed a framework that
“drives improvements in a venture’s poverty alleviation performance by enhancing
positive outcomes and mitigating potential negative ones.”2 This kind of compre-
hensive framework is extremely valuable in pushing us to think through the antic-
ipated and unexpected outcomes of our investments, but at this point ir is tricky
to implement at the business level.
Given these challenges, we start from the premise that to get better results and
improve our work on a continuous basis, we need to measure what we manage.3
We also owe it to our donors to ask whether philanthropic investments into
Acumen Fund have made a difference relative to their other charitable options.
We owe it to ourselves as professionals trying to effect change to understand what
is working and what is not—and why. We owe it to the entrepreneurs that we
invest in to not impose burdensome reporting on them but to include them in a
system that helps them think about and anticipate key performance challenges as
they grow their business and serve the poor. And, finally, we owe it to the end
users of these services to think clearly about how the businesses we are building
will make a difference in their lives, in their children’s lives, and in their
communities.

TAKING THE PULSE OF ACUMEN FUND’S INVESTMENTS


Metrics and evaluation are to development programs as autopsies are to health
care: too late to help, intrusive, and often inconclusive. Equipped with a range of
best practices and precedents (see Appendix), we set out to build a performance
management process that would “take the pulse” of our work: frequent, simple
measures that would allow us to refine our thinking, change our course, and diag-
nose problems before they became too significant.
As a result, our metrics work extends across our entire investment process. It
is a series of loosely connected exercises bound more tightly by our organization-
al values and our team’s curiosity than a well-defined process or an integrated
soup-to-nuts technology system. It is not perfect, and it is continuously evolving,
but we think it reflects the state of the practice in the social investment and non-
profit sectors. It is worth reviewing in some detail how at each stage of our invest-
ment process, along what we call the “chain of accountability,” we think about con-
necting the donors to Acumen Fund, to the investments we make in businesses,
and to the difference those businesses make in the lives of the customers they serve.

During Due Diligence


After we have found a social entrepreneur with a compelling business model, one
that seems to align with our aspirations, we commence with “formal due diligence”
of the opportunity. Most of the diligence is focused on the business model, the
unit economics, the customer need, the quality of the organization, the integrity of
the leadership team, and the financial plan. A significant part of the diligence,
however, seeks to understand whether or not the business creates meaningful
social value.
To do this, we rely on three separate steps. First we review the literature on the
state of practice to understand if the investment’s main activity “matters.” This
includes discussions with our internal staff portfolio team, advisors, and experts in
the field. Second, we estimate the number of people in the “base of the pyramid”
4
who will be served by the business over the life of the investment. Third, we assess
whether or not the delivery of those “outputs” to our target constituency will com-
pare more or less favorably to the “best available charitable option” open to
our donors. Let me take each of these steps in turn.
We start each of our investment discussions with a focus on the customer:
How does this product help the poor? Do people need this product or service? Are
they willing to pay for it? And if they do get it, what evidence exists that their lives
will be measurably better for having used the service? For example, take anti-
malarial bednets. There was considerable debate about whether or not people will
(or should) pay for the long-lasting anti-malarial bednets,5 but there is little debate
that their proper use, particularly throughout an entire village, can lead to signifi-
cant reductions in the incidence of malaria.6
With community clean water systems, the evidence is a bit more ambiguous.
Of course it makes intuitive sense that the proper treatment of surface water for
drinking should lead to improved health outcomes, but a lot of things happen after
customers collect water from a central village water-treatment system: they trans-
port the water, store it for a few days, and drink it out of potentially dirty cups. To
us, the empirical evidence that selling clean drinking water from centralized distri-
bution points in rural India will lead to reductions in waterborne illnesses is not as
robust as the evidence that delivery of bednets to Kenyans can reduce the incidence
of malaria.
Once we have established whether or not an intervention matters—or more
exciting, whether our investment can contribute to the world’s understanding of
what works—we know what outputs to start counting. It is important to pause on
the word outputs for a minute. In evaluation parlance, some combination of
inputs (investment money, technology, staff) helps generate a certain set of out-
puts (bednets delivered, liters of clean water filtered), which might lead to an out-
come (reduction of incidence of malaria, fewer people getting sick from drinking
bad water) that translates into the impact: knowing for certain what would not
have happened were it not for our investment or invention. Defining the outputs
is critical. For us, it is not just “bednets,” but some dimension of “bednets” proper-
ly deployed and used. Moving from narrow definitions of outputs as “products”
toward definitions with some dimension of product service and quality acts as an
important check on a race to least-cost delivery models.
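
To keep that vocabulary straight, here is a minimal sketch of the chain as a data
structure; the field names and bednet figures are illustrative assumptions, not a
schema we actually use.

```python
# Illustrative only: the evaluation chain as a data structure. The bednet
# values are hypothetical; "impact" is what a counterfactual study would
# have to establish.
from dataclasses import dataclass

@dataclass
class LogicModel:
    inputs: dict    # investment money, technology, staff
    outputs: dict   # what we count, including a quality/usage dimension
    outcomes: str   # what the literature suggests the outputs may lead to
    impact: str     # what would not have happened without the investment

bednets = LogicModel(
    inputs={"investment_usd": 1_000_000, "staff": 4},
    outputs={"bednets_delivered": 150_000, "bednets_properly_used": 120_000},
    outcomes="reduced incidence of malaria in covered villages",
    impact="unproven without a randomized control trial",
)
print(bednets.outputs)
```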
Moving from outputs, however sophisticated the definition, to understanding
outcomes and proving impact is extremely complicated and seems to require ran-
domized control trials that demonstrate the counter-factual.7 We had discussions
with researchers affiliated with MIT’s Poverty Action Lab on how this might be
possible to do with some of our investments and have conducted two rigorous
studies on retail distribution strategies for the anti-malarial bednets and on the
link between delivering clean water and community health. But these studies are
expensive, and it is impractical to spend $250,000 researching the impact of a
$500,000 investment—unless such a study could be used to understand the impact
of similar investments in our portfolio and others for years to come. So our strat-
egy has been to review the literature and consult the experts, to establish the clar-
ity and certainty of a specific output’s links to impacts, and to focus on counting
those outputs.
As an investment moves forward in due diligence, another critical step in the
process is to compare the projected outputs delivered per unit of philanthropic
input (in this case capital) compared with the “best available charitable option” (or
BACO, as we call it). More simply, if a donor who cares about malaria gives
Acumen Fund a million dollars, we want to be able to compare how many bednets
our factory makes over five years (or better, the years of malaria protection that the
nets provide) with what the donor could buy on the “charitable marketplace.” The
charitable marketplace is fragmented and inefficient, but it does exist. For exam-
ple, a moderately curious donor could learn that Malaria No More offers to deliv-
er one long-lasting bednet for $10, so any investment from Acumen Fund would
need to deliver at least one bednet for every $10 our fund receives. We have writ-
ten more extensively about BACO, the methodology, and its limitations on our
website,8 so I won’t go into too much detail here, but let me reflect on how this
“back of the envelope” approach helps us.
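
As a sketch of that back-of-the-envelope arithmetic: only the $10-per-net
benchmark below comes from the example above; the investment’s projected
output is invented for illustration.

```python
# Back-of-the-envelope BACO comparison as described above. The $10-per-net
# charitable benchmark comes from the text; the projected output of the
# investment is a hypothetical figure.
donation_usd = 1_000_000
baco_cost_per_net = 10                        # Malaria No More: one net per $10
baco_nets = donation_usd // baco_cost_per_net # 100,000 nets via charity

projected_nets = 450_000                      # hypothetical five-year output

print(f"Charitable option: {baco_nets:,} nets")
print(f"Investment projection: {projected_nets:,} nets")
print("Investment beats the BACO" if projected_nets > baco_nets
      else "The charitable option wins")
```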
The BACO methodology is important for our process in a few ways. First, it
forces the team to think about who is doing the best or prevailing work on solving
the same or similar problems. Second, it forces us to think about the marginal cost
decision a hypothetical donor is making. In the absence of absolute standards of
performance in the social sector, we need to think about how else the donor could
have invested their money. Finally, it is a very practical tool that is easy to use. A
portfolio associate can conduct a BACO exercise using a simple Excel workbook,
some web research, and an expert interview or two. The steps are straightforward,
the assumptions clear, but the analysis doesn’t dictate our final decision. We need
to be more disciplined about repeating the BACO analysis annually for each invest-
ment to see how our forecasts compare against reality, but at the points when we
start and then close out an investment, the BACO methodology offers a useful
benchmark for comparing our work to one prevailing approach in the field.

During Deal Structuring


After we have approved the investment but before we disburse funds, we face the
most critical step in the process. Working with our Director of Portfolio
Management, Raman Nanda, our portfolio teams sit with the entrepreneurs to dis-
cuss what financial, operating, social, and environmental metrics we plan to col-
lect. Our mantra during these meetings has three parts:
• We don’t want to collect anything that is not fundamentally important for the
company to manage their business (which includes serving the poor).
• We don’t want to collect any information that cannot be generated by the com-
pany’s existing management information systems.
• If the current information systems cannot capture the kind of data that is
important to managing the business, we will help the business think about medi-
um-term improvements to their systems that strengthen their ability to man-
age (and in at least two cases we have helped build new management informa-
tion systems for our investments).
We don’t want these conversations to be about what “reports” the entrepreneur
must send us on a quarterly basis; instead, we are laying the groundwork for how
we will think about performance management over the life of our investment. For
businesses with hybrid or cross-subsidy models like the 1298 ambulances in
Mumbai, the most difficult metrics to collect are the socioeconomic status of their
customers (20% of whom we expect will be from the base of the pyramid). Rather
than conduct a “wallet biopsy,” the management team of 1298 simply uses the des-
tination hospital as a proxy for income level. If they take you to Breach Candy
Hospital, a private, relatively expensive hospital in Mumbai, you can afford to pay
for the ambulance ride and you are considered middle or upper income. If they
take you to the free government hospital, the assumption is that you can’t afford
the ambulance ride and that you fall into the less than $4/day income segment that
matters to us.
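
In code, that proxy might look something like the following sketch; only Breach
Candy and the government-hospital rule come from the description above, and the
rest is simplified for illustration.

```python
# Illustrative sketch of 1298's destination-hospital proxy for income.
# Breach Candy is named in the article; the set membership and the function
# itself are simplifications assumed for illustration.
PRIVATE_HOSPITALS = {"Breach Candy Hospital"}

def income_segment(destination: str) -> str:
    """Infer a customer's income segment from where the ambulance took them."""
    if destination in PRIVATE_HOSPITALS:
        return "middle or upper income (can pay full fare)"
    # Free government hospital: assume the <$4/day base-of-pyramid segment.
    return "base of the pyramid (<$4/day)"

print(income_segment("Breach Candy Hospital"))
print(income_segment("Government General Hospital"))
```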
These discussions serve as an important marker to show that we are serious
about performance management, that we are committed to help solve the prob-
lems of data collection and integrity, and that we are serious about serving the
poor as a significant portion of the company’s business. Our term sheets include
the metrics that we want reported quarterly (as mutually agreed on) and some
minimum threshold of clients served from this target population. If an entrepre-
neur fails to report or fails to serve the poor, we have the right to walk away from
the deal.

Post-Investment Performance Management


After an investment is disbursed, it’s time to start collecting and analyzing the data
with the primary purpose of supporting and scaling each enterprise. In our initial
days, Acumen Fund collected monthly or quarterly data for different investments
in various forms (spreadsheets, word documents, e-mail), making it hard to do
time-series comparisons across our portfolio. We started to be more consistent by
collecting data quarterly in clearly designed spreadsheets, but as our portfolio team
grew from five people in our New York office to 15 people in four offices (includ-
ing Hyderabad, India; Karachi, Pakistan; and Nairobi, Kenya), we looked for a soft-
ware solution that would allow us to track performance over time, performance
against the initial projections, and performance across the portfolio. After an
exhaustive search failed to find an off-the-shelf system that would allow us to track
this blend of financial and social metrics, we decided to try to build it ourselves.
With pro bono engineering support from Google engineers in New York and
Mountain View, California, and under the leadership of portfolio associate Marc
Manara, we built a software tool that allows our portfolio team to track a range of
measures at each investment. The resulting product, the Portfolio Data
Management System (PDMS), allows us to store quarterly data (financial, oper-
ating, social, environmental) for each investment, qualitative reports from the
portfolio team on highlights (or key risks) of the company’s performance, and an
annual capabilities assessment that rates each company on a range of purely qual-
itative dimensions (quality of their governance, strength of management team).
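
A minimal sketch of the kind of quarterly record described above might look like
this; the field names and values are invented, not the actual PDMS schema.

```python
# Hypothetical sketch of a quarterly portfolio record of the kind the
# Portfolio Data Management System stores. Field names and all values are
# assumptions for illustration, not the actual PDMS schema.
from dataclasses import dataclass

@dataclass
class QuarterlyRecord:
    company: str
    quarter: str
    financial: dict   # e.g., revenue, operating expenditure
    operating: dict   # e.g., units sold, customers served
    social: dict      # e.g., base-of-pyramid customers reached
    narrative: str    # qualitative highlights or key risks

record = QuarterlyRecord(
    company="VisionSpring",
    quarter="2007-Q3",
    financial={"revenue_usd": 180_000, "opex_usd": 95_000},
    operating={"reading_glasses_sold": 40_000},
    social={"bop_customers_pct": 85},
    narrative="Operating expenditures rose sharply over the prior quarter.",
)
print(record.company, record.financial)
```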
The software tool, which we have been using since January 2007, has trans-
formed how we think about investment performance. Every quarter, the portfolio
team reaches out to each investment and receives the agreed-upon reports (some
are more responsive than others). We also ask each portfolio manager to look at
the data and think about whether the company is building the capability to grow—
its customers, its team, its financial resources. We then use a simple diagnostic
both to identify major weaknesses and to recommend how we provide our man-
agement assistance: our team’s time, an Acumen Fund Fellow, or a pro bono part-
ner. We enter the data, compare actual performance versus targets, and come back
to the management team with any questions. For example, in fall 2007, in our
India office, Vikram Raman—who is notorious for generating very thoughtful
questions from these reports—noticed a significant rise in the operational expen-
ditures over the prior quarter at VisionSpring (formerly known as Scojo).
Working with the Acumen management team in India, VisionSpring explored
potential causes and solutions. These kinds of questions have stimulated the very
performance dialogues that we are hoping the system will facilitate.
Twice a year, in November and April, the entire portfolio team sits down to
conduct a forced ranking across the portfolio. Against our investment criteria—
financial sustainability, social impact at scale, breakthrough insights, and high-
quality leadership—as well as both actual performance to date and the invest-
ment’s potential for impact in the future, we ask the team to discuss each invest-
ment and rank them from first to last. No ties are allowed. The exercise, which we
have been conducting since 2003, helps us take a step back to identify patterns
across the portfolio, and forces us to admit where things are working and where
they are not. The forced ranking helps us stay aware of which investments we
would literally “drop everything for” and which ones might not get more of our
limited time and energy. Unlike a traditional venture capital fund that might “drop
the losers,” we support all of our investments until we exit, but we have to priori-
tize our scarce resources somehow.

Closed Investments

After we have completed the financial relationship with an enterprise, we take a
final look backward at whether we think the investment succeeded or failed against
our original expectations. Someone on the portfolio team (often not the person
who managed the investment, and usually portfolio associate Katie Hill) will write
a short “exit memo” that looks at the results generated, our financial return, and
the lessons we have learned. The exit memos are instructive in forcing us to go
back to the original plan, to look at what happened along the way, and to deter-
mine what we could have done differently. Exit memos play an important role in
helping us adapt our investment process. Over the course of time, as we have eval-
uated our closed investments, we have decided not to do early-stage product devel-
opment, to be wary of working with large development finance institutions as co-
investors, to push very hard in the early days on the “ethical fiber” of the entrepre-
neurs and their teams, and to think more realistically about growth projections
(read: it often takes a lot longer, sometimes more than a year, to get things mov-
ing).

FIVE OBSERVATIONS ABOUT PERFORMANCE MANAGEMENT

Seven years into building these intertwined social impact and performance man-
agement processes, we are proud of what we have done but daunted by the chal-
lenges ahead. The granular description of what we do should offer a sense of the
very practical nature of the work. It is not particularly complex. Rather, it is a
combination of simple exercises, repeated with discipline over time and in the con-
text of a coherent framework that helps us make real decisions: what to support,
how to adapt the investment model. Taking a step back from the minutiae of our
process, let me offer some overarching observations for anyone interested in build-
ing metrics and performance management systems in the social sector. These are
observations and assumptions, not conclusions or verities. They build on the work
of many people who preceded us, and they have been reinforced by our own
experience.

1. Culture matters far more than systems.


If your organization doesn’t care about metrics, don’t bother to start building sys-
tems to measure performance. This effort needs to start at the top with board and
senior management leadership and extend throughout the staff and stakeholders
of the organization, and into the organizations you fund. Acumen Fund was
founded on the principles that accountability and transparency were fundamen-
tally missing from traditional philanthropy, and that a new institution would do
things differently. Jacqueline Novogratz, our founder and CEO, has consistently
advocated for advances in our metrics work; so has our board of directors. At
times, the expectations for what one can measure and what one can prove diverge
from the reality of practice, but those conversations have helped the team imagine
what might be possible. This has given the portfolio team the flexibility to explore
new metrics systems while being held accountable for annual results. Tolerance for
failure within the organization is another essential cultural dimension to this kind
of work. In the venture world, failure is a badge of honor; in the corporate world
it is an unfortunate fact of life. In the social sector, it is usually not an option. At
Acumen Fund, we know that we are taking some very big risks, but we often say
that the only failure will be if something doesn’t work and we didn’t learn anything
from it.

2. If you build systems, start with a pencil and paper.


Too often, we hear of people looking for a software solution to their performance
management problem when a simpler solution would suffice. In the early days of
Acumen Fund, with a portfolio of fewer than ten investments, all we needed to
measure investment performance was a pencil and paper, consistently used. To
avoid total chaos, you must be clear about both your objectives and the business
process you will use to collect and interpret data before you begin with system
development. For us, after we translated our investment criteria into specific per-
formance metrics for each enterprise, we could then think about when we would
collect metrics, who would type them and where, how often we would look at
them, and what we would do with the information. Only after that is clear and has
been practiced for a few iterations can you build or deploy a more sophisticated
system. Some of our peers, namely E+Co and Robin Hood, were very generous
with their time in our early days. They helped us move along the metrics learning
curve by sharing their experience of what metrics matter, which methods work and
which don’t, what you can expect to collect from your entrepreneurs, and how to
use the information to communicate impact.

3. Think on the margin.


The search for absolute impact or performance measures is elusive and in my mind
irrelevant. Performance is always relative to what you had been doing before
(past), to what your competition did over the same time period (peers), and to
what you should have done (projections). We have built a system that allows us to
look at two of these three P’s: past and projections, but not peers. The next stage
in developing the Portfolio Data Management System with Google and
Salesforce.com is to aggregate the data of our peers in a way that facilitates compar-
isons across portfolios and across relevant subsets of enterprises. Until that time,
we will use (and continue to improve) BACO as an inadequate proxy for thinking
about the marginal effectiveness of our investments. The social sector, however,
could benefit from more consistent “marginal” analyses to frame how various
interventions compare with other opportunities to contribute time and money.

4. Count outputs and then worry about outcomes.

The greatest inadequacy of our current system is that we still can’t prove impact.
We add up outputs and compare them with the inputs (costs). We think about
what outcomes might be possible, given what we and the field know about these
interventions. It is not very satisfying, but it is where we are. We have been follow-
ing the work of the Center for Global Development and the Poverty Action Lab,
and will continue to advocate for and participate in more rigorous impact assess-
ments of our work, where appropriate. Until evidence-based policy becomes the
norm and the cost of doing these assessments falls, we think it is our responsibili-
ty to count the outputs as consistently as possible. The conclusions you can draw
from these outputs may not be made with scientific rigor, but they can inform
businesslike decisions and raise important policy questions. Why, for example,
does it cost $5,000 to build a low-income home in Pakistan but twice as much in
Central America? Or, how can we reduce the price of drip irrigation systems in
Pakistan to the $150 to $200 per acre we see in India?

5. Don’t confuse information with judgment.


Even with very robust systems, we will always have an incomplete picture of our
portfolio’s performance. While collecting lots of data, it is essential to balance the
qualitative—observations and anecdotes—with the quantitative—facts, metrics
and trends. There is no substitute for judgment, but judgment not informed by
careful attention to patterns of facts can quickly slip into speculation and intuition.
And it is important to hold oneself accountable to those judgments. One small
step we take to this end was inspired by investment committee member Stuart
Davidson: at the end of our investment committee calls, we often pull out a pre-
diction book and ask members of the committee to predict the results of a specif-
ic investment metric in 18 to 24 months’ time (e.g., number of houses built, num-
ber of patients treated, total revenues).
Not surprisingly, when we revisit the predictions, we are almost always wrong;
in most cases, we were too optimistic, but in some cases, the investments have out-
performed our estimates. What’s important about the process is that it asks peo-
ple to make an informed guess, and then we circle back at some point to see if they
were right. There is nothing magic about this—in fact, this is how stock markets
work. But instead of five investment committee members speculating on 35 pri-
vately held investments, the stock markets allow millions of investors to guess
about the future of tens of thousands of companies. One aspiration for the sector
is that we can achieve a level of information transparency and consistency of data
definition so that a “social stock market” might emerge.9
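
A minimal sketch of that predict-then-revisit loop, with hypothetical names and
numbers:

```python
# Illustrative prediction book: record a forecast for a specific investment
# metric, then score it when the horizon arrives. The investment, metric,
# and all numbers are hypothetical.
predictions = {}  # (investment, metric) -> predicted value

def predict(investment: str, metric: str, value: float) -> None:
    predictions[(investment, metric)] = value

def revisit(investment: str, metric: str, actual: float) -> None:
    predicted = predictions[(investment, metric)]
    error = (actual - predicted) / predicted
    print(f"{investment} / {metric}: predicted {predicted:,.0f}, "
          f"actual {actual:,.0f} ({error:+.0%})")

predict("housing venture", "houses_built", 2_000)
revisit("housing venture", "houses_built", 1_200)  # usually too optimistic
```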

THE KNOWING VS. DOING GAP IN MEASUREMENT10


As I was drafting the outline for this article, Peter Reiling of the Aspen Institute
mentioned that I should review a chapter from Cost Effectiveness in the Non-Profit
Sector, to which he had contributed.11 In the chapter written by the team from
TechnoServe, I was stunned to see a detailed description of a comprehensive social
impact and performance management system that was very similar to what I
described above: a balance of qualitative and quantitative measures, using reason-
able proxies to assess the poverty impact of business interventions, and clear
spreadsheets for use by a distributed team. The whole works. And the chapter clos-
es with some “lessons learned” that overlapped with some of mine above, includ-
ing the first one: “Pursuing cost effectiveness requires strong commitment from
management.”
Hang on, I thought. If we have known as a sector what needed to be done for
more than a decade, why haven’t these practices become the norm? Why does each
new organization in the social sector need to reinvent the measurement function,
when we don’t reinvent our accounting or technology systems? This double-take
on why we haven’t learned from TechnoServe’s experience introduces a few
observations about the challenges of building out these systems in the development
sector and why I think we are at a particular inflection point where this could change.
The starting point is that no system has emerged that has really gotten it right.
We still struggle with putting our investment performance into the right context,
and we get feedback either that we tell great stories without sufficient data, or that
we present too much data without putting it in context. With a few exceptions, no
system has captured and communicated performance metrics so clearly and com-
pellingly that it not only satisfies existing donors but also attracts new ones.
So without a standard to aspire to and a system to replicate, each organization
has been tasked with building its own system, and building it without spending too
much “overhead.” So, in the typical pattern, an over-stretched staff member, like-
ly someone who has no experience in evaluation but does have an MBA or experi-
ence working with Microsoft Excel, starts from scratch, and with no budget.
Another pattern is for a really thoughtful consulting firm (in our case, McKinsey)
to do a reduced-fee or pro bono study to map out the system, but then leave before
it gets implemented. It probably takes 18 to 24 months to conceive of, test, design,
build, and refine any good system, well beyond the typical four-month consulting
engagement. Firms that want to take social impact seriously might think of a
model where they can engage with clients over longer periods of time, less inten-
sively, to help build and deploy measurement systems.
Complicating the system building, management teams and boards are often
not patient enough to wait for the system to be developed before reporting on cur-
rent program performance. Or, once a system is built, organizations skimp on new
investments in adaptations or further development. Donors, in our experience, do
care about metrics, but they want simple, clear, and meaningful metrics and at the
end of the day still prefer stories about the impact of our work (preferably stories
informed by data).
The final problem is one of collective action. There are now enough organiza-
tions across the social sector that have taken the time to build measurement sys-
tems, with staff committed to solving this problem, with patient boards and lead-
ership supportive of the work, and with donors prepared to reward proof of
impact. The challenge facing those leading organizations is how to take the first
step towards transparency, and how to invest the time in solutions that might ben-
efit the sector and not just our organization. Acumen Fund is pushing ahead to
develop a Portfolio Data Management System in collaboration with Google, the
Skoll Foundation, the Lodestar Foundation, and Salesforce.com, but we are wary
of investing too much time without seeing reciprocal interest and commitment
from our peers.
I remain hopeful, however, that enough practitioners have built partial systems
that we can work together to build a sector-wide solution. By working collabora-
tively, sharing what works and what doesn’t, and defining collective solutions to
our common problems, we might just answer questions about social impact.
Technology and communications innovations in the last decade, coupled with les-
sons learned from early experiments in this arena, can all contribute to the design
of a new sector-wide system. The tricky part will be to build the institutional
arrangements that encourage collaboration and transparency, maintain quality
standards in a world filled with messy data, allow for continuous improvement and
learning, and listen to the feedback of stakeholders, from donors to end users of
the goods and services being provided.
If we can solve the problems of collaboration, I am confident that we can build
tools that are easy to use, inform real decisions, provide meaningful performance
information to donors, shape public policy, and make a real difference in the lives
of the people we are trying to serve. And once we have such a simple system, we
will have finally reached the other side of complexity.

APPENDIX:
PRECEDENTS IN MEASURING SOCIAL RETURNS ON INVESTMENT

As we set about tackling the metrics challenge at Acumen Fund, we had some very
helpful historical precedents, a handful of admirable peers, and some insightful
advisors who helped during our first couple of years of mucking around. The field
of social measurement took a great leap forward in the late 1990s with the work
of Jed Emerson and Melinda Tuan of the Roberts Enterprise Development
Foundation and Fay Twersky, then at BTW Consulting (now at the Bill & Melinda
Gates Foundation). Their pioneering “social return on investment” (SROI) frame-
work contributed a compelling metaphor for social investors. Much as one can
calculate the financial return (ROI) of any investment, careful analysis of a few
comparable programs might enable a social investor to calculate an SROI as well.
As we tried to apply this thinking to our portfolio, we realized that SROI might be
better as metaphor than as methodology. We had too many diverse investments
(water projects in India, housing projects in Pakistan, health clinics in Kenya) to
quantify in dollar terms the value of the social services being provided in a com-
parable way.
Another methodology that has gained currency in the social sector is balanced
scorecards.12 With the support of the W. K. Kellogg Foundation and the Cisco
Foundation, we engaged McKinsey & Company to think about how to develop
impact scorecards for our investments. Several of our peer social investors, notably
New Profit, used some form of balanced scorecards to drive performance man-
agement within their portfolio companies. The scorecards are helpful in clarifying
the links between inputs and outputs, outputs and outcomes, and presumably the
measures of impact.
Two peer social investors—Robin Hood and Venture Philanthropy Partners—
shared with us how they looked at impact. The Robin Hood Foundation, with its
focus on programs that seek to end poverty in New York City, shared its rigorous
system with us, looking at changes in family income per unit cost of the various
programs that it funds. Venture Philanthropy Partners shared its capacity assess-
ment framework, which seeks to rank institutions on a range of organizational
indicators from quality of governance to integrity of financial systems. These two
examples offered practical reminders for blending quantitative and qualitative
assessments to create a complete picture of enterprise performance. Mark
Kramer’s 2005 essay “Measuring Innovation: Evaluation in the Field of Social
Entrepreneurship,”13 synthesized a range of ideas around ways to develop practical
and balanced measures of impact, and the need to collect timely and relevant
information, to use the data collected to inform real decisions, and to anticipate
continuous improvement of the data structure and metrics.
Finally, some of Acumen Fund’s partners from the private equity and technol-
ogy worlds offered sage advice to keep things simple and focused on the decisions
we were making. Hunter Boll, the chair of our investment committee, encouraged
us not to get too carried away: to think about a handful of metrics that really drive
company performance, and to look at them quarterly. David Keller, formerly at
Cisco Systems, stressed that any data we collect needed to inform a decision. If we
were not using the information to make a decision—to exit an investment, to make
a new investment in the same type of business—then we should rethink the value
of collecting the data.

1. For more on Acumen Fund, see “Meeting Urgent Needs with Patient Capital,” by our founder
Jacqueline Novogratz, in the Winter/Spring 2007 issue of Innovations.
2. Ted London, The Base of the Pyramid Impact Assessment Framework: Enhancing Mutual Value
Creation. Working Paper, the William Davidson Institute, University of Michigan, January 2008.
3. We are aware of the trap that we may only manage what we measure, so we are constantly pres-
sure-testing our system.
4. We define the poor as part of the base of the pyramid, or those who live on less than $4 per day.
5. This debate was largely resolved by an August 2007 communication from the World Health
Organization insisting that, for maximum public health benefit, long-lasting anti-malarial bed-
nets should be distributed free, rather than sold.
6. J. E. Gimnig et al., Effect of Permethrin-Treated Bed Nets on the Spatial Distribution of Malaria
Vectors in Western Kenya. American Journal of Tropical Medicine and Hygiene 68 (2003), (suppl 4):
115-120.
7. Again, like all aspects of measuring social change, this point also generates considerable debate.
We were greatly influenced by the May 2006 report from the Center for Global Development,
“When Will We Ever Learn: Improving Lives through Impact Evaluation”
(http://www.cgdev.org/content/publications/detail/7973). It pointed out quite bluntly that the
quality of impact assessments in the development field has been historically poor, and calls for
more rigorous evaluations to inform development assistance programs or policy initiatives.
8. See http://blog.acumenfund.org/2007/01/26/the-method-behind-our-metrics/.
9. Many people have been thinking about the creation of a social stock market, but as yet no stan-
dard or platform exists as a marketplace or clearinghouse.
10. In The Knowing Doing Gap: How Smart Companies Turn Knowledge into Action (Harvard
Business School Press, 1999), Jeffrey Pfeffer and Robert Sutton, both at Stanford’s Graduate
School of Business, explore why the prevailing wisdom in management practice is so often
ignored during implementation.
11. Gerald L. Schmaedick (ed.), Cost-Effectiveness in the Nonprofit Sector, Quorum Books, Westport,
CT, 1993. The other lessons are also worth reading.
12. Robert Kaplan and David Norton, The Balanced Scorecard: Translating Strategy into Action,
Harvard Business School Press, 1996.
13. Prepared for the Skoll Foundation, by the Foundation Strategy Group. Available at
http://www.fsg-impact.org/app/content/ideas/item/353.
