The New Science of Measuring Impact

Around the world, organizations dedicated to solving some of the most pressing challenges have launched a variety of new efforts to meticulously measure the intended impact of their work. This is nothing less than a revolution in social innovation, one that is bringing new, scientific rigor to gauging results. These organizations are now eschewing the top-down problem solving that has been the standard in their respective fields for decades.

Consider the clinical trial. Most people probably associate so-called randomized controlled trials with the testing of new drugs: one group of people takes a new pill while another group gets a placebo, and participants are monitored to see who ends up better off.

But in recent years the use of randomized controlled trials has migrated from measuring the effectiveness of drugs to rigorously evaluating policies and programs designed to do social good. Rather than simply assuming that a bright idea is changing the world because something is happening, researchers compare outcomes among people exposed to a program with outcomes among otherwise similar people who were not. The results can be remarkably instructive.
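
To make that comparison concrete, here is a minimal sketch, in Python, of the arithmetic behind such a trial: hypothetical outcome scores for a treatment group and a control group are compared, and a two-sample t-test indicates whether the gap is bigger than chance alone would explain. The numbers, group sizes, and outcome measure are invented purely for illustration.

```python
# Minimal sketch of an RCT-style comparison (invented data, for illustration only).
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)

# Hypothetical outcome measure (say, a health score) for each participant.
treatment = rng.normal(loc=52.0, scale=10.0, size=200)  # exposed to the program
control = rng.normal(loc=50.0, scale=10.0, size=200)    # not exposed

# Estimated effect: the difference in average outcomes between the two groups.
effect = treatment.mean() - control.mean()

# Two-sample t-test: is that difference larger than random variation
# would plausibly produce on its own?
t_stat, p_value = stats.ttest_ind(treatment, control)

print(f"Estimated effect: {effect:.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

The point of randomization is that, with enough participants, the two groups differ only in whether they got the program, so a gap like the one computed above can be credited to the program itself rather than to who happened to sign up.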

A member of SEWA cooks food on a traditional cookstove (right) and on a clean cookstove (left) at the SEWA Centre in Ganeshpura village, Mehsana district, in the Indian state of Gujarat. (Image: Global Alliance for Clean Cookstoves; http://www.flickr.com/photos/cleancookstoves/7181008715/in/set-72157630051742097/)
Poverty Action Lab report: Up in Smoke. (Report: Abdul Latif Jameel Poverty Action Lab; http://www.povertyactionlab.org/publication/up-in-smoke)
Sumi Mehta, Director of Programs for the Global Alliance for Clean Cookstoves. (Image: Global Alliance for Clean Cookstoves)

Cooking stoves are a good example. At a September 2010 meeting of the Clinton Global Initiative, Secretary of State Hillary Clinton unveiled a public-private partnership led by the United Nations Foundation to put new, efficient, clean, cheap cooking stoves in 100 million homes by 2020. “We can finally envision a future in which open fires and dirty stoves are replaced by clean, efficient and affordable stoves and fuels all over the world,” she said.

As mundane as they may seem, cooking stoves are a very big deal. By the State Department’s count, 3 billion people around the world cook their food over open fires or on old, inefficient stoves in poorly ventilated homes. The smoke and toxins cause pneumonia and respiratory disease in women and their families, and the inefficient stoves contribute to climate change.

But simply handing out new cooking stoves, it turns out, doesn’t work as intended. In July, the Abdul Latif Jameel Poverty Action Lab released the results of a four-year study of families in India who had received new, efficient, clean stoves through a separate program. (As many as 90 percent of homes in poor, rural India burn firewood, cow dung, or crop residue to cook. Pollution in these homes is 20 times the limit set in India.)

Surveyors measured carbon monoxide in the exhaled breath of families who had received the stoves and those who had not. The families with the clean stoves showed “no improvements in measurable health outcomes...nor in self-reported symptoms such as coughs and colds.” Careful measurements also showed that homes with the supposedly more efficient stoves weren’t burning any less fuel than the others. The best-laid plans had gone astray.

It turned out that the fancy stoves worked well in the laboratory, when rigorously maintained and used to specification. In the field, however, they quickly fell into disrepair. Three years after receiving the stoves, most families were no longer using them.

The cookstove study was an important lesson for the Global Alliance for Clean Cookstoves, which is the public-private partnership Clinton launched to distribute 100 million cookstoves. “We don’t disagree with the findings of these studies,” said the Alliance’s Sumi Mehta. Mehta said the alliance is taking aggressive steps to make sure those 100 million cookstoves don’t ultimately sit unused and in disrepair.

But the alliance might have made the same mistakes that occurred in India had researchers not carefully studied a treatment group and a comparison group: people who had received the newfangled stoves measured against those who had not. Policymakers are now using similar trials to measure the value of educational programs, back-to-work efforts and health care delivery.

The effort to distribute cooking stoves in India is not the only example of a well-intentioned program falling apart under that kind of scrutiny. Researchers discovered several years ago that so-called “scared straight” initiatives that bring at-risk kids to visit real prisons do more harm than good.

Or perhaps you’ve heard of efforts to tailor education to different “verbal” and “visual” learners? One trial casts serious doubt on whether there are such things.

Trials can also show that some seemingly Pollyannaish ideas are highly effective. Researchers in the UK found a novel way to get recalcitrant people to pay their court fines: send a text message asking for the money.

Trials, however, are not always the answer. They are hugely expensive. Who bears the burden of funding a full-blown randomized trial when project budgets are tight?

Even advocates of trials caution that the technique doesn’t always make sense in every case. In an interview for this issue of Editions, Yale economist Dean Karlan warns against employing trials in the early development stages of a product or program. Instead, he urges innovators to “tinker away” relatively uninhibited while developing a program. Karlan also worries that funders could stifle ingenuity if they wrongly insist on trial data prior to investments.

There are also many other ways to measure impact besides trials. In another companion article in this issue of Editions, Water for People’s Ned Breslin favors a more straightforward approach: long-term, intensive monitoring of development work. Breslin is in the business of bringing clean water sources to the indigent. Clean water must flow continuously — forever. He uses high-tech monitoring to make sure that is happening: if a pump breaks, find out fast and fix it. No trial needed. For Breslin, monitoring is no afterthought to entice investors; it is a vital element woven into the DNA of a healthy, dedicated organization.
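
As a rough illustration of what that kind of continuous monitoring can look like in code, the sketch below flags water points whose reported flow has fallen below a service threshold so a repair crew can be dispatched quickly. The data model, field names, and threshold are hypothetical, not Water for People’s actual system.

```python
# Hypothetical sketch of continuous monitoring: flag water points whose
# reported flow has dropped, so a repair crew can be dispatched quickly.
# Field names and the threshold are invented for illustration.
from dataclasses import dataclass

@dataclass
class PumpReading:
    pump_id: str
    liters_per_day: float

MIN_LITERS_PER_DAY = 500.0  # assumed minimum acceptable daily flow

def pumps_needing_repair(readings: list[PumpReading]) -> list[str]:
    """Return IDs of pumps whose latest reading falls below the threshold."""
    return [r.pump_id for r in readings if r.liters_per_day < MIN_LITERS_PER_DAY]

if __name__ == "__main__":
    latest = [
        PumpReading("village-A", 1200.0),
        PumpReading("village-B", 80.0),   # likely broken or unused
        PumpReading("village-C", 950.0),
    ]
    print(pumps_needing_repair(latest))  # -> ['village-B']
```

The design choice here is the same one Breslin describes: the measurement runs all the time, and a failure surfaces as an actionable alert rather than as a finding in a report years later.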

Sometimes measuring impact without a trial can start earlier than you might think. Human-centered design means soliciting feedback from potential users during the design phase of a product or program and altering the design to take that critique into account; greater impact gets baked in from the start. The Gobee Group’s Jaspal Sandhu and Jenny Stefanotti, a fellow at the Hasso Plattner Institute of Design at Stanford University, are among the many talented writers contributing to this “Editions,” and both build that kind of feedback into the development process. Human-centered design, community-based participatory research, design thinking and participatory design are all terms that describe some flavor of adherence to the same maxim when developing a product or program: design with, not for.

Sadly, however, not everybody wants to measure impact with great accuracy and not everybody wants to see the results. “Politicians don’t like their brilliant ideas being put to the test,” explained Ben Goldacre, a researcher in the UK working on public policy trials.

Similarly, a dirty little secret of the NGO world is that funders and grant recipients sometimes silently conspire to agree on statistics that show a program “works,” without rigorous measurement, since both the funders and the recipients want to show results. As Goldacre put it, “There is a strange emotional dynamic where everybody wants to show they are doing something really brilliant.” And it’s hard to be brave enough to look for possible failure.

When innovators can stomach critical evaluations, trials are just one compelling example of the many ways organizations around the world are investing real resources in evaluating intended impact. Quickly fading are the days of rolling out solutions and assuming they worked. “The typical evaluation of a project used to be hiring somebody for four weeks to review some documentation and visit a few sites,” explained the Center for Global Development’s Bill Savedoff. Now organizations are testing prototypes with targeted audiences, using technology to evaluate their work for years to come, and even running randomized controlled trials to get objective data on the impact of their work. We’ve come a long way from just testing new pills.

Mark Benjamin

Mark Benjamin is Director of Content at PopTech.