Thursday, December 8, 2022

Washington, DC

RUTH LEVINE: Administrator Power, we welcome you warmly and are eager to hear your insights. Thank you. And over to you.

ADMINISTRATOR POWER: Thank you so much, Ruth. And I can say – for one – that I would have been very happy to keep listening to you on the subject of evidence and development impact. 

Thanks also to Christina for the great leadership from the White House, from the Office of Science and Technology Policy – also the Office of Management and Budget. These evidence forums are incredibly important. I hope many of us can find the bandwidth to take in as much of what there is out there to be learned as possible. Grateful as well to the Center for Global Development for the partnership on this event, and for so much partnership beyond it.

I can’t tell you how often – as I roam the halls here and as people come back into the office, or when I travel in the field – I see on someone's desk the latest CGD paper on this topic or that topic, usually rooted in trying to enhance the rigor of what we do here.

I’m super excited for the panel that’s coming up and don't want to speak too long, because we have Michelle Sumilas, who leads our Bureau for Policy, Planning, and Learning – who’s been such a great champion, like you were, Ruth, in promoting the use of evidence. Michelle has also been leading us to integrate behavioral science and experimental economics here at USAID. We're thrilled about that. And Dafna Rand, of course, over at the State Department – such a thought leader in every role she's had in the U.S. government – but also coming to the role at F having played such an important leadership role at Mercy Corps, where they tried to bring a rigorous and evidence-based mindset to bear as well. So, those are just two of the panelists – much more to come.

This topic is very close to all of your hearts – and many of you have forgotten more than I will ever know – in terms of approaches tried and abandoned, and how the field has evolved over time. But, you know, having the privilege of being the Administrator over here at USAID, I thought what I would do is preview our vision at AID for how we can do better.

In their recent report, our friends at CGD challenged all of us in the development community to take steps – in their words – to harness the full value of impact evaluation to improve lives. This is an ambition we at USAID share, and we hope we can count on many of you watching as partners as we pursue this agenda of integrating impact evaluations and bringing a more evidence-based approach to development.

It does require the investment of time and energy as a design feature on the front end. And there are bandwidth issues that everybody faces in this messy world, but your support and your technical help from the outside can be a very ameliorative factor as we try to go further than we have so far, in a hurry. 

The goal, of course – of this entire enterprise, I guess – is to do good, not only feel good. There are a number of legendary tales of good intentions gone awry. Take President Eisenhower’s establishment of the Food for Peace program, which in its earliest days delivered food aid to countries facing extreme hunger – sending emergency food supplies to India, Pakistan, and Indonesia, which were going through horrific record levels of hunger – and addressed that hunger, in the short term, in really important ways.

But then, of course, came the perverse effect of the sudden influx of free food: flooding markets, crashing local prices, harming farming communities. There are so many examples like that, of course, of good intentions run amok – but also even more mundane examples of doing elaborate measurement without necessarily measuring the right thing to actually secure the kind of impact that we seek. So, moving away from development by instinct – or development by intuition – to development by evidence: that’s the shift that has been underway for some time. And now we are really refining what kinds of evidence get generated, and how the feedback loops get compressed, so that that evidence gets taken on board and gets propagated as widely as is appropriate. Placing evidence front and center can yield even more striking results.

In the early 2000s, as you know, the pioneering economist Michael Kremer – who we at USAID are very lucky to have as a colleague and an advisor – was working with an organization that wanted to increase the number of children attending school in Kenya. The organization provided textbooks to the schools, with the notion that if families did not have to spend their own money, or their own labor, to get supplies like textbooks, then their kids would be more likely to show up. But contrary to that intuition – which sounds like a fairly reasonable intuition to me – providing free textbooks actually had little impact on student attendance, except for the highest-performing students. Another research study, though, helped uncover what did: deworming pills, famously. A study exploring the link between deworming and education revealed that students with access to the deworming pills were significantly more likely to attend high school, since they were healthier and their families had to spend less on medical care as well.

Over the course of roughly a decade, the gender gap in secondary school attendance was also cut in half. Those benefits, we now know, have kept accruing over the lives of these once-children, now grown adults: when researchers followed up years later, the students were more likely to be employed, and their salaries were likely to be higher. So again, investing in school supplies, it turned out, hadn’t worked. But the numbers showed that investing in this one public health initiative had profound and cascading economic benefits over time.

So to determine if our work is actually driving the impact that we seek, we need to measure it. That isn’t always possible – but where it is possible, it is necessary. What matters is not, again, how well-meaning our approach is, or how confident we are in it, but whether our approach is yielding the desired results – or different results.

In fact, as we’ve seen in the Nobel Prize-winning work of Esther Duflo and others – including Michael – we see unanticipated benefits as well when we learn to measure, as a design feature of our programming, whether students can attend school, whether parents can feed their children, whether farmers can make a living, whether communities are going to be equipped – by our disaster-resilient infrastructure investments, for example – to withstand a disaster. Over the past two decades, the development community has done tremendous work to make that measurement possible, and many of you have played a role in this evolution. The results, as we measure our measurement, are encouraging.

In 1990, those designing and running development programs conducted almost no impact evaluations. Today, the development community is conducting at least 1,600 a year. And thanks to the work of many – but most especially Ruth, who was our first learning and evaluation senior leader at USAID – this Agency has benefited tremendously from this growing trend of evidence-based development. 

In 2010, as part of a suite of modernization reforms, USAID founded Development Innovation Ventures, which funds rigorous evaluation of new ideas, and then scales those ideas that work. DIV has funded 277 grants in 49 countries since it opened shop back in 2010. And its financial benefits, according to a 2019 study, outpaced the initial investment by a ratio of 17 to one. In 2011, Ruth and her team crafted our first ever evaluation policy, which mandated that USAID programs measure the impact of their work in the field when possible. 

And this wasn’t just a first for USAID – it was the first federal government policy on impact evaluation, and it quickly became a gold standard for other government agencies and external development organizations. It has led our Agency to design and conduct more than 130 impact assessments of our work around the world. Now, given the vast resources that we program, and the vast number of contracts and grants we do every year – needless to say, growth is important – this practice needs to be much more widespread. And luckily, we have just brought to USAID a person familiar, I’m sure, to all of you – just the person to help us measure more and measure better.

We have just welcomed Dean Karlan – one of the world’s leading experts on evidence and impact – as USAID’s new chief economist. And under Dean’s leadership, we are going to take a number of steps to expand the use of evidence in our work. We will start by making it easier for USAID staff to use evidence. Doing that requires bridging the last mile problem – taking the evidence in our assessments out of the studies that we are doing and putting it in the hands of our teams in the field, who design our programs and shape our work. And not just studies or impact evaluations done here at USAID – there is an entire community of people who have a huge amount to offer our program officers, with Dean in the lead and, hopefully, a very much expanded pool of economic expertise here at USAID. A note to people out there who are interested in coming to USAID: we are very interested in expanding this expertise.

We will elevate and expand the use of rigorous testing and examination of evidence and create more opportunities for our frontline staff to access, understand, and incorporate evidence into their work. Another of our priorities will be to expand the practice of cash benchmarking – an evaluation tool that we have used here at USAID – but that, again, can be much more widespread. 

Over the past 15 years, the world has learned that direct cash transfers – literally just giving money to the poor – can help efficiently address everything from food security to education gaps, and can have really sustainable results in terms of lifting people out of poverty – sometimes more effectively than programs that provide particular services or other forms of support. Cash benchmarking allows us to compare the impact of any given intervention we do here at USAID against the practice of just giving people money, and it can help us determine which path is more effective – especially given the lower overhead, usually, when it comes to cash provision.

While we have made significant strides, we still have much to do to place evidence-based policymaking at the forefront of USAID development work. We have to produce data that is rigorous and that is equitably measured – informed by the needs and the desires of the communities we serve. We need to make that data accessible, again, to those who are in the best position to analyze it and to use it to inform their work. We need to provide that evidence to others, being very transparent about what we learn, including policymakers in the countries in which we work – so they can benefit from all of the learning and knowledge that is out there. And maybe that’s the most important point I can make about the value of evidence.

Our mantra at USAID is that we want to deliver progress beyond our programs. We’re never going to have enough resources to keep up with the spiraling development needs that we are confronting. Our progress has to be conceived of as living outside the four corners of our programs and the impacts we seek there. We have to be catalytic. We have to bring in the private sector. We have to bring in – and work more effectively with – multilateral development organizations.

But this is a classic way in which we have to think about progress beyond our programs, because how can we make an impact beyond our budget dollars and the activities we design and implement? A surefire way we can extend our impact beyond the lifetime of any particular program – and make the impact far more sustainable – is helping the development community and policymakers in other countries understand what has worked, what hasn’t, and why.

What Michael discovered about deworming has gone on to benefit hundreds of millions of children around the world, because the evidence of its primary impacts and its secondary impacts was shared globally. Or consider why we deliver cash assistance or vouchers in humanitarian emergencies – and you can see the trend lines there; they’re going up, up, up – and not just us, but the WFP and other international relief organizations as well. Why has that trend taken hold? Because we had access to the evidence of the effectiveness of cash interventions, and the effects that they could have on local markets, on individual dignity, and a whole series of collateral effects that had not even been anticipated.

So, this is one of many reasons – it is so crucial – that we understand the impact generated by our programs. What we learn may end up becoming far more valuable than what we spent. It may lead to impact far more scalable than any particular intervention we undertake. So, by arming ourselves with evidence and sharing that evidence with the world, we won’t just feel good – we will do good. We will do more good, helping others do good as well.

Thank you so much.
