Evaluation

USAID has made an ambitious commitment to rigorous, high-quality program evaluation – the systematic collection and analysis of information to improve effectiveness and inform decisions about current and future programming. USAID's Evaluation Policy demonstrates its commitment to the objectives laid out in the Foreign Aid Transparency and Accountability Act (signed into law July 2016). It also works in concert with existing Agency policies, strategies, and operational guidance, including those regarding project design, evaluation-related competencies of staff, performance monitoring, knowledge management, and research management.

Since the release of the Evaluation Policy, USAID has:

  • Increased the number of evaluations commissioned each year to an average of about 200 per year, totaling more than 1,000 evaluations since 2011.
  • Provided formal training in evaluation to more than 2,600 USAID staff.
  • Improved the quality of evaluations by planning them in advance, using the best methods to answer a focused set of questions, and encouraging that evaluations be conducted by external experts.
  • Reported transparently on evaluation findings, particularly by sharing evaluation reports online at the Development Experience Clearinghouse.
  • Used evaluation findings to inform project design, make mid-course corrections, and increase knowledge and learning in specific sectors.
  • Strengthened program monitoring so that evaluations can focus on a more complex set of questions beyond whether a project is meeting its targets.

For a summary of USAID evaluations, including a regional and sector breakdown, please click on the appropriate year below.

Evaluation Policy Implementation

The reports below summarize evaluation requirements and practices at USAID before and after the Evaluation Policy, major accomplishments during the first five years of implementation, and priority activities to support policy implementation moving forward.


Evaluation Quality and Utilization

Relevant and high-quality evaluation is an important tool to track the progress, results and effectiveness of international development programs. Evaluation can help explain why programs are succeeding or failing, and can provide recommendations for how best to adapt to improve performance. Along with monitoring, evaluation contributes evidence to improve strategic planning, project design and resource decisions; together, monitoring and evaluation form part of a greater body of knowledge and learning.

To better understand whether these and other efforts are working, the Bureau for Policy, Planning and Learning commissioned independent studies to examine evaluation quality (2013) and evaluation utilization (2016) at USAID. These two studies found that both the quality and the use of evaluations have increased. The recommendations in these studies will inform ongoing evaluation improvement efforts.


Learning

USAID integrates a strong emphasis on strategic collaboration, continuous learning and adaptive management – Collaborating, Learning and Adapting (CLA) – into its work. CLA can be instrumental in helping to create the conditions for development success by:

  • facilitating collaboration internally and with external stakeholders;
  • feeding new learning, innovations and performance information back into the strategy to inform funding allocations, program design and project management;
  • translating new learning, as well as information about changing conditions, into iterative strategic and programmatic adjustments; and
  • catalyzing collaborative learning, systemic analysis and problem solving among developing country citizens and institutions to develop and implement programs that are more effective at achieving results.

CLA includes systematically generating and sharing knowledge about how best to achieve development outcomes through well-designed and executed projects and using that knowledge to inform decisions, adapt ongoing projects and improve the design of future projects.

USAID explores and implements approaches to intentionally embed learning throughout its programming. For more information about how USAID approaches learning, please visit USAID's Learning Lab, the Agency’s platform for generating and sharing information, tools and resources on how development practitioners can work together to integrate learning throughout USAID’s Program Cycle. There, USAID staff and partners jointly create, share, refine and apply practical approaches that ground programs in evidence and adapt quickly to new learning and changing contexts, thereby maximizing development outcomes.

For examples of how USAID has used evaluations to learn from and inform its work, please see the following case studies:


Monitoring and Evaluation Resources

USAID’s Monitoring, Evaluation and CLA toolkits complement USAID’s Program Cycle Operational Policy (ADS 201). These regularly updated toolkits curate the latest Program Cycle guidance, tools and templates for planning, managing, using and learning from monitoring and evaluation. They also include resources for USAID staff and partners on how to be more strategic and intentional when collaborating, learning and adapting throughout the Program Cycle.

A complete list of USAID’s publicly available evaluation reports is available on the Development Experience Clearinghouse.

Last updated: November 03, 2017