Choosing the Right Evaluation Approach for Complex Programs

  • Jun 1, 2024
  • 2 min read

Updated: Feb 18

Recommended reading


As programs and policies increasingly operate in complex, dynamic environments, evaluation practice has struggled to keep pace. While new methodologies have emerged, debates about evaluation approaches often remain anchored in the traditional positivist–constructivist divide. In this academic article, Dr. Yuval Ofek offers a much-needed reframing of this discussion.


The article argues that the absence of a clear conceptual framework for selecting evaluation approaches has led to their frequent misuse in practice. Rather than asking whether an evaluation should be positivist or constructivist, the paper proposes shifting the focus to levels of nonlinearity and complexity within the intervention being evaluated.


From Theory to Practice

Drawing on three in-depth case studies, the article demonstrates how evaluation approaches can either support or undermine learning and accountability, depending on how well they are matched to program dynamics. The analysis is grounded in extensive empirical material, including interviews with program managers, evaluation commissioners, heads of evaluation units, and evaluators across multiple countries, complemented by a quantitative survey.


Based on this evidence, the paper expands classic evaluation theory to reflect contemporary managerial demands and the realities of complex interventions. It introduces a practical, operational tool for categorizing evaluations and aligning evaluation approaches with both program characteristics and evaluation objectives.


Why We Recommend This Article

The article’s core contribution lies in its actionable insight: evaluation approaches that are misaligned with program nonlinearity can distort findings and frustrate commissioners, while approaches tailored to complexity can significantly improve both credibility and usefulness. By separating nonlinear dynamics from structural complexity, the paper enables a more nuanced matching of evaluation designs to real-world conditions.


For evaluators, evaluation managers, and funders working in complex settings, this article provides a robust conceptual lens and concrete guidance for making more informed methodological choices. It is an essential read for anyone seeking more fit-for-purpose evaluation practice.


Ofek, Y. (2016). Matching evaluation approaches to levels of complexity. Evaluation Review, 40(1), 61–84.


Read the paper here.
