Search on EES

By Gregory McGann and Lea Corsetti

Education Evaluation Trends, from the Leader of Khulisa’s Education & Development Division

Education policy has been at the forefront of evaluation since the discipline’s inception and has been among the most disputed and consequential areas of development. The advent of COVID-19 has pushed education policy and its evaluation even further up the agenda. With entire years of primary, secondary and tertiary education lost, often disproportionately affecting the most vulnerable, evaluation has struggled to fully encompass the emergency programs deployed haphazardly since 2020. This blog, which offers a succinct summary of best practice in education evaluation, is therefore a welcome and timely addition. It highlights the evolving role of behavioural science in understanding change, and of Systems Thinking in analysing scaled interventions. Along with a conventional reiteration of the importance of suitable benchmarks, the blog also suggests a greater role for meta-analysis as a way of contextualising obstacles and achievements in education, although questions of generalisability will inevitably complicate that ambition.

Establishing effective research collaboration: Finding common ground between policymakers and academics | The Abdul Latif Jameel Poverty Action Lab 

No practicing evaluator is unfamiliar with the yawning gulf that frequently opens between their discipline and the modus operandi of policymakers. Indeed, the evaluation profession has often been caught between, on the one hand, adhering strictly to its mandate and spending its time and energy purely on deepening the science of evaluation and, on the other hand, accommodating policymakers more fully. With the former, evaluators risk being accused of operating in an echo chamber; the latter has often yielded severe disappointments and wasted opportunities. This blog opts for the path less travelled, in effect proposing a rigorous communications strategy for evaluation and the academia surrounding it. This would involve, in the authors’ estimation, building a respected ‘research identity’, sharing exemplary case studies, and ensuring that policy and academia are considered jointly at the funding stage.

When the financing stops: the World Bank, Chad, and shades of engagement | Independent Evaluation Group 

When evaluators are asked about the regrets attached to a particular intervention, lack of follow-up and long-term support emerges as a consistent theme. It remains all too common to find the beneficiaries of a successful program bemoaning the dramatic tail-off in support, contact and encouragement once the project timeline has expired. Impact evaluators are cognisant of the paradox that, while nearly every project imagines an impact stretching into the distant future, evaluations must be conducted on the basis of measurable, time-bound goals. There is no easy solution to this conceptual problem, but this blog goes some way to demonstrating the heavy cost of a sudden curtailment and proposes a compromise position. Focusing on Chad – by metrics of human development, conflict and food insecurity among the most disadvantaged countries in the world – the blog summarises a recent in-depth report on the World Bank’s engagement in the country. It finds that successful projects were generally hamstrung by a lack of broad and detailed analytics. More importantly, the blog argues convincingly that, even if direct funding is impossible, continuing analytical work means that, if and when funding can be resumed, the resulting intervention can be substantially more effective.

Balancing Biases in Evaluation

After extensive reading on bias, Thomas Aston has authored a blog asking whether it is time for the evaluation community to have an honest conversation on the subject. Bias is understood as a systematic error or deviation from the truth in results or inferences, arising when systematic flaws or limitations distort findings. The article notes that there is a large institutional focus on the systematic side of evaluation, but far less attention to the cognitive biases that affect small-n work. The blog explains three broad types of common bias: (1) selection biases, (2) respondent biases, and (3) evaluator biases. Small-n biases have been used as a reason to critique participatory methods in general, yet rejecting a method wholesale on those grounds seems unwarranted. Aston also reminds us that quantitative methods are not immune from bias or from problems of transparency. Participatory approaches solicit multiple perspectives, which tend to enhance triangulation. The author argues that it is not possible to truly eradicate bias from evaluation and research, but it is possible to balance different biases by using different methods. Care is needed in the choice of specific tools, rather than methods alone, when carrying out research. The author suggests that, while several critiques of methods and tools have merit, lack of integrity remains the bigger problem. We should account for biases seriously regardless of the methods used in an evaluation, but not at the cost of imposing a strict hierarchy of methods that reproduces our own anchoring assumptions.