Disclaimer: the opinions expressed are those of the author and do not necessarily reflect those of the European Evaluation Society

Principles for a healthy relationship between evaluators and experts – Eval Forward

The EES blog and its monthly roundups have frequently featured pieces on the relationships between evaluators and key stakeholders. In this blog, FAO's Ibtissem Jouini unpicks the often-forgotten partnership between evaluators and experts. It is an understandably underexplored relationship, given how often the two roles are rolled into one: traditionally, evaluators were domain experts who trained in evaluation at a later career stage. However, as demand for evaluation has outstripped supply and the prerequisite skill set has grown too large for simple training, M&E activities are increasingly undertaken by evaluators without a background in the subject matter. As the evaluation profession matures and its body of best practice and research grows, this problem is only likely to worsen. The author suggests, as a partial solution, cultivating "accidental evaluators": experts empowered by clear and concise M&E documentation to take on simple but important tasks, supporting and informing a more specialised evaluation team leader.

Evaluation as a Pathway to Transformation: Lessons from Sustainable Development | Scott G. Chaplowe, Adam Hejnowicz & Marlene Laeubli Loud

In this chapter of the newly published Palgrave Handbook of Learning for Transformation, Chaplowe, Hejnowicz & Laeubli Loud examine the role that evaluation can play in transformational learning and change (TLC) as applied to sustainable development. The chapter is built around a single question: "How can evaluation as a profession inform and accelerate transformation?" Evaluation straddles theory and practice, involving processes that determine the worth of interventions seeking change, which makes it a natural gateway to TLC. The authors set the scene with a quick overview of evaluation's past and its role in achieving the SDGs. They also offer an interesting perspective on the influence of complexity and systems thinking in evaluation, including a reflection on the potholes and bridges these considerations present for evaluation's transformational potential. As the authors remind us, development evaluation is prone to "Evidence Mania Approach(es)", or Obsessive Measurement Disorder (OMD), because the development industry's fixation on producing evidence-based data can undermine the very interventions it is meant to support. Consequently, asking whether the right thing is being done in the first place becomes secondary to checking whether something is being done right. For this reason the authors call for more adaptive intervention design in evaluation (M as E), whose flexible, course-correcting nature allows a more receptive and nimble approach to change than the episodic approaches taken so far. The chapter concludes by reflecting that evaluation's potential to contribute to TLC depends largely on its ability to transform from within, spurred by some promising signs of transformative evaluation moving beyond the pothole-ridden road of conventional practice.

What does evaluation say about the achievement of the SDGs? – Oscar A. Garcia, UNDP 

Entering the 2020s, much energy was spent preparing institutions, their professionals and the international community at large for a concerted, decade-long push to achieve the Sustainable Development Goals (SDGs) by 2030 as envisioned. Much of that rhetoric, planning and literature now seems woefully unsuited to the social, economic and fiscal situation the world finds itself in. This blog from Oscar Garcia, Director of UNDP's Independent Evaluation Office (IEO), provides a very welcome overview of the changed circumstances and of how these can be adaptively incorporated into a new plan for achieving the SDGs. This turning point in the international development agenda offers a key opportunity for the evaluation community to provide policymakers with the evidence needed to make decisions looking past 2030. The author emphasises that the lack of evaluation of the Millennium Development Goals (MDGs) in the transition to the SDGs was a mistake that must not be repeated this time around. To ensure learning from the SDGs going forward, the IEO has joined a coalition of 17 UN agencies to synthesise evaluative evidence of SDG achievements and impact, organised around five pillars: people, planet, prosperity, peace and partnership. With needs increasing just as resources tighten, the multiple, overlapping crises the world currently faces are squeezing the development space and compromising solidarity and humanitarian sentiment. Garcia argues that doing what is necessary, which means doing more with less and often in less time, will require a determined push into innovation and technology, with tougher decisions guided by rigorous standards of evidence. The tools and experience of evaluation will play a critical role here: data collection, a pillar of M&E, is also an essential component of frontier technologies such as machine learning, and UNDP's IEO has taken a lead in this area with its Artificial Intelligence for Development Analytics (AIDA) platform.

Exploring the impact of research: why citations are not enough

Research impact evaluations are among the most in-demand M&E projects, and yet, as this blog from practitioners Alix Sara and Nicola Giordano makes clear, the methodology is often inadequate. The impact of research is frequently too nebulous and long-term for conventional evaluation horizons, so assessments fall back on primitive metrics such as download counts. In recent years, citation analysis has emerged as a more holistic and quantitative alternative, with network science supporting new insights and more precise measures of outcome. The authors argue that citation analysis alone is no longer sufficient and seek to build on its achievements with further contextualisation. They describe a 'chain and ripple effect' in which a much broader impact became discernible through longer-term engagement with their evaluation participants. As the authors admit, their work should serve as a starting point for further inquiry, but it does provide a helpful indication of where impact might be located. In this case, as in many others, a central obstacle is the limited time horizon and remit of evaluations, with research in particular operating on extended timeframes.
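For readers less familiar with what citation analysis actually computes, the minimal sketch below illustrates the kind of network measures involved. It is purely illustrative and not drawn from the blog: the paper names and citation links are invented, and it assumes the Python networkx library is available.

```python
# Illustrative sketch only: a toy citation-network analysis of the kind the
# blog argues is no longer sufficient on its own. All papers and citation
# links below are hypothetical; requires networkx (pip install networkx).
import networkx as nx

# A directed edge (A, B) means "paper A cites paper B".
citations = [
    ("study_2021", "method_paper"),
    ("study_2022", "method_paper"),
    ("review_2023", "method_paper"),
    ("review_2023", "study_2021"),
    ("policy_brief", "review_2023"),
]

G = nx.DiGraph(citations)

# Raw citation counts: how many papers cite each node.
counts = dict(G.in_degree())

# PageRank weights a citation by the influence of the citing paper,
# a common network-science refinement of raw counts.
rank = nx.pagerank(G, alpha=0.85)

for paper in G.nodes:
    print(f"{paper}: {counts[paper]} citations, PageRank {rank[paper]:.3f}")

# Note what remains invisible here: uptake by practitioners, policy change,
# and the longer-term "chain and ripple" effects the authors trace through
# follow-up engagement with evaluation participants.
```

The design point the sketch makes concrete is the blog's own: both metrics only measure influence within the literature itself, which is precisely why the authors push for contextualisation beyond citations.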

The skeptical turn in evaluation (and what to do with it) and EES 2022 — It's not (only) you, it's me (too)

The 14th EES Biennial Conference was held in Copenhagen in June, and this blog gives Olivier Cossée's impressions of its final keynote, featuring the World Bank's Estelle Raimondo and Peter Dahler-Larsen of the University of Copenhagen. One of the primary aims of the in-person conference was to tackle issues that individual teams cannot easily perceive on their own. The keynote dealt with the creeping "bureaucratization of evaluation" and other incremental problems that may not be clearly visible to practitioners but become readily apparent in any general survey of the profession. The author coins a useful term, the "performative use" of evaluation, to denote the undue legitimacy gained from installing evaluation systems that are only reluctantly put to work. He ably captures the sense of ambivalence in the hall, and indeed throughout the conference, at having the challenges of evaluation discussed so candidly.

That sentiment can also be found in Thomas Delahais' account of the conference. He identifies its key theme, across sessions and speakers, as a reckoning with the interlocking crises facing societies around the world, and the crucial responsibility of evaluators to remain self-reflective, unassuming and guided by evidence rather than dogma. He also picks up on a new undercurrent of scrutiny surrounding the idea of independence, although on balance the idea likely still has more defenders than detractors in the profession at large. Both blogs take up the searching questions posed to evaluators at the EES Conference and explicitly aim to build on them going forward.

A changing world needs changing methods: six steps for evaluations to assess “what will work” – International Institute for Environment and Development

Following a popular participatory session at the EES Conference, the International Institute for Environment and Development's Emilie Beauchamp and Stefano D'Errico explain how consideration of climate change is reshaping M&E practice. According to the authors, this has particular implications for evidence collection and for the use of existing data and methods, which may no longer be fit for purpose. They argue that, in many cases, existing methodologies prioritise retrospective reporting over learning, are myopic in their stakeholder feedback, and cannot capture present or future climate risks. In response, they have devised a concise set of steps evaluators can adopt to update their work for the current emergency. These include reassessing the OECD-DAC framework on which so many rely, proactively making connections to climate risks, adjusting data collection and management, and guaranteeing the inclusion of learning strategies as an outcome of the evaluation. The IIED is now promoting these suggestions, which were well received at the EES Conference, and we look forward to seeing their implementation in the years to come.

Transformational shift to a more futures-focused evaluation – Sitra

Sitra, the "Finnish Innovation Fund", has built an impressive reputation for genuinely inventive and unusual thinking in the public sector space, so its presence at the EES Conference and its impressions of it are very welcome. In this blog, the team summarise their session, a collaboration with Salesforce and the Association of Futurists, on infusing evaluation with foresight. "Futures-thinking" may well become a new addition to the evaluation lexicon if, as they hope, foresight tools and methods are mainstreamed. As well as providing examples of their use in M&E, the session capitalised on a sense among practitioners that the magnitude of recent, unforeseen events necessitates a new framework privileging prediction, adaptation and communication.