It hardly needs restating that ethical structures have come to occupy a central place in evaluation. The Belmont Principles and their descendants have proven to be among the core motivating forces for M&E research. The early life of professional evaluation, defined at least partly by imposing a basic ethical standard on otherwise disparate activities, has long since ended; most major institutions now have an established framework and a commitment to regular retraining. The remaining responsibility is to demystify ethics as they evolve in a changing world. At present, it is not uncommon for sensitive issues to deter vital research and to deprive highly charged policy debates of the very evidence required to advance them satisfactorily. That is one reason why this blog, far more forensic in its detail than most on the topic, is so welcome. The authors describe a framework for conceptualising the standard of care, with an unusual sensitivity to practical conditions such as budgets, administrative capacity and political disagreement. Building, and in many cases rebuilding, confidence in ethical frameworks as a tool to use, rather than an imposition to be feared, is integral to M&E going forward. This blog delivers on that need, with a level of detail the subject deserves, although much more work remains before practitioners can proceed with full confidence.
As the clamour for rigorous, evidence-based evaluation grows ever louder, the introduction and mainstreaming of novel statistical techniques grows with it. In this blog, the Bayesian Confidence Updating technique is explained. Appropriately, its motivating case study is an evaluation commissioned by the UK Government’s development portfolio, whose chief economist was formerly Rachel Glennerster, one of several academics in the vanguard of new evidence-based techniques. The technique is rooted in Bayesian inference, which revises the probability assigned to a hypothesis each time new evidence about it emerges. This represents a qualitative shift from traditional methods of evaluating hypotheses. With strong examples of the applicability of such techniques in industry, there are clearly grounds for broad introduction in M&E. However, practitioners, even those with a background in statistics or computational social science, are often intimidated by an absence of understandable use-cases. This blog does the profession the service of a highly readable guide with real-world examples, peppered with admissions of lessons learnt. A willingness to teach not merely by the example of one’s own experience but by the example of one’s failures is a rare gift, and the author is to be commended for it in such a critical field.
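To make the underlying mechanics concrete, the repeated revision at the heart of Bayesian inference can be sketched in a few lines of Python. This is a minimal, generic illustration of Bayes' theorem, not the blog's own worked example: the hypothesis, the sequence of evidence, and the likelihood values are all invented for the sketch.

```python
def update(prior, p_evidence_if_true, p_evidence_if_false):
    """Apply Bayes' theorem once: return the posterior probability
    of a hypothesis after observing a single piece of evidence."""
    numerator = p_evidence_if_true * prior
    marginal = numerator + p_evidence_if_false * (1 - prior)
    return numerator / marginal

# Start from a neutral prior and fold in three hypothetical pieces
# of evidence, each described by an assumed pair of likelihoods:
# P(evidence | hypothesis true) and P(evidence | hypothesis false).
confidence = 0.5
evidence = [(0.8, 0.3), (0.7, 0.4), (0.9, 0.2)]
for p_true, p_false in evidence:
    confidence = update(confidence, p_true, p_false)
    print(round(confidence, 3))  # prints 0.727, then 0.824, then 0.955
```

The point of the sketch is the shape of the process rather than the numbers: confidence in a claim is never fixed at the design stage but is revised, piece by piece, as evaluation evidence accumulates.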
Despite their centrality in M&E practice, Theories of Change have often been criticised as arbitrary and generic. According to these critiques, a relatively limited set of “recipes” is frequently, and crudely, adapted to an astonishingly broad range of topics. When an evaluation takes place, the generic character of the ToC has traditionally been set aside in favour of its substantive contents. This blog refocuses attention on ToC reviews and the process of conducting them across an evaluation lifecycle. Robust ToC reviews are an indispensable tool at a time when projects must have an adaptive capacity: ToC review enables a systematic rather than ad hoc adaptive process, maintaining consistency when the natural consequence of adaptation is too often incoherence and miscommunication with stakeholders. As the blog argues, diverse stakeholder pressures, which may be radically different at the time of review than at the conception of the ToC, are a major obstacle to a sound review process. To complicate matters further, at the time of review the evaluator may only be able to piece together a “fragmented picture” of project reality. As these caveats make abundantly clear, ToC review is no place for dogmatism; this vital area of M&E must itself exemplify the flexibility it is intended to inspire in projects.
This roundup has covered research topics almost exclusively, but this professional-advice blog from the J-PAL team will be a rewarding read for both emerging and experienced M&E professionals. Although not specifically tailored to evaluation practitioners, its unusually well-structured advice is invaluable for those seeking less nebulous career guidance. While opportunities for practitioners to showcase their work proliferate, the very diversity of such platforms can often deter them, hindering the development of the public profile that practitioners increasingly require to gain employment. This blog lays out in detail the principles of a compelling public profile, with amusingly granular practical advice alongside links to exemplary profiles and in-depth examination of specific professional websites. While the blog is clearly aimed at economists, most of the lessons apply to M&E professionals, and evaluation practitioners often benefit from exposure in other fields of social science, particularly when the overlap of interests is so apparent.