What we’re reading in March: Remembering the Learning part of MEL, Machine Learning in GIS and Evaluating Family Planning in Egypt
By Greg McGann, EES Blog Editor
Few organisations in the international development sphere attract as much interest and scrutiny as the Bill & Melinda Gates Foundation (BMGF). Those looking for a revealing exposé won’t find it in this blog. It does, however, provide useful insight into the “culture” of MEL in a range of organisations. The blog is framed as an interview with the independent consultants assessing the BMGF’s donor contribution and resource mobilisation work at the Global Fund. Their strategy consisted of equipping six organisations with the practices and systems to evaluate their own work. Key recommendations include carving out time for reflection and discussion across teams, recognising the importance of MEL “culture” above and beyond technical responsibilities, and working at “the speed of trust.”
Institutions have, in recent years, started to provide more consistent support to the learning component of Monitoring and Evaluation. This blog provides some much-needed insight into the effectiveness of that learning. It deals in particular with the quantitative skills that are increasingly prized by M&E recruiters but that may not translate into the skills needed in the field. The experiment it describes finds that those trained in quantitative methods come to value those methods more than their untrained peers do, without losing sight of the value of more traditional qualitative analysis. Although these conclusions are unlikely to surprise anyone, another robust template for the evaluation of learning is a welcome development and might spur the regularisation of such efforts across the sector.
The M&E profession is not unfamiliar with promises of new time-saving or insight-generating software. However, the prophesied suite of accessible and transparent tools for automation often seems as remote as ever. For a sector rooted in strict best practice, the sheer diversity of software can often seem to be a barrier to adoption. Alas, this blog adds to that range. On the upside, its linkage with Geographic Information Systems (GIS) means that it offers substantial value to many practitioners. GIS has become an integral part of many major evaluation efforts, especially in the growing field of environmental evaluation. Placing some Machine Learning technologies under the umbrella of GIS might offer some of the structure and accessibility practitioners need for more widespread adoption. The blog offers some insights into how these techniques have expanded the potential of existing monitoring methods, especially in the case of multidisciplinary datasets, and how they have been integrated into a successful programme.
This blog revisits one of the most vexing issues of development economics and its attendant evaluation: the role of population dynamics. It is based on a seminar in Egypt hosted by J-PAL and UNICEF, part of a series entitled Global Evidence for Egypt Spotlight. Egypt’s population stood at 20 million in 1950, but UN population projections expect it to reach more than 220 million by 2100. Questioning the wisdom of this rise, given the country’s abiding food insecurity, already-overwhelmed infrastructure and dense urbanisation, would seem obvious. However, such conclusions have been controversial, and both academia and international efforts have approached the issue with caution. The 1994 Cairo Conference on Population and Development ruled out the use of population targets and shifted institutional focus onto meeting women’s demand for reproductive health. The question for evaluators is therefore not whether the effects of population growth can be evaluated, but whether they should be evaluated at all.
So long as the correlation between supporting women’s choices and fertility decline persisted, there was little institutional motivation to question the Cairo Conference’s conclusions. However, Egypt is one of many countries in which that relationship has broken down: Egyptian women have had increasing access to reproductive health services but fertility actually rose 14% between 2006 and 2015. New research also suggests that the crucial metric of “unmet need for contraception” is widely misinterpreted and misapplied in program evaluation.
The blog is not entirely representative of this debate, which is unfortunately taking place largely outside the public sphere. It does not break with the prioritisation of reproductive health, but it stretches that framing in a way that may have been less acceptable a decade ago. Even limited explicit mention of the adverse effects of population growth is therefore a quiet revolution. The blog also details the way in which reproductive health services are innovating, evolving and improving in the face of formidable obstacles. Evaluators may now look forward to a future in which this fraught topic can be analysed more openly and incorporated into study design from the outset.