EES’ first webinar in the series Emerging Data Landscapes in M&E, Geospatial, location and big data: Where have we been and where can we go?, began with the session Evaluation and emerging data: what are we learning from early applications?, presented by Estelle Raimondo, Senior Evaluation Officer at the World Bank Independent Evaluation Group.

Session Summary:

COVID-19 has revealed some of the long-standing issues with our evaluation practice and can act as an accelerator of change. The presentation highlighted how COVID-19 can prompt us to explore the emerging data landscape to overcome the immediate barriers we face in conducting evaluations, and how these emerging data can help us more profoundly renew how we envision evaluation in the future. From the early adoptions of emerging data at the World Bank Independent Evaluation Group, we derived a handful of practical lessons and reflected on the institutional implications of integrating emerging data more systematically into our practice.

The Session’s Key Messages:

  • The pandemic has shed light on the ethical, conceptual, and methodological challenges we face when weighing the risks and rewards of undertaking evaluation; it has challenged the assumptions of the interventions that we study; and it has exposed our over-reliance on standard “fly-by” evaluation approaches.
  • COVID-19 is also prompting us to depart from business as usual and explore the emerging data landscape to find solutions. At the Independent Evaluation Group, we have been experimenting with geospatial and location data in several evaluations: from using geospatial budgetary data in the Mexico Country Program Evaluation to using drone imagery to assess land use patterns in rural Niger (a brief illustrative sketch of that kind of analysis appears after this list). We are also experimenting with text analytics using Artificial Intelligence (AI) and have tapped into social media data to assess the Bank Group’s online convening on the SDGs.
  • We’ve learned four key lessons from these early adoptions: 
    1. Applicability: While “big data” could potentially support the evaluation of large portfolios at the global level, we have found that they are well-suited to going deep on a limited set of interventions.
    2. Data access: We have found that the main challenge was less the availability of data than the skills needed to access and use them.
    3. Skills and costs: Finding the right partnership and knowledge brokering between data scientists, knowledge experts, and evaluators is fundamental. 
    4. Mitigating biases: As with other data sources and methods, it is critical to diligently assess, mitigate, and report the biases of AI. Evaluators should keep abreast of the growing research on AI biases. They should also realize that emerging data can shed useful light on the biases of conventional data.
  • We also shared a couple of reflections on the institutional implications of a more systematic use of emerging data in evaluation, and the need to retain a healthy dose of skepticism:
    1. There is a serious risk that relying on emerging data, AI, and remote technologies will further enshrine the notion that evaluation is primarily an oversight tool that fuels top-down control of development interventions.
    2. There is also a risk that we do not take the ingrained biases of our standard evaluation approaches seriously enough, and that our use of emerging data exacerbates those biases. We need to ensure that AI is not used in evaluation as “algorithms of oppression,” as Safiya Noble has highlighted, and to recognize that “observing from space isn’t the same as observing from the field.”
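
A minimal, purely illustrative sketch of the kind of land-use analysis mentioned above, assuming two classified land-cover rasters derived from drone or satellite imagery; the file names, class codes, and libraries (rasterio, NumPy) are hypothetical choices, not the tooling used in the Niger evaluation:

    # Illustrative only: compare two classified land-cover rasters and report
    # how much of the area changed class between the two dates.
    # File names and the "no data" code (0) are hypothetical placeholders.
    import numpy as np
    import rasterio

    with rasterio.open("landcover_2015.tif") as before, rasterio.open("landcover_2020.tif") as after:
        lc_before = before.read(1)
        lc_after = after.read(1)

    # Ignore pixels flagged as "no data" in either year.
    valid = (lc_before != 0) & (lc_after != 0)
    changed = (lc_before != lc_after) & valid
    print(f"Share of valid pixels whose land-cover class changed: {changed.sum() / valid.sum():.1%}")

    # Simple change matrix: count pixels moving from one class to another.
    for c_from in np.unique(lc_before[valid]):
        for c_to in np.unique(lc_after[valid]):
            n = int(((lc_before == c_from) & (lc_after == c_to) & valid).sum())
            if n and c_from != c_to:
                print(f"class {c_from} -> class {c_to}: {n} pixels")

A change matrix of this kind could then, for example, be compared across project and comparison areas before feeding into an evaluative judgement.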

Estelle’s Answers to Participants’ Questions: 

1. What big data analytics tools would you recommend for M&E? 

In an ongoing pilot, we have partnered with data scientists and Artificial Intelligence experts to experiment with AI Knowledge Graphs. This big data approach has potential because it shares some of its roots with “theories of change” and seeks to answer the question “why.”
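
As a rough, hypothetical illustration of why this approach resonates with theories of change: a knowledge graph stores entities and the typed relations between them, so causal chains can be traced as paths through the graph. The nodes and relations below are invented for the example and do not come from the Independent Evaluation Group pilot:

    # Illustrative only: a toy knowledge graph of an intervention logic,
    # queried for the causal pathways ("why" chains) that link an
    # intervention to an outcome of interest. All content is hypothetical.
    import networkx as nx

    kg = nx.DiGraph()
    kg.add_edge("cash transfer program", "household income", relation="increases")
    kg.add_edge("household income", "school enrollment", relation="enables")
    kg.add_edge("school enrollment", "learning outcomes", relation="contributes to")
    kg.add_edge("teacher training", "learning outcomes", relation="contributes to")

    # Each path reads like a causal chain in a theory of change.
    for path in nx.all_simple_paths(kg, "cash transfer program", "learning outcomes"):
        steps = [f"{a} --{kg.edges[a, b]['relation']}--> {b}" for a, b in zip(path, path[1:])]
        print("; ".join(steps))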

2. Do you see a role for Citizen Science? What has been the take-up?

In some ways, evaluation should be at the forefront of citizen science: it has both the mandate (promoting evidence-based policy making for improved societal well-being) and the tools (a wide range of participatory data collection approaches) to facilitate the involvement of citizens in the inquiry. However, in institutionalized evaluation systems, this agenda has not yet been particularly prominent.