Search on EES

Authors: Lea Corsetti, Linh Hoang, Winne Chu, Francesco Bolzonella, Kemi Ayanda and Muhammad Waseem


I want a world where evidence counts | Juha Uitto

In a thought-provoking article, renowned evaluator Juha Uitto reflects on the current state of the world, where emotions and identity politics have overshadowed rational, evidence-based decision-making. Uitto envisions a world where intelligent individuals from diverse backgrounds engage in debates using scientific and evaluative evidence to inform policies and decisions.

Uitto argues that the reduction of complex human identities to simplistic categories based on race and gender is intellectually lazy and insulting. He expresses concern about the politicisation of issues that should be evidence-based, such as the origins of the Covid-19 pandemic, and the cynicism bred by politicians’ actions.

Drawing from his extensive experience in the field of evaluation, Uitto advocates for placing evidence at the forefront of decision-making. He emphasises the importance of using diverse methodologies, including quantitative and qualitative approaches, and engaging claimholders and beneficiaries in evaluating policies and their impact on different groups. Uitto highlights Michael Quinn Patton’s concept of “bricolage,” which involves selecting the most appropriate methods for each question from a diverse set of approaches.

Uitto concludes by expressing his disappointment with the current state of affairs, where trust in science and expertise is at an all-time low, and people are unwilling to hear opinions that contradict their beliefs. As an expert in evaluation, his warning about the lack of reasoned, evidence-based debate and its impact on societal progress is particularly poignant.

This article serves as a timely reminder for evaluators and decision-makers alike to prioritise evidence-based approaches and foster an environment where rational discourse and diverse perspectives are valued. Uitto’s call for a more evidence-based world is a message that resonates strongly within the evaluation community and beyond.


Artificial Intelligence Applications for Social Science Research | Mississippi State University 

A team from the Social Science Research Center at Mississippi State University developed a database of 250 Artificial Intelligence (AI) applications useful for social science research. To be included, an AI tool had to support: 1) literature reviews, summaries, or writing; 2) data collection, analysis, or visualisation; or 3) research dissemination. The database provides a name, description, and links for each AI tool that was current at the time of publication on September 29, 2023, along with supporting links when a tool was found through other databases. To help users evaluate each tool's potential usefulness, the team also documented costs, log-in requirements, and whether plug-ins or browser extensions are available.

The database also notes when an AI tool is suited to text-based data, such as social media content, and thus offers a snapshot of recently published tools for applying AI to social science research. In total, it includes 132 AI tools that may be useful for literature reviews or writing; 146 for data collection, analysis, or visualisation; and 108 for dissemination efforts. Of the tools in the database, 170 can be used for general research purposes, 18 are specific to social media data analysis, and 62 can be applied to both.


United Nations Behavioral Science Week 2024

UN Behavioral Science Week 2024 highlighted the powerful role of behavioural science in driving meaningful, cost-effective changes across various sectors. Experts discussed integrating behavioural insights into organisational practices, policy implementation, and evaluation processes. Two sessions stood out for their relevance to evaluation.

In the session Cultivating a Data-Driven Culture, the UN Secretary-General’s office introduced the UN 2.0 strategy, which focuses on integrating data, digital tools, and behavioural science to boost organisational efficiency. The UN Evaluation Group showcased an initiative that used behavioural nudges to enhance the effectiveness of evaluation recommendations. Their efforts led to quicker and more effective implementation of these recommendations, showing how minor behavioural changes can result in significant improvements. The session highlighted the importance of prioritising data initiatives, simplifying access, and ensuring relevance and engagement.

In the session Addressing Policy & Programming Challenges, Professor Katie Milkman from the Wharton School explored the impact of behavioural science on policy and programming. Milkman highlighted two main tools: economic insights (such as incentives and fines) and behavioural insights (which address psychological barriers). She introduced the concept of megastudies: large-scale experiments that test multiple behavioural interventions simultaneously. One notable study at Walmart demonstrated that the message “a flu shot is waiting for you” increased vaccination rates by 20%. These types of interventions are highly cost-effective compared to traditional methods.

UN Behavioral Science Week 2024 clearly showed the transformative potential of behavioural science in various sectors. For organisations, integrating behavioural insights can lead to more accurate data collection, better stakeholder engagement, and more effective implementation of recommendations. This is particularly important for the field of evaluation, where behavioural insights can enhance the accuracy and impact of assessments.


USAID Agency Learning and Evidence Month 2024 | USAID

Last month, the United States Agency for International Development (USAID), a pioneer in global development and humanitarian interventions, launched its second annual Agency Learning and Evidence Month. Held in May 2024, the event demonstrated USAID’s commitment to enhancing the impact and sustainability of international development efforts. Themed “What Works and Where to Find It”, the month-long series featured over 30 virtual sessions and attracted over 150 attendees per session, creating a forum for expert practitioners and researchers from around the globe to exchange evidence-based approaches and resources that drive global development priorities. Topics ranged from a new emphasis on cost-effectiveness evidence, as in the Improved Activity Cost-Effectiveness (ImpAct) Review on Women’s Agricultural Income, to the design of cross-sectoral impact evaluations tailored to systemic challenges, as in USAID’s Health, Ecosystems and Agriculture for Resilient, Thriving Societies (HEARTH) program, to combating misinformation, as illustrated in the misinformation intervention database and infographic summary.

The 2024 USAID Agency Learning and Evidence Month provided a comprehensive platform to explore innovative solutions and best practices that could enhance the operational effectiveness of development organisations. These discussions focused on how organisations can not only sustain but also lead impactful projects across various domains, including climate change, social resilience, and anti-corruption.

Central to this year’s event was the presentation of USAID’s highest-level learning agenda, the 2022-2026 Agency Learning Agenda. This agenda sets the standard for monitoring, evaluation, and learning across USAID’s programmatic work, aiming to foster evidence-based decision-making. Aligned with Department of State policy priorities, the agenda is intricately linked with over 40 Washington-led sectoral learning agendas and is built around nine pivotal learning questions. These questions address critical issues such as how to streamline processes to meet long-term needs efficiently and how to respond swiftly to unforeseen shifts in context.

Given the vast breadth of resources on evaluation and learning, USAID’s Learning Lab platform is a great way to stay informed about evolving research and applications in global development and evaluation, for example through the community updates of their “Month and Learn”.


Becoming aware of contradictory demands on evaluation systems | Better Evaluation

Evaluation systems often face seemingly contradictory demands. Different stakeholders, such as implementers and donors, hold different expectations of evaluation, creating competing demands on evaluation systems such as learning versus accountability. There are different ways to approach these competing demands, with differing implications; one of them is to frame them as paradoxes. Drawing on the experience of 2SCALE, a multi-stakeholder programme for inclusive agribusiness in Africa, five such paradoxes have been identified. They aim to provide a language for identifying, discussing and acting upon competing demands, and the case study offers insights into how to recognise these demands and accommodate them within a single M&E system. Two caveats apply: first, the paradox approach is a temporary solution that gives rise to new paradoxes and challenges; there is no ultimate resolution of competing demands, so they require continued attention. Second, although the paradoxes may be universal, the ways of addressing them are highly context-specific.


The Challenges and Constraints of Evaluations | Centre for Effective Services

In their recent blog, McGrath and Slane from the Centre for Effective Services (CES) delve into core challenges of evaluations and propose practical solutions. Budget constraints often compel evaluations to strike a balance between affordability and thoroughness; the authors advocate streamlined designs, prioritisation of critical data, and integration of cost-effective technologies. To address time constraints, they recommend simplifying evaluation structures, clarifying timelines, leveraging existing data, and employing efficient data collection methods to meet stringent deadlines. Data limitations, such as incomplete or poor-quality data, prompt adaptive strategies like reconstructing baseline data and using mixed-methods approaches to bolster data reliability. Political and organisational pressures underscore the importance of effective stakeholder management, transparent communication, and maintaining evaluative independence. Ethical considerations are paramount, with participant safety ensured through practices like informed consent and confidentiality protections. These strategies are exemplified in practical applications across various evaluations, demonstrating their efficacy in settings like community healthcare networks and quality development services.