
Date/Time
Date(s) - 06/09/2021 - 10/09/2021
All Day

Recordings of ALL 90 CONFERENCE SESSIONS are now available!

Conference attendees can access the recordings by logging in to the conference platform, Whova. All recordings can then be found by clicking on a specific session in the Agenda.

You can watch these sessions until March 2022!


For more videos and interviews with the speakers please visit this webpage.

We hope all attendees enjoyed EES’ Online Conference, and would like to thank the following groups for helping to make the Conference a success:

First, our sponsors for making this conference possible, and for allowing us to extend bursaries to 60 people working in the Global South.

Second, all of our speakers, moderators, and panelists, who helped to bring the Conference’s rich programme to life through engaging presentations and debates.

Third, our PDW facilitators and organizers, whose hard work led to six extremely engaging and successful small-group workshops.

Fourth, our volunteers who helped to ensure the PDWs and Conference ran smoothly. Learn more about our volunteers here.

And lastly, our EES Board Members who helped to chair sessions and provide support throughout the Conference!

----------
Conference Themes:

Theme 1: The Anthropocene and its complex problems: The role of Evaluation

In this theme, presenters will address the bigger conceptual questions, moving away from a purely philosophical or overly pessimistic view and instead focusing on constructive perspectives and solutions.

Questions this theme’s presentations will address include:

  • What makes evaluation valuable and how can evaluation maintain legitimacy in a world characterized by new challenges (complex problems)?
  • How can evaluation provide credible and insightful answers to global issues?
  • What can evaluation contribute to the debate on the Anthropocene?
  • How can evaluation support positive and forward-looking change?
  • What is the role of evaluators in solving complex problems?
  • What role can evaluators play in supporting a return to an evidence-based or evidence-supported environment?

Theme 2: Adapting the toolbox: Methodological Challenges

This theme allows presenters to focus on practical tools (sharing and discussing them) and on the practicalities of using them: can they be used, do they deliver the expected data, and what are the ethical concerns? Presenters will discuss interesting ways of using well-known tools to address uncertainty, or suggest new tools; draw the link between tools and challenges; identify the main problems of using specific tools; and examine how these can be resolved.

Questions this theme’s presentations will address include:

  • What new ways of using tools and approaches (e.g. new ways of collecting, managing and understanding data) exist which can help us respond to the complex challenges we face?
  • How can new available data (e.g. big data; contextual/framing data; supporting qualitative data, etc.) be used?
  • What methodological challenges do evaluators face and can expect to face which may not have been there before?
  • What are the practical and ethical implications of new tools for data collection, analysis/interpretation, and storage?
  • What does the data that can now be collected allow evaluations to say/do?

Theme 3: Propelling and provoking the agenda: The role and responsibility of evaluators

This theme is mainly about approaches to evaluation rather than practical tools for conducting evaluations. Presenters will focus on a single approach (or a limited number of approaches), highlight its shortcomings and advantages, and make suggestions or explore options that lead to practical solutions. This theme is about the responsibility of evaluators, and will focus on real-world discussions.

Questions this theme’s presentations will address include:

  • Are the approaches and criteria used in evaluation sufficient? Are they up to date? Are they accurate and comprehensive?
  • If they are not sufficient, current, comprehensive, and/or accurate, what is the responsibility of evaluators to contribute to their further development, refinement, and/or replacement?
  • Does the current agenda effectively respond to the challenges that evaluators/evaluations face?
  • Do evaluators have a responsibility to re-frame the way evaluations are conducted in view of the uncertain world we face? What space do evaluators have to do so? Which resources (including organized pressure from evaluation associations) can evaluation muster to act vis-à-vis evaluation commissioners, regulators, and users? Must evaluation move towards a post-normal approach, and if so, what does this mean? Which other actors are there, and how can evaluators interact with them in exploring these questions?

Theme 4: Greasing the wheels of evaluation: the role of evaluators, evaluation commissioners and evaluation funders (donors) in ensuring that knowledge changes practice

This theme is about the relationships between evaluators and other key actors. Presenters will focus on specific aspects of the different roles and how these can better respond to current challenges (an uncertain world). Good discussion will ensue from a focus on successes (evaluations that were widely used and impactful) and from comparison with experiences where evaluations were not useful or impactful. Presenters will explore what made the difference and what factors can support the improved use of evaluations in future.

Questions this theme’s presentations will address include:

  • How can evaluations become an integral part of responding to the challenges of an uncertain world?
  • What role do evaluators, evaluation commissioners, and evaluation users have in ensuring that criteria, approaches, and tools are suited to the questions being answered?
  • Who has what responsibility? Who decides how and what data should be collected and how data should be analysed? How much of the process should be a negotiation between parties? Which other actors should be involved and how?
  • What role do budget and time constraints play in determining the robustness of results?
  • How can we ensure that Monitoring, Evaluation and Learning (MEL) plays a complementary role rather than taking over? Where is the balance?
  • What about cross-cutting issues (criteria)? What does their inclusion in evaluation mean for evaluators (how does it affect evaluation questions, tools, other criteria, and approaches)? What does the inclusion of cross-cutting issues imply for donors and implementers? How does evaluation address the inclusion of these issues at a programmatic level? What are the competing interests, and how can they be addressed?