13 December 2024

13:00 – 17:00 CET

Facilitators: Linda Raftree and Zach Tilton

This interactive workshop builds on the EES’s Sparking Discussion on AI in Evaluation Practice that took place on November 4. We will begin with an engaging discussion of AI in our personal lives and our work, followed by a more in-depth overview of the GenAI in evaluation landscape, including examples of different use cases. Participants will then work in teams to develop shared principles for using GenAI in evaluation processes. Next, we’ll take a closer look at a few easily accessible GenAI tools and how they could be applied to evaluation tasks, practice prompt engineering, and build a custom GPT. Throughout the workshop, we’ll create space for participants to ask questions and share how they are using different kinds of AI tools and approaches in their own work.

Participants will leave the session with a better understanding of AI and GenAI overall, hands-on experience with some GenAI tools for research and evaluation, and an overarching set of questions and orientations for assessing whether and when to use GenAI in their own work.

Rough outline:

  •  Intro to AI, GenAI, and AI ethics
  •  Creating shared principles for the use of GenAI
  •  Hands-on use of GenAI tools
  •  10-minute break
  •  Effective prompt engineering
  •  Building a custom GPT
  •  Round robin of questions and sharing

Linda Raftree is an independent consultant focused on the ethical use of technology and digital data in development, humanitarian, and human rights programs. She is currently working with the Asia Foundation to support organization-wide data responsibility policy and practice. She is also supporting UNHCR to explore the potential and risks of digital mental health support for adolescents in forced displacement; Amnesty International to research children’s and young people’s digital rights and well-being; iMedia to understand the use and impact of digital platforms on social and behavior change communications; and the Mastercard Foundation to conduct a landscape study of partners, initiatives, and solutions for technology-enabled impact measurement on the African continent.

Linda is the founder of MERL Tech, an initiative working at the intersection of technology and monitoring, evaluation, research and learning (MERL). In the past, she supported the WHO, CARE, Humanity United, Civic Hall, Farm Radio, Girl Effect, Catholic Relief Services, Girls Who Code, and USAID to develop responsible data principles, policies, guidelines, and practices. She also advised The Rockefeller Foundation on innovation and ICTs in evaluation. Prior to becoming an independent consultant, Linda worked in various roles at Plan International, including child rights and child protection, youth engagement, digital development, and transparency & governance. Linda is a Certified Information Privacy Professional (CIPP) and Certified Information Privacy Manager (CIPM).

Zach Tilton is an Evaluation Specialist with The MERL Tech Initiative. He specializes in technology-enabled evaluation, meta-evaluation, research on evaluation, and artificial intelligence (AI)-enabled evaluation capacity development for practitioners and organizations integrating AI into their evaluation work and functions. He is Co-chair of the Sandbox working group in The MERL Tech Initiative’s Natural Language Processing Community of Practice, an outgoing EvalYouth Management Group member, and an editorial board member of the American Evaluation Association publication New Directions for Evaluation. Zach’s field experience spans North and West Africa, Southeast Asia, and the Pacific, with more than two years of international experience in rural communities. Mr. Tilton is a Rotary Peace Fellow with an undergraduate degree in Peacebuilding from BYU-Hawaii and a Master’s in Peace Studies from Bradford University. He is a PhD candidate in the Interdisciplinary PhD in Evaluation program at Western Michigan University, where his doctoral research focuses on AI-enabled meta-evaluation.

Fees:
€180 EES Member
€230 Non-Member

Why not sign up to become an EES member and benefit from member rates on this and future EES events?
Participants experiencing financial hardship are encouraged to contact us for special rates. We wish to be as inclusive as possible.

Register Here

EES thanks ICF for its sponsorship.