
There is a particular disorientation that comes from leaving your own context, your city, your colleagues, your familiar debates, and walking into a room full of people doing the same work, just somewhere else entirely.

I experienced it twice in 2025. In May, at the Evaluation Conclave in Sri Lanka, organized by the Community of Evaluators South Asia. And again, in November, in Japan, at the 5th Asia Pacific Evaluation Association (APEA) Conference. Both gatherings circled the same questions I carry from Tbilisi, Georgia: How do we make evaluation genuinely useful – and genuinely better? How do we build it into a real profession? 

What I found was not reassurance. It was something harder and more useful.

The utilization question, still unanswered

In the opening plenary in Colombo, Michael Quinn Patton returned to the question central to his work: what does it actually take for evaluation findings to reach a decision that changes something? His utilization-focused framework has shaped how a generation of practitioners thinks about why this work matters. What struck me, hearing it in that room, was not the argument itself, familiar to anyone in the field, but the collective acknowledgment that followed: that we still haven’t solved it.

Michael Quinn Patton at the Evaluation Conclave 2025. Photo: Community of Evaluators South Asia.

That sounds obvious until you sit in a room full of experienced evaluators and realize it is still the most contested sentence in the field. The Conclave kept returning to this tension: what makes use more or less likely, and what it takes for findings to travel from an evaluator’s desk to a decision that actually changes something. The conversation wasn’t theoretical. It was about plumbing.

The utilization problem is stubborn because it is structural, and structures differ everywhere. A good evaluation system in Tokyo or Geneva cannot simply be transplanted to Tbilisi or Nairobi. But the framework for thinking about it travels just fine.

Participants at the Evaluation Conclave 2025.

Methodology as a way of thinking

Perhaps the utilization problem does not exist in isolation; part of the answer may also lie in how evaluations are designed in the first place – the methodological choices made long before any report lands on anyone’s desk. Both conferences made space to question not just which methods we use, but how we think about methodology itself.

In Colombo the conversation ran from AI applications to indigenous and feminist frameworks, in Tokyo from value for money approaches to EvalIndigenous and what emerging technologies might mean for evaluation quality. I brought my own practice into both, including the gap between what evaluation frameworks promise and what they produce, the confusion between monitoring and evaluation, the tension between compliance and genuine learning. What I had been carrying as personal professional experience turned out to be something much wider – which pointed me toward the question both conferences were also wrestling with: not just how individual evaluators work, but what the field has built, or might still need to build.

The profession question, which is really the existential one

In the closing session at the United Nations University, the APEA conference adopted the Tokyo Call for Action for the Institutionalization of Evaluation – its commitments shaped in large part by the findings of Prof. Reinhard Stockmann’s Evaluation Globe project: 50 country case studies that tell a consistent story. Where evaluation takes root, someone deliberately built the infrastructure. Where it doesn’t, the field remains fragmented and invisible. The Call commits to changing that: defining professional competencies, building university-association partnerships to develop curricula, creating real pathways for emerging evaluators rather than token participation.

I came home thinking about what it means to sign something like that in a region where the profession it describes is still being built.

Group photo at the 5th Asia Pacific Evaluation Association Conference, November 2025.

What this means when you come home

In the South Caucasus and across much of Eastern Europe, evaluation is still finding its footing as a profession, and that gap goes deeper than a shortage of academic programmes. Even where M&E language has entered institutions, it often appears as a minor component of broader courses rather than a subject in its own right, and it is often taught by professionals whose core competencies lie elsewhere: lawyers, policy analysts, and administrators doing their best with frameworks that were never quite designed for the questions they are being asked to answer.

Genuinely adopting evaluation as a profession, with the competency standards, intellectual rigour, and accountability that implies, is a different and harder thing. Creating a course is one step. Building a culture and a profession is another. The University of Georgia has been working on both. In 2019, it introduced one of the first dedicated, standalone M&E courses within a Public Administration curriculum in the region – recognising evaluation as a subject that deserves its own academic space, not just a minor component of other courses. At the APEA conference, it became the first institution in Eastern Europe to officially sign the Tokyo Call for Action, committing to professionalization and institutionalization beyond the classroom.

That commitment has a concrete form – HubEVAL – the region’s first evaluation hub rooted in a university, built within the Public Administration Department at the University of Georgia. Its work spans three things the region actually needs: building evaluation competence through education and training, generating context-relevant knowledge through research and academic publications, and fostering a genuine community of practice across the region. And it is already connecting outward. Together with the Monitoring, Evaluation and Learning Professionals Association (MELPA), the University of Georgia is co-organising a technical working group on evaluation professionalization, bringing practitioners and academics together to build the standards the field still needs.

We came back with something harder to name than inspiration – more like a clearer map: what needs to be built, what already works elsewhere, and what would be a mistake to reinvent. 

The questions don’t stop at regional borders

The questions being explored in Colombo and Tokyo – about institutionalization, about who gets to be a professional and how – do not stop at regional borders. They are questions the European evaluation community knows well, and they will be at the heart of conversations at the EES Evaluation Conference in Lille in 2026.

The potential for connection is still largely unrealized. I came back with more questions than answers, but also with connections, concrete ideas, and a clearer sense that the challenges we face in Tbilisi are not ours alone. The distance between evidence produced and evidence used is not a regional problem. It is the field’s shared unfinished business – and that, at least, is something we can work on together.

Short Bio

Dea Tsartsidze is a faculty member in the Public Administration Department at the University of Georgia (Tbilisi) and an international MEL practitioner with over 15 years of experience across more than 100 countries. She believes that better evidence leads to better decisions – and that rigour is not a luxury reserved for well-resourced contexts, but something that can be built deliberately wherever evaluation is being done. Her work spans governance, media, and development, with particular depth in complexity-aware methodologies, theory-based evaluation, and building evaluation culture in contexts where the professional infrastructure is still being constructed. She developed Georgia’s first policy monitoring and evaluation methodology and was a finalist for the Molly Hageboeck Award for MEL Innovation. She is co-founder of HubEVAL, the first university-anchored evaluation hub for the South Caucasus, and co-founding partner of Solution Alternatives International (SAI).