Today we share part 2 of the interview with Thomas Delahais. The first part focused on understanding his background and perspective on evaluation, while part 2 offers recommendations for evaluators looking to grow in the field.
This blog interview series was developed and is curated by Cristina Repede. It is an initiative of the Emerging Evaluators Thematic Working Group (TWG) of the European Evaluation Society (EES), published in partnership with EvalYouth Europe, and reflects EES’s ongoing commitment to spotlight diverse voices and experiences within the evaluation community.
In his 20 years of experience, Thomas Delahais has been committed to developing pragmatic ways to evaluate complex interventions and to learning from them. He has helped to operationalise theory-based approaches like contribution analysis, applying it to a range of interventions, including sustainable transition initiatives, transport infrastructure, research, and governance. In 2013, he co-founded Quadrant Conseil, a cooperative company specialising in policy evaluation and design.
1. What does it take to become a truly good evaluator beyond technical skills, and how long does this process typically take?
Young evaluators often believe that being technically competent (knowing how to design surveys, conduct interviews, or write reports) is what they should aim for. Those things matter, of course. If you’re not good at them, you won’t last in the profession. But being technically strong only makes you a good analyst, not necessarily a good evaluator.
Good evaluators go beyond analysis. They make useful judgments, bring perspective, and help others reflect. It’s a mindset, not just a skill set. In recent years, I’ve had the chance to work with Canadian researcher Marthe Hurteau on the idea of practical wisdom: the ability to make good decisions in complex, uncertain situations, where “good” means both effective and ethical. That means we need to ask ourselves questions such as “Can I be useful here? How? And is this the right way of being useful?”
An experience early in my career illustrates this. I was evaluating an economic policy, and I realised I was uncomfortable with its underlying philosophy of fierce competition and support for the “winners” only. And because of this I couldn’t really do my job – because I didn’t want this policy to be better. I don’t think we have to agree with every policy we evaluate, but we need to be OK with it. We must believe it can and should be improved. So with my colleagues, when we answer calls for proposals, we ask ourselves: Do we believe in this policy? Is there room for manoeuvre, methodologically and politically, to do something useful? Is there a real chance to be heard and make a difference? If not, we don’t see the point in making a bid.
I want to stress that there’s no single path to becoming a good evaluator. Some excel by mastering methods and approaches. Others are brilliant at sparking reflection and helping people rethink their work. What matters most is identifying and embracing your own strengths, and that takes time, experience, dialogue with others, and self-reflection.
2. How can evaluators play a meaningful role in shaping policy design, and not just in assessing its outcomes?
Our legitimacy as evaluators lies in assessing and explaining outcomes, but that’s only part of the story. My hope is that evaluators, particularly young professionals, can move upstream and downstream in the policy cycle, bringing their evaluative thinking into the early stages of policy design or into the implementation process, for instance.
We tend to forget that we know a lot about policies and their pitfalls. Actually, when evaluators are brought in early, they sometimes don’t even need to conduct broad surveys or interviews. It’s enough to share what we know from similar cases: patterns, risks, or conditions for success. In these cases, evaluation becomes a practical tool for foresight, helping people prevent problems rather than diagnose them later. This kind of involvement can be extraordinarily valuable, especially in innovative or pilot projects where the design phase is often rushed and where decision-makers may not have the time or distance needed to evaluate their own plans critically.
Of course, this shift requires us to change how we think about the role of an evaluator and how we explain that role to others. Many people still see evaluators as critics who come in late to point out what went wrong. That is certainly something we do, but it’s not the only thing. We evaluators have the potential to be much more helpful if people realise that.
3. What are the biggest changes you’ve observed in the evaluation field over the past 20 years, especially in terms of training and professional opportunities?
From my vantage point as a French evaluator, I’ve seen three major shifts in the field of evaluation over the past 20 years.
First, the rise of formal education programs. When I started, there were no university degrees in evaluation in most EU countries. Today, there are several good master’s programs in France alone. This is a real advantage: we now have young professionals who enter the field already familiar with evaluation theory and methods. However, I still believe that a good evaluator needs more than academic training: they should also bring curiosity and a grounding in fields like public policy, economics, psychology, or geography. Combining these perspectives is what makes the work truly rich.
Second, the growth of evaluation outside of the public sector. Today, there are real opportunities for evaluators to work inside NGOs, foundations, and social enterprises. These roles can be highly rewarding because these evaluators are often closer to the field than public sector evaluators. They can see firsthand how programmes affect people’s lives, and they can find purpose in helping smaller organisations learn and adapt. That is a welcome alternative to the usual opposition between working in the public sector and being a consultant.
Third, the rise of pilot projects and experiments. Increasingly, evaluation is being integrated directly into the lifecycle of innovative policies and projects. These assignments are longer than the typical evaluation (three years or more), and they offer evaluators a chance to help shape ongoing work, not just deliver a report at the end. However, this kind of role can be challenging, especially when you’re the only evaluator on the team. If you’re not careful, you might either be absorbed into the project and lose your perspective, or be seen as an outsider or a spy and face distrust from others. That’s why it’s so important to have professional networks and colleagues you can talk to, so you can reflect and stay grounded in the role.
Alongside those changes, there’s a problem we’re still struggling with: evaluators aren’t staying in the field long enough. Many young evaluators stay for 3–5 years and then move into other professions. While it’s great if they bring evaluative thinking into new roles, it’s a loss for the field, because of the time it takes to become truly skilled as an evaluator. When people leave just when they’re getting good, we miss out on experienced voices, and clients miss out on seeing how evaluation can be useful.
I would add that evaluation in France can sometimes feel like a Potemkin village. It looks good from the outside, with plenty of activity in certain sectors, but it lacks stability. Interest in evaluation rises and falls in cycles. Every few years, we have to re-explain why evaluations matter. That instability can be discouraging and contributes to people leaving the field. That is also why I believe evaluators need to move more into policy design and management, not just endline assessment.
4. What advice would you give young evaluators on how to build their own professional identity and confidence while collaborating in teams or organisations without strict hierarchies?
My main advice is to be both patient and proactive in shaping your own path. Evaluation is a field where things don’t always move quickly, your work isn’t always used the way you hoped, and there isn’t always a clear career path laid out for you. So you need to develop your own agenda over time and stay committed to it, even when things don’t go as planned.
And you don’t have to do that alone. In fact, you can’t do it in isolation. Make it a habit to talk to colleagues about your work, not just what you’re doing, but why you’re doing it and how it could be improved. Reflection becomes much more powerful when it’s shared. In our cooperative, for instance, one of the core values is being part of a professional community where people support each other’s growth. That’s what helps you build not just skills, but confidence and purpose.
So, look for environments such as teams, networks, or even informal groups where you can learn from others, explore your strengths, and gradually define your own way of being an evaluator. That’s where professional identity really takes shape.
5. Looking ahead to the 2026 European Evaluation Conference in Lille, what themes do you believe should be at the centre of the conversation for the future of evaluation in Europe?
As William Gibson said, “The future is already here – it’s just not evenly distributed.” What the EES conference could do is highlight those evaluators and practices that have been pioneering, sometimes for years, what we should do more in the future.
First, I think we need to reframe how we see ourselves – not only as evaluators but as experts in evaluative thinking, supportive of policy making wherever policy is made – in government offices or in the streets. And tied to that: how do we help all these people become good evaluators when they need to be?
A second facet is hybridisation. Let’s be honest: evaluators alone will not address the super-wicked problems we are facing now, but neither will other professions. What could evaluator-designers, or engineer-evaluators, do? We have a lot to learn from the ways other professions are addressing challenges, and they have a lot to learn from us.
A third focus would be learning. It sounds like an old problem, but it’s certainly not one that has been resolved. Evaluators are still very much focused on how we can learn from a single evaluation, but the real challenge lies in learning from the accumulation of knowledge, and in navigating it. This isn’t just a technical challenge, and it’s not something that AI will solve miraculously: it is about curating knowledge, maintaining a conversation around it, and creating habits of using it. We have a role to play in this!
And finally, there is a discrepancy between the wealth of methods and approaches that we have and what’s actually being used in the field. The challenge is to make these approaches more accessible to everyone, and to find ways to improve practices, not just in theory but in action.
