
This blog further discusses and elaborates on themes from the EES EvalEdge Podcast on this topic, along with a few resources[1] for additional reading.

Burt Perrin, independent consultant, Burt@BurtPerrin.com

 

Part I – What does one mean by ‘innovation’? What implications does this have for how to approach evaluation of innovation?

 

1.    What is special about innovation, and what does this mean for evaluation of innovation?

Let’s first consider what one means by innovation and what its key characteristics are. A discussion of the implications of these characteristics for evaluation will then follow.

Most basically, innovation means doing something new or different – often by quantum leaps rather than small incremental gains. By definition, innovation means exploring the unknown. Undertaking something that has already been tried is not innovative.

There are some important corollaries of this. A most basic characteristic that follows from the above is uncertainty, which means unpredictability about what may or may not work out, and when. And this in turn implies ‘failure’. When exploring the unknown, one cannot know in advance what might succeed or not. One may need to try many possible approaches, expecting that only a few of these will succeed.

This has some profound implications for evaluation of innovation that I will shortly consider.

Innovations are frequently long term in nature. It can take some time, indeed sometimes many years, before the value of an innovation becomes evident. And change rarely is incremental or linear. Often there may be a tipping point, with a quantum leap from what appears to be nothing to a grand success. This also has profound implications for evaluation.

There can be different types or degrees of innovation. Radical or deep innovation involves something completely new that has never been done anywhere. What is sometimes referred to as incremental innovation, perhaps a lesser degree, can apply to something that has been done elsewhere but not in a particular domain or organisation. An example of this might be an organisation finally computerising its operations, long after others.

Innovation can be on many different scales. For example, when thinking about innovation, arguably the first thing that comes to mind may be high-tech or R&D discoveries. But there can be innovation with respect to any domain – culture, health, education, science, employment, any aspect of development…

Frequently innovation may represent something disruptive, leading to major changes unimaginable beforehand – for example VoIP (Voice over Internet Protocol) services such as Skype, permitting calls at no cost almost anywhere in the world. Going back a bit further, one might think of airplane travel, in particular the rise of jet travel, which has made getting around the world in a short period of time, and international collaborations, possible. Or consider online teaching, which to be sure has been around for some time in various forms, but which the COVID-19 crisis has pushed into widespread experimentation and use in novel forms.

Or consider the example of the Childbirth Emergency Phone Programme in the Milne Bay area of Papua New Guinea (PNG), a very remote location with limited medical care and one of the world’s highest rates of child and maternal mortality at birth. The programme arranged for remote, high-level medical advice regarding difficult childbirths to be provided to nurses via mobile phones. Following evaluation documenting the impact of this PNG project, there was interest in adapting similar approaches in other countries and regarding other health issues – an example of a secondary innovation. The story of this project, and its evaluation, is included in the Stories Project: Evaluations That Make a Difference (please see the Resources list at the end of this blog).

Innovation can also be on a much smaller scale. For example, this could apply to a teacher trying out many different approaches with a student who is having difficulty reading – some already tried elsewhere, some new – just to see what happens. Most of these attempts will likely not really help. But if one or two do lead to a breakthrough, then that surely should be considered a success.

As a word of caution, with innovation becoming a buzzword, not everything depicted as such may really be so. And just because something is ‘new or different’ does not necessarily mean that it would be desirable or positive. This is something I come back to a bit later.

 

2.    What do these characteristics of innovation mean for the evaluation of innovation?

The nature of innovation, as outlined above, presents various challenges to evaluation:

Most basically, one needs to develop and use an approach to evaluation that is compatible with the characteristics of innovation, such as I have just touched upon. This invariably will require an approach to evaluation that is different, in at least some ways, from traditional approaches to evaluation.

In particular, one should employ an approach to evaluation that can identify what has worked, or at least has shown potential of working, even if this represents just a small portion of what has been tried. This contrasts with more typical evaluation approaches that look largely at mean scores, or at the correspondence of what took place with objectives or targets set in advance.

More specifically, when evaluating innovation it generally is more useful to look at outliers or exceptions rather than at mean scores or average performance, which are the main focus of most traditional evaluation approaches and which can have the unintended consequence of rewarding mediocrity rather than improvement.

Unlike with most traditional approaches to evaluation, ‘failure’ should not be taken as a negative, as long as learning has been identified, such as what did not work and, perhaps, why. Indeed, if there has been no ‘failure’ in a programme claiming to be innovative, this is a warning signal that it probably is not really so.

 

3.    Tips for evaluators evaluating innovation

Below are five key tips for evaluation of innovation.

  1. Look at key exceptions/outliers, searching for those few situations that do seem to work out

As I have already indicated above, innovation by definition is unpredictable. Thus, it is not meaningful to set specific objectives, targets or even indicators in advance and to evaluate progress against these. These are rarely helpful, and indeed often can get in the way. There are alternative strategies that one might use, such as the following:

  • Try to identify good practices, to identify what does work. One might use one of the various possible approaches to evaluation that support positive thinking and action. Nicoletta Stame, a former EES President, in a pair of articles (see Resources list) has identified a number of such methods, including two in particular that may be well suited for evaluation of innovation: the Success Case Method, where one explores in depth the outliers where something seems to have worked well; and Appreciative Inquiry, a technique that is intended to identify what is working and why.
  • Avoid assessments based upon averages. For example, if 99 out of 100 research projects failed, at first glance – and in accordance with most typical approaches to evaluation – this would appear to be just a one percent success rate. But if the hundredth project comes up with a way of wiping out, say, malaria or dengue fever, then surely the research programme might be viewed more favourably (see the short numerical sketch following this list). But such an approach often goes against the grain. Most typically, evaluators might say that ‘only’ 30, 50 or 60 per cent of projects or efforts ‘worked’ – without considering those few but potentially important successes. One can succeed through ‘failure’.
  • Assess conditions in place where innovations are most likely to be able to emerge. This is a prerequisite to guide further thinking and action.
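
For readers who like to see the arithmetic, here is a minimal sketch (in Python, with purely hypothetical scores – not data from any real programme) of how a mean-based summary can bury the single result that matters, while an outlier-focused look surfaces it:

    # Hypothetical outcome scores for 100 research projects (0 = no detectable effect).
    # Ninety-nine projects show essentially nothing; one produces a breakthrough.
    scores = [0.0] * 99 + [100.0]

    mean_score = sum(scores) / len(scores)                  # 1.0 - looks like a '1% success rate'
    share_with_effect = sum(s > 0 for s in scores) / len(scores)

    # An outlier-focused view asks a different question: did anything work
    # exceptionally well, and what can we learn from it?
    threshold = 10.0                                        # illustrative cut-off
    outliers = [i for i, s in enumerate(scores) if s >= threshold]

    print(f"Mean score: {mean_score:.1f}")
    print(f"Share of projects with any effect: {share_with_effect:.0%}")
    print(f"Projects worth examining in depth: {outliers}")   # [99] - the breakthrough

Judged by the mean, the programme looks like a failure; the outlier view points the evaluator to the one case worth studying in depth, which is where an approach such as the Success Case Method would begin.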

 

  2. Use an approach open to identification of unexpected, unintended effects

Throughout history, some of the greatest discoveries have been unintended or unexpected. For example, penicillin was discovered by Alexander Fleming when he noted the formation of a mould in a forgotten petri dish. More recently, during the COVID-19 lockdown, a cheesemaker in France forgot about some milk that had been put aside; when he eventually checked it, he discovered a new, quite tasty cheese. Such accidental discoveries do require, however, a mindset – for both innovation and evaluation – that is open to noticing, and as appropriate appreciating, the unexpected.

Evaluation designs that are open to considerations of complexity as well as to serendipity should be used. In recent years, more and more approaches have been identified.

Both quantitative and qualitative approaches potentially can be used, depending upon the situation – provided that they include some means of identifying the unexpected. In contrast, one would be well advised to avoid evaluation approaches that rely upon checking for conformity with objectives and indicators that have been fixed in advance.

 

  3. Get the timing right

A common error is to attempt to assess ‘progress’ prematurely. Perhaps my favourite bad example is of a project working to reduce teen pregnancies that was required by its funder to identify progress against its expected outcomes just six months after start-up.

Probably few interventions, and even fewer significant innovations, progress incrementally. As I indicated earlier there may be no progress for some time, and then, eureka! – a tipping point has been reached or there has been an ‘accidental’ discovery where a solution suddenly emerges. This timeframe is often unpredictable, and can require a long-term perspective. Expecting ‘success’ in just a short period of time rewards mediocrity and inhibits more ambitious and innovative attempts.

Michael Woolcock, of the World Bank, has emphasised the importance of considering the outcome trajectory of an initiative when undertaking evaluation. As he indicates, ‘progress’ for some undertakings may not be apparent for some time, whereas for others, there may be an initial exponential increase that then levels off. In many cases, there is a ‘J’ curve, where there is an initial drop until, later, progress becomes evident. Evaluating an initiative at the wrong moment, even using ‘rigorous’ evaluation designs, may produce impressive-looking, but meaningless and misleading, findings.

One needs to understand outcome trajectories and context in order to time evaluation right. For example, Woolcock compares the outcome trajectory of sunflowers with that of oak trees. Plant a sunflower seed and an acorn at the same time, and come back three months later to evaluate progress. If all goes well, one would have a two-metre-high sunflower. But one would be hard pressed to find any observable change, at this point, in the acorn.
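
To make the timing point concrete, here is a minimal sketch (again in Python, with an invented ‘J-curve’ trajectory used purely for illustration) of how the same initiative can look harmful, flat, or highly successful depending on when it is measured:

    # Hypothetical 'J-curve' outcome trajectory: an initial dip while the
    # intervention disrupts existing routines, then a tipping point after
    # which benefits accumulate rapidly. Units and numbers are arbitrary.
    def outcome(month: int) -> float:
        baseline = 50.0                         # pre-intervention level
        if month == 0:
            return baseline
        dip = -15.0 if month < 18 else 0.0      # early disruption costs
        growth = 0.5 * max(0, month - 18) ** 2  # gains after the tipping point
        return baseline + dip + growth

    for month in (6, 12, 24, 36):
        change = outcome(month) - outcome(0)
        print(f"Month {month:2d}: change vs. start = {change:+.1f}")
    # Month  6: -15.0   <- evaluated here, the initiative looks harmful
    # Month 12: -15.0
    # Month 24: +18.0   <- the tipping point has been passed
    # Month 36: +162.0

An evaluation conducted at month six would conclude, quite ‘rigorously’, that the initiative is making things worse; the same design applied at month 36 would tell a very different story. Neither finding means much without an understanding of the expected outcome trajectory.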

Evaluation of development initiatives, for example, undertaken at the wrong time or in the wrong way can punish ambitious programmes or projects and set them up for failure.

At what point would it be appropriate to pull the plug on innovative initiatives that have not (perhaps as of yet) demonstrated any noticeable benefits? There is no easy answer to this question. For example, Alan Turing during the Second World War developed a machine to crack the Enigma code that was credited with playing a major role in ending the war. His work also led to the first computers. However, success did not come easily, or quickly. His superiors threatened to withdraw funding and support and were on the verge of doing so when, eureka! Turing and his colleagues came up with the solution.

As well, history is replete with examples where people, often including very well-informed people, take a while before appreciating the utility of an innovation. For example, when the telephone was first introduced, it took some time for its value for business to be appreciated. There is a natural tendency to stay with the tried and true.

 

  4. Learning is critical!

Learning about what seems to work or not, and why, is absolutely critical, in particular with respect to systems and programmes that wish to foster innovation. As I have already suggested, one should identify learnings or potential learnings – both from what has worked, and from ‘failures’. Sometimes one can learn as much from what has not worked as from what has done so.

This, however, requires an approach to evaluation that recognises the importance of learning, and the critical, and positive, role that ‘failure’ can play in this. Evaluation should assess openness to learning – and the extent to which learnings are extracted, both from what has not worked and from what did work.

 

  5. Be flexible and adaptable

As I have suggested, innovations often arise from something very different from what was expected or initially intended. Overly fixed approaches to evaluation might miss this.

This has significant methodological implications for evaluation. Generally, one should use evaluation approaches that can be modified, as need be, during the course of the intervention and the evaluation. This contrasts with perhaps more common evaluation designs where the methodology, often including the questions to be explored and the instrumentation, is in essence set in stone at the beginning, assuming that implementation will proceed as expected. Evaluation approaches should be flexible enough, for example, so that, like a detective, they can follow up possible leads that were not known, or indeed knowable, earlier. Again, evaluation approaches should be open to identifying unintended or unexpected effects that might not have formed part of original plans or priorities.

Most traditional evaluation approaches fail on these criteria. In contrast, one might consider a systems approach of one form or another, or an approach to evaluation based upon complexity thinking. Process evaluation may be very useful, alone or in combination, in order to maximise learning.

On the other hand, some claims of innovations may need to be checked out – to be evaluated – to see how appropriate, practical, effective, or indeed innovative they might be. In these circumstances, some traditional evaluation approaches might be considered for evaluation of implementation of a proposed innovation. The Papua New Guinea Milne Bay Childbirth Emergency Phone Programme that I referred to above is a good example of evaluation being used in this context, in documenting the actual benefits of the innovative approach, and the factors that contributed to this.

 

Part II – Responses to some questions about the evaluation of innovation[2]

 

1.    How might one distinguish innovation as a thing or event (a new diagnostic tool is put into practice at scale) from the importance of innovation-friendly systems (the regulatory/accountability/leadership/cultural context that makes innovation more likely and failure tolerated)? This is linked to the concern identified that we should not use simplistic models of impact.

This is a question that gets beyond evaluation of innovation. Any project or programme is influenced by systems. Projects may not work – not because of their own failings, but, perhaps, due to lack of appropriate support from the larger organisation/system. Or, perhaps, because of other competing pressures that may have undermined what was tried. Much of the time, evaluation pays scant attention to considerations such as these. It is much easier just to look at a programme more narrowly.

In Changing Bureaucracies: Adapting to Uncertainty and How Evaluation Can Help[3], my colleagues and I indicate how bureaucracies are often loath to look at their own systems, and may indeed be hostile to evaluations that try to do so. As this book discusses, there is a need to explore considerations such as these, for example assessing the extent to which many bureaucratic processes and procedures add value, or indeed are fit for purpose. Too often, such considerations are considered off limits. For example, in one large international organisation where the contract for the head of the nominally independent evaluation function was terminated, it emerged that a reason for this was an evaluation that looked at internal systems and processes such as human resources, which was not viewed as appropriate.

Innovation can indeed be disruptive, and thus challenging to established bureaucracies. By its very nature, innovation can rock the boat: it can call into question established ways of doing things that have been assumed to be ‘how things are done’ and not subject to evaluation. Dalziel’s chapter in the Changing Bureaucracies book further discusses the complex intersection between innovation and bureaucracies. Evaluation does need to be more forceful in insisting on being able to ask the right questions. So, yes, there is a need to look at systems to the extent that they can facilitate innovation – or, perhaps, stand in the way and perpetuate mediocrity.

As I touch upon a bit later, innovation-friendly systems support a learning culture, and reward rather than punish risk-taking and ‘failure’, in contrast to just playing it safe. As well, rarely does significant impact follow just from a single intervention. Invariably, many different things may work together in combination. Thus, there is a need for an approach to evaluation that can take into account context and complexity, and that goes beyond looking at individual interventions in isolation.

 

2.    How important is adaptation in a process of innovation? Innovation is very rarely a single product or thing but involves considerable adaptation.

Yes. As I’ve indicated, innovations most often do not emerge as expected. Things may change, often considerably, from the initial plan. Indeed, this most likely is more the rule than the exception – even though traditional approaches to evaluation tend not to acknowledge this.

In Dalziel’s chapter referred to above, she discusses an innovation by Demine Robotics, which initially set out to develop robotic means of deactivating mines in Cambodia, where many landmines remain even many years after its devastating war. However, after initial explorations, Demine Robotics changed its model to removing mines and transporting them to another location where they could be deactivated safely.

Another example can be found in the Evaluations that Make a Difference publication: the story of a community sanitation programme in a rural village in Kenya (Tumekataa kula mavi tena! – We refuse to eat shit!). Rather than let public health ‘experts’ lead the project, the community itself took over, indeed integrating evaluation with other project activities, with considerable success.

These examples highlight the need for evaluation approaches open to adaptation, including often significant changes from how an initiative may originally have been planned or expected to be implemented, as well as corresponding changes to the evaluation approach. Otherwise, important innovations can be missed, or even viewed negatively.

 

3.    Is evaluation (and audit) part of the problem rather than the solution?

Unfortunately, this can be so. Consider, for example, the title of my original article on this topic: ‘How to – and How Not to – Evaluate Innovation’. This article highlights ways in which the wrong approaches to evaluation can act as strong disincentives to innovation. The wrong approach to evaluation can punish, rather than reward, those who try to be innovative or ambitious, and thus act as a disincentive to trying anything new. In contrast, the right approach to evaluation can be a great support, such as in identifying learning, and helping to facilitate the innovation process.

A toxic target culture that forms the basis of most performance measurement processes assumes that results can be anticipated in advance, and thus is incompatible with the nature of innovation. Such an approach has the incentives all wrong and, as a result, actively discourages rather than encourages, supports, or rewards attempts at risk-taking and innovation.

 

4.    Are we in danger of privileging innovation over other features of good government? Accountability, safety, responsiveness, integration all matter to me. In particular relevance matters and in my experience of interviewing innovators in the healthcare system, innovators are often unhinged, that is they are so obsessed and excited about their innovation that they cannot see how it is (not) connected to adding value. I love innovators but I don’t want public services to be run by them….

This is an important question. Indeed, there are many parts to this question. Let me address it in a couple of different ways. First, I’m going to challenge the assumptions underlying this question, at least a bit. In order to respond to changing needs and circumstances, to improve, indeed to remain relevant and meaningful – in order to survive – one needs to constantly strive to do better. That means being ambitious, trying out possible new approaches or techniques, innovating. Innovation, most basically, implies acting differently in order to better address needs, as well as adapting to current and future changes. Disruptive innovations can result in new needs, which bring along with them new ways of doing things, as with the industrial revolution or the age of computers. Surely, everyone should be trying to improve!

In contrast, not to innovate means a failure to adapt. It means standing still, stuck in the status quo and current preoccupations and ways of acting – and, at some point, it inevitably means becoming less and less relevant. Most private organisations that have failed to innovate and adapt have disappeared. For example, few of the companies on Fortune’s list remain there even some years later. As the Changing Bureaucracies book discusses, one of the reasons that public institutions are held in such low regard by the public is their inflexibility, perceived lack of responsiveness, and preoccupation with their own internal rules and procedures rather than responding, or adapting, to changing needs.

So, what really is needed is for the various systems that have been mentioned to adapt themselves, so that they support rather than inhibit innovation. As an example, let me touch upon accountability, which quite correctly is identified as something important. But what do we mean by ‘accountability’? What approaches to accountability support greater confidence in government, and which may have unintended consequences? Questions of this nature are considered in Making Accountability Work: Dilemmas for Evaluation and for Audit. As this book discusses, and as I have already indicated, traditional approaches to accountability based upon checking for compliance with predetermined objectives act as a disincentive to innovation.

What is needed instead is the new model of accountability that this book presents: a model that looks at the extent to which organisations and programmes are trying to innovate, to improve, and to learn. As discussed above, the wrong approach to evaluation – as well as to auditing and to accountability – can indeed inhibit innovation, and progress.

Traditional approaches to accountability most generally assume the appropriateness of stated purposes and objectives. But as I have already discussed, innovation, by its very nature, can challenge how things are currently done along with their rationale. The result is that innovations sometimes can be perceived as a threat, resulting in resistance.

Following are a few examples:

  • Innovations in sales and deliveries, such as online sales, are often resisted by those engaged in more traditional sales mechanisms. Some adjust, and with changes, may even prosper. But those who continue to fight change may go out of business.
  • Thomas Kuhn, in his seminal The Structure of Scientific Revolutions, has indicated that major scientific changes, contrary to what is commonly believed, do not arise from ‘normal science’ but instead through disruptive revolutions, where one scientific paradigm is replaced by another.
  • The COVID-19 crisis has highlighted the need to act quickly. But sometimes traditional bureaucratic systems and rules stand in the way. One large international organisation identified the need for real-time, or at least quick, evaluation to assess its response to the pandemic and to inform practices. Nevertheless, organisational systems dictated that in order to undertake this ‘quick’ evaluation, it was first necessary to go through lengthy competitive processes, just as with traditional evaluation. This meant that more than a year after the beginning of the response to COVID-19, it still was not possible to start the actual evaluation, meaning that an opportunity for potentially useful insights to improve the process had been lost.

HOWEVER, let me qualify my observations above. Firstly, something that may be new or different is not necessarily good or desirable. Dalziel defines innovation as ‘valued novelty’. But whose values should be taken into consideration? Hopefully not just those of the advocates of an approach. It is important to consider in particular the values of those expected to benefit. A case in point is devices and adaptations for people with disabilities. Many of these do indeed have the potential to greatly improve the lives of people with disabilities. But, too often, intended beneficiaries have not been consulted, let alone engaged, in the designs, resulting in devices that in practice were not very useful or practical.

Secondly, attempts at innovation, like other activities, still need to be managed in some way, even though traditional indicators and measures of performance may not be appropriate. There definitely is a need for assessment and evaluation: ex ante assessment of possible approaches that seem most promising, process evaluation, and potentially later evaluation of the actual applicability and benefits of a new innovation.

Also, there is a need for balance between stability and change. Constant change and turmoil may make it difficult both for those within and outside the organisation to understand what it is about. The Changing Bureaucracies book, for example, identifies the important stability that bureaucracies can bring. In practice, however, the problem most often is the reverse, with too little opportunity for trying new approaches. Donald T. Campbell’s model of evolutionary epistemology identifies three necessary elements: variation, selection, and retention. Evaluation can serve as the selection mechanism in such a model, helping to identify which variations are worth retaining. There is a clear role for evaluation, along with a strong potential – indeed major need – for it to be transformative.

 

5.    Venture capitalists pursue innovations and profits. They invest their capital in many projects to discover the 1-2 projects that would help regain and multiply this capital (80/20 rule). When we speak about investing in improving the lives of those most in need, how appropriate would it be to engage in risky projects, 80% of which may not bring even marginal improvements in lives? As evaluators, are we 1) suggesting to shift the focus from impact to innovation, regardless of the fact that 80% of projects may not bring impact to those in need; or 2) suggesting to include additional criteria for innovation in the quantification of impact?

The reality, well documented in the literature, is that for many of the most important development challenges such as widespread extreme poverty, climate change, youth employment, reintegrating child soldiers, to name just a very few, we simply do not know the best ways of acting in all circumstances.

There is a need to try, and to be more ambitious, rather than to continue doing the same old, same old thing that we know may only be of limited effectiveness. Surely we should try out new approaches in such circumstances? Otherwise, we are tied to mediocrity and problems that are never properly addressed.

But, by definition, when we are ambitious and try new things, as I have highlighted, not everything can be expected to succeed. The solution, as I have suggested, is to acknowledge this, to take measured risks and learn from ‘failure’, and to identify those few situations that may potentially have transformative effects.

This forms the basis of venture capitalist thinking – to support promising, ambitious initiatives, recognising that, even after a careful due diligence and selection process, for every ten investments a couple are likely to fail and most will produce unexciting results – but if even just one or two hit it big, this can make the exercise worthwhile. Again, one is most likely to find innovative solutions among the outliers. I feel that this model does have a lot to offer when we are trying to address challenging development and social problems – and to evaluate them.[4]

Again, innovation can be on a small scale as well. If a programme that tries ten different ways of aiding women struggling to escape from domestic violence comes up with even one or two good ideas that might help other women or be adapted to other settings, surely that should be encouraged and supported. The Community and Progress Youth Empowerment Institute (CAP-YEI) programme in Nairobi assists orphans and other extremely disadvantaged youth in gaining employment, using a combination of innovative approaches and evaluation. The programme initially consulted potential employers and then tried a variety of approaches; it is following up with both the youth and the employers, seeking to identify what worked and what went wrong, trying to identify what could be learned from these experiences, and constantly trying out new approaches.

To be sure, there can be other situations where we do know, or should know, what approaches are most likely to be effective. But this requires making best use of existing knowledge, from previous evaluations as well as from other sources of information.

For example, an ALNAP evaluation on the humanitarian response to the 2004 Indian Ocean tsunami disaster found that there were indeed many helpful lessons from responses to previous disasters and humanitarian crises about which strategies would most likely be effective in given situations – but that these lessons too often were unknown and went unheeded. We need to do a better job of learning from previous experience and acting upon this knowledge. This suggests an important role for evaluation – although evaluators often are reticent about advocating for use of evaluation information.

Part of this question asks if it is appropriate to shift focus from impact to innovation. Instead, I would argue that there is a need to shift focus from outputs, which remain the main focus of traditional approaches to evaluation, to a true focus on impact – a focus that goes beyond assessing attainment of pre-set targets and that takes into account complexity and the inevitable interactions among various initiatives and other factors. This could represent a focus on how evaluation can contribute to changes in people’s lives. Innovation has the potential to represent one tool to aid in this quest.

 

6.    The article proposes a criterion to evaluate innovative programmes. Many development programmes aim to solve developmental problems but do not necessarily do so in innovative ways. How do we recognise and evaluate unplanned innovations within programmes that did not plan to be innovative but turned out to have innovative elements?

This is an important observation. As I have identified, innovation sometimes can arise when least expected. Some of the greatest innovations fit into this category, such as the discovery of penicillin. And as I also have indicated, innovation and ambition are closely related. Again, one can innovate in little as well as in big ways, such as a teacher trying out new or untried approaches to aid a struggling student.

These realities have implications for approaches to evaluation, even of programmes that are not explicitly identified as innovative. Many of the suggestions that I have made would also apply here. To reiterate, perhaps the most pertinent consideration is that one should use an evaluation design that, at least in some respects, is open to identification of unintended effects. Indeed, if possible, one should use a design that can be proactive in this regard. Again, take a closer look at outliers rather than dismissing them automatically as ‘noise’. And do not use an evaluation approach that is restricted just to assessing performance against predetermined indicators or objectives.

A Theory of Change (ToC) can potentially be a useful tool to help both programmes and evaluations alike with planning. But the wrong form and wrong use of a ToC can instead create more problems than it solves. In particular, some overly rigid ToC approaches are not open to serendipity and to adaptation, and these could act as blinders to identifying what is really important.[5]

 

7.    How can big developmental agencies nurture an innovation culture vs. risk aversion culture? It seems that the public sector could benefit from applying ideas for innovation generation from the private sector.

As I indicated earlier in this blog, there is often talk about ‘being innovative’, but without much thought to what this really means or how it could be supported. If an organisation is truly interested in innovation, then it should create and nurture an appropriate organisational culture where people are encouraged to explore new ideas and will not be punished if these do not all work out.

A learning-oriented culture in particular can also support innovation, with some important corollaries, such as the following:

  • Public recognition of those who try, who are ambitious, even if not everything works out as expected.
  • Reward ‘failure’ in different ways, as long as learning can be identified from what has been tried but did not work out.
  • Support and encourage champions at all levels within the organisation. A common theme in the stories identified through the project resulting in the publication Evaluations That Make a Difference: Stories from Around the World is that invariably a leader emerged who took the initiative, who was not afraid to try something different, even if it meant challenging the system.
  • Again, get away from approaches to accountability that are preoccupied with performance against objectives and targets fixed in advance that can inhibit learning and can sabotage ambition and attempts at innovation. Instead, adopt a model of accountability that includes accountability for taking risks and for learning.

 

Conclusion

In summary, innovation means doing something new or different. It is challenging to programme innovation; very often innovative discoveries are unexpected and arise through serendipity. To be truly innovative means exploring the unknown. This in turn means that most attempts at innovation should be expected to fail; otherwise they are not truly innovative.

This has important implications for evaluation: approaches are needed that take into account the nature of innovation. Evaluations that can do so can play a major role in supporting the identification and use of innovation – and in turn, potentially, major improvements in people’s lives. In contrast, inappropriate use of traditional evaluation approaches may end up inhibiting innovation, punishing those who are ambitious or who attempt to do things differently.

 

Resources and references

Following are just a few resources that might be of interest for those wishing to further pursue evaluation of innovation. Detailed references for the other resources referred to above can be found in the first entry, below.

  • Perrin, Burt. “How to – and How Not to – Evaluate Innovation.” Evaluation, Vol. 8(1), 2002, pp. 13-28.

The podcast, and this blog posting, has drawn heavily from this article.

  • Dalziel, Margaret. “Public Support of Radical Innovation.” Chapter in Perrin, Burt and Tyrrell, Tony (eds.), Changing Bureaucracies: Adapting to Uncertainty and How Evaluation Can Help. Routledge, 2021.

This chapter focuses specifically on public support for innovation.

  • Bemelmans-Videc, Marie-Louise, Lonsdale, Jeremy, and Perrin, Burt (eds.). Making Accountability Work: Dilemmas for Evaluation and for Audit. Transaction Publishers, 2007.
  • Perrin, Burt (co-editor with Tony Tyrrell). Changing Bureaucracies: Adapting to Uncertainty and How Evaluation Can Help. Routledge, 2021.

These two books focus, respectively, on accountability and bureaucracy, and how evaluation can help, or hinder, effective practice in both domains. They also discuss ways in which bureaucratic systems, including approaches to accountability, might better facilitate learning and innovation.

  • Evaluations That Make a Difference: Stories from Around the World (the Stories Project).

This publication presents stories of evaluations that have made a difference, including innovative approaches to evaluation.

  • Perrin, Burt. “Think Positively! And Make a Difference through Evaluation.” Canadian Journal of Program Evaluation, 29(2), 2014, pp. 48-66.
  • Stame, Nicoletta. “Positive Thinking Approaches to Evaluation and Program Perspectives.” Canadian Journal of Program Evaluation, 29(2), 2014, pp. 67-86.
  • Stame, Nicoletta and Lo Presti, V. “Positive Thinking and Learning from Evaluation.” In S. Bohni-Nielsen, R. Turksema, & P. van der Knaap (eds.), Success in Evaluation. New Brunswick, NJ: Transaction, 2017.

These resources identify the benefits of approaches to evaluation that support positive thinking and in turn maximise learning, with the two Stame articles identifying a variety of evaluation approaches that might be best suited to this.

 

[1] References to articles and other publications referred to here can be found in the list of resources in the last section of this article.

[2] These questions were identified by Tom Ling and Mariana Branco.

[3] See in particular Chapter 12 by Perrin: ‘The Problematique of Bureaucracy and What This Means for Evaluation – and for Public Sector Leaders.’

[4] Indeed, the title of the presentations that fed into my original article on this topic was ‘A venture capitalist approach to evaluation’.

[5] This is a topic of discussion that I expect to be discussing as part of a panel on uncertainty and evaluation at the September 2021 virtual EES conference.