(Photo by Pixabay.com)
When asked what I do for a living, my reply that I work in the field of “development” often evokes a question with a slightly sceptical undertone: do you think development aid is effective? As there is no across-the-board answer, it usually elicits a perhaps rather dull response à la “there are good, mediocre and bad projects, and the development of a country depends on factors far more significant than external aid”.
One cannot help wondering if those questioning the effectiveness of development aid raise similar questions about what closer-to-home public programmes and projects, whether funded locally, nationally or cross-nationally (EU), generate in terms of results and impact. For example, what do we know about the performance of the nearby business incubator, set up a few years ago with a mix of country-level and EU public resources, in terms of the number of start-up enterprises groomed and the public budget injected per job created so far? Or about the performance of a similarly funded regional centre aimed at stimulating R&D? It takes quite some effort to find solid performance data on such public initiatives, let alone independent evaluations of their performance.
Development projects in third countries (be they loans or grants) tend to be subject to strict design and appraisal formats and procedures as well as compulsory independent mid-term and final evaluations. The latter are publicly accessible, providing insight into how taxpayers’ money has been spent. Somehow one does not often hear about this type of practice regarding developmental initiatives in our part of the world. Do we assume that public funding for development chez nous is overall well prepared and well implemented, and therefore does not require the type of performance measurement rigour applied when providing public resources to other, specifically developing, countries?
Occasionally one reads in the press about the findings of Audit Offices. As their name indicates (similarly Audit Court; Bundesrechnungshof; Cour des Comptes; Rekenkamer; Corte dei Conti; Tribunal de Cuentas; Court of Auditors), the focus of the work of these public bodies is auditing. This covers in particular controlling and monitoring whether public funds were spent correctly and reporting on irregularities. The mission of these offices also includes, in principle, evaluating whether public policies and programmes were implemented effectively, i.e. to what extent the objectives were reached, and whether the associated resources were spent efficiently, but this is done on a selective/thematic basis and not in an all-embracing manner.
In fact, beyond information on the budget spent and rather anecdotal evidence of achievements, we often have no clear idea what the multiple publicly funded programmes, instruments and facilities developed for and implemented in our own territories (aimed at stimulating, for example, rural development, innovation or entrepreneurship) generated in terms of results and impact. One sometimes gets the impression that glossy communication on such efforts gets more attention than looking for facts and figures on their effectiveness and efficiency. Moreover, successor initiatives with the same or similar objectives are launched, without necessarily having drawn lessons from comparable past efforts.
This brings us to the gigantic amounts of money already spent to address the COVID-19 crisis and its consequences. And more is to come! This situation is reminiscent of the aforementioned issue, calling for attention to ensure that money is spent in a just, justified and justifiable manner. It implies that public budget allocations and their ultimate use follow transparent programme/project management principles and procedures and are subject to robust performance measurement. In this regard there are challenges of which three are highlighted below.
Firstly, there is the issue of complementarity. By way of illustration, the recovery plan for Europe (Next Generation EU) covers a variety of instruments, funded through loans or grants. It is not easy to grasp which are the internal and external synergies, particularly how the new programmes and measures will complement existing ones at both national and regional levels. For example, while almost €70 billion is earmarked for Horizon Europe, InvestEU and the EU Solvency Support Instrument, do we know what results prior programmes for health, climate-related research and innovation generated and how the fresh funding will build on prior/ongoing efforts? Do we know how planned Next Generation EU investments in infrastructure, R&D, SMEs and skill development across the EU will complement funding already in place or earmarked nationally or regionally for the same purpose? Do we know how EU support to companies to address their solvency concerns and their green and digital transformation will complement the fiscal and financial incentives in place at national level, and is this EU support to enterprises in line with the principle of subsidiarity? Similarly, if the Just Transition Fund of €40 billion is to support the transition towards climate neutrality, how will it complement existing green measures at country and regional level, and what does it mean for the targets already set? How have lessons from prior efforts in these fields been taken on board in these new endeavours? Details of the country-specific recovery and resilience plans to be submitted to the Commission are expected to throw light on this issue of complementarity.
Secondly, there is the speed factor, especially in the case of intra-EU funding. While conceived and negotiated in a rush and under pressure, the funding envisaged under Next Generation EU, be it in the form of loans or grants, is expected to take a substantial amount of time before its first instalments are “on the ground”. The EU Member States need to submit their plans, with targets and milestones according to a standard format, followed by an objective appraisal and approval process. Even in the case of development projects of much smaller size, this process can take quite some time, even if the funder is known ex ante. For example, it can easily take more than a year to prepare and conclude a typical World Bank loan. In the short run, beneficiary EU Member States will therefore need to rely on existing resources rather than counting on newly planned initiatives, the operationalization of which will take time.
Thirdly, project management, including monitoring and evaluation, needs special attention given the multitude and complexity of the stimulus packages and instruments. This big flow of public funding, be it national or regional, is expected to generate multifaceted results and to be implemented in a cost-effective manner. In this regard, there is room for better understanding of how performance measurement is dealt with in the case of both national and intra-EU recovery spending. One has the impression that the EU and its Member States are stricter on measuring the performance of their development cooperation (money spent on other countries) than of their developmental programmes and projects at home. Communications from the European Commission on the recovery plan focused on the instruments and thematic areas, the budget requirements and the funding modalities. However, they remained silent on how it will all be monitored in order to keep interventions on track, and how it will be evaluated whether the budgets made available indeed made the expected difference.
At a time when the sky seems to be the limit in terms of mobilizing public budgets to address the pandemic crisis and its consequences, stepping up attention to performance measurement as an integral part of design and implementation is urgently required for appropriate accountability. There is no doubt that the big money flows can be spent at country and regional level, but a high volume of spending alone will not be a convincing indication of achievements.
* The author is a former UN staff member (UNIDO) and has since 2002 been engaged as an independent development consultant in project design and evaluation work (agriculture; industry; trade) for different clients. Email: email@example.com
The opinions expressed in this article are the author’s own and do not necessarily reflect those of the European Evaluation Society.