Influencing government decisions is an important component of policy change in most political systems. Advocates of structural change must convince policymakers that the changes they propose are worthwhile. At the same time, most advocacy organizations face growing pressure to demonstrate accountability to their members and to the stakeholders operating in their area of advocacy. To strengthen their impact on policy, dissemination strategies need to be grounded in the principles of evidence-based advocacy, combined with rigorous approaches to monitoring and evaluation.

If advocacy is broadly understood as the wide range of activities conducted to influence decision-makers at different levels, then the quest to evaluate advocacy efforts is rooted in the measurement of ‘influence’, that is, in analyzing the behaviors that shape decision-making. Adopting impact measurement tools has the dual advantage of enhancing the capacity to advocate and strengthening the credibility of dissemination activities. It allows an organization to understand whether its advocacy efforts are producing the desired results.

Impact measurement can track an organization’s demonstrable outcomes and lay the foundations for evidence-based interventions. It is part of managing organizational performance, in which data is actively used to revise an ongoing program and improve its efficiency or results. Drawing on the business world, Performance Measurement Systems (PMS) are used to quantify the value proposition of the actions and strategies undertaken by an organization, and they are equally suited to organizations oriented toward social and policy objectives. Evidence-based advocacy therefore acquires a two-fold perspective. On the one hand, it involves sharpening the core delivery of dissemination strategies aimed at key policymakers. On the other, it entails enriching the advocating actor’s ability to exert policy influence on the basis of an increasingly refined awareness of its own strategic performance.

Much research has been dedicated to identifying the most appropriate way to measure advocacy. The policy and academic literature confirms the challenges of measuring the performance and outcomes of advocacy, whose intangible nature defies traditional measurement approaches. Policy influence happens through informal interactions that are often relational and political, which makes it hard to document and to monitor progress towards designated outcomes. Recent literature on advocacy evaluation highlights two central analytical problems.

The first challenge concerns the determination of causality. The target group of advocacy strategies moves, develops and changes depending on its interactions with other actors. Moreover, the act of influencing change involves multiple layers of power relations, which diminishes the ability to correctly assess contribution and attribution.

While attribution indicates how much change is caused by (attributed to) an organization’s specific effort, contribution assesses how much the organization contributed to the outcomes of change, without explicitly indicating the relative size of that contribution. Advocacy organizations can therefore demonstrate how they contributed to a policy success rather than how the policy change can be attributed exclusively to their efforts. Contribution analysis has much to offer the theory-based evaluation landscape, as it bypasses the methodological complexity of establishing causality without denying its importance.

The second challenge is control over the results of policy influence actions, including sustainable capacity and managerial control over the strategy and objectives of advocacy. Advocacy organizations can move closer to this objective by defining the organization’s boundaries and operational strategy, and an evaluation framework allows the organization to self-assess and to recalibrate its resources accordingly. The implementation of most evaluations requires a logic model, also known as a Theory of Change (ToC), which maps the logical progression of indicators across the dimensions intended to be measured. Because cause-effect logic in advocacy is not predictable, Teles & Schmitt (2011) argue for a Practice of Change (PoC) complementing the standard ToC approach: there are noticeable differences between the cause-effect logic of the ToC and the observed practices of doing advocacy. A PoC recognizes that advocacy evolves through recursive interactions in which outcomes are ‘emergent’ rather than predictable; it places the emphasis on human interactions and on the recognition that the planning process is not static. It also takes into account the need for flexibility and adaptation around those outcomes in terms of the advocacy capacity of the organization as a whole.
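The contrast between a fixed logic model and the emergent logic of a PoC can be made concrete with a toy sketch. The short Python program below is purely illustrative and not drawn from Teles & Schmitt (2011): the class names, dimensions and indicators are hypothetical, and a real ToC or PoC would be developed with stakeholders rather than written in code.

from dataclasses import dataclass, field

@dataclass
class Dimension:
    name: str          # e.g. "inputs", "activities", "outcomes" (hypothetical labels)
    indicators: list   # what the organization intends to measure on this dimension

@dataclass
class TheoryOfChange:
    dimensions: list   # assumed linear progression of the logic model

    def progression(self):
        # The ToC states the intended chain of results in advance.
        return " -> ".join(d.name for d in self.dimensions)

@dataclass
class PracticeOfChange:
    emergent_outcomes: list = field(default_factory=list)

    def record(self, observation):
        # The PoC logs outcomes as they emerge from interactions,
        # instead of predicting them up front.
        self.emergent_outcomes.append(observation)

toc = TheoryOfChange([
    Dimension("inputs", ["staff time devoted to advocacy"]),
    Dimension("activities", ["briefings delivered to policymakers"]),
    Dimension("outcomes", ["policy proposal cites the briefing"]),
])
poc = PracticeOfChange()
poc.record("informal coalition formed after a stakeholder meeting")
print(toc.progression())       # inputs -> activities -> outcomes
print(poc.emergent_outcomes)

The only point of the sketch is that the ToC fixes the chain of indicators in advance, while the PoC treats outcomes as observations to be captured and fed back into a planning process that is not static.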

An organization’s ability to reach its desired outcomes is reflected in a realistic ToC, since effective advocacy capacity depends on the organization’s ability to stay on track with its self-defined objectives. Even when a policy change itself lies beyond the reach of measurement, advocates can still assess their progress in building the internal capacity to influence change. Advocacy capacity is only a means to policy improvement, but it remains meaningful for the organization building it. It is measurable because it can be identified, and it is manageable because it falls under the control of the advocates, who can increase it and deploy it within the broader framework of their organizational strategy. Nurturing advocacy capacity thus becomes a way to bound the scope of the evaluation inquiry and to focus on the targets of assessment. At the same time, pursuing contribution rather than attribution within that scope becomes more manageable.

The evaluation literature in the social sector has been under-theorized and in need of further conceptual framing. Advocacy evaluation is an emerging field that has developed its own theory and practice within evaluation research, while policy advocacy itself has remained a vast field encompassing many dimensions. These have been summarized into a set of distinct strategies according to whether advocacy aims at enhancing a democratic environment, applying public pressure, influencing decision-makers, pushing for direct reform, or monitoring the implementation of change.

In line with the focus on advocacy as an act of influencing decision-makers, Jones (2011) summarizes four categories, building on Start & Hovland’s (2004) research on methods to assess policy influence: advising, advocacy, lobbying, and activism. Each category represents a specific shade of ‘influence’; translated operationally, each entails a distinct set of activities and a different approach to policy influence. This categorization has also been adopted by governmental agencies and has undergone some variations, while retaining the core idea that policy influence is a multifaceted phenomenon worth dissecting. Decomposing policy influence not only reveals the different ways in which an organization can exert influence over its stakeholders but also makes the planning of an evaluation more feasible.

Conclusion

The connection between establishing causality and securing managerial control of the impact process is especially relevant in advocacy evaluation, for two reasons.

On the one side, the content of evaluation: cause-effect logic in advocacy does not operate in the same way as in other domains. Instead, it depends much more on informal networks and human interaction, which are strongly shaped by the organizational design of the advocating organization.

On the other side, contribution rather than attribution: the definition of success and impact cannot be captured directly through observable analysis. Instead, it depends largely on the consensus of internal stakeholders (members) about when and what impact has been achieved, down to its smallest day-to-day instances.