New Approaches to Evaluating Community Initiatives

Volume 2: Theory, Measurement, and Analysis


Shaping the Evaluator’s Role in a Theory of Change Evaluation:
Practitioner Reflections
Prudence Brown

Introduction

This essay describes efforts to implement a modified theory of change approach in two comprehensive community initiatives (CCIs) and one education reform initiative. Although the initiatives are at different stages of development, this discussion focuses on the initial phase, the period during which our evaluation team worked with each site to help articulate a theory of change and specify goals, outcomes, and benchmarks. During this phase, we typically confront interesting questions about the role of the evaluator.

We call our evaluation method a modified theory of change approach because we had neither the charge from our funders nor the resources to implement the approach in its full depth and detail. For example, knowing that our subsequent ability to collect data would be limited, we did not specify and benchmark alternative theories of change at each site, nor did we spend much time identifying an extensive list of indicators for each outcome. Further, for one initiative, the funder was more interested in the theory of change approach as a way to help sites make linkages between their strategies and outcomes during the planning period than as the primary vehicle for evaluating the initiative later. As a consequence, we are not well positioned to comment on the utility of the approach as a method of evaluation. Rather, we focus here on early implementation, sensing that some of the issues we have confronted will resonate with the real-world experiences of others. The examples we cite have been modified in some cases to preserve the anonymity of an initiative or its sites.

The Evaluator’s Role: Four Examples

Like the Cleveland Community-Building Initiative evaluation described in this volume by Sharon Milligan and colleagues, our evaluations began at each site by helping participants to "articulate the theories of change as viewed by multiple stakeholders" and "identify important concepts and key benchmarks along the change pathway." Confirming the experiences of other evaluators, we found during this early planning period that engaging with sites requires broad participation and has significant implications for relations among stakeholders, including the evaluator. Every stakeholder’s voice must be heard, divergent needs addressed, and different agendas surfaced. All theories have equal weight at this stage of an initiative’s development.

During this process, the role of the evaluator undergoes an important shift, from that of an outside appraiser to that of a collaborator. As the theories of the various stakeholders are clarified, the initial balance of power is altered, as well. Participants gain new understanding of their own goals and those of others, while important issues such as the proper locus of authority and responsibility for implementing the work become clarified. This understanding often comes through the efforts of the evaluator, whose task is to foster open and clear communication among stakeholders and surface underlying assumptions and cherished beliefs. Having permission to ask certain questions at a certain level of specificity allows the evaluator to stimulate sharper, more defined thinking by all stakeholders. The following example illustrates this dynamic.

The Evaluator and an Initiative with an Emerging Theory of Change

In an initiative aimed at public education reform, the evaluator worked with one of the sites to help participants identify the pathways between their advocacy activities and the systemic changes they hoped to achieve. At first this was a very difficult conversation. The group had for many years used a political framework through which it explained its actions to its current membership and to the parents it was trying to recruit. Developing the specifics of a theory of change approach required the group’s leadership to make explicit certain assumptions about the links between actions and desired outcomes. This drew the group into genuine debate about those links and uncovered some conflicting views about future strategy. For example, the group often sponsored "actions" to bring attention to its cause, such as making a surprise visit with a busload of parents to an important official’s home. The debate involved several key questions. How would such a strategy lead to the desired outcome—by threatening damage to the official’s public image, influencing a school board vote on a particular issue, or communicating widespread parent opposition to the official? What other strategies might accomplish the same ends? What benchmarks are fair indicators of progress? Resolving these questions entailed a great deal of time and not a little conflict, but the group was strong enough to try to use the debate to become more effective. As a positive secondary consequence, one of the funders reported that the written evaluation framework (developed collaboratively by the site and the evaluator) allowed her to see clearly for the first time what the group actually did and how the impact of its work could be assessed. In this case, this new understanding strengthened her commitment to the group and her ability to speak to other funders on its behalf.

A second example demonstrates the shifting relations among stakeholders that often result from engagement in a theory of change approach. It also signals the multiple roles the evaluator can assume in this process.

The Evaluator and an Initiative with Diverse Constituencies

One CCI site involved a collaboration among partners from four different ethnic groups and geographic areas of the community. Although each partner had theoretically affirmed the notion of collaboration, they struggled constantly about focus and methods, their misunderstandings exacerbated by language, religious, and economic differences. The work of the evaluator was among the factors that brought them together and helped them produce a strategic plan and a related evaluation framework. The evaluator presented the theory of change approach in a series of brainstorming sessions, inviting each group to be explicit about its interests while not guaranteeing that all those interests could be addressed. At first the partners were hesitant to put their agendas on the table, but they became more forthcoming as they realized that the strategic plan would drive decisions about resource allocation and that the evaluation framework would define interim and long-term success. The evaluator played an active role in this process: she met with each party alone on a regular basis, helping to identify and frame priorities; she challenged members when they moved away from the agreed-upon goals of the group; and she helped provide focus and momentum when local political differences seemed to overwhelm the conversation.

The next two case examples illustrate some of the complexities for evaluators of assuming new roles. They also raise questions about how to implement theory of change evaluations in less than ideal circumstances.

The Evaluator and an Initiative with Competing Political Agendas

The evaluator at one CCI site tried to engage members of the governance committee individually and collectively in developing an evaluation framework. At this site, a strong political agenda worked against achieving specificity about outcomes and benchmarks. Appointments with the evaluator were often canceled; partners expressed one view when alone with the evaluator and another in governance committee meetings; emergency issues were allowed to bump the evaluator’s work from meeting agendas. Further, the initiative’s staff director lacked the leadership skills and support to move ahead with developing a framework independently. With much persistence, the evaluator assembled a draft framework, which was discussed and approved at a governance committee meeting. Although all parties agreed about the major outcomes and strategies, the resulting framework risked being mechanistic or irrelevant to what was driving the site as full-scale implementation began.

The Evaluator and an Initiative with a Weak Theory of Change

In one CCI, a foundation put forth the beginning parameters of a theory of change, then selected seven sites that responded positively to the opportunity to participate. Partly because the foundation’s theory was not developed or communicated clearly, the sites were drawn to the initiative as much by the promise of resources for their neighborhoods as by the initiative’s ideas and goals. Further, the funder did not direct the technical assistance provider to reinforce the theory of change approach or to integrate that approach into its strategic planning assistance. After the sites received a weak directive from the funder to participate in the development of an evaluation framework, the evaluator was left to champion theory development and the theory of change approach with the sites. Over time, the evaluator was able to establish collaborative relationships and produce, with input from the sites, an evaluation framework for the implementation phase. Yet without a well-articulated, initiative-wide theory of change that was owned by the funder and technical assistance providers, the individual site frameworks were too diverse to form the basis for significant cross-site testing and analysis.

Lessons for Establishing an Effective Role

These examples illustrate some of the complexities of the evaluator’s role during the planning process. They are derived from a limited number of quite different initiatives, none of which has been in existence for more than two years. Despite these limitations, we suggest some initial lessons about using a modified theory of change approach to help develop an evaluation framework during the CCI planning process.

Establishing a theory of change framework in the planning period involves multiple tasks and can be very time-consuming. This conclusion may be obvious but needs to be underscored. The evaluator charged with helping participants articulate their theories of change and establish appropriate outcomes and benchmarks takes on a wide variety of tasks that tend to be evolutionary, iterative, and diverse in their requirements. The process demands that the evaluator learn enough about the participants and establish strong enough relationships that he or she can help construct the framework collaboratively. These tasks range from building relationships with each stakeholder group to drafting, revising, and winning approval of the framework itself.

If these tasks are to be accomplished within a reasonable amount of time, the evaluator needs the full engagement and support of the sites and the funder. In two of the examples described earlier, the process turned out to take much longer than expected because the sites needed to spend time on tasks that had little to do with their theories of change. In one initiative, the planning process was extended from 12 to 18 months after the first 6 months were spent building an effective collaborative body to guide the initiative. Although the evaluation team was present from the beginning, the sites were not ready to engage either with the team or with the substantive aspects of the planning process until certain organizational issues had been resolved. Early work had to be discarded once the sites turned to strategic planning in earnest, since they had developed quite different ideas and assumptions about how change could be stimulated. In another site, unanticipated political disputes had to be resolved before effective strategic planning could begin.

An evaluator using a theory of change approach needs to draw upon a wide variety of skills during the planning period. As the examples illustrate, establishing a theory of change approach is a process that is substantive, political, and methodological. In helping the sites articulate their assumptions about change or identify benchmarks to assess progress, an evaluator well versed in the substance of the initiative is better able to stimulate participants’ thinking and challenge it constructively. If participants select a strategy and specify interim outcome measures that seem unrealistic, a knowledgeable evaluator can refer them to existing findings or programs that might inform their decisions. If we imagine evaluators distributed along a continuum of substantive expertise, evaluators who consider themselves experts in the particular field of the initiative are at one end, while at the other end are evaluators who see themselves as facilitators who translate the site’s goals and strategies into an agreed-upon format, regardless of content. Our experience suggests that simply being knowledgeable about a field can help the evaluator probe assumptions and benchmarks more deeply, build credibility with the sites, and accelerate the process of creating a framework that receives the support of all stakeholders.

Group process and political skills are also valuable assets for the evaluator. Developing a framework that reflects the investment and approval of multiple and diverse groups of stakeholders requires the evaluator to work closely with all parties, appreciate the dynamics among them, identify common ground, and address differences in perspective. The example of the initiative with diverse constituencies illustrates the need for these tasks and skills. In a collaborative venture like a CCI, it is especially important for the evaluator to surface disagreements or differences in perspective among participants early in the planning process so that these differences do not undermine the ability of the site to work as a unified force.

Finally, to resolve the methodological issues that arise in constructing an evaluation framework, an evaluator should be knowledgeable about quantitative and qualitative sources of data and the use of administrative data records. Because well-established measures do not exist for many of the relevant indicators, an evaluator may need to combine an appreciation for the value of psychometrically established measures with a creative sense of how to develop new ones. In the example of the initiative with an emerging theory of change, the long-term outcomes are changes in policy. As the group worked to develop its theory of change framework, the pathways they articulated included a variety of measurement points—fear of negative media attention, increased public awareness of the grantee’s agenda, increased parental leadership—that were difficult to assess reliably, especially within the evaluation’s limited resources.

As an active participant during the planning period, an evaluator can improve the quality of the process and its product. This is not a traditional evaluation role; rather, it requires the evaluator to engage in an often messy process and become part of the action. Simply helping sites identify and specify in measurable terms their outcomes and benchmarks can be viewed as a technical assistance activity. Once the line between evaluation and technical assistance is crossed, however, an evaluator may face a range of dilemmas associated with the new role for which there are few models or agreed-upon standards.

Theoretically, the evaluator’s technical assistance can be limited to helping the site construct the evaluation framework. For example, if the site has trouble expressing the precise pathways it anticipates between a particular strategy and set of outcomes, the evaluator can draft possible scenarios and use them as the basis for discussion with the group. For some groups, much of the work gets done in this iterative fashion, with the evaluator taking the lead in constructing aspects of the framework and then getting feedback from the site. This is clearly a delicate process, one into which the evaluator’s knowledge and biases cannot help but enter, ideally in a constructive fashion. Yet there are also dangers in being too passive, especially if the stakeholders have an interest in distancing themselves from the framework (as in the initiative with competing political agendas), the theory of change approach plays a marginal role (as in the initiative with a weak theory of change), or the site’s capacity is very weak. In any of these cases, the framework can end up belonging more to the evaluator than to the site, or it can prove inadequate as the evaluation moves into full-scale implementation. Although updating the frameworks along the way will be normative in most CCI evaluations, the frameworks need to be sufficiently well constructed at the outset to require only updating, not wholesale transformation.

The evaluator’s substantive influence might be reduced by separating the development of the evaluation framework in the first phase from the subsequent use of that framework to evaluate CCI implementation. Two different individuals or teams could carry out these functions. The first might be considered responsible for the "pre-evaluation" phase; the second for the actual evaluation. While such an arrangement may present some advantages in terms of bounding the role of the evaluator, it could also create an artificial discontinuity between planning and implementation and reduce the evaluator’s overall understanding of and ability to provide informed feedback to the CCI. Much more experience with these roles is needed before such questions can be resolved.

Conditions for a Productive Evaluation

In thinking through the dilemmas of the theory of change approach, we have identified at least three conditions that enable an evaluator to work most successfully. Although not strict prerequisites, these conditions make productive engagement between evaluator and initiative more likely.

An overall theory of change should be both strong and responsive to input from the participating sites. An initiative needs a strong overall theory, able to encompass the contributions of different sites, funders, and other stakeholder groups. Without such a theory, each site may develop a theory and evaluation framework that works locally but does not fit within a larger, multisite framework. Under these circumstances, evaluators may feel as if they are working on a set of case studies, not a single initiative. In one collaboratively supported initiative, the funders shared some overall goals and principles but chose not to develop these further, partly in recognition that doing so would surface significant disagreements among them. Their view that keeping the collaboration of funders together was more important than elaborating and testing a particular theory was a legitimate determination of priorities, but it limited the potential learning yield of the approach. Even in a single-site initiative, it helps to begin with the strongest possible theory about how the initiative expects to achieve its goals, while recognizing that this theory will evolve over time and in response to local factors and experience.

A support structure can reinforce the theory of change approach and the evaluation framework. Funders are increasingly recognizing the importance of effective technical assistance or coaching to help CCIs with a range of tasks at the outset of an initiative. When funders and technical assistance providers communicate the value of a theory of change approach from the very beginning of an initiative, the evaluation is likely to become an effective means for maintaining focus and momentum. The opposite is also the case. If the technical assistance provider portrays the theory of change approach as irrelevant or marginal, participants at the sites will not feel committed to the evaluation. It should be relatively easy, however, to demonstrate the value of the evaluation perspective to CCI technical assistance providers, whose emphasis on strategic planning and capacity building connects well with the principles of the theory of change approach.

Cultivating good working relations between the funder and the sites is essential. The relationship between the funder and the CCI sites provides an important context for the development of an initiative’s theory of change. The funder and the sites must engage in honest dialogue about their own theories of change and agree about how those theories should inform the evaluation strategy. Such a discussion can also clarify roles, responsibilities, and locus of authority. In one case, the funder’s theory of change was not well developed at the beginning of the initiative. Later, the funder did not hesitate to make its theory more explicit when the site’s developing theory was seen to be inconsistent with the funder’s evolving understanding of the initiative. Such differences in perspective can become problematic if the relationship between the funder and the site is characterized by lack of trust or struggles over expectations and accountability.

Under the right conditions, an evaluator using a modified theory of change approach can play a constructive role that strengthens the planning process of an initiative. Over the next few years, experience should begin to yield specific lessons about whether and how such an approach can shape an evaluation framework that retains its effectiveness throughout the life of an initiative.

Copyright © 1999 by The Aspen Institute