New Approaches to Evaluating Community Initiatives

Volume 2
Theory, Measurement, and Analysis


Applying a Theory of Change Approach to Two National, Multisite Comprehensive Community Initiatives:
Practitioner Reflections
Scott Hebert and Andrea Anderson

Introduction

In recent years, several significant national initiatives have aimed to address the chronic poverty afflicting some of our nation’s communities, particularly inner-city neighborhoods with high concentrations of people of color. Among the most ambitious of these are the Annie E. Casey Foundation’s Jobs Initiative, originally funded in 1995, and the Empowerment Zones and Enterprise Communities Program, launched by the U.S. Department of Housing and Urban Development in 1994.

The Annie E. Casey Foundation’s Jobs Initiative is an eight-year, six-site demonstration designed to improve access to family-supporting jobs for disadvantaged young adults residing in inner cities. The selected sites are Denver, Milwaukee, New Orleans, Philadelphia, Seattle, and St. Louis. Seed money is being provided by the Foundation in annual increments to help groups of local actors in these communities pursue systems reform agendas to promote better connections between disadvantaged job seekers and good jobs in the regional economy. In each site, a "development intermediary" is responsible for mobilizing the civic infrastructure, facilitating the process of defining regional strategies, making investment decisions regarding prototype jobs projects that will be used to test job access mechanisms, establishing a jobs policy network, and initiating a systems reform agenda. In addition, an "impact community" of 50,000-100,000 residents within the inner city has been designated at each site. The impact community is expected to provide a framework for the regional Jobs Initiative to understand the barriers faced by disadvantaged job seekers, as well as to contribute at least half the participants in the jobs projects.

The Empowerment Zones and Enterprise Communities (EZ/EC) Program of the U.S. Department of Housing and Urban Development (HUD) is designed to encourage comprehensive planning and investment aimed at the economic, physical, and social development of the neediest urban and rural areas in the United States. As such, the HUD initiative represents a major element in the federal government’s community revitalization strategy. Thus far, HUD has made EZ/EC program awards to a total of 72 urban communities. Individual communities design their own strategies, but each local effort is expected to incorporate four key principles in its strategic plan: economic opportunity, sustainable community development, community-based partnerships, and a strategic vision for change. In addition, the target community and its residents are expected to be full partners in the process of developing and implementing the strategic plan. Federal financial assistance and support for the local EZ/EC efforts are provided in a variety of forms, including flexible social services funds, wage tax credits and tax deductions for participating businesses, tax-exempt bond financing, and special Economic Development Initiative (EDI) grants. The EZ/EC program also recognizes that communities cannot succeed with public resources alone, and therefore emphasizes the leveraging of additional private and nonprofit support.

Like many other comprehensive community initiatives (CCIs), both the Jobs Initiative and the EZ/EC program are intended to improve the conditions and outcomes of disadvantaged residents, largely by expanding economic opportunities. The Jobs Initiative clearly reflects a targeted approach to economic development through activities to improve employment connections for disadvantaged job seekers.1 The EZ/EC program aims for more general transformation of economic conditions in the specified zone areas. All EZ/EC activities are concentrated within the specified zones, while Jobs Initiative sites attempt to improve employment connections throughout the region.

Within the general guidelines established by their funders, both the Jobs Initiative and the EZ/EC program exhibit broad variations among sites in the strategies and activities being pursued. In all cases, however, the local efforts represent complex, multifaceted interventions. All sites are pursuing saturation models, in that all residents of a zone (in the case of the EZ/EC program) or all disadvantaged job seekers in the region (in the Jobs Initiative sites) are expected to realize benefits from the initiative activities over the course of the interventions.

Abt Associates was selected as the prime evaluation contractor by the Annie E. Casey Foundation for the Jobs Initiative evaluation and by HUD for the EZ/EC assessment; Scott Hebert is serving as project director for both studies.2 For the Jobs Initiative evaluation, Abt is teaming with the New School for Social Research. For both evaluations, Abt has also contracted with local research affiliates in each of the intensive study sites to provide ongoing data collection capacity and local insight.

The Jobs Initiative evaluation, scheduled to run for the duration of the eight-year initiative, is intended to assess the Jobs Initiative’s effects at each site and across sites in the following areas:

The EZ/EC study, more formally known as the Interim Outcomes Assessment, is scheduled to take place over five years and has three principal objectives:

The two national initiatives pose complex challenges to evaluators. As saturation models whose strategies cut across multiple systems, they do not lend themselves to traditional evaluation methods involving randomized control groups or comparison groups to establish counterfactuals. In addition, although funded within national initiatives, all sites have used a bottom-up approach to program design, producing agendas of activities and objectives that are dependent on unique local conditions. Designing an appropriate cross-site evaluation framework with rigorously specified and consistent outcome measures is therefore very difficult (Hollister and Hill, 1995; Giloth, 1996). Recognizing these challenges, the funders sought evaluation frameworks that would blend traditional and nontraditional research methods, with significant evaluation resources devoted to applying a theory of change approach.

This paper discusses the experiences to date of the Abt Associates evaluation teams in their application of the theory of change approach in assessing these two national initiatives. Although the results are preliminary, we believe they highlight a number of key methodological issues and offer some insights into addressing the particular challenges of CCI evaluation.

Challenges in Conducting a Multisite CCI Evaluation

Unlike evaluations that apply the theory of change approach to a single site, the national evaluations of the Jobs Initiative and the EZ/EC program are using this approach on a cross-site, multiple-community basis. Specifically, these evaluations are employing theory of change methods in all six Jobs Initiative sites and 18 of the 72 EZ/EC program sites to produce both site-specific and cross-site findings. Working across so many sites raises important issues regarding staffing and allocation of resources, articulating local theories of change, and reconciling discrepancies within the evaluation framework.

Staffing and Allocating Resources

A theory of change approach typically requires detailed articulation and tracking of micro-stages in the intervention, entailing the commitment of fairly substantial research staff resources. Our mandate to study a large number of sites geographically distant from one another limits the amount of time that the core national evaluation team can spend on-site. Accordingly, the Abt national evaluation teams have recruited local research affiliates to conduct many of the data collection activities at each site. These researchers reflect a broad range of academic disciplines and provide the national evaluators with invaluable local expertise regarding the political, economic, and social contexts of the interventions. Their proximity to the sites also extends the evaluation’s on-site presence, essential for a theory of change approach.

For the most part, the local research affiliates have been recruited from the faculty of local universities, although independent consultants are being utilized at some sites. Typically, the local research team consists of one or two individuals, sometimes supported (where the affiliates are faculty members) by graduate students acting as research assistants. In all cases, the individuals selected to serve as local research affiliates had demonstrated interest in and experience with the issues being addressed by the local initiative. In general, however, the local research affiliates had little or no previous experience in applying the theory of change approach. Consequently, at the commencement of each study the local affiliates were brought together for a two-day training conference on the theory of change approach. In addition to ongoing guidance provided through memoranda, internet list servers, and conference calls, periodic cross-site meetings are held to identify and respond to issues that arise in the application of the theory of change approach.

Perhaps the most significant ongoing research issue, however, is the limited amount of resources available for local research affiliates. Budget constraints have limited the national evaluations to an average of less than eight staff hours per week per site for local research affiliate work. Although several local teams have been able to supplement their budgets with funding from other sources, all teams constantly face hard choices in setting priorities for data collection activities.

In addition, using a large group of affiliates has brought to light some difficulties that traditionally trained research professionals may experience in understanding and accepting the theory of change approach. Some local affiliates have raised issues about the "hybrid" nature of the approach and its failure to separate process and impact analyses. Others have been concerned about the lack of a clear counterfactual. Most important, many of the researchers have had to overcome their belief that they must maintain an "arm’s-length relationship" with local stakeholders.

On the other hand, it is important to acknowledge that, while many of its features are nontraditional, the theory of change approach relies heavily on traditional data collection methods. Moreover, the findings it yields can be buttressed by more traditional analysis frameworks. For example, in the national evaluation of the Jobs Initiative, we will be conducting pre- and post-intervention surveys of project participants and assessing longitudinal data from administrative databases, in addition to interviewing key stakeholders, observing governance meetings and other project activities, and conducting focus groups with community leaders. Similarly, in the EZ/EC national evaluation, we will supplement the data collected through interviews and focus groups with interrupted time series analyses of business activities in the zone, neighborhoods contiguous to the zone, and comparison areas in the city, and with pre- and post-intervention surveys of a random sample of business firms.
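To make the interrupted time series element concrete, the sketch below fits the kind of segmented regression such analyses typically use: a baseline trend, a level shift at the intervention point, and a change in trend afterward. It is a minimal illustration only; the data are synthetic, and the variable names (businesses, post, time_since) are our own rather than anything drawn from the actual evaluation designs.

```python
# Hedged sketch of an interrupted time series (segmented regression) analysis.
# All data here are synthetic; nothing is drawn from the EZ/EC evaluation.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
months = np.arange(48)                             # 24 months pre, 24 post
post = (months >= 24).astype(int)                  # 1 after zone designation
time_since = np.where(months >= 24, months - 24, 0)  # months since designation

# Synthetic monthly count of active businesses in a hypothetical zone.
businesses = (200 + 0.5 * months + 10 * post + 1.5 * time_since
              + rng.normal(0, 5, size=48))

df = pd.DataFrame({"businesses": businesses, "month": months,
                   "post": post, "time_since": time_since})

# Segmented regression: baseline trend, level shift at the intervention,
# and change in trend after the intervention.
model = smf.ols("businesses ~ month + post + time_since", data=df).fit()
print(model.summary().tables[1])
```

In a specification of this kind, the coefficient on post estimates the immediate level shift in the indicator at zone designation, and the coefficient on time_since estimates the change in trend; in an actual analysis, the comparison areas would enter as additional series against which the zone's trajectory is contrasted.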

Articulating Local Theories of Change

Both evaluations are currently at the stage of articulating theories of change with the individual sites. Working with stakeholders in 24 separate sites provides a unique opportunity to explore common patterns and variations in the theory articulation process and to identify techniques that may be useful in facilitating that process.

Introducing the Approach and Its Terminology

In both evaluations, we are working with sites that have either completed a formal planning process or are currently engaged in a process independent of the theory of change exercises. This situation has both advantages and disadvantages. On the positive side, the existing planning documents give the evaluation team a good place to start. Moreover, to the extent that the documents have been formally approved by governing boards, they present "official" consensus positions of influential stakeholders, at least in the short term. The planning documents specify in some detail the key problems to be addressed and the ultimate objectives the sites are trying to achieve; some also describe the major strategies to be implemented in pursuit of those objectives.

Yet applying a theory of change approach after substantial planning has already taken place has a negative side. First, co-constructing the theories may be perceived by stakeholders as duplicating the earlier planning process and adding unnecessary burdens on the local implementers. Alternatively, the theory of change exercise may be viewed as a critique of the existing planning process and documents, causing resentment or frustration among the stakeholders. Our experience suggests that existing planning documents vary widely in completeness and quality and that stakeholders may need to define expected performance milestones more precisely for the purposes of the evaluation. Stakeholders may not acknowledge the limitations of the previous planning activities and may resist the added accountability implied by the further specification of performance measures.

Although they are by no means guarantees of success, we have found several techniques to be reasonably effective at reducing stakeholder resistance. First, we believe it is best to avoid using jargon, including the term "theory of change," when discussing the approach with stakeholders. Specifically, "theory of change" seems to imply a level of abstraction that many stakeholders find objectionable. Rather, we use the term "pathway of change," which clarifies the importance of articulating elements along the pathway, specifying their sequence and timing, and tracking the actual evolution of the intervention. We also emphasize that stakeholders will help define the assessment measures. We explain to local stakeholders, for example, that "the evaluation will be based on the goals and activities that you feel are most important," and that evaluators will work with them closely to identify "a step-by-step description of what you hope to accomplish and how you hope to accomplish it."

Working with Stakeholders

Because of the importance of stakeholders as partners in specifying the evaluation framework, a fundamental question for any theory of change evaluation is how broadly to define the term "local stakeholders." Resource constraints play a major role in setting limits on how broadly to cast the net. Further, although a core group of individuals generally emerges as key stakeholders, members of that group may want to specify who the additional stakeholders will be and the sequence and venues in which the evaluation team will talk to them. In cases where there has been a change in the leadership or a struggle over conflicting visions for the initiative, the core group may express very strong opinions about individuals who should not be considered stakeholders. In a traditional evaluation framework, where the independence of the evaluator is emphasized, these issues are somewhat easier to resolve than in a theory of change approach, in which the evaluator must establish a close working relationship with the stakeholders. The process of identifying stakeholders must be approached thoughtfully to avoid alienating key local actors. At the same time, the evaluator must take pains to ensure that the final specification of stakeholders is broad enough to include all significant stakeholder groups, including some individuals who may express a theory of change that differs appreciably from that articulated by the local "establishment."

In initiatives that have already gone through a formal planning process, the evaluator may not be able to reach all the key stakeholders who shaped the initial plan. Moreover, as the composition of key stakeholders changes over time, there may also be changes in the consensus concerning the intervention’s purposes and activities; therefore, the current theory of change may be substantially different from the theory presented in the formal planning documents. This situation is especially common in initiatives that have experienced a turnover in leadership.

Some changes in the composition of stakeholders are probably inevitable over the course of most initiatives. Consequently, the evaluator must recognize the potentially tentative nature of the articulated theory, especially if it was surfaced during the early stages of the initiative. Evaluators need to document shifts in leadership and initiative design and attempt to assess the reasons for these changes. In cases where new leadership has abandoned an initial "formal" plan that was developed prior to the introduction of the theory of change approach, it seems logical for the evaluator to base the progress measurement plan on the theory that is actually being implemented. However, to the extent that a competing theory of change has significant support among stakeholders, the measurement plan should ideally seek to assess progress relative to this alternate pathway as well.

Describing the Theory of Change

The process of surfacing the local theory of change can occur in a number of different ways, including individual interviews or focus groups with stakeholders. As the basic theory begins to emerge, the evaluator can prepare a written, schematic description of the theory to be shared with stakeholders for confirmation. The evaluator can then work with stakeholders to refine the theory by filling in gaps or addressing apparent inconsistencies. Alternatively, the evaluator can prepare a general description of what the local theory might look like, based on written materials from a pre-existing planning process or the evaluator’s prior experience with similar efforts. The stakeholders can review this initial description and provide detailed feedback that the evaluator can use to correct and refine the description. The revised theory is then reviewed again by stakeholders for acceptance or further refinements.

A benefit of the first approach is that starting with a clean slate reduces the risk that evaluators will impose their own biases in theory formulation. This approach, however, can be a very time-consuming, iterative process for both evaluators and stakeholders. The second approach can be much more efficient, but the evaluator must be sensitive to the potential for unduly influencing how the stakeholders frame the local theory.

Because of limited on-site research resources and a desire to minimize the demands on stakeholders, the Jobs Initiative and EZ/EC program evaluations have taken the second, less staff-intensive approach to the articulation of local theories of change. To help offset stakeholder resistance to what might be perceived as duplication of the prior planning activities, we have been very explicit about using the existing planning documents to frame the initial articulation of the theory of change. If these documents are based on a sound planning process, they can supply a good portion of the information needed to complete a preliminary description of the underlying theory of change. In fact, even incomplete, illogical, or otherwise problematic documents should be reflected prominently in the initial theory description, both to demonstrate that the evaluator sees value in the previous planning efforts and to start the process from a vantage point familiar to the stakeholders.

In cases where the planning documents are problematic, the next step—and potentially a very difficult one—is getting the stakeholders to acknowledge the limitations of the existing plans and accept the theory of change approach as a potentially effective way to move beyond those limitations. The evaluator can stress the value of the theory of change approach as a strategic planning tool to review, extend, or make more explicit the existing plans. The evaluator can also explain that the evaluation will provide the site with timely data for self-assessment and mid-course corrections. In this way, stakeholders can begin to appreciate the potential of the approach to add value to their endeavors, rather than viewing the exercise as redundant or threatening.

Picturing the Theory of Change

The choice of how best to summarize the intervention pathway visually depends on the complexity of the underlying relationships in the theory and, more important, the compatibility of the representation with the learning styles of the stakeholders: some individuals find flowcharts easy to interpret, and others do not. In general, schematic representations have proven useful in summarizing the theory elements and their temporal relationships, especially when supplemented with narrative descriptions.

The underlying assumptions and hypotheses about the logical relationships among theory elements should be stated explicitly, since these relationships are at the heart of what the evaluation will be testing. (For example, do improved cognitive and interpersonal communication skills help new employees to adapt better in the workplace, and therefore lead to increased retention?) In a flowchart, these hypotheses are often reduced to arrows connecting the various elements. To avoid this problem, we number the arrows to correspond with narrative descriptions of the assumptions or hypotheses they represent.
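As a minimal sketch of this numbering device, assume each arrow in the pathway diagram is stored as a record carrying its number, its endpoints, and the hypothesis it stands for; a printed legend can then accompany the flowchart. The elements and hypotheses below are invented examples that loosely echo the retention hypothesis above, not content from either initiative.

```python
# Hedged sketch: numbered flowchart arrows keyed to narrative hypotheses.
# All element names and hypothesis text are invented for illustration.
from dataclasses import dataclass

@dataclass
class Arrow:
    number: int       # label printed on the flowchart arrow
    source: str       # upstream theory element
    target: str       # downstream theory element
    hypothesis: str   # the assumption the arrow represents

pathway = [
    Arrow(1, "communication-skills training", "better workplace adaptation",
          "Improved cognitive and interpersonal skills help new employees "
          "adapt to workplace norms."),
    Arrow(2, "better workplace adaptation", "increased job retention",
          "Employees who adapt well to the workplace stay employed longer."),
]

# Print the legend that accompanies the flowchart.
for arrow in pathway:
    print(f"({arrow.number}) {arrow.source} -> {arrow.target}: "
          f"{arrow.hypothesis}")
```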

Encouraging a Broad and Strategic Approach

Stakeholders have a tendency, perhaps reinforced by the use of existing planning documents, to focus on specific projects or activities when defining the initiative pathway rather than on the strategies underlying the action steps. This tendency produces several undesirable effects. First, if the initiative involves a large number of distinct projects or activities (an EZ site, for example, can have as many as 100 separate projects), the theory description frequently gets bogged down in excessive detail. Moreover, by concentrating on projects or activities at the expense of broader strategic questions, the key milestones along the pathway often end up being defined largely in terms of inputs, events, and outputs rather than outcomes.

In addition, many of the local interventions are taking place in communities where other initiatives are already addressing similar problems and may even share common strategies and objectives. For example, in several study sites, the EZ/EC program is seen as one element in a larger movement to address decline in the target neighborhoods. In Seattle, the Jobs Initiative strategies have been adopted by the city’s welfare-to-work effort. In such cases, the intervention must be examined in the larger context if its contribution is to be properly understood. This larger context can easily be overlooked if stakeholders and researchers become too narrowly focused on specific program activities and projects when attempting to articulate the local theory of change.

It is very important that researchers urge local stakeholders to think strategically in articulating their theories. For example, an official in one community explained that the local program was implementing a one-stop shop as part of its initiative because it represented the current "state of the art" thinking about promoting economic development in a distressed area. This view suggests that the site had simply looked at what other communities were doing for the latest "in vogue" approaches, rather than examining the underlying problems facing the community. In such a case, even extensive probing by the evaluator may fail to uncover a detailed theory of change. In most instances, however, the intervention design will be found to be based on a complex if unstated set of assumptions regarding problems, approaches, action steps, outcomes, and the relationships among these elements. The challenge to the evaluator is how to tease out these implicit hypotheses through discussion with the stakeholders so that the assumptions can be examined and used to frame the overall theory or pathway of change.

Specifying Interim Activities and Outcomes

We have found it helpful to begin the process of articulating interim steps by getting local stakeholders first to confirm the ultimate objectives they hope the intervention will achieve. This can be done either by asking the stakeholders to describe those ultimate objectives or by presenting the stakeholders with the evaluator’s impressions regarding the ultimate objectives and having the stakeholders corroborate or revise them. In most cases, stakeholders can articulate the long-term goals of the local initiative, at least in qualitative terms, and specify the initial steps they feel the initiative should take. Once those beginning and end points have been made explicit, the evaluator and stakeholders can specify in reasonable detail the middle stages of the pathway. This middle period is by far the most challenging aspect of the articulation process, as it focuses on the period about which stakeholders’ views are most vague.

In some cases, it has proven useful to ask the stakeholders to work backwards from the long-range objectives, specifying the interim outcomes they would expect to see and describing the types of activities (and their sequence) to achieve those outcomes. In other cases, stakeholders may find it easier to speculate on what activities should follow the initial action steps, and then to try to make explicit the interim outcomes they would expect to see along the process. Accordingly, it is crucial that the evaluator remain flexible regarding moving from activities forward to outcomes, or from outcomes backwards to activities, depending on which approach the stakeholders find most helpful.

It should be acknowledged that, even if the evaluator and stakeholders go through this exercise very systematically, few sites will be able to provide much detail regarding the expected pathway beyond the next year or two. Thus, for a long-term intervention such as the Jobs Initiative (with a projected duration of at least eight years) or the EZ/EC program (where major outcomes may not be discernible until the ten-year point), the evaluator must recognize that it is unrealistic to believe that stakeholders can articulate the entire theoretical pathway. In all likelihood, the particulars of the overall theory or pathway of change will need to be articulated in waves over time.

Reconciling Practical and Theoretical Differences

As the theories or pathways of change are being articulated, the evaluator may need to reconcile important differences in the theories themselves and in practical aspects of the evaluation. In the two national evaluations, we have made several strategic choices to establish consistency within sites, among sites, between sites and funders, and over time.

Establishing a Consistent Level of Detail

Defining the level of data to collect, both to describe the theories fully and to track the actual experience of the interventions, is a difficult task in a multisite evaluation. Resource constraints play a large role in determining how much and what kinds of data can be collected, as does the tolerance of stakeholders for the data collection process. For example, even if the evaluator manages to collect minutely detailed information in the theory articulation process, stakeholders may become alienated and withhold future cooperation if they see the resulting theory description as too complicated and difficult to understand. To minimize the data collection burden and make the process meaningful for stakeholders, we believe the evaluator should work with stakeholders to capture the essence of the local theory of change through the initiative’s key underlying assumptions and principles, implementation steps, and expected outcomes.

When applying the theory of change approach in a multisite context, the evaluator faces an added degree of difficulty, in that the complex process of theory articulation must follow a common framework to permit cross-site comparisons. Both to facilitate discussions with stakeholders and to promote comparability, we are using such a framework for initial exploration of the theories underlying the interventions. For each site, we are working with stakeholders to explicate the following elements:

- the key problems the initiative seeks to address;
- the assumptions and principles underlying the intervention design;
- the major strategies and implementation steps; and
- the interim and long-term outcomes the initiative is expected to produce.

Addressing Multiple Theories, Illogical Assumptions, and "No Theory" Situations

As we suggested above, the process of working with stakeholders to articulate the theory of change may sometimes reveal that the stakeholders collectively hold multiple theories regarding what the initiative is about and how it should proceed. When this situation arises, the evaluator must consider when to try to facilitate a consensus among stakeholders and when to track multiple theories.

When differences among theories seem relatively minor, we feel it is helpful to bring these to the attention of the stakeholders, who can then directly consider the differences and their implications. This process can clarify distinctions in how various stakeholders perceive facets of the initiative and, by sensitizing members of the stakeholder group to differing viewpoints, improve communications among them. When differences are made explicit, the stakeholders can also collectively decide whether it is important to reach an "official" consensus position or, alternatively, to accept a degree of variation in their views regarding certain aspects of the initiative.

When distinctions between the theories are significant, the clarification and resolution process can be much more difficult. In some instances, the stakeholders may be resistant to making the conflicting theories explicit, fearing that the process will threaten fragile relationships. In other cases, the stakeholders may be willing to examine explicitly the conflicting theories but unable to decide among them. As Weiss (1995) has argued:

[A] community initiative may work through a variety of different routes. There is no need to settle on one theory. In fact, until better evidence accumulates, it would probably be counterproductive to limit inquiry to a single set of assumptions. Evaluation should probably seek to follow the unfolding of several different theories about how the program leads to desired ends. It should collect data on the intermediate steps along the several chains of assumptions and abandon one route only when evidence indicates that effects along that chain have petered out.

In addition to deciding how to reconcile multiple theories among stakeholders, our experience suggests that the evaluator must also be attentive to distinctions between local stakeholders’ theories and the theories held by funders. Program guidelines and contractual conditions associated with the initiative’s funding may guide early stakeholder descriptions of the intervention, especially those presented in proposals, planning documents, and other reports to the grantor.3 As a result, the evaluator may be tempted to superimpose the grantor’s theory on the local initiative. Despite superficial appearances of consistency between stakeholders’ and grantor’s theories, however, the evaluator must be careful to test whether such apparent correspondence is real; once the elements of the initiative have been specified more precisely, the evaluator may discern fundamental differences between the local stakeholders’ and the grantor’s assumptions. Also, the stakeholders’ and grantor’s theories are likely to diverge over time, as the site deals with unique local circumstances in the process of implementation and refines its theory accordingly.

The evaluator may also be required to decide how to handle apparently illogical assumptions embedded in the stakeholders’ theory. Although the theory of change approach as defined by Weiss (1995) and Connell (1997) is predicated on testing the stakeholders’ vision of how the initiative is expected to work, the value of the approach depends on the theory being plausible, measurable, and testable. If stakeholders’ hypotheses regarding the relationships between initiative elements—and particularly between planned actions and expected outcomes—are wholly without logical basis, it can be argued that the approach will have little merit as part of an impact evaluation since the outcome is largely preordained. On the other hand, there may be situations where, although the logical connections between elements are not immediately obvious, an evaluator’s probing can help stakeholders articulate a stronger case for the potential of the planned strategy. Accordingly, in the theory articulation process, the evaluator should examine even highly speculative hypotheses carefully: these are the situations where the most unexpected, and therefore perhaps the most important, lessons may emerge.

A final possible dilemma for the evaluator is the initiative that appears to have no underlying theory to guide the intervention. This situation may arise when the intervention design consists of a "shopping list" of activities with no apparently unifying strategic elements. However, our experience suggests that, even in these situations, careful discussion with the stakeholders often reveals key assumptions that have framed the initiative design and form a rudimentary theory of change, albeit a poorly developed one. When the intervention consists of disparate activities conducted in different neighborhoods, for instance, the evaluator may uncover a local theory of change governing how the activities were selected rather than a programmatic focus of the activities themselves: that is, the community is pursuing an empowerment strategy that allows each neighborhood to select activities that its own local residents see as most needed.

Applying Consistent Standards over Time

Revisiting the theory of change over time raises two questions that the evaluator must answer. First, how far should the evaluator go in pressing stakeholders to articulate details of the pathway beyond the next year or two, since at some point such conjecture becomes highly speculative? Second, what limits, if any, does the evaluator need to impose on stakeholders over time for revising their theory or pathway to reflect the actual experience of the intervention? The latter question reflects what appears to be a basic tension regarding the use of a theory of change approach for formative evaluations versus its potential value in performing impact assessments (Patton, 1980).4

Some proponents of theory-based evaluation, such as Weiss (1995), have described the provisional nature of the underlying hypotheses that stakeholders put forward. Inherent in this view, it seems, is a recognition that stakeholders will refine their theory on the basis of the intervention experience. In fact, under this conceptualization, the explicit revisiting and revision of the theory appears to be one of the basic methods through which stakeholders derive lessons regarding possible improvements and the evaluator learns about community change processes. Therefore, for evaluators who want to use the theory of change approach for formative or process evaluation purposes, the repeated reshaping of the theory or pathway over the course of the intervention may not necessarily represent a methodological concern.

On the other hand, evaluators who wish to use the approach to conduct impact assessments may find that revisions to the theory raise a very thorny methodological issue. Although the theory of change approach does not purport to solve the problem of the counterfactual, it can be argued that, in order to build a case for causation between intervention and outcomes, the key hypothesis must be identified as clearly as possible at the beginning of the initiative and tested to determine the effect of the intervention. According to this view, the ability to infer attribution will be directly dependent on the degree of consistency found between the original hypothesis and reality as the intervention unfolds. Accordingly, while the stakeholders may revise their theory over time, the evaluator is primarily interested in the original hypothesis.

In the Jobs Initiative and EZ/EC evaluations, because we are trying to use the theory-based approach to assess both process and impacts, we have attempted to reconcile these two schools of thought. Consequently, while recognizing the limits to which the stakeholders can meaningfully specify details far into the future, we have attempted to get them to articulate the basic elements of their overall theory of change as early as possible in the intervention. The basic elements that we are working to delineate at or near the beginning of the local initiatives include the major hypotheses underlying their pathways and the key interim outcomes expected at each stage of the intervention; together, these elements represent a very abridged description of the entire theoretical pathway. In addition, we have asked them to specify in as much detail as possible the key resources, activities, and events expected for the upcoming year.

On an annual basis, we intend to revisit the theory with the stakeholders at each site to obtain specific details regarding resources, activities, and events for the successive year. In addition, we will ask stakeholders annually to identify refinements that they wish to make in the basic pathway elements, both those that have already been encountered and those anticipated in the future. In this way, our theory articulation approach will provide us with the "original" theory or pathway, as well as detailed data on how and why the pathway has been refined over time.
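A hedged sketch of how such a record might be kept: the structure below stores each pathway element with its original specification and a list of dated revisions, preserving both the "original" theory and the reasons for later refinements. The field names and example content are our own assumptions, not the evaluations' actual instruments.

```python
# Hedged sketch of record keeping for a theory articulated "in waves":
# each pathway element keeps its original specification plus dated revisions.
# Field names and example content are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Revision:
    year: int         # initiative year in which the refinement was made
    change: str       # what was refined in the pathway
    rationale: str    # stakeholders' stated reason for the refinement

@dataclass
class PathwayElement:
    name: str
    expected_outcome: str
    year_specified: int                  # when the element was first articulated
    revisions: List[Revision] = field(default_factory=list)

element = PathwayElement(
    name="jobs policy network",
    expected_outcome="regional employers adopt reformed hiring practices",
    year_specified=1,
)
element.revisions.append(Revision(
    year=2,
    change="milestone deferred by one year",
    rationale="employer recruitment proved slower than initially assumed",
))
print(element)
```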

Developing a Data Collection and Measurement Plan

As part of the national evaluations, we have been developing research designs for each site. We have approached the development of these designs as two related steps: articulating the basic theory of change for the site and developing a data collection and measurement plan. The purpose of this plan is to identify methods for collecting and analyzing data that can help the evaluator and stakeholders determine the progress achieved by an intervention relative to the ultimate outcomes desired for individuals, institutions, and the community and how actual experience compares with the theory held by stakeholders. Given this purpose, data to be collected in tracking an initiative might include the following categories:

- the resources committed to the initiative;
- the activities actually undertaken;
- the events that occur, including their sequence and timing;
- changes in the external context within which the initiative operates; and
- indicators of interim and long-term outcomes.

The first three items permit the evaluator to determine whether the initiative activities were implemented consistently with the stakeholders’ initial assumptions. The fourth item identifies new external factors that may affect the continuing validity of the assumptions underlying the theory of change. The outcome indicators—the last category—may be most challenging to identify and collect, but they are also the most essential for determining whether the activities and strategies are having the desired results.

Aggregating Activities and Outcomes

While the general categories of data to be collected are likely to be fairly consistent across theory of change assessments, the evaluator must determine the appropriate level of detail and units of measurement for each application. In developing a local research design, the evaluator will need to work with stakeholders to determine the parameters for appropriate generalizations. For example, few evaluations will have the resources to track all aspects of an intervention with many distinct activities or projects. Under such circumstances, the evaluator and stakeholders should agree on suitable interim outcome measures for groupings of related projects or activities. The evaluator and stakeholders must therefore work together to articulate both a theory of change and a measurement plan. A principal challenge in this process is finding a framework for generalizing the measurement process without losing the unique characteristics of the local intervention or excluding factors that may ultimately determine success or failure.

When confronted with a vast array of planned activities, the evaluator and stakeholders may have considerable difficulty in establishing priorities. In the national evaluation of the EZ/EC program, it has sometimes been useful to encourage local research affiliates and stakeholders to "follow the money" as a way to sort through the complexity. Even when an initiative encompasses a diverse assortment of strategies and activities, its fundamental priorities are usually reflected in the allocation of funds and other resources among the components. By looking at how resources have been assigned, it is often possible to identify the major initiative strategies, group the activities that relate to those strategies, and define a limited set of meaningful outcome measures.

In using this example, however, we do not mean to imply that the allocation of resources is always an effective indicator of key strategies and outcomes in a complex initiative. It is offered merely as an illustration of one approach for aggregating activities. It would not work well for an initiative whose major strategies concerned forming partnerships, for example.
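With that caveat noted, the sketch below shows the mechanics of the heuristic: group an initiative's projects by the strategy they serve, total the committed funds, and rank strategies by their share of resources. The project list is fabricated purely for illustration.

```python
# Hedged sketch of the "follow the money" heuristic: rank strategies by their
# share of committed funds. The project list is fabricated for illustration.
from collections import defaultdict

projects = [
    {"name": "storefront loan fund",  "strategy": "business development",  "funds": 2_500_000},
    {"name": "facade improvements",   "strategy": "business development",  "funds": 400_000},
    {"name": "job readiness classes", "strategy": "workforce development", "funds": 1_200_000},
    {"name": "youth mentoring",       "strategy": "human services",        "funds": 300_000},
]

# Total committed funds by strategy.
totals = defaultdict(int)
for project in projects:
    totals[project["strategy"]] += project["funds"]

# Rank strategies by their share of all resources.
grand_total = sum(totals.values())
for strategy, amount in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{strategy}: ${amount:,} ({amount / grand_total:.0%} of funds)")
```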

Specifying Performance Measures

Once the key outcomes have been identified, the evaluator must specify the anticipated outcomes in measurable, and preferably quantifiable, terms. It is not enough to say that crime will decrease in the zone; rather, the theory and research design must also specify by how much, over what period, and how those changes are to be measured. The assessment is more likely to be viewed as relevant and meaningful when stakeholders are involved in this process, but their participation is not without potential shortcomings.
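Before turning to those shortcomings, here is a minimal sketch of what specification in measurable terms can look like: a performance measure recorded with a baseline, a target, a period, and a data source. All names and values below are hypothetical.

```python
# Hedged sketch of a performance measure specified in measurable terms:
# not merely "crime will decrease" but by how much, over what period, and
# measured how. All field values are hypothetical.
from dataclasses import dataclass

@dataclass
class PerformanceMeasure:
    indicator: str       # what is measured
    baseline: float      # value at the start of the intervention
    target: float        # value the stakeholders expect to reach
    period_years: int    # time allowed to reach the target
    data_source: str     # how the indicator will be measured

measure = PerformanceMeasure(
    indicator="Part I crimes per 1,000 zone residents",
    baseline=85.0,
    target=68.0,
    period_years=5,
    data_source="police department incident records",
)

pct_change = (measure.target - measure.baseline) / measure.baseline
print(f"Target: {pct_change:.0%} change over {measure.period_years} years")
```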

Under the theory of change approach, there is tacit acknowledgment that stakeholders can and should revise their theory and actions over the course of an intervention in light of the implementation experience and changing conditions. Embodied in this principle is a recognition that most interventions will involve some missteps, and that the important thing is not that such mistakes occur but how the initiative learns from its mistakes. Accordingly, from the stakeholders’ perspective, the theory of change approach is far less judgmental than many other evaluation frameworks. Even so, some stakeholders will still be tempted to define the evaluation framework to ensure that the initiative will not be seen as "failing." For example, some stakeholders may want to set performance goals at a low level or frame milestones in terms of inputs, activities, or outputs rather than outcomes.

In such situations, what is the evaluator expected to do? One technique for addressing unreasonably low performance measures is to walk the stakeholders through the articulated pathway, questioning explicitly whether the milestones being proposed can reasonably be expected to lead to the long-term objectives. Ideally, this exercise will encourage the stakeholders to establish more appropriate measures of progress. Ultimately, however, despite the inherently collaborative nature of the theory of change approach, the evaluator may need to maintain some independence to set performance measures, even if some stakeholders do not feel completely comfortable with them.

Related to the task of specifying performance measures is the question of determining the initiative’s expected differential impact: that is, sorting out the effects from the outcomes. In addition to the initiative-related activities, other local efforts and factors will inevitably influence the indicators that the evaluation is monitoring. Consequently, the evaluator will need to separate the potential impact of the initiative from those other factors. Therefore, in working with the stakeholders to establish clear performance measures, the evaluator will need to get them to address the question of how much difference the intervention is expected to make. To do this, they will need to speculate on the magnitude of change, if any, they would expect to see in the absence of the intervention, and then estimate the differential amount of change they anticipate will result from the intervention.
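Continuing the hypothetical crime figures from the earlier sketch, the arithmetic below illustrates this logic: the net effect claimed for the initiative is the change expected with the intervention minus the change stakeholders would anticipate without it. The numbers are invented for illustration.

```python
# Hedged sketch of the differential-impact arithmetic described above.
# All numbers are invented, continuing the hypothetical crime example.
baseline_rate = 85.0     # indicator value at the start of the intervention
expected_without = 80.0  # stakeholders' estimate at year 5 absent the initiative
expected_with = 68.0     # stakeholders' target at year 5 with the initiative

secular_change = expected_without - baseline_rate  # -5.0: background trend
total_change = expected_with - baseline_rate       # -17.0: gross change observed
differential = expected_with - expected_without    # -12.0: net effect claimed

print(f"Background change expected anyway: {secular_change:+.1f}")
print(f"Total change expected: {total_change:+.1f}")
print(f"Differential impact attributed to the initiative: {differential:+.1f}")
```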

Organizing Data Collection

When the data to be collected have been sufficiently delineated, including the measures for performance indicators, the evaluator must determine appropriate methods for collecting those data. Data collection methods vary according to the nature of the initiative being studied. For our evaluations of the EZ/EC program and the Jobs Initiative, for example, we are using several methods:

- interviews with key stakeholders;
- focus groups with community leaders;
- observation of governance meetings and other initiative activities;
- pre- and post-intervention surveys of project participants and business firms; and
- analysis of longitudinal data from administrative databases and other secondary sources.

Because the resources of the national evaluations are limited, the cooperation of local initiative staff has been essential to the data collection effort. All sites are conducting some form of self-assessment independent of the national evaluation. By familiarizing local initiative staff with the theory of change approach, we have been able to demonstrate that the national evaluation data are potentially valuable to their ongoing monitoring and self-assessment activities. This recognition has led the sites to agree to coordinate some data collection activities, thus allowing the national evaluations and the local staffs to benefit from both sets of data.

Revising the Design

A final point regarding development of the data collection and measurement plan relates to the revision of the research design. Under most traditional evaluation methods, the design is fixed at the beginning of the research effort and generally undergoes little revision over the course of the evaluation. With the theory of change approach, however, the stakeholders’ theory can change over time. Therefore, to give the evaluation the capacity to track revisions in the theory and new outcome measures that result from such changes, the evaluator must be prepared to make appropriate adjustments in the research design.

In the national evaluations, as noted above, we have found that local stakeholders can describe their theories in detail only a year in advance. As a result, we are planning to meet with the stakeholders on an annual basis to fill in details of the sequence and timing of activities planned for the upcoming year and identify changes in the long-term pathway of change. Accordingly, we will need to update the research design annually to ensure that the research effort continues to be directed toward the most appropriate measures.

Challenges in Completing the Analysis

The two national evaluations are in relatively early stages, and therefore our focus to date has been primarily on theory articulation and data collection, rather than on analysis functions. Nonetheless, we can anticipate some challenges we are likely to encounter as we begin to conduct the analysis. The large number of sites will produce some challenges, while other challenges may be more generally characteristic of theory-based evaluation.

Identifying Common Patterns from Multiple Data Sources

Both evaluations are designed to glean cross-site lessons from site-specific theories and interventions. In each, we hope to identify common patterns and lessons that can be discerned from the local initiatives and applied to similar efforts in the future. At first glance, the idea of cross-site analysis may seem antithetical to the theory of change approach, which emphasizes an evaluation framework unique to each site. What we hope to accomplish is a balance of site-specific findings, based on the unique character of each site, with cross-site findings, based on appropriate generalizations. The crux of the analytic challenge, then, is to accomplish generalizations that are true to each site’s experience. In fact, inherent in the theory of change approach is a mechanism that we hope can serve as an effective check to prevent distortion in the cross-site analysis. We believe that the evaluator’s periodic interactions with stakeholders to articulate and update the local theory can also be used to confirm the evaluator’s impressions regarding experiences that may be generalizable to other initiatives or communities.

Another analytic issue, and one that may be common to a variety of theory of change evaluations, is the task of bringing together information derived from multiple data collection methods to create a unified, coherent picture of the initiative’s unfolding. In principle, the triangulation that multiple data collection methods and sources make possible can result in a more complete and accurate analysis of the intervention. However, determining how much weight to assign to the respective data sources is often difficult. This may be a particularly thorny issue if the stakeholders’ interpretation of events is not supported by other data sources.

Attributing Cause

Perhaps the most difficult analytic problem for the theory of change approach relates to the issue of causal attribution. For purposes of impact attribution, the ideal is that the local theory is surfaced completely at the beginning of the initiative and the actual intervention experience matches the theory in all appreciable respects. It seems reasonable to assume, however, that very few applications are likely to resemble this ideal. Instead, most initiatives will likely show some congruence between initial theory and actual experience, but also some divergence.

To the extent that the theory is articulated in waves, where stakeholders’ experience can inform their theory specification for subsequent phases, the congruence between theory and reality is likely to improve. Practitioners of more traditional impact evaluation methods are likely to argue, however, that this "theory in waves" approach may be appropriate for framing hypotheses that relate solely to a future period but would invalidate any impact analysis if used to reframe the overall intervention pathway.

Other challenges in dealing with the issue of attribution include the need to examine alternative plausible explanations for the results that have been observed and, as previously noted, the need to attribute differential contributions when the initiative is occurring in an environment where other changes are also taking place. The sites in both evaluations, for example, are facing a major external factor in the form of welfare reform.

These attribution issues have traditionally been addressed most successfully through the use of experimental or quasi-experimental research designs that permit statistical tests to determine confidence levels. These methods cannot be applied readily to the interventions being studied, and thus we come full circle to our purpose in using a theory of change approach for these assessments. The theory of change approach cannot provide statistically generated confidence levels, but it can provide compelling, detailed descriptions of the unfolding of the interventions and an argument regarding the apparently logical connections among theories, activities, and outcomes. The approach can provide insights about which kinds of interventions appear to work under particular conditions, which do not, and—unlike many experimental designs—the likely reasons why.

Such descriptive arguments may not be convincing to researchers who see experimental or quasi-experimental methods as the only reliable approaches to impact analysis, but we believe they will be welcomed by staff of community organizations and other practitioners who are looking for guidance on potentially effective strategies. To the extent that our theory of change research designs use traditional methods, such as pre- and post-intervention surveys, those elements may enhance the credibility of the observations offered. At the very least, the theory of change approach will generate useful topics of inquiry, which can perhaps be tested later in a more controlled experimental framework.


Notes

  1. Not all Jobs Initiative activities are necessarily expected to result directly in benefits for job seekers. In fact, the jobs projects are largely seen as "vehicles for discovering the nature of reforms needed in existing public and private systems" (Annie E. Casey Foundation, 1995).
  2. Andrea Anderson also served on the Abt evaluation team for both studies through January 1998, when she left Abt to accept a position at the Aspen Institute in New York City.
  3. In multisite initiatives (like the EZ/EC program and the Jobs Initiative) whose sites are selected through a competitive process, the key principles underlying the grantor’s theory of change will normally be reflected in the application guidelines. Yet even a local foundation responding to an unsolicited proposal from a community group will generally have its own theory about how the process of change is expected to occur, which in turn will influence the unfolding of the initiative. Accordingly, it is essential for the evaluator to be aware of the grantor’s theory of change.
  4. Patton, citing Sanders and Cunningham (1974), explains that formative evaluations are "conducted for the purpose of improving programs in contrast to those evaluations which are done for the purpose of making basic decisions about whether or not the program is effective, and whether or not the program should be continued or terminated."

References

Annie E. Casey Foundation. 1995. Jobs Initiative: National Investor’s Outcome Outline.

Connell, James. 1997. "From Collaboration to Commitment: Rights and Responsibilities of Partners in Community-Change Initiatives." Paper presented at the Annie E. Casey Foundation Conference on Community Change Research and Evaluation.

Giloth, Robert. 1996. "Mapping Social Interventions: Theory of Change and the Jobs Initiative." Draft report.

Hollister, Robinson G., and Jennifer Hill. 1995. "Problems in the Evaluation of Community-Wide Initiatives." In New Approaches to Evaluating Community Initiatives: Concepts, Methods, and Contexts, ed. James Connell et al. Washington, DC: Aspen Institute.

Patton, Michael Quinn. 1980. Qualitative Evaluation Methods. Beverly Hills, CA: Sage Publications.

Sanders, J., and D. Cunningham. 1974. Techniques and Procedures for Formative Evaluation. Research Evaluation Development Paper Series No. 2. Portland, OR: Northwest Regional Educational Laboratory.

Weiss, Carol Hirschon. 1995. "Nothing as Practical as Good Theory: Exploring Theory-based Evaluation for Comprehensive Community Initiatives for Children and Families." In New Approaches to Evaluating Community Initiatives: Concepts, Methods, and Contexts, ed. James Connell et al. Washington, DC: Aspen Institute.

