Concepts, Methods, and Contexts
The Role of the Evaluator in Comprehensive Community Initiatives
The use of the term "evaluation" has undergone considerable expansion in its purpose, scope, and methods over the last twenty years. This expansion is reflected in the range of roles that evaluators currently play, or attempt to play, in carrying out their work: scientist, judge, educator, technical assistant, facilitator, documenter/historian and repository of institutional memory, coach, manager, planner, creative problem solver, co-learner, fund-raiser, and public relations representative. While the field has moved toward a view of evaluation that is fundamentally normative rather than technical, political rather than neutral or value-free, many unresolved questions remain about how evaluators should operationalize such a view as they select and shape the role(s) they play in any particular evaluation enterprise.
Evaluators of comprehensive community initiatives (CCIs) face a particularly wide and complex array of roles available to them. It is often the case, however, that different stakeholders in an initiative prioritize these roles differently, generate expectations that are unrealistic or difficult to manage simultaneously, and/or define the evaluator's role in a way that limits substantially the learning potential and, some would maintain, even the likelihood of the initiative's success. The goal of this paper is to explore how evaluators' roles are being defined, and with what consequences, in terms of the lessons we are learning and what we need to learn in the future about comprehensive community initiatives. The paper begins by reviewing the CCI characteristics that create special challenges for the evaluator, the existing social science context in which evaluations are being developed, and the different purposes and audiences for such evaluations. That is followed by sections on the current status of evaluations in this field and on the different options and strategies--both their limits and possibilities--open to evaluators. The paper ends with some thoughts on how evaluators can maximize the learning opportunities comprehensive community initiatives present through increased innovation and experimentation and through the opportunity to structure disciplined cross-site learning.
Comprehensive Community Initiatives
The characteristics of CCIs were described in depth earlier in this volume. However, the qualities that contribute to the particular challenges for evaluators are briefly reviewed here in order to set a context for a discussion of evaluation roles:
- They have "broad, multiple goals, the achievement of which depends on complex interactions" through which they aim to promote "an ongoing process of `organic' or `synergistic' change" (Chaskin 1994).
- They are purposively flexible, developmental, and responsive to changing local needs and conditions.
- To varying degrees, they conceptualize devolution of authority and responsibility to the community as a necessary though not sufficient aspect of the change process. While the terms may be operationalized in quite different ways, all the current CCIs refer to some combination of community empowerment, ownership, participation, leadership, and/or capacity-building as central to their mission.
- They recognize the long-term nature of fundamental community change or neighborhood transformation and tend to have longer time frames than more narrowly defined categorical approaches.
- While they are intended to produce impacts at different levels in different spheres, their theories of change are generally unspecified or specified in the form of broad guiding principles rather than specific causal relationships. The empirical basis for these principles is generally lacking.
These qualities create evaluation challenges that are both methodological and political in nature, ranging from the problems of establishing attribution in a saturation design and of developing markers of progress for assessing short- and medium-term change, to balancing different stakeholders' needs for information and feedback and weighing the goals and values of rigor and relevance. Such questions are not unique to CCIs, nor do they pose entirely new challenges for program evaluators,1 but in combination they forecast the need to develop new ways of thinking about evaluation in this field.
The social science context in which these challenges need to be addressed is also shifting and developing. Michael Patton writes that the traditional distinction between formative and summative evaluation may not be a helpful way to think about evaluation of "cutting edge" approaches in "uncharted territory." Formative evaluations have been directed toward the process of program implementation, while summative evaluations aim at "making a fundamental and generalizable judgment about effectiveness and replicability." Patton argues that "it is the nature of uncharted territory and the cutting edge that there are no maps. Indeed, in the early stages of exploration there may not even be any destination (goal) other than the exploration itself. One has to learn the territory to figure out what destination one wants to reach."2 Most of the comprehensive initiatives that are under way are so exploratory and developmental that premature specification of concrete, measurable outcomes can be seen as antithetical to the notion of ongoing course corrections and the discovery of creative new paths, possibly even new destinations. Corbett (1992, 27) echoes these concerns. "The old form of discrete, impact-focused evaluations, awarded to firms on a competitive basis, may be counterproductive. Longer time lines, less obsession with what works, and a more collaborative evaluation industry may be needed. The days of the short sprint--one-shot summative evaluations--may be ending. A new paradigm, where the marathon constitutes the more appropriate metaphor, may be emerging."
The conception of multiple exploratory paths and marathons conforms with the emerging philosophy that questions the utility of the "techno-rational, logical-positivist approach toward theory and practice" and moves toward a "relativistic, heuristic, postmodern perspective" (Bailey 1992; Lather 1986). In this framework, there is less emphasis on discovering the one, objective truth about a program's worth and more attention to the multiple perspectives that diverse interests bring to judgment and understanding. Such a framework is consistent with a CCI that is designed to stimulate a process of change that is likely to be defined and experienced in many different but equally valid ways by many different community constituencies. The differences between the positivist and interpretivist paradigms tend to "play out as dichotomies of objectivity versus subjectivity, fixed versus emergent categories, outsider versus insider perspectives, facts versus values, explanation versus understanding, and single versus multiple realities" (House 1994, 16). House argues, however, that the "choice does not have to be between a mechanistic science and an intentionalist humanism, but rather one of conceiving science as the social activity that it is, an activity that involves considerable judgment, regardless of the methods employed" (19). The fact that these issues are being debated at present in the evaluation field creates a context more open to experimentation and new combinations of paradigms and methods than might have existed a decade ago. In a social science context that acknowledges multiple perspectives and realities, it is easier to discuss the advantages and disadvantages of the role of evaluator as co-learner rather than expert, conveyor of information rather than deliverer of truth (Weiss 1983), educator rather than judge.
Finally, related to this notion that we do not know enough about the expectable developmental trajectories of these initiatives, let alone the realistic outcomes that can be anticipated within certain time frames, are the potentially disempowering consequences of committing too early to specific goals and criteria for success. As discussed above, one of the assumptions of these initiatives is that they are driven by a process in which community residents play key roles in identifying and implementing development strategies. For the process to work, the evaluator is often an "enabling partner," helping the initiative's participants articulate and frame their goals in ways that can be assessed over time. This in itself is "one of the outcomes of the process rather than one of the up front, preordinant determinants of the process." Furthermore, the goals may change as evaluators come to better understand the lay of the land. "When clear, specific, and measurable goals are set in stone at the moment the grant is made, the struggle of community people to determine their own goals is summarily pre-empted and they are, once again, disempowered--this time in the name of evaluation."3 In sum, the characteristics of CCIs shape the learning opportunities, constraints, and needs presented to the evaluator and help define the broad parameters of the evaluator's role. Another major determinant of the particular role(s) the evaluator selects is the purpose of the evaluation as defined by the primary client--that is, who is paying for the evaluation--and/or by the primary audience: who wants to do what with the evaluation findings?
Purpose and Audience for Evaluation
The purpose of an evaluation is often, though not always, determined by the funder, sometimes in negotiation with the various stakeholders and the evaluator. Most evaluations of CCIs serve one or more of the following overlapping functions:
- They provide information about the ongoing implementation of the initiative so that its progress and strategies can be assessed and mid-course corrections instituted.
- They build the capacity of the initiative participants to design and institutionalize a self-assessment process.
- They draw some conclusions or judgments about the degree to which the initiative has achieved its goals.
- They support a collaborative process of change that combines creating knowledge with mutual education and mobilization for action.
- They hold those conducting the initiative accountable to the funder, the community, and/or other stakeholder groups.
- They contribute to the development of broad knowledge and theory about the implementation and outcomes of comprehensive community initiatives.
- They promote a public relations and fund-raising capacity.
These different purposes for evaluation put different premiums on the kind of data the evaluator needs to collect, the relationship the evaluator establishes with the initiative's designers and participants, and the nature of the products the evaluator is expected to generate, both during and at the end of the initiative. In addition, the learning produced to serve these different evaluation functions has distinct primary audiences: funders, practitioners, policymakers, and community members, all of whom tend to place a different value on particular kinds of information and evaluation lessons. Also, they can be driven by different priorities and investments: there are those whose major goal (and often passion) is to improve the quality of life in the targeted community; those who want to know how successful strategies can be adapted and brought to scale in other communities; and those whose priorities are to develop for scholarly purposes a theory and body of knowledge about community change initiatives. As discussed later, an evaluator's success depends a great deal on the clarity and consensus with which the relevant parties define the purpose and intended products of the evaluation early on in the process.
Current Status of Evaluations of Comprehensive Community Initiatives
The current status of evaluations of CCIs seems to mirror the sense of frustration and confusion, as well as excitement and hope, within the initiatives themselves. What we see so far is a range of expectations, often implicit and sometimes conflicting, about what is to be achieved and how that achievement is to be assessed. As Corbett (1992, 26) describes, part of the discontent is characteristic of the natural life cycle of new programs:

[P]rograms are launched with great fanfare and exaggerated claims, to sell them in the first place; the pace and scope of implementation conform more to political cycles than to the hard work of program development; outcomes are (intentionally?) unclear or overly complex, thereby difficult to operationalize and measure; and the investment in program evaluation is insufficient given the complexity of underlying theoretical models (or the lack of them) and the fiscal and human costs at stake. Given this life cycle, it is all too easy for excitement to evolve into disenchantment and ultimately despair. . . . Political imperatives for solutions seem to overwhelm the patience and integrity that are required for good long-range policy/program development.
The signs of this discontent are widely manifest among those associated with CCIs. Evaluators produce interim reports that elicit such responses as: "Is that all we learned? We could have told you that at the start," or "Why didn't you give us feedback earlier so we could have done things differently?" Community members believe the reports are abstract or inaccessible, not timely, and/or irrelevant to them, and they often respond with anger because they feel over-studied without getting any useful feedback (or respect) in return. Learning is limited substantially by weak implementation. Funders can be intolerant of failure, unimpressed by partial successes and impatient with fine-tuning, unclear whether they are getting enough "bang for their buck." Decision-makers want a quick fix and are disappointed that the "bottom line" is so murky and takes so long to assess. Implementers worry that "truth" will make it hard to raise money, win elections, or maintain momentum or hope in the community. All the parties involved are looking for reassurance that unguided process is not replacing accountability and long for some well-accepted standards with clear timelines against which to assess an initiative's progress. Evaluators recognize the limits of traditional roles and methods but feel caught between standards that will bring them rewards in the academy and credibility in the funding and policy community, and the risks of trying out new ways of learning. Do their clients want "experiments or stories" (Smith 1994), and is there any creative middle ground between the two?4 Finally, the evaluation becomes the arena in which conflicting expectations and interests among all the parties involved inevitably get focused but are not always worked out. 
Issues of power and control concerning such questions as who defines the pace and criteria of success, how funding decisions are related to interim evaluation findings, and who shares what information with whom, can make it extremely difficult for evaluators and initiative operators and participants to establish relationships of trust and open communication. Evaluators may be called upon to "educate" both parties about what evaluation can and cannot do, the scale of investment required to address various kinds of questions, and the realistic time frame needed for examining initiative outcomes.
Evaluators can find themselves in the middle of an awkward, sometimes contentious, process between foundations and communities that are trying to operationalize an empowerment orientation in the context of a grantor-grantee relationship. Foundations may aim to establish new partnerships or collaborations with "community-driven" initiatives, while falling back on traditional dominant and subordinate roles and practices in the face of uncertainty or disagreement. This power dynamic is complicated by issues of race and class, given the players, who are largely white foundations and frequently distressed minority communities.
In addition, given their dependency on foundation support, both evaluators and community initiative leaders may be ambivalent about giving honest feedback to foundation staff who can be highly invested as the primary architect of the particular change model being implemented and less than receptive to "bad news." A culture of grantee accountability and foundation authority may serve to undermine a culture of learning, innovation, and partnership. This situation can be exacerbated when foundation staff do not recognize the power of the funds and ultimate authority they possess to affect the dynamics of CCI relationships and implementation. Documenting the role of the foundation as an actor in CCI planning, implementation, and evaluation is an important task for the evaluator notwithstanding its potential to generate discomfort on both sides.
Despite pervasive uncertainty and some outright unhappiness about the role of evaluation in current CCIs, there exists simultaneously among funders, policymakers, and practitioners a sense of urgency and need to know whether and how these initiatives can succeed. We know more than we did in the 1960s, both in terms of effective program models and in terms of program evaluation methods and approaches. On the one hand, there is a sense of hope that these initiatives are on the "right track" and a belief that we can't "give up" on persistently poor urban neighborhoods. On the other hand, there is a deep-seated fear that nothing short of structural changes in the economy can "transform" distressed urban neighborhoods. Still believing in the democratic values of individual and community potential, however, we also believe in the value of experimentation in the broad sense; hence the many different community initiatives under way. This makes the role of knowledge development all the more pressing.
So . . . what's an evaluator to do?
Options and Strategies for the Evaluator
Given the demands of the initiatives themselves and a social science context that provides more support than it has historically for the notion that "science is not achieved by distancing oneself from the world" (Whyte 1991), it is not surprising that most of the new roles that evaluators have taken on in their work with comprehensive initiatives are strategies of engagement; that is, they serve to bridge the traditional distance between the evaluator and the activities under study. Chavis, Stucky, and Wandersman (1983, 424) talk about this distance in terms of the basic philosophical conflict that exists between some of the "values of scientists and citizens":

The citizen or professional practitioner is often under pressure to act immediately, to solve complex problems with the incomplete information on hand, and to make judgments based on the knowledge available. The scientist is trained to reserve judgment until the data are complete, to test and refine hypotheses, to isolate variables and to hold conditions constant, and to reinterpret observations and revise theories as new data become available. Whereas the citizen needs to develop complex strategies in a confounded, changing environment, the scientist is cautious in generalizing from the data and controlled conditions of research. At the extreme, scientific objectivity may be seen to require separation between the researcher and subject. . . .

The authors argue that both the evaluator and the community initiative benefit from reducing this separation and "returning research to the citizen": it can "enhance the quality and applicability of research, provide an opportunity for hypothesis generation and hypothesis testing, and facilitate planning and problem solving by citizens" (433).
While most evaluators of CCIs aim to reduce the traditional separation between themselves and the initiatives under study, they operationalize their roles and construct their relationships with the "citizens" in a range of different ways, presumably with different consequences for what is learned on both sides. Research models of engagement can take multiple forms. Many comprehensive community initiatives call for evaluations that provide ongoing feedback to the initiative's designers and operators, clearly taking the evaluator out of the role of a "faceless judge" and into the action in one way or another. By providing such feedback, the evaluator becomes part of the dynamic of the initiative. If he or she attaches recommendations to the feedback and supports the initiative's implementation of such recommendations, the evaluator moves into a coach role. Other initiatives define one of the evaluator's central roles as helping to build the capacity of the initiative to carry out its own ongoing evaluation or self-assessment. Here the evaluator plays an educational or technical assistance role. Some evaluators call this "facilitated problem-solving" in which the evaluator helps the group explore design alternatives but does not advocate for a particular position.
A different approach to bridging the gap between the evaluator and the initiative is to engage community members as advisory or steering group participants, key informants, and/or volunteers or paid staff as part of the evaluation team. A variant on this approach is "utilization-focused evaluation" in which the evaluator brings together decision-makers and information users in an "active-reactive-adaptive process where all participants share responsibility for creatively shaping and rigorously implementing an evaluation that is both useful and of high quality" (Patton 1986, 289). At the "close" end of the spectrum is the role of evaluator as a participatory action researcher. In this role, which is discussed in more depth later in the paper, the evaluator joins the initiative in a "collaborative, co-learning process which integrates investigation with education and collective action" (Sarri and Sarri 1992). The next sections describe the rationale for drawing upon various engagement strategies in evaluating comprehensive community initiatives, the skills required, and the debates that exist about their strengths and weaknesses as methodologies.
Rationale for Engagement
Although some of the roles described above are commonly assumed and others have yet to be fully embraced by evaluators of comprehensive community initiatives, they are rationalized to varying degrees by many of the same related arguments. First, when evaluators assume roles like coach, collaborator, or capacity-builder, they help to demystify and democratize the knowledge development process. The active involvement of participants in the process of knowledge generation develops the research, problem-analysis, group problem-solving, technical, and leadership skills necessary for identifying and solving problems on an ongoing basis. Second, when evaluators become embedded in the initiative's implementation (to varying degrees depending on the roles they play), they help position the evaluation less as a discrete activity that can be "dispensed with as a cost-cutting measure" (Patton 1988) and more as an integral part of the initiative's core activities. Indeed, a developmental and responsive evaluation is seen as generating ongoing information that becomes a tool for reviewing current progress, making mid-course corrections, and staying focused on the primary goals of the initiative. Third, by engaging an initiative's operators and participants in its assessment, evaluators can enhance community understanding, stakeholder commitment, and utilization of the results. Fourth, reducing the distance between the evaluator and the community can serve to bridge the cultural gaps that may exist, enable the evaluator to draw upon the "popular knowledge" of participants, "explicate the meaning of social reality" from the different participants' perspectives, and increase the likelihood that the findings are experienced by participants as relevant (Patton 1988).
New Demands on the Evaluator
While debate exists about the wisdom of adopting these new evaluation roles in comprehensive community initiatives, it is clear that these new roles bring increased demands on the evaluator. The first and perhaps most important is that evaluators need to have a much broader range of skills than they might have needed to be "distant observers." Besides methodological and technical competency based on their training in systematic inquiry and analysis, evaluators are likely to need skills in communication and team building, group process, and negotiation (Guba and Lincoln 1989).5 The researcher's ability to facilitate a process that allows participants to contribute their expertise and develop new competencies is often critical to the success of the evaluation enterprise (Israel et al. 1992). Evaluators may also need:
- pedagogical skills so they can teach both about evaluation and through evaluation (Wise 1980; Cousins and Earl 1992);
- political skills to help them assess multiple stakeholder interests and "incorporate political reality into the evaluation" (Palumbo 1987); and
- the ability both to gain stakeholders' cooperation and trust and to sustain their interest and involvement over an extended period of time (Fitzpatrick 1989, 577).
Additionally, evaluators who take on more engaged roles inevitably find them more labor-intensive than expected. Involving multiple stakeholders at every stage in the research process, for example, takes a significant commitment of time and energy. The intended benefits to the evaluation process, however, are many. Such an approach can help participants become attuned to the complexities and priorities of the enterprise; clarify and focus goals; appreciate the strengths and limitations of various methods and measurement strategies; and develop realistic expectations for what questions can and cannot be addressed. This type of collaborative relationship also tends to reduce participants' suspicions and fears about the evaluation process because they know what decisions are being made and who is involved in making them. Establishing and sustaining such relationships takes time.
In sum, apart from their methodological strengths and weaknesses, which are discussed below, the new roles that evaluators are being asked to play in CCIs create new demands, some of which evaluators may not feel comfortable or competent in addressing. Traditionally trained evaluators may lack the technical skills, the temperament, and/or the desire to adopt these new roles. Some may experience a conflict between their legitimate need to be perceived as credible and their sense that taking on some roles traditionally considered outside of the evaluation enterprise may produce important and useful learning. It is their credibility, in fact, that makes it possible to even try out certain kinds of new research roles.6 Clearly these issues have implications for the curriculum and culture of training programs in the academy, for the value foundations place on different kinds of learning, and for the role of knowledge in the policymaking process.
Methodological Strengths and Weaknesses
There are obvious risks involved when the evaluator becomes positioned inside the action rather than at a distance from it. One critique is simply that such roles no longer constitute evaluation. Instead, evaluation becomes primarily an intervention tool (Israel et al. 1992), and the evaluator takes on a management consultant role, not a role charged with making judgments about the "efficiency and effectiveness" of a program (Rossi 1994). Questions of bias and lack of reliability are also raised. Or the evaluator may become an advocate for positions espoused by the respondents with whom he or she feels the most sympathy. Being part of a process of mutual learning gives the evaluator access to information in a form that contributes to a particular way of understanding the dynamics and effects of the initiative, which may have both limitations and strengths. By becoming so engaged in the planning and implementation process, the evaluator may not be able to assess outcomes with an open view or may encounter the danger of being used as a public relations tool. Perhaps more risky than the evaluator's own loss of "objectivity" may be a reduction in the credibility he or she is perceived to have in the eyes of some initiative constituencies. No longer seen as neutral, the evaluator's access to some sources of quality data may be decreased (although increased for others).
Many of these concerns stem from two larger questions: Can the term "evaluation" be defined broadly enough to encompass multiple ways of generating and using knowledge? Or should we call these new ways of learning something other than evaluation? And, second, what does empirical rigor mean in a post-positivist context (Lather 1986, 270)? What are the "scientific" standards against which evaluators should assess the quality of their work in comprehensive community initiatives?
There is still considerable debate within the field of evaluation about how broadly or narrowly evaluation should be defined, let alone what value should be placed on different methods and approaches. For example, Gilgun (1994) makes a good case for "thickly described" case studies that "take multiple perspectives into account and attempt to understand the influences of multilayered social systems on subjects' perspectives and behaviors." Scriven (1993, 62), however, concludes that rich (thick) description is "escapist and unrealistic" because instead of helping the client make hard decisions, "it simply substitutes detailed observations for evaluation and passes the buck of evaluation back to the client." Scriven seems to be setting up an argument with only the most extreme of constructivists who take the position that the best evaluators can do is produce journalistic narratives, a stance that "begs the questions of rigor and rationality, effectively takes evaluators out of the conversation, and obviates the necessity to do good. It is an escape from responsibility and action" (Smith 1994, 42). However, the acknowledgment of multiple perspectives and truths that evolve over time does not by definition release the evaluator from the right or obligation both to maintain high standards of scientific inquiry and to make judgments and recommendations as warranted.7 Having a more complex appreciation of the realities of life and dynamics of change within a distressed neighborhood should add a richness and force to evaluators' assessments rather than either undermine their ability to make judgments or contribute to a paralysis of action. As Smith (1994, 41) notes, "although objectivity, reliability and unbiasedness have been amply demonstrated as problematic, rationality, rigor and fairness can still be sought."
Patton (1987, 135) proposes fairness as an evaluation criterion in place of objectivity, replacing "the search for truth with a search for useful and balanced information, . . . the mandate to be objective with a mandate to be fair and conscientious in taking account of multiple perspectives, multiple interests, and multiple realities." He also stresses the importance of keeping a focus on the empirical nature of the evaluation process, upon which the integrity of the evaluation ultimately depends. He conceptualizes the evaluator as the "data champion" who works constantly to help participants adopt an empirical perspective, to make sure that rival hypotheses and interpretations are always on the table, and to advocate the use of evaluation findings to inform action.
Another commonly adopted strategy to balance the weaknesses or narrow yield of any one method or data source is the use of multiple methods, types of data, and data sources. "Perhaps not every evaluator can or is willing to take on multiple approaches within a study, but he or she can promote, sponsor, draw on, integrate the findings of, negotiate over, and critique the methods and inferences of multiple approaches" (Smith 1994, 43). If data are to be credible, the evaluator has some responsibility to triangulate data methods, measures and sources in a way that allows for "counter patterns as well as convergence." In a related vein, Weiss (1983, 93) suggests that there may be benefits to funding several small studies (as opposed to a single blockbuster study) and sequencing them to respond to the shifting conditions and opportunities that emerge during implementation. Although problems of continuity and overall integration may arise, different teams of investigators, using different methods and measures, may be able to enrich understanding of the initiative in a way that is beyond the scope of a single evaluation team.
Whyte et al. (1991) make the case that the scientific standards that must be met in more "engaged" approaches to evaluation are daunting, but that such approaches have several built-in checks to enhance rigor that are not present in the standard model of evaluation. For example, in the standard model, the subjects usually have little or no opportunity to check facts or offer alternative explanations. Evaluators of comprehensive initiatives often devise mechanisms to feed back and test out the information they are collecting on a continuous basis, as well as in the form of draft interim and final reports. They also have the opportunity to test the validity and usefulness of the findings when they are fed back and become the basis for future action (insofar as some form of action research is adopted). Stoecker (1991) suggests establishing validity in the following ways: by seeing whether the findings lead to accurate prediction, by comparing findings derived from different methods, and by involving the "subjects" themselves in a validity check. The resulting knowledge is validated in action, and it has to prove its usefulness by the changes it accomplishes (Brunner and Guzman 1989, 16). Although "we are still low in the learning curve regarding our knowledge as to how action and research cycles can benefit from one another--and from greater participation," it is generally accepted that "broader participation can lead to stronger consensus for change and sounder models--because models arrived at through broader participation are likely to integrate the interests of more stakeholder groups. Participation also promotes continual adjustment and reinvention. . . . " (Walton and Gaffney 1991, 125).
Engagement also provides an evaluator with certain opportunities for the development of social science theory, one of the vital ingredients of the research process. Elden and Levin (1991) write about how a collaborative model rests on "`insiders' (local participants) and `outsiders' (the professional researchers) collaborating in cocreating `local theory' that the participants test out by acting on it. The results can be fed back to improve the participants' own `theory' and can further generate more general (`scientific') theory" (129). Ideally, this approach improves the quality of the research as well as the quality of the action steps and becomes a strategy to advance both science and practice.
Future Needs: Innovation and Disciplined Cross-Site Learning
Evaluations of CCIs now under way seem to suffer as a group from the lack of at least two phenomena that might contribute to accelerated learning in the field: innovation and experimentation, and disciplined comparative work. To address what they recognize as significant methodological and conceptual challenges that call out for new approaches, evaluators tend to respond by trying almost everything but the kitchen sink in their current tool kit: surveys, ethnographies, community forums, examination of initiative records, structured interviews, analysis of demographic data, file data extraction, and so forth. But few are developing new methods or defining their roles in substantially new ways, and few have the opportunity to develop a comparative perspective across initiatives. Thus the field is benefiting from neither innovation nor cross-site learning. This is in part because of insufficient resources for adequate evaluations of these initiatives, let alone support to experiment with new methodologies and roles. Few funders seem to have an investment in promoting the development of the field of evaluation, even though the current challenges facing evaluators constrain the learning possibilities and opportunities to improve the design and practice of initiatives that these funders currently support.
Many innovations are possible to enhance the learning that is being generated by evaluators of CCIs. One, participatory research, is described below because it seems to have a potentially interesting fit with the philosophy and operations of many comprehensive initiatives. The focus on this one example, however, should not detract from the overall need for more innovation and experimentation with a range of approaches and new learning strategies.
Different disciplines and traditions within the field of evaluation--sociology, psychology, organizational development, education, international development--have spawned a range of related approaches variously known as participatory research, action research, participatory action research, and participatory evaluation (Brown and Tandon 1983; Brunner and Guzman 1989; Whyte 1991; Hall 1992).8 While they differ significantly in their relative emphasis on action compared with research and theory building, in the role the researcher plays in the action, and in their political orientations, they constitute a group of approaches "committed to the development of a change-enhancing, interactive, contextualized approach to knowledge-building" that has "amassed a body of empirical work that is provocative in its implications for both theory and, increasingly, method" (Lather 1986). A number of the characteristics of participatory research described below apply to the other approaches as well.
Nash (1993) explains:

Participatory research [PR] links knowing and doing through a three-part process of investigation, education, and action. As a method of social investigation, PR requires the active participation of community members in problem posing and solving. As an educational process, PR uncovers previously hidden personal and social knowledge and develops skills which increase "people's capacity to be actors in the world." Finally, PR is a process of collective action which empowers people to work to transform existing power structures and relationships that oppress them.

The approach is explicitly normative in its orientation toward redressing inequity and redistributing power: it involves initiative participants as "researchers" in order to produce knowledge that could help stimulate social change and empower the oppressed (Brown and Tandon 1983). It is built on a "cyclical, overlapping and often iterative" process of data gathering, analysis and feedback, action planning and implementation, and assessment of the results of the action through further data collection (Bailey 1992).
The approach seeks to "reduce the distinction between the researcher and the researched" (Sarri and Sarri 1992). The role of the evaluator in participatory research is one of co-learner, member of the "co-inquiry" team, methodological consultant, collaborator, equal partner. While the researcher brings certain technical expertise and the community participants bring unique knowledge of the community, neither side uses these resources to "gain control in the research relationship" (Nyden and Wiewel 1992).
There are several interesting parallels in the goals and (sometimes implicit) theories behind participatory research and many comprehensive community initiatives. Both articulate a strong belief in individual and collective empowerment. Israel et al. (1992, 91) define empowerment as the "ability of people to gain understanding and control over personal, social, economic, and political factors in order to take action to improve their life situations." The participatory research approach can be conceptualized as a way of developing knowledge that enhances the empowerment of initiative participants and, as a consequence, furthers the goals and agenda of the community initiative.9 Both participatory research and comprehensive community initiatives depend on the iterative process of learning and doing. Both recognize the power of participation and strive to develop vehicles to enhance and sustain that participation. Both have at their core a conception of the relationship between individual and community transformation, between personal efficacy and collective power. Both view the creation of knowledge as an enterprise that is at once technical and inclusive of other forms of consciousness. And both rely on the release of energy and hope that is generated by group dialogue and action.
All the cautions expressed earlier about any research approach that positions the researcher in a more interactive/collaborative relationship with the initiative being evaluated are amplified with participatory research. Such an approach may be particularly "cumbersome and untidy to execute" (Park 1992) because it is so labor-intensive and because it is unlikely to have much yield unless the evaluator brings a certain personal commitment to community change. It has yet to develop much legitimacy in the academy and has yet to be implemented in enough cases to identify its full limits and possibilities.10 So it is an approach to be used selectively, possibly along with other methodologies. It is a misconception, however, to characterize it as completely impractical for evaluating today's comprehensive community initiatives. Bailey's (1992) initial research with a community-based consortium in Cleveland and Sarri and Sarri's (1992) work in Detroit (as well as in Bolivia) illustrate the specifics of implementing participatory research in distressed urban communities.11 Weiss and Greene (1992) make a strong conceptual case for empowerment-oriented participatory evaluation approaches in the field of family support and education programs and describe several examples of such evaluations. Israel et al. (1992) provide a detailed account of the implementation of a six-year action research study within an organizational context. Whitmore (1990) describes six strategies she used as an evaluator to support participant empowerment in the process of evaluating a comprehensive prenatal program.
While participatory research should not be portrayed as the major answer to all the research challenges facing comprehensive community initiatives, it makes sense to add this underutilized approach to the array of evaluation strategies currently being tested. Others working in the field may have their own "personal favorites" that seem promising to them. What is important is that a research and demonstration context is created in which evaluators are provided with the resources they need and are encouraged to work with community initiatives to develop and try out new ways of learning about how these initiatives work and how their long-term impacts might be enhanced. This is more likely to occur if funders can conceive of these resources as integral to the initiative's implementation rather than competitive with the initiative's operational funding needs. Fawcett (1991, 632) outlines ten values for community research and action that may help "optimize the rigors of experimentation within the sometimes chaotic contexts of social problems." The goal is to support efforts that combine research and action in more "adventuresome" and functional ways so that the dual purposes of applied research--contributing to understanding and improvement--can be served.
Despite the sense of urgency about developing credible approaches, innovative tools, and useful theories to bring to bear on the evaluation of comprehensive community initiatives, little cross-site learning is actually taking place among the group of initiatives under way. Often, foundations supporting demonstrations are unenthusiastic about close external scrutiny of their models before those models have a chance to evolve and be refined. Evaluators are set up to compete with each other for evaluation contracts, making the sharing of experience with different tools and approaches a complex and variable enterprise. Initiatives feel a need to put the best light on their progress in order to obtain continued support. Community leaders recognize that any "bad news" delivered in an untimely and destructive fashion can undermine their efforts at community mobilization. And all parties are aware of a context in which the media and the taxpayer, as well as policymakers, are all too ready to conclude that "nothing works" in distressed urban communities.
Overcoming these barriers to cross-site learning will require a variety of strategies, all of which must be constructed to satisfy in one way or another each party's self-interest. The Roundtable is presumably one vehicle for supporting such efforts. It may be helpful, also, to think about the current initiatives as a series of case studies around which some comparative analyses could be conducted. An example of an issue that might benefit substantially from a comparative perspective is community participation: What place does it have in the different initiatives' theories of change? "[I]s the purpose of community participation to improve the efficiency of project implementation or to contribute to the empowerment of politically and economically weaker communities or groups," and are these complementary or competing objectives (Bamberger 1990, 211)? And how can this evolutionary process--whose impacts may only be evident after a number of years--be measured in ways that provide "sufficient quantification and precision to permit comparative analysis between communities, or over time, while at the same time allowing in-depth qualitative description and analysis" (Bamberger 1990, 215)? While most challenging, selecting for comparative work some of the major concepts (such as community participation, program synergy, and building social fabric) that appear central to the underlying theories of change governing community initiatives may have the most pay-off for the evaluator at this point in the development of the field.
When full-scale comparative longitudinal evaluation is unrealistic, a low-cost methodology suggested by Wood (1993) is practitioner-centered evaluation, a qualitative technique that focuses on the informal theories of behavioral change that underlie a program as implemented. It relies on the ability of program implementers to document success and failure in the context of their own theories about "various cause and effect linkages set in motion by program activities" (Wood 1993, 97). This approach elicits the theory of action being tested from the people actually implementing it, encouraging them to define change strategies and anticipated outcomes more concretely than they often do and then to refine their theories on the basis of experience, including unintended as well as intended consequences. Wood characterizes this approach as a flexible model-building methodology that encourages initiatives to "build up a repertoire of successful cause-and-effect sequences" (98), some of which will be suitable for application elsewhere and all of which should contribute to cross-initiative learning. It is not a substitute for more rigorous evaluation methods but begins to explore systematically "the ambiguous realm between feeling that a program is good and knowing that it is" (91). The more such practitioner-centered evaluations can be set in motion, the more comparative learning an evaluator will have to draw upon to advance the field.
In sum, comprehensive community initiatives present evaluators with a host of methodological and strategic questions about how to define their roles and to prioritize the lessons they are asked to generate for different audiences. Some would frame the central concern for evaluators as finding the appropriate balance between scientific rigor and social relevance. Others would limit the definition of evaluation to quite a narrow enterprise, but then reframe questions that are too "messy" to be included in this enterprise as subject to "systematic study" that draws upon a broader range of methodologies and roles for the "evaluator." Still others, though a smaller group, would either aim to redefine the fundamental nature of the scientific approach, citing its limited ability to yield knowledge that is useful for CCI participants, or would reject CCIs as unevaluable and therefore unworthy of any evaluation role. This paper suggests that CCI implementers, sponsors, and evaluators work collaboratively to create a learning culture that encourages a range of strategies for generating knowledge and improving practice.
- See, for example, Peter Marris and Martin Rein, Dilemmas of Social Reform: Poverty and Community Action in the United States (Chicago: University of Chicago Press, 1967).
- Letter of August 14, 1991, from Michael Patton, then of the American Evaluation Association, to Jean Hart, Vice President of the Saint Paul Foundation.
- Patton, as cited in note 2.
- Phillip Clay, Associate Provost and Professor of City Planning at the Massachusetts Institute of Technology, in an October 24, 1994, review of this paper suggests that documentation may constitute a middle ground. "Documentation is a way of going well beyond description because documentation sets a milestone and offers some benchmarks against which to assess program implementation. By its nature it then looks back at the framing of the problem and the design of the intervention. Yet it is short of evaluation because it does not force the question, `Does this program work for the purpose for which it was intended?'"
- Guba and Lincoln (1989) characterize the first three generations in evaluation as measurement-oriented, description-oriented, and judgment-oriented. They propose that the key dynamic in the fourth generation is negotiation.
- Phillip Clay, as cited in note 4.
- Personal communication with Avis Vidal, July 19, 1994.
- Participatory research has its roots in adult education and community development in Tanzania in the early 1970s and in the liberatory tradition of Freire in Latin America. Hall (1992) reports that Freire made a trip to Tanzania in 1971; his talk was transcribed and became one of his first writings in English on the subject of alternative research methodologies. Participatory action research emerged from a very different tradition, that of organizational development, and represents a strategy for innovation and change in organizations. The term "participatory research" is used in this paper in its broadest sense in order to encompass the philosophy of participatory action research.
- "In addition to transformations in consciousness, beliefs, and attitudes, empowerment requires practical knowledge, solid information, real competencies, concrete skills, material resources, genuine opportunities, and tangible results" (Staples 1990, 38). Consistent with this definition of empowerment, both research methodologies place an emphasis on building the capacity--in individuals, groups, and communities--for effective action.
- A number of articles have been written about the conflicts between activist research and academic success, participatory research in the community and life in the university (Reardon et al. 1993; Cancian 1993; Hall 1993). Bailey (1992, 81) sees a role for academics in participatory research, noting that her own experience suggests the importance for the "outside other" to acknowledge his or her own values, biases, and interests that may "go beyond the community to include academic interests regarding the methodology and outcomes of the research."
- A personal conversation in July 1994 with Darlyne Bailey, now Dean of the Mandel School of Applied Social Sciences at Case Western Reserve University, indicated that the Department of Housing and Urban Development (HUD) appears likely to support some participatory research, as well as the establishment of a client tracking system, under its recent HOPE VI grant in Cleveland.
Bailey, Darlyne. 1992. "Using Participatory Research in Community Consortia Development and Evaluation: Lessons from the Beginning of a Story." The American Sociologist 23: 71-82.
Bamberger, Michael. 1990. "Methodological Issues in the Evaluation of International Community Participation Projects." Sociological Practice: 208-25.
Brown, L. David and Rajesh Tandon. 1983. "Ideology and Political Economy in Inquiry: Action Research and Participatory Research." Journal of Applied Behavioral Science 19: 277-94.
Brunner, Ilse, and Alba Guzman. 1989. "Participatory Evaluation: A Tool to Assess Projects and Empower People." In International Innovations in Evaluation Methodology: New Directions for Program Evaluation, ed. R.F. Conner and M. Hendricks. San Francisco: Jossey-Bass.
Cancian, Francesca. 1993. "Conflicts between Activist Research and Academic Success: Participatory Research and Alternative Strategies." The American Sociologist 24: 92-106.
Chaskin, Robert. 1994. "Defining Neighborhood." Background paper prepared for the Neighborhood Mapping Project. Baltimore: Annie E. Casey Foundation.
Chavis, D., P. Stucky, and A. Wandersman. 1983. "Returning Basic Research to the Community: A Relationship between Scientist and Citizen." American Psychologist 38 (April): 424-34.
Corbett, Thomas. 1992. "The Evaluation Conundrum: A Case of `Back to the Future'?" In Focus, vol. 14, no. 1: 25-27. Madison, Wisc.: Institute for Research on Poverty.
Cousins, J. Bradley and Lorna Earl. 1992. "The Case for Participatory Evaluation." Educational Evaluation and Policy Analysis 14: 397-418.
Elden, Max and Morten Levin. 1991. "Cogenerative Learning: Bringing Participation into Action Research." In Participatory Action Research, ed. W. Whyte, 127-42. Newbury Park, Calif.: Sage Publications.
Fawcett, Stephen. 1991. "Some Values Guiding Community Research and Action." Journal of Applied Behavior Analysis 24: 621-36.
Fitzpatrick, Jody. 1989. "The Politics of Evaluation with Privatized Programs: Who is the Audience?" Evaluation Review 13: 563-78.
-----. 1988. "Roles of the Evaluator in Innovative Programs: A Formative Evaluation." Evaluation Review 12: 449-61.
Gilgun, Jane. 1994. "A Case for Case Studies in Social Work Research." Social Work 39: 371-80.
Gold, Norman. 1983. "Stakeholders and Program Evaluation: Characteristics and Reflections." In Stakeholder-Based Evaluation, ed. A. Bryk, New Directions for Program Evaluation, no. 17: 63-72.
Guba, Egon and Yvonna Lincoln. 1989. Fourth-Generation Evaluation. Newbury Park, Calif.: Sage Publications.
Hall, Budd. 1993. "From Margins to Center? The Development and Purpose of Participatory Research." The American Sociologist 23: 15-28.
House, Ernest. 1994. "Integrating the Quantitative and Qualitative." In The Qualitative-Quantitative Debate: New Perspectives, ed. C. Reichardt and S. Rallis, New Directions for Program Evaluation, no. 61: 13-22.
Israel, Barbara et al. 1992. "Conducting Action Research: Relationships between Organization Members and Researchers." Journal of Applied Behavioral Science 28: 74-101.
Lather, Patti. 1986. "Research as Praxis." Harvard Educational Review 56: 255-77.
McLaughlin, John, Larry Weber, Robert Covert, and Robert Ingle, eds. 1988. Evaluation Utilization. New Directions for Program Evaluation, no. 39: 1-7.
Nash, Fred. 1993. "Church-Based Organizing as Participatory Research: The Northwest Community Organization and the Pilsen Resurrection Project." The American Sociologist 24: 38-55.
Nyden, Philip and Wim Wiewel. 1992. "Collaborative Research: Harnessing the Tensions Between Researcher and Practitioner." The American Sociologist 23: 43-55.
Palumbo, Dennis. 1987. "Politics and Evaluation." In The Politics of Program Evaluation, ed. D. Palumbo. Newbury Park, Calif.: Sage Publications.
Park, Peter. 1992. "The Discovery of Participatory Research as a New Scientific Paradigm: Personal and Intellectual Accounts." The American Sociologist 23: 29-42.
Patton, Michael. 1993. "The Aid to Families in Poverty Program: A Synthesis of Themes, Patterns and Lessons Learned." Report prepared for the McKnight Foundation, Minneapolis.
-----. 1988. "Integrating Evaluation into a Program for Increased Utility and Cost-Effectiveness." In Evaluation Utilization, ed. J. McLaughlin et al., New Directions for Program Evaluation, no. 39: 85-94.
-----. 1987. "The Policy Cycle." In The Politics of Program Evaluation, ed. D. Palumbo, 100-45. Newbury Park, Calif.: Sage Publications.
-----. 1986. Utilization-Focused Evaluation, second ed. Newbury Park, Calif.: Sage Publications.
Petras, Elizabeth and Douglas Porpora. 1993. "Participatory Research: Three Models and an Analysis." The American Sociologist 24: 107-26.
Reardon, Ken et al. 1993. "Participatory Action Research from the Inside: Community Development in East St. Louis." The American Sociologist 24: 69-91.
Rossi, Peter. 1994. "The War Between the Quals and the Quants: Is a Lasting Peace Possible?" In The Qualitative-Quantitative Debate: New Perspectives, ed. C. Reichardt and S. Rallis, New Directions for Program Evaluation, no. 61: 23-35.
Sarri, Rosemary and Catherine Sarri. 1992. "Organizational and Community Change Through Participatory Action Research." Administration in Social Work 16: 99-122.
Scriven, Michael. 1993. Hard-Won Lessons in Program Evaluation. New Directions for Program Evaluation, no. 58 (entire issue).
Smith, Mary. 1994. "Qualitative Plus/Versus Quantitative: The Last Word." In The Qualitative-Quantitative Debate: New Perspectives, ed. C. Reichardt and S. Rallis, New Directions for Program Evaluation, no. 61: 37-44.
Staples, Lee. 1990. "Powerful Ideas about Empowerment." Administration in Social Work 14: 29-42.
Stoecker, Randy. 1991. "Evaluating and Rethinking the Case Study." The Sociological Review 39: 88-112.
Walton, Richard and Michael Gaffney. 1991. "Research, Action, and Participation: The Merchant Shipping Case." In Participatory Action Research, ed. W. Whyte, 99-126. Newbury Park, Calif.: Sage Publications.
Weiss, Carol. 1983. "The Stakeholder Approach to Evaluation: Origins and Promise" and "Toward the Future of Stakeholder Approaches in Evaluation." In Stakeholder-Based Evaluation, ed. A. Bryk, New Directions for Program Evaluation, no. 17: 3-14, 83-96.
Weiss, Heather and Jennifer Greene. 1992. "An Empowerment Partnership for Family Support and Education Programs and Evaluations." Family Science Review 5: 131-48.
Whitmore, Elizabeth. 1990. "Empowerment in Program Evaluation: A Case Example." Canadian Social Work Review 7: 215-29.
Whyte, William, et al. 1991. "Participatory Action Research: Through Practice to Science in Social Research." In Participatory Action Research, ed. W. Whyte, 19-55. Newbury Park, Calif.: Sage Publications.
Wise, Robert. 1980. "The Evaluator as Educator." In Utilization of Evaluation Information, ed. L. Braskamp and R. Brown. New Directions for Program Evaluation, no. 5: 11-18.
Wood, Miriam. 1993. "Using Practitioners' Theories to Document Program Results." Nonprofit Management and Leadership 4: 85-106.
Copyright © 1999 by The Aspen Institute