Concepts, Methods, and Contexts
Anne C. Kubisch, Carol H. Weiss, Lisbeth B. Schorr, James P. Connell
The idea of comprehensive community development is not new. Its roots lie in the settlement houses of the late nineteenth century and can be traced through the twentieth century in a number of neighborhood-based efforts, including the fight against juvenile delinquency in the 1950s, the War on Poverty in the 1960s, and the community development corporation movement of the last thirty years (Halpern 1994).
The current generation of efforts, referred to in this volume as "comprehensive community initiatives" (CCIs), emerged in the late 1980s and early 1990s, launched primarily by national or community foundations. While varied, they all share the goal of promoting positive change in individual, family, and community circumstances in disadvantaged neighborhoods by improving physical, economic, and social conditions. Most CCIs contain several or all of the following elements and aim to achieve synergy among them: expansion and improvement of social services and supports, such as child care, youth development, and family support; health care, including mental health care; economic development; housing rehabilitation and/or construction; community planning and organizing; adult education; job training; school reform; and quality-of-life activities such as neighborhood security and recreation programs. Moreover, most CCIs operate on the premise that the devolution of authority and responsibility from higher-level auspices to the neighborhood or community is a necessary aspect of the change process. (For overviews of current CCIs and of the field, see American Writing Corporation 1992; Eisen 1992; Fishman and Phillips 1993; Gardner 1992; Himmelman 1992; Jenny 1993; Rosewater et al. 1993; Sherwood 1994; and Stagner 1993.)
The Evolution of Comprehensive Community Initiatives
The emergence of CCIs over the last few years can be attributed to the convergence of several trends:
- Human services professionals were recognizing that fragmentation and categorization of social services and supports were limiting program success.
- Experience in several domains was revealing the high cost and uncertain success of remediation, and a search for effective prevention strategies was emerging.
- Community development experts were recognizing that, with some notable exceptions, physical revitalization had come to dominate activities on the ground, but that "bricks and mortar" alone were not achieving sustained improvements in low-income neighborhoods.
- For both pragmatic and ideological reasons, public–private partnerships and local action were being promoted as complementary, or even alternative, approaches to relying on "big government" to solve social problems.

It would be a mistake, however, to suggest that the rationale for comprehensive community-based intervention has been grounded solely in frustration with unsuccessful social interventions. There was and continues to be a sense that we know a great deal about "what works" for at-risk populations and that if we could manage to concentrate and integrate resources and program knowledge in particular communities over a sustained period of time, we could demonstrate that positive outcomes are indeed "within our reach" (Schorr 1988). This call for cross-sector, cross-system reform has been further justified by recent social science research that has begun to identify the linkages among the various strands of an individual’s life and the importance of family and neighborhood influences in determining individual-level outcomes. Thus, CCIs are the offspring of a marriage between program experience and academic findings, and they offer hope at a time when skepticism about the efficacy of strategies to help those most in need is high.
Whether or not interest in and commitment to the current wave of comprehensive community initiatives are sustained by the public and private funding communities, the principles that underlie them will surely continue to infiltrate social policy. The last two years alone have seen a large number of new federal initiatives that have adopted a comprehensive, community-based approach--including new efforts aimed at teen pregnancy prevention, youth employment and training, and crime prevention, as well as the more broad-based Empowerment Zones/Enterprise Communities. The states also have taken on the task of reforming their service delivery, education, and economic development activities to make them more responsive to families and communities (Chynoweth et al. 1992). And national and local foundations have launched a significant number of experiments based on the principles of comprehensiveness and community-based change (Rosewater 1992).
Why CCIs Are So Hard to Evaluate
The attributes of CCIs that make them particularly difficult to evaluate include horizontal complexity, vertical complexity, the importance of context, the flexible and evolving nature of the interventions, the breadth of the range of outcomes being pursued, and the absence of appropriate control groups for comparison purposes.
Horizontal Complexity. Although each comprehensive community initiative is unique, they all are based on the notion of working across systems or sectors. They aim to revitalize the community physically by building or improving housing, to strengthen the system of social supports for children and families, to improve schools and other education and training centers, and to promote economic activity within the community and access to economic opportunity outside the community. Given this complex array of activities, what should the evaluator seek to measure?
One option is to track progress in each of the individual program areas. Not only would that be an extensive task, but it might also miss the essence of the initiative: the general reason that program designers and funders are willing to consider a comprehensive range of potential interventions is that they believe a certain synergy can be achieved among them. By focusing on discrete program components, the evaluator might ignore the effects of their interaction. Moreover, if the CCI director believes that evaluation will be based exclusively on program-level outcomes, his or her own management strategies may become narrowly focused. But tracking and measuring "synergy" is a problem that methodologists have yet to solve.
Vertical Complexity. CCIs are seeking change at the individual, family, and community levels, and are predicated on the notion that there is interaction among those levels. For example, a key assumption is that improvements in community circumstances will improve outcomes for individuals. But social science research is only beginning to identify the forces that influence these community-level circumstances and which among them, if any, are amenable to intervention. Moreover, our understanding of the specific pathways through which community-level variables affect individual outcomes is still rudimentary, making it difficult for evaluators to judge whether an initiative is "pulling the right levers." Finally, because there is little good information about how low-income urban communities work, how these communities evolve over time, and how to detect and measure community improvement, it is difficult to learn about change in the other direction--that is, how improvements in individual and family conditions affect the wider community.
Contextual Issues. By definition, the current CCIs are community-focused. Although most are designed with an appreciation for the need to draw upon political, financial, and technical resources that lie outside the community, there is a broader set of circumstances and events that may have a direct bearing on the success of CCIs but that they have little power to affect. The macroeconomic climate may be the best example: it becomes especially difficult to strengthen disadvantaged communities when the economy is undergoing changes that have significant negative consequences for low-wage and low-skilled workers. The racial and cultural barriers facing minority populations are yet another important condition that may well constrain the ability of CCIs to make substantial improvements in individual or community circumstances. A host of other political, demographic, and geographic factors may also apply.
Flexible and Evolving Intervention. CCIs are designed to be community-specific and to evolve over time in response to the dynamics of the neighborhood. The "intervention," therefore, is flexible, constantly changing, and difficult to track. As it unfolds and is implemented, it may look very different from its design document. Even in multi-site initiatives where all communities have the same overall charge, the approach and the individual program components may vary significantly from place to place.
Broad Range of Outcomes. CCIs seek improvements in a range of less concrete domains for which there are few agreed-upon definitions, much less agreed-upon measures. For example, as mentioned above, most CCIs operate on the premise that authority and responsibility must shift from higher-level sponsors to the neighborhood or community in order to effect change. They have fairly explicit goals about community participation, leadership development, empowerment, and community building. They also aim for significant changes in the ways institutions operate in the community, and many seek reforms in government agency operations at the municipal, state, or federal system level as well. But operationalizing those concepts, and then measuring their effects, is difficult.
Absence of a Comparison Community or Control Group. The community-wide and community-specific characteristics of CCIs rob the evaluator of tools that many consider essential to assess the impact of an initiative. Since CCIs seek to benefit all members of a community, individuals cannot be randomly assigned to treatment and control groups for the purposes of assessing impact. In addition, finding an equivalent "comparison" community that is not benefiting from the initiative, with which outcomes in the target community can be compared, is an alternative tool fraught with methodological and logistical problems. As a result, it is extremely difficult to say whether changes in individual or community circumstances are the result of the initiative itself or whether they would have occurred in the target population in any case.
Addressing the Challenge of Evaluating CCIs
The challenge of evaluating comprehensive community initiatives warrants the attention of practitioners, scholars, policymakers, funders, and initiative participants for three main reasons:
- CCIs embody many of the most promising ideas to promote the well-being of disadvantaged individuals and communities in the United States today. A number of audiences--from national policymakers to individual community residents--have a stake in knowing whether and how CCIs work.
- CCIs are testing a range of important hypotheses about individual development, family processes, and community dynamics. If the CCIs are well evaluated, the findings will have implications for our understanding of human behavior and will suggest important directions for further research as well as for broad policy directions.
- CCIs offer an opportunity to expand and redefine the current boundaries of the field of evaluation, perhaps to address similar challenges posed by other complex, interactive, multi-system interventions. Evaluators are continually challenged to meet the demand for information--whether for accountability or for learning purposes--more effectively and efficiently. The evaluation of CCIs raises, in new and more complex ways, fundamental questions about how to ascertain the ways in which an investment of resources has "paid off."

Yet, at this time, the field faces enormous difficulties in making judgments about CCIs and in learning from them and other similarly complex interventions. As a result, four problems have emerged. First, knowledge is not being developed in a way that could inform new comprehensive programs or, at a broader level, guide the development of major new social policies. Second, CCI funders are not able to determine with any degree of certainty whether their initiatives are succeeding and merit continued investment. Third, program managers are not getting adequate information about how the initiatives are working and how to modify them in order to improve their impact. And finally, the communities themselves are receiving little feedback on the efforts that they are investing in the program.
The mismatch between prevailing evaluation approaches and the needs of CCIs has produced a situation in which program designers, funders, and managers have been faced with imperfect options. One such option has been to limit the design and scope of the program by, for example, narrowing the program intervention and specifying a target population, in order to make it easier to evaluate. A second option has been to resist outcome-oriented evaluation out of a fear that current methodology will not do justice to a complex, nuanced, long-term intervention. In this case, monitoring events associated with the initiative serves as the principal source of information. A third option has been to accept measures or markers of progress that are not wholly satisfactory but may provide useful feedback. These include documenting "process," such as undertaking collaborative planning activities; measuring inputs; conducting selective interviews or focus-group discussions; establishing a community self-monitoring capacity; and selecting a few key indicators to track over time. In practice, the CCIs have generally selected from the range of strategies presented in this third option, often combining two or more in an overall evaluation strategy that aims to give a textured picture of what is happening in the community but may lack the information and analysis needed to inspire confidence in the scientific validity and generalizability of the results.
The Contribution of this Volume
Taken together, the papers in this volume suggest that CCIs are difficult to evaluate for reasons that relate both to the design of the initiatives themselves and to the state of evaluation methods and measures. They also suggest that work can be done on both fronts that will enhance our ability to learn from and to judge the effectiveness of CCIs and, ultimately, other social welfare interventions.
The first paper, by Alice O’Connor, puts today’s CCIs and the problems of their evaluation in historical context by reviewing the experiences of the juvenile delinquency programs of the 1950s, the Gray Areas, Community Action and Model Cities programs, and the community development corporation movement. The next two papers focus on evaluation problems that emerge as a result of the complex design of CCIs and suggest new ways of approaching the task. Carol Weiss outlines the promise of CCI evaluations that are based on their own "theories of change" and discusses how such an approach would serve the multiple purposes of evaluation. James Connell, J. Lawrence Aber, and Gary Walker then present a conceptual framework, based on current social science research and theory, that could inform the program-based theories that Weiss describes. Both papers conclude that theory-based evaluation holds promise for CCIs. The next two papers address methodological problems associated with CCI evaluation. Robinson G. Hollister and Jennifer Hill focus on the absence of control groups or comparison communities for CCI evaluation purposes and discuss the problems that arise as a result. Claudia Coulton’s paper focuses on measurement dilemmas. She describes some of the problems that she and her colleagues have had using community-level indicators in Cleveland and strategies they have adopted to assess community programs in that city. The final paper, by Prudence Brown, brings the volume to a close by arguing that evaluators take new roles with respect to CCIs, roles that engage the evaluator in the initiative more than has traditionally been the case.
Evaluating Comprehensive Community Initiatives: A View from History
In "Evaluating Comprehensive Community Initiatives," Alice O’Connor gives an historical context for the volume by critically reviewing where the field of evaluation and the field of comprehensive, community-based development have converged and diverged over the last three decades. She points out that it was in the 1960s that evaluation came to be recognized as a distinct research field and that an evaluation industry was born, strongly influenced by the "hard" sciences, in particular the traditional scientific search for quantifiable outcomes. The New Jersey Income Maintenance Experiment, launched in 1967, was the first of the large-scale controlled experiments, testing various versions of the income-maintenance plan with randomly assigned individuals, and that experience informed a large number of subsequent social experiments that have had considerable policy impact, notably in the area of welfare reform.
The community-based social action programs that emerged during the same period, like today’s CCIs, did not lend themselves to that type of evaluation. The documentary evidence from the social welfare and anti-poverty programs of the 1950s, 1960s, and 1970s reveals less attention to the programs' actual or potential impact on the urban problems they were designed to address than to generating knowledge about the origins of and remedies to unhealthy individual behavior or community environments, documenting changes in institutional relationships, and providing feedback to guide program implementation. Nonetheless, from the evaluations and other analyses produced at the time, O’Connor succeeds in drawing a number of important lessons for current CCIs, ranging from the difficulty of achieving public agency coordination, to the critical role of race, to the tensions created when an initiative has long-term goals but needs to demonstrate results in a relatively short time.
Throughout her paper, O’Connor cautions us that the barriers to developing effective evaluation strategies have been as much political and institutional as they have been substantive. Moreover, she warns: "[N]o matter how rigorous the scientific method, evaluative evidence will play only a limited--and sometimes unpredictable--role in determining the political fate of social programs. In the past, decisions about community-based initiatives--or about welfare reform, for that matter--have been driven not, primarily, by science but by the values, ideologies, and political interests of the major constituencies involved." As a result, she concludes with a strong recommendation that evaluations of today’s initiatives focus on the "contextual factors" that influence their success or failure--that is, on identifying the economic, political, and other conditions at the federal and local levels under which CCIs can be most effective.
Addressing Design-Related Dilemmas in the Evaluation of CCIs
Two of the papers in this volume--by Carol Weiss and by James Connell, J. Lawrence Aber, and Gary Walker--offer promising avenues for addressing some of the problems that emerge as a result of the complex objectives and designs of CCIs. Because CCIs are broad, multi-dimensional, and responsive to community circumstances, their design features are generally underspecified at the outset of the initiative. The absence of a well-specified and clearly defined intervention makes the evaluation task extremely difficult.
Carol Weiss posits that, even when the design is not clearly specified or linked to ultimate goals from the start, a large number of implicit "theories of change" underlie the decisions that program designers and funders have made in the process of launching CCIs. In her paper, Weiss challenges the CCI designer to be specific and clear about the premises, assumptions, hypotheses, or theories that guide decisions about the overall structure and specific components of the initiative. She suggests that once these theories are brought to the surface, they can drive the development of a plan for data collection and analysis that tracks the unfolding of events. Evaluation would then be based on whether the program theories hold during the course of the CCI. With this approach, testing the program’s "theories of change" is offered as a means of assessing the progress and the impact of the intervention.
Weiss gives examples of the kinds of hypotheses that she sees underlying many of the CCIs: a relatively modest amount of money will make a significant difference in the community; the involvement of local citizens is a necessary component of an effective program; the neighborhood is a unit that makes sense for improving services and opportunities; comprehensiveness of services is indispensable; and benefits provided to an individual family member accrue to the entire family. In each case, she shows how those hypotheses can be played out through a series of micro steps to a set of desired ends. The types of data that an evaluator would need to collect in order to confirm the underlying theory become clear, as do the points at which specific hypotheses can be tested.
Weiss offers four reasons for pursuing theory-based evaluation for CCIs. First, it provides guidance about the key aspects of the program on which to focus scarce evaluation resources. Second, CCIs are not only attempting to test the merits of particular configurations of services, economic development activities, and so forth--they are also testing a broader set of assumptions about the combination and concentration of efforts that are required to make significant and sustained improvements in the lives of disadvantaged people. Theory-based evaluation will tell whether those assumptions, on which many specific program decisions are based, are valid and, if they are not, where they break down. Third, this approach to evaluation helps participants in the initiative reflect on their assumptions, examine the validity and practicality of those assumptions, and ensure that a common understanding exists about the theories that are being put into practice. Fourth, Weiss suggests that validating--or disproving--fundamental theories of change has the potential to powerfully affect major policy directions.
While Weiss’s paper contains examples of some of the theories of change that underlie the structural or operational dimensions of current CCIs, the next paper, by Connell, Aber, and Walker, complements Weiss's suggestions by demonstrating how current thinking and research in the social sciences can inform the development of the theories of change that underlie the program dimensions of initiatives. The authors present a framework for understanding how community dimensions affect outcomes for individuals both directly and indirectly. The paper focuses on young adolescents as a case, but the framework that the authors present can be applied to research on young children and families and on older youth as well.
The authors identify and define three desired outcomes for youth: economic self-sufficiency, healthy family and social relationships, and good citizenship practices. They review social science research on factors influencing those outcomes and conclude that community variables--physical and demographic characteristics, economic opportunity structure, institutional capacities, and social exchange and symbolic processes--affect the outcomes, directly in some cases, but mostly indirectly through their effects on social mediators and developmental processes. The key developmental processes are defined as learning to be productive, learning to connect, and learning to navigate. According to the authors, recent research has made considerable progress in demonstrating how the social mediators of family, peers, and other adults affect those developmental processes and ultimately the desired outcomes for youth.
By organizing and presenting the research in this way, the authors can spin their general theory of change into ever more specific micro steps that give guidance for program design. As an example, they focus attention on the part of the framework that addresses the relationships between youth and adults, and they demonstrate how program decisions would be made based on the framework’s hypothesized pathways.
Thus, the research-based framework that Connell, Aber, and Walker present can help guide program designers in developing their theories and thereby facilitate the evaluation task. Moreover, this basic research can also help spur progress on some of the current challenges to developing the measures that could be used to track CCI activities and outcomes.
Determining and Measuring the Effects of CCIs
The Absence of a Comparison Community or Control Group
Evaluators point to a fundamental problem in the evaluation of CCIs: it is virtually impossible to establish a "counterfactual" to a comprehensive community initiative--that is, to set up a situation that would permit an evaluator to know what would have happened in the same community in the absence of the intervention. As Hollister and Hill note in their paper, the traditional approach to evaluation compares outcomes for the population that is affected by the initiative with outcomes in communities that do not receive the initiative and, from that comparison, draws conclusions about its effects. The way to obtain the best comparison, closest to what the situation would have been in the same community without the initiative, is through random assignment of similar communities either to receive the intervention or to serve as "controls." Hollister and Hill refer to random assignment as the "nectar of the gods" and say that "once you’ve had a taste of the pure stuff it is hard to settle for the flawed alternatives." Researchers, the policy community, and funders have come to expect the high standards of validity associated with experimentation. However, funders have not selected communities for CCIs randomly, nor are they likely to do so in the future, and in any case appropriate communities are too few in number and CCIs are too idiosyncratic to justify randomization at the community level. Another traditional approach is random assignment of individuals within a community to treatment and control groups (or alternative treatment groups), as a way to draw valid conclusions about the impact of the intervention. Yet, since CCIs aim to affect all residents in the community--and many CCIs depend on this "saturation" to build support for the initiative--random assignment of individuals is not an option.
In their paper, Hollister and Hill examine alternative approaches for establishing a counterfactual that might have relevance for the evaluation of community-wide initiatives--such as constructing comparison groups of individuals, selecting comparison communities, and examining the community pre- and post-intervention--and assess the experience of various experiments that have used these alternative approaches. They conclude that none of these alternatives serves as an adequate counterfactual, "primarily because individuals and communities are changing all the time with respect to the measured outcome even in the absence of any intentional intervention." Moreover, little effort has been made up to this point to develop a statistical model of community and community change that might serve as a theoretical counterfactual.
Hollister and Hill conclude that there are no clear second-best methods for obtaining accurate assessments of the impact of a community-wide intervention. They turn their attention, instead, to steps that can be taken to facilitate the development of better methods of CCI evaluation. In particular, they point to the need for high-quality information about how communities evolve over time through, for example, better small-area data, improved community-records data, panel studies of communities, and better measures of social networks and community institutions. Such improvements would not only assist evaluators on the ground, but would also help researchers understand and model community-level variables. In time, a statistical model of a community undergoing "ordinary" change might be able to serve as an appropriate comparison for communities undergoing planned interventions.
Identifying and Measuring Outcomes
Documenting outcomes and attributing them to the intervention should be, of course, a key element of any evaluation. For those who believe in the promise of CCIs, the challenge is to demonstrate with some degree of certainty that they are achieving positive change within a time frame that assures continued financial investment on the part of public and private funders and continued personal investment on the part of staff and community residents.
But, as all of the papers in this volume suggest, CCIs are operating at so many levels (individual, family, community, institutional, and system) and across so many sectors that the task of defining outcomes that can show whether the initiatives are working has become formidable. Although a number of indicators are currently in use to assess the impact on individuals of services and supports, many key child, youth, and adult outcomes are still not appropriately measured, and indicators of family- and community-level outcomes are poor. Those problems are compounded in CCIs by the fact that, although they seek long-term change, short-term markers of progress--for example, interim outcomes, measures of institutional and system reform, and indicators of community capacity--are important for sustaining commitment to the initiatives. Finally, even if appropriate measures could be defined, CCI evaluators encounter a range of obstacles in devising cost-effective and nonintrusive ways of obtaining accurate data and in ensuring compatibility of data that come from various sources.
Claudia Coulton discusses the data dilemmas in some detail in her paper. She writes from her experience working with existing community-level indicators in Cleveland and points out the conceptual, methodological, and practical challenges associated with using them. She also describes the strategies that she and her colleagues have adopted to obtain information, in spite of those constraints, that have been useful in the design and evaluation of community initiatives, with special attention to indicators of child well-being. She focuses on two kinds of measures: outcome-oriented and contextually oriented measures.
When outcome-oriented indicators are sought, communities are treated as units for measuring the status of resident individuals according to various social, economic, health, and developmental outcomes. At the community level, these kinds of data are most likely found in agency records and other administrative sources. The types of measures that are most readily available relate to the health and safety of children and can be obtained from sources such as birth and death certificates, official reports of child maltreatment, hospital trauma registries, and police departments. Measures of social development are more difficult to obtain, but Coulton reports success using teen childbearing rates, delinquency rates derived from court records, and teen drug violation arrest rates from police department records. Measures of cognitive development can be developed for communities in collaboration with the local school system. The economic status of families can best be obtained from the census, but, because the census is decennial, Coulton and her colleagues have been working to develop a model for estimating poverty rates in noncensus years using variables derived from Aid to Families with Dependent Children (AFDC) and food stamp use.
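The logic of such an estimation model can be sketched as a simple regression: calibrate the relationship between annually available administrative caseload rates and the poverty rate in a census year, then apply the fitted relationship to noncensus years. The sketch below is purely illustrative; the tract figures and variable names are hypothetical, not Coulton's actual model or data.

```python
import numpy as np

# Hypothetical census-year data for five tracts: AFDC and food stamp
# caseload rates (available annually from administrative records) and
# poverty rates (known only from the decennial census).
census_afdc      = np.array([0.12, 0.25, 0.08, 0.31, 0.18])
census_foodstamp = np.array([0.15, 0.30, 0.10, 0.35, 0.22])
census_poverty   = np.array([0.14, 0.28, 0.09, 0.33, 0.20])

# Fit a linear model: poverty ~ b0 + b1*afdc + b2*foodstamp
X = np.column_stack([np.ones_like(census_afdc), census_afdc, census_foodstamp])
beta, *_ = np.linalg.lstsq(X, census_poverty, rcond=None)

def estimate_poverty(afdc_rate, foodstamp_rate):
    """Predict a tract's poverty rate in a noncensus year from
    that year's administrative caseload rates."""
    return beta[0] + beta[1] * afdc_rate + beta[2] * foodstamp_rate

# Apply the calibrated model to a later year's caseload rates.
est = estimate_poverty(0.20, 0.25)
```

A model of this kind is only as good as the stability of the relationship between caseloads and poverty, which is one reason the conceptual and practical cautions Coulton raises matter.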
Contextually oriented indicators include measures of community structure and process, such as overall income levels or the presence or absence of strong social support networks, that are presumed to affect resident children and families either positively or negatively. As a result, they are particularly relevant for the evaluation of CCIs. Unfortunately, the sources for these types of indicators at the community level are limited. Many come from the census and are therefore available only at ten-year intervals. This is especially true for information about economic status. Coulton explains the potential relevance of information about the age and family structures of a community, residential mobility, environmental stress as measured by such indicators as vacant and boarded houses, and incidence of personal crime. She also stresses the importance of seeking data that describe not only the negative but also the positive contextual influences of communities, such as supports for effective parenting and community resources for children.
Coulton describes a range of other community-level data problems, including disagreement about the geographic boundaries for a community and reporting bias in agency data, and concludes her paper with a set of recommendations for improving community-level indicators. She argues for community residents and leaders to be involved in designing the appropriate geographic units to be studied and the types of indicators that should be sought, and for mechanisms that make the information accessible and usable to community residents.
The Purpose of Evaluation and the Role of the Evaluator
The nature of comprehensive community initiatives and the state of the field of evaluation combine to suggest that the reasons for evaluating CCIs are more complex than for many other social experiments. That suggestion leads to a reconsideration of the objectives of CCI evaluations and of the role of the CCI evaluator. All of the papers in this volume touch upon this issue, and the last paper, by Prudence Brown, addresses it directly. A brief review of the key purposes and audiences that evaluations are meant to serve will help to set the stage for the direction that Brown recommends in her paper.
A main purpose of evaluation is impact assessment. All who are involved in an initiative--most especially the funders who are investing their money and the community members and staff who are investing their time and energy--have a need to know the degree to which it is working.
Accountability is a second purpose of evaluation, and this may become increasingly important if the call for decategorization of funds and devolution of authority to the local level is successful. In this case, there would likely be a trade-off between more flexible funding schemes and increased accountability, especially for outcomes (Gardner, forthcoming).
A third purpose aims to ensure that lessons from experiments are learned in a systematic way so that they can be applied to the next generation of policies, programs, and research. Alice O’Connor points out that history suggests that this process of social learning through evaluation is uncertain. Yet, this purpose of evaluation is particularly relevant for CCIs because they represent the operation of a new generation of social ideas.
Fourth, if an evaluation is so designed, it can become a program component of a CCI and serve the initiative’s goals through community building. The right kind of evaluation can build the capacity of initiative participants to design and institutionalize a self-assessment process and, through that, support an ongoing collaborative process of change.
Prudence Brown’s paper focuses on yet another purpose of evaluation that has become increasingly important in today's CCIs: evaluation can play an important "formative" function, affording a way to examine the ongoing implementation of the initiative and providing information for mid-course correction that can strengthen the initiative’s chances for success. Because CCIs are new and experimental, evaluators are being called upon more and more to perform this function.
Brown’s paper reviews the pros and cons of a more participatory role for the evaluator and concludes that a greater-than-normal degree of engagement in these multifaceted community initiatives is warranted. Indeed, it may be inevitable, since the multiple tasks with which the evaluator is likely to be charged cannot be performed well without meaningful interaction with the initiative participants. These tasks include defining and articulating the underlying theories of change, tracking and documenting the implementation of the initiative, identifying interim and long-term outcome measures to assess its effectiveness, collecting the relevant data, determining whether outcomes can be ascribed to the intervention, and analyzing the implications for the field. Brown notes, however, that this high degree of involvement in the initiative does not "release the evaluator from the right or obligation to both maintain high standards of scientific inquiry and to make judgments and recommendations as warranted," and suggests that funders, especially funders of multi-site initiatives, should experiment with different methods for obtaining the highest-quality information.
We believe that the readers of this volume will come away feeling hopeful. The broad conclusion of this set of papers is that CCIs represent an important and promising opportunity to test the best of what we believe has been learned from recent social programs and economic development efforts to improve the lives of children and families: (1) they combine the social, economic, and physical spheres; (2) they recognize the critical role of "community-building" and community participation; (3) at the same time, they recognize that poor communities need financial, political, and technical resources that lie outside the community; (4) they recognize that improvements in the public sector’s systems of support must be complemented by private- and nonprofit-sector activities; (5) they recognize that the changes that are sought will require sustained investment over a long period of time.
Taken together, the papers in this volume convey the following messages.
To the program designers, they say: You are on the right track. Working simultaneously on a variety of fronts seems to offer the greatest promise of success. But comprehensiveness should not be a cover for imprecision or for the absence of rigorous thinking. You still need to be clear about your goals and about your theories of change. You need to articulate your theories to all who are involved in the initiative. You need to be able to use negotiation around your theories as a vehicle for engaging all stakeholders in the process. And you need the theories to serve as the foundation for your evaluation.
To the methodologists, they say: We understand that random assignment is the best way to control for selection bias and gives you the greatest confidence in ruling out alternative, nonprogram-related explanations for how an outcome was achieved. But, given the nature and magnitude of the problem that we are trying to combat, we cannot limit our research questions and programmatic approaches to those for which random-assignment demonstration research is best suited. We are prepared to redefine precision in a search for meaningful answers to more relevant, complex, and multi-dimensional questions, and we need your help. But we are not coming empty-handed. We offer sound and well-articulated theories to inform the conversation. You can help us give our theories of change a scientific and more formal representation. You can also help us develop the measures to track whether our theories are holding up and encourage the collection of relevant data. Finally, you have an important role to play in legitimizing theory-based evaluation to the policy and funding communities.
To the program evaluators, they say: Your role is dramatically different in this new generation of interventions. You are part of the team that will work to define the program theory and you need to develop the tools that will facilitate that process. You will also need to develop valid measures of an initiative’s success and help negotiate agreement on them among stakeholders. Your measures can certainly include input and process dimensions, but you also need to focus on outcomes. You need to develop ways to analyze both quantitative and qualitative data in a way that will deliver scientifically credible conceptual and statistical information on an initiative’s progress. And your methods need to be cost-effective and respectful of those who are being evaluated.
To the social science research community, they say: You have told us quite a bit about the critical features of services, supports, and interventions that lead to improved outcomes for children and youth. But we need to know more about families. And we need much more information about communities, especially about how disadvantaged communities function and evolve and what it means to "build" a community. You must help us understand what the mediating factors are between the environment and family and individual outcomes, and how to influence them. This includes knowing much more about the elements that work best together to reinforce a trend toward positive outcomes and the conditions under which they are most likely to succeed.
To the funding and policy community, they say: You need to continue to press for evidence that the initiative is accomplishing the objectives for which it has been funded, but you must be mindful of the fact that significant change takes a long time. You need to become comfortable with the fact that the efforts that you fund may be necessary but not sufficient to achieve improved outcomes. For this reason, you should be thinking creatively about how several initiatives in the same community, operating under separate auspices and supported by separate funding, might be encouraged to agree to be held jointly accountable for achieving improved outcomes that none could achieve alone. You also need to reassess your standards of "certainty" and "elegance" in evaluations of these initiatives, because your pressures for evaluations to conform to a narrow set of methods may not only distort program design and operations but may also suppress information that is both rigorous and relevant. Finally, of all the stakeholders in these efforts, you are best placed to influence the larger environments and conditions that bear upon an initiative’s likelihood of success, and you should focus your energies in that direction.
With the above messages in mind, what should be the next steps for the Roundtable’s Steering Committee on Evaluation and for the larger community of individuals and organizations working directly with CCIs and on their evaluation? This volume suggests work on several fronts.
We need to work with program designers, funders, managers, and participants to identify and articulate both the programmatic and operational theories of change, whether explicit or implicit, that are guiding their efforts. We also need to construct frameworks, based on current theory and research findings, that lay out, as specifically as possible, the ways in which community-level and individual-level variables are known to affect one another. These two lines of information can then be brought together to develop richer and more specific "theories of change," solidly grounded in both practice and research, about how to effect meaningful improvement in the lives of residents of disadvantaged communities, theories that can then guide evaluation strategies.
Development of evaluation methods would then focus on (1) tracking the extent to which CCIs put their assumptions into practice and (2) identifying and analyzing the linkages between CCI activities and desired outcomes. We must seek to identify the data, qualitative and quantitative, that will be necessary to indicate advancement on both of those dimensions as well as promising new strategies for analyzing those data. And finally, these "new" approaches need to be applied to operating initiatives to ascertain how well they serve the purposes of assessing impact, ensuring accountability, encouraging social learning, and guiding program modification and improvement.
The Roundtable’s Evaluation Committee plans to pursue the implications of these papers in the year ahead and hopes that their publication will enable many other interested individuals and agencies to do so as well. The Roundtable welcomes comments, suggestions, and accounts of experience that could contribute to this process.
The authors wish to thank Alice O’Connor, Robert Granger, and J. Lawrence Aber for their helpful comments on earlier drafts.
American Writing Corporation. 1992. "Building Strong Communities: Strategies for Urban Change." Report of a conference sponsored by the Annie E. Casey, Ford, and Rockefeller Foundations, Cleveland, Ohio, May 1992.
Chynoweth, Judith K., Lauren Cook, Michael Campbell, and Barbara R. Dyer. 1992. "Experiments in Systems Change: States Implement Family Policy." Final Report to The Ford Foundation and United Way of America. Washington, DC: Council of Governors’ Policy Advisors.
Eisen, Arlene. 1992. "A Report on Foundations’ Support for Comprehensive Neighborhood-Based Community-Empowerment Initiatives." Report sponsored by East Bay Funders, the Ford Foundation, The New York Community Trust, the Piton Foundation, and the Riley Foundation, March 1992.
Fishman, N. and M. Phillips. 1993. "A Review of Comprehensive Collaborative Persistent Poverty Initiatives." Paper prepared for the Poverty Task Force of the Donor’s Forum of Chicago, June 1993. Mimeographed.
Gardner, Sid. 1992. "Elements and Contexts of Community-Based, Cross-Systems Reforms." Paper prepared for Discussion by the Roundtable on Effective Services [now the Roundtable on Comprehensive Community Initiatives for Children and Families], October 1992. Mimeographed.
Gardner, Sid. Forthcoming. "Reform Options for the Intergovernmental Funding System: Decategorization Policy Issues." Roundtable on Comprehensive Community Initiatives for Children and Families, Working Paper No. 1. Queenstown, Md.: The Aspen Institute.
Halpern, Robert. 1994. "Historical Perspectives on Neighborhood-Based Strategies to Address Poverty-Related Social Problems." Paper prepared for the University Seminar on Children and Their Families in Big Cities, Columbia University, New York, April 11, 1994. Mimeographed.
Himmelman, Arthur T. 1992. "Communities Working Collaboratively for a Change." Minneapolis: The Himmelman Consulting Group. Mimeographed.
Jenny, Patricia. 1993. "Community Building Initiatives: A Scan of Comprehensive Neighborhood Revitalization Programs." Paper prepared for The New York Community Trust, New York City, September 1993. Mimeographed.
Rosewater, Ann. 1992. "Comprehensive Approaches for Children and Families: A Philanthropic Perspective." Washington, DC: Council on Foundations.
Rosewater, Ann, Joan Wynn, et al. 1993. "Community-focused Reforms Affecting Children and Families: Current Foundation Initiatives and Opportunities for the MacArthur Foundation." Paper prepared by the Chapin Hall Center for Children, University of Chicago, April 1993. Mimeographed.
Schorr, Lisbeth B. 1988. Within Our Reach: Breaking the Cycle of Disadvantage. With Daniel Schorr. New York: Doubleday.
Sherwood, Kay. 1994. "Comprehensive Community Planning Initiatives: A List of Foundation-Sponsored Projects." Paper prepared for the Foundation for Child Development and the Chicago Community Trust, December 1994. Mimeographed.
Stagner, Matthew. 1993. "Elements and Contexts of Community-based Cross-systems Reform: An Update." Paper prepared for Discussion by the Roundtable on Effective Services [now the Roundtable on Comprehensive Community Initiatives for Children and Families], October 1993. Mimeographed.
Copyright © 1999 by The Aspen Institute