Chapter 11: Evaluation and Policy Research

Evaluation Research
• Evaluation research is not a method of data collection, like survey research or experiments, nor is it a unique component of research designs, like sampling or measurement.
• Instead, evaluation research is social research that is conducted for a distinctive purpose: to investigate social programs (e.g., substance abuse treatment programs, welfare programs, criminal justice programs, or employment and training programs).

• Evaluation research may use one or more of these methods (experiments, surveys, observation, intensive interviews, focus groups, etc.) to analyze data.

Evaluation Research, cont.
• For each project, an evaluation researcher must select a research design and method of data collection that are useful for answering the particular research questions posed and appropriate for the particular program investigated.
• The development of evaluation research as a major enterprise followed on the heels of the expansion of the federal government during the Great Depression and World War II.

Evaluation Research, cont.
• Large Depression-era government outlays for social programs stimulated interest in monitoring program output, and the military effort in World War II led to some of the necessary review and contracting procedures for sponsoring evaluation research.
• In the 1960s, criminal justice researchers began to use experiments to test the value of different policies (Orr 1999: 24).

Evaluation Research, cont.
• In the early 1980s, after this period of rapid growth, many evaluation research firms closed in tandem with the decline of many Great Society programs.
• However, the demand for evaluation research continues, due in part to government requirements.
• The growth of evaluation research is also reflected in the social science community: the American Evaluation Association was founded in 1986 as a professional organization for evaluation researchers (merging two previous associations) and the publisher of an evaluation research journal.

Evaluation Basics
• First, clients, customers, students, or some other persons or units (cases) enter the program as inputs, functioning as raw materials to be processed.
• Resources and staff required by a program are also program inputs.

Evaluation Basics, cont.
• Next, some service or treatment is provided to the cases. This may be attendance in a class, assistance with a health problem, residence in new housing, or receipt of special cash benefits.
• The program process may be simple or complicated, short or long, but it is designed to have some impact on the cases.

Evaluation Basics, cont.
• The direct product of the program's service delivery process is its output.
• Program outputs may include clients served, case managers trained, food parcels delivered, or arrests made.
• The program outputs may be desirable in themselves, but they primarily serve to indicate that the program is operating.

Evaluation Basics, cont.
• Program outcomes indicate the impact of the program on the cases that have been processed.
• Outcomes can range from improved test scores or higher rates of job retention to fewer criminal offenses and lower rates of poverty.
• Any social program is likely to have multiple outcomes, some intended and some unintended, some positive and others viewed as negative.

Evaluation Basics, cont.
• Variation in both outputs and outcomes, in turn, influences the inputs to the program through a feedback process.
• If not enough clients are being served, recruitment of new clients may increase. If too many negative side effects result from a trial medication, the trials may be limited or terminated. If a program does not appear to lead to improved outcomes, clients may go elsewhere.

Evaluation Basics, cont.
• Evaluation research is simply a systematic approach to feedback: it strengthens the feedback loop through credible analyses of program operations and outcomes.
• Evaluation research also broadens this loop to include connections to parties outside the program itself.
• A funding agency or political authority may mandate the research, outside experts may be brought in to conduct the research, and the evaluation findings may be released to the public, or at least to funders, in a formal report.

Evaluation Basics, cont.
• The evaluation process as a whole, and feedback in particular, can be understood only in relation to the interests and perspectives of program stakeholders.
• Stakeholders are those individuals and groups who have some basis of concern with the program; they might be clients, staff, managers, funders, or the public. Who the program stakeholders are and what role they play in the program evaluation will have tremendous consequences for the research.

Evaluation Basics, cont.
• Unlike explanatory social science research, evaluation research is not designed to test the implications of a social theory; the basic issue often is: What is the program's impact?
• Process evaluation, for instance, often uses qualitative methods as traditional social science does, but unlike exploratory research, the goal is not to induce a broad theoretical explanation for what is discovered.

Evaluation Basics, cont.
• Instead, the question is: How does the program do what it does?
• Unlike in social science research, evaluation researchers cannot design their studies simply in accord with the highest scientific standards and the most important research questions; instead, program stakeholders set the agenda. But there is no sharp boundary between the two: in their attempts to explain why a program has an impact, and whether the program is needed, evaluation researchers often bring social theories into their projects, though for immediately practical aims.

Questions for Evaluation Research
• Evaluation projects can focus on several questions related to the operation of social programs and the impact they have:
  o Is the program needed?
  o Can the program be evaluated?
  o How does the program operate?
  o What is the program's impact?
  o How efficient is the program?

Needs Assessment
• Is a new program needed or an old one still required? Is there a need at all?
• A needs assessment attempts to answer these questions with systematic, credible evidence.
• Need may be assessed by social indicators such as the poverty rate or the level of home ownership; by interviews of local experts such as school board members or team captains; by surveys of populations in need; or by focus groups with community residents (Rossi & Freeman 1989).
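
To make the social-indicator idea concrete, here is a minimal sketch in Python of computing one such indicator, a community poverty rate, from household income data. The incomes and the poverty threshold are invented for illustration; a real needs assessment would use official thresholds and representative survey data.

```python
# Minimal sketch: estimating a poverty-rate indicator from survey data.
# The incomes and the poverty threshold below are made up for illustration.

household_incomes = [18_500, 42_000, 9_750, 27_300, 15_200, 61_000, 12_400]
POVERTY_THRESHOLD = 20_000  # hypothetical poverty line for this example

n_poor = sum(1 for income in household_incomes if income < POVERTY_THRESHOLD)
poverty_rate = n_poor / len(household_incomes)

print(f"Poverty rate: {poverty_rate:.1%}")  # 57.1% for this toy sample
```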

Evaluability Assessment
• Evaluation research will be pointless if the program itself cannot be evaluated.
• Yes, some type of study is always possible, but a study specifically designed to identify the effects of a particular program may not be possible within the available time and resources.
• So researchers may conduct an evaluability assessment to learn this in advance, rather than expend time and effort on a fruitless project.

Evaluability Assessment, cont.
• Why might a social program not be evaluable?
• Management only wants to have its superior performance confirmed and does not really care whether the program is having its intended effects. This is a very common problem.
• Staff are so alienated from the agency that they don't trust any attempt sponsored by management to check on their performance.

Evaluability Assessment, cont.
• Program personnel are just "helping people" or "putting in time" without any clear sense of what the program is trying to achieve.
• The program is not clearly distinct from other services delivered by the agency and so can't be evaluated by itself (Patton 2002: 164).

Evaluability Assessment, cont.
• An evaluability assessment can help to solve the problems identified: discussion with program managers and staff can result in changes in program operations.
• The evaluators may use the evaluability assessment to "sell" the evaluation to participants and to sensitize them to the importance of clarifying their goals and objectives. Knowledge about the program gleaned through the evaluability assessment can then be used to refine evaluation plans.

Evaluability Assessment, cont.
• Because they are preliminary studies to "check things out," evaluability assessments often rely on qualitative methods.
• Program managers and key staff may be interviewed in depth, or program sponsors may be asked about the importance they attach to different goals.
• These assessments also may have an "action research" aspect, because the researcher presents the findings to program managers and encourages changes in program operations.

Process Evaluation
• What actually happens in a social program? Finding this out is process analysis, or process evaluation: research to investigate the process of service delivery.
• Process evaluation is even more important when more complex programs are evaluated. Many social programs comprise multiple elements and are delivered over an extended period of time, often by different providers in different areas.

Process Evaluation, cont.
• The term formative evaluation may be used instead of process evaluation when the evaluation findings are used to help shape and refine the program.
• Formative evaluation procedures that are incorporated into the initial development of the service program can specify the treatment process and lead to changes in recruitment procedures, program delivery, or measurement tools.

Process Evaluation, cont.
• Process evaluation can employ a wide range of indicators.
• Program coverage can be monitored through program records, participant surveys, community surveys, or counts of utilizers versus dropouts and ineligibles.
• Service delivery can be monitored through service records completed by program staff, a management information system maintained by program administrators, or reports by program recipients (Rossi & Freeman 1989).
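
As a sketch of how such coverage indicators might be computed from program records, consider the following Python fragment. The counts are hypothetical, not drawn from any particular program; in practice they would come from a management information system or participant surveys.

```python
# Minimal sketch: coverage indicators from program records.
# All record counts below are hypothetical.

eligible_population = 1_200   # estimated eligible cases in the community
enrolled = 480                # cases that entered the program
dropouts = 96                 # enrolled cases that left before completion

coverage_rate = enrolled / eligible_population
dropout_rate = dropouts / enrolled

print(f"Coverage: {coverage_rate:.0%} of the eligible population")  # 40%
print(f"Dropout:  {dropout_rate:.0%} of enrolled cases")            # 20%
```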

Process Evaluation, cont.
• Qualitative methods are often a key component of process evaluation studies because they can be used to elucidate and understand internal program dynamics, even those that were not anticipated.
• Qualitative researchers may develop detailed descriptions of how program participants engage with each other, how the program experience varies for different people, and how the program changes and evolves over time.

Impact Analysis
• The core questions of evaluation research are: Did the program work? Did it have the intended result?
• This part of the research is variously called impact analysis, impact evaluation, or summative evaluation.
• Formally speaking, impact analysis compares what happened after a program with what would have happened had there been no program.

Impact Analysis, cont.
• Rigorous evaluations often lead to the conclusion that a program does not have the desired effect.
• Depending on political support for the program and its goals, the result may be efforts to redesign the program (as with D.A.R.E.) or reduction or termination of program funding.

Efficiency Analysis
• Whatever the program's benefits, are they sufficient to offset the program's costs? Are the taxpayers getting their money's worth? What resources are required by the program?
• These efficiency questions can be the primary reason that funders require evaluation of the programs they fund. As a result, efficiency analysis, which compares program effects to costs, is often a necessary component of an evaluation research project.

• Efficiency analysis: a type of evaluation research that compares program costs to program effects. It can be either a cost-benefit analysis or a cost-effectiveness analysis.

• Cost-benefit analysis: a type of evaluation research that compares program costs to the economic value of program benefits.

Efficiency Analysis, cont.
• A cost-benefit analysis must identify the specific program costs and the procedures for estimating the economic value of specific program benefits.
• This type of analysis also requires that the analyst identify whose perspective will be used to determine what can be considered a benefit rather than a cost.
• Program clients will have a different perspective on these issues than do taxpayers or program staff.
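
The following sketch illustrates the arithmetic of a cost-benefit comparison for a hypothetical job-training program. All dollar figures are invented, and in practice the monetized benefits would depend on whose perspective (clients, taxpayers, program staff) the analyst adopts.

```python
# Minimal sketch: cost-benefit comparison for a hypothetical job-training
# program. All dollar figures are invented for illustration.

program_costs = 250_000       # staff, facilities, materials
monetized_benefits = 340_000  # e.g., added earnings plus reduced benefit payments

net_benefit = monetized_benefits - program_costs
benefit_cost_ratio = monetized_benefits / program_costs

print(f"Net benefit: ${net_benefit:,}")                 # $90,000
print(f"Benefit-cost ratio: {benefit_cost_ratio:.2f}")  # 1.36
```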

• Cost-effectiveness analysis: a type of evaluation research that compares program costs to actual program outcomes.

Efficiency Analysis, cont.
• A cost-effectiveness analysis focuses attention directly on the program's outcomes rather than on the economic value of those outcomes.
• In a cost-effectiveness analysis, the specific costs of the program are compared to the program's outcomes, such as the number of jobs obtained, the extent of improvement in reading scores, or the degree of decline in crimes committed.
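
A brief sketch of the contrast: rather than monetizing benefits, a cost-effectiveness analysis divides program cost by outcome units, here cost per job obtained for two hypothetical employment programs with invented figures.

```python
# Minimal sketch: cost-effectiveness as cost per unit of outcome, for two
# hypothetical employment programs. Figures are invented for illustration.

programs = {
    "Program A": {"cost": 250_000, "jobs_obtained": 125},
    "Program B": {"cost": 180_000, "jobs_obtained": 75},
}

for name, p in programs.items():
    cost_per_job = p["cost"] / p["jobs_obtained"]
    print(f"{name}: ${cost_per_job:,.0f} per job obtained")
# Program A: $2,000 per job; Program B: $2,400 per job.
```

Note that no dollar value is placed on a job obtained; the programs are simply compared on cost per unit of outcome.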

Design Decisions
• Once we have decided on, or identified, the goal or focus for a program evaluation, there are still important decisions to be made about how to design the specific evaluation project.

Design Decisions, cont.
• The most important decisions are the following:
  o Do we care how the program gets results?
  o Whose goals matter most: the researcher's or the stakeholders'?
  o Which methods provide the best answers: qualitative, quantitative, or both?
  o How complicated should the findings be?

Quantitative or Qualitative Methods
• Evaluation research that attempts to identify the effects of a social program typically is quantitative: Did the response times of emergency personnel tend to decrease? Did the students' test scores increase? Did housing retention improve?

Quantitative or Qualitative Methods, cont. n n n It’s fair to say that when

Quantitative or Qualitative Methods, cont. n n n It’s fair to say that when there’s an interest in comparing outcomes between an experimental and a control group, or tracking change over time in a systematic manner, quantitative methods are favored. But qualitative methods can add much to quantitative evaluation research studies, including more depth, detail, nuance, and exemplary case studies Perhaps the greatest contribution qualitative methods can make in many evaluation studies is investigating the program

Quantitative or Qualitative Methods, cont.
• Although it is possible to track service delivery with quantitative measures such as staff contact and frequency of complaints, finding out what is happening to clients and how clients experience the program can often best be accomplished by observing program activities and interviewing staff and clients intensively.
• Another good reason for using qualitative methods in evaluation research is the importance of learning how different individuals react to the program.
• Qualitative methods can also help reveal how social programs actually operate.

Simple or Complex Outcomes
• Does the program have only one outcome? Unlikely.
• The decision to focus on one outcome rather than another, or on a single outcome rather than several, can have enormous implications.
• In spite of the additional difficulties introduced by measuring multiple outcomes, most evaluation researchers attempt to do so. The result usually is a much more realistic, and richer, understanding of program impact.

Ethics in Evaluation
• Evaluation research can make a difference in people's lives while the research is being conducted, as well as after the results are reported.
• Job opportunities, welfare requirements, housing options, treatment for substance abuse, and training programs are each potentially important benefits, and an evaluation research project can change both the type and availability of such benefits.
• This direct impact on research participants, and potentially their families, heightens the attention that evaluation researchers must give to human subjects concerns.

Ethics in Evaluation, cont.
• It is when program impact is the focus that human subjects considerations multiply.
• What about assigning persons randomly to receive some social program or benefit?

Ethics in Evaluation, cont.
• One justification given by evaluation researchers has to do with the scarcity of these resources.
• If not everyone in the population who is eligible for a program can receive it, due to resource limitations, what could be a fairer way to distribute the program benefits than through a lottery?
• Random assignment also seems like a reasonable way to allocate potential program benefits when a new program is being tested with only some members of the target recipient population.
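
A minimal sketch of such a lottery in Python, assuming a hypothetical list of eligible applicants and a fixed number of program slots; fixing the random seed keeps the assignment reproducible and auditable.

```python
# Minimal sketch: allocating scarce program slots by lottery (random
# assignment). Applicant names and the number of slots are hypothetical.

import random

applicants = [f"applicant_{i:03d}" for i in range(1, 101)]  # 100 eligible cases
N_SLOTS = 40                                                # program capacity

rng = random.Random(2024)  # fixed seed makes the lottery reproducible
selected = set(rng.sample(applicants, N_SLOTS))

program_group = [a for a in applicants if a in selected]
control_group = [a for a in applicants if a not in selected]

print(len(program_group), len(control_group))  # 40 60
```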

Ethics in Evaluation, cont.
• However, when an ongoing entitlement program is being evaluated and experimental subjects would normally be eligible for program participation, it may not be ethical simply to bar some potential participants from the program.
• Instead, evaluation researchers may test alternative treatments or provide some alternative benefit while the treatment is being denied.

Ethics in Evaluation, cont.
• There are many other ethical challenges in evaluation research:
• How can confidentiality be preserved when the data are owned by a government agency or are subject to discovery in a legal proceeding?
• Is it legitimate for research decisions to be shaped by political considerations?

Ethics in Evaluation, cont.
• Must evaluation findings be shared with stakeholders rather than only with policy makers?
• Will the results actually be used?

Ethics in Evaluation, cont.
• The problem of maintaining subject confidentiality is particularly thorny because researchers, in general, are not legally protected from the requirement that they provide evidence requested in legal proceedings, particularly through the process known as "discovery."
• However, it is important to be aware that several federal statutes have been passed specifically to protect research data about vulnerable populations from legal disclosure requirements.

Ethics in Evaluation, cont.
• Ethical concerns must also be given special attention when evaluation research projects involve members of vulnerable populations as subjects.
• In order to conduct research on children, parental consent usually is required before the child can be approached directly about the research.
• Adding this requirement to an evaluation research project can dramatically reduce participation, because many parents simply do not bother to respond to mailed consent forms. (What about mentally disabled subjects?)

Conclusions
• Hopes for evaluation research are high: society could benefit from the development of programs that work well, accomplish their policy goals, and serve people who genuinely need them.

Conclusions, cont.
• Because social programs and the people who use them are complex, evaluation research designs can easily miss important outcomes or aspects of the program process.
• Because the many program stakeholders all have an interest in particular results from the evaluation, researchers can be subject to an unusual level of cross-pressures and demands.
• Because the need to include program stakeholders in research decisions may undermine adherence to scientific standards, research designs can be weakened.

Conclusions, cont.
• Because some program administrators want to believe their programs really work well, researchers may be pressured to avoid null findings or, if they are not responsive, find their research reports ignored. Plenty of well-done evaluation research studies wind up in a recycling bin or hidden away in a file cabinet.
• Because the primary audience for evaluation research reports is program administrators, politicians, or members of the public, evaluation findings may be oversimplified, distorting their meaning.

Conclusions, cont.
• The rewards of evaluation research are often worth the risks, however.
• Evaluation research can provide social scientists with rare opportunities to study complex social processes, with real consequences, and to contribute to the public good.
• Although they may face unusual constraints on their research designs, most evaluation projects can result in high-quality analyses and publications in reputable social science journals.