Case Studies in Evaluation

Instructor: Dr. Delwyn Goodrick

Description: Case study approaches are widely used in program evaluation. They facilitate an understanding of the way in which context mediates the influence of program and project interventions. While case study designs are often adopted to describe or depict program processes, their capacity to illuminate depth and detail can also contribute to an understanding of the mechanisms responsible for program outcomes. 

The literature on case study is impressive, but there remains tension in perspectives about what constitutes good case study practice in evaluation. This leads to substantive differences in the way case study is conceived and practiced within the evaluation profession.  This workshop aims to disentangle the discussions and debates, and highlight the central principles critical to effective case study practice and reporting.

This two-day workshop will explore case study design, analysis, and representation.  The workshop will address case study topics through brief lecture presentations, small group discussions, and workshop activities built around realistic case study scenarios.  Participants will be encouraged to examine the conceptual underpinnings, defining features, and practices involved in doing case studies in evaluation contexts.  Discussion of the ethical principles underpinning case study will be integrated throughout the workshop.

Specific topics and questions to be addressed over the two days include:

  • The utility of case studies in evaluation, and circumstances in which case studies may not be appropriate
  • Evaluation questions that are suitable for a case study approach
  • Selecting the unit of analysis in case study
  • Design frameworks in case studies – single and multiple case study; the intrinsic and instrumental case
  • The use of mixed methods in case study approaches – sequential and concurrent designs
  • Developing case study protocols and case study guides
  • Analyzing case study materials – within case and cross case analysis, matrix and template displays that facilitate analysis
  • Principles and protocols for effective teamwork in multiple case study approaches
  • Transferability/generalizability of case studies
  • Validity and trustworthiness of case studies
  • Synthesizing case materials
  • Issues of representation of the case and cases in reporting

Detailed course notes will be provided to all participants, and practice examples will be referenced over the two days.  Text provided and used in the course: Yin, R.K., Applications of Case Study Research (Sage, 2012).


Design and Administration of Internet, Mail, and Mixed-Mode Surveys

Instructor: Dr. Jolene D. Smyth

Description: Surveys have long been a key method used by evaluators and social scientists to understand behaviors, opinions, and outcomes.  The success of many evaluation projects depends on the quality of survey data collected. Most of us have been exposed to survey methods in a few college methods courses or on the job, but survey methodology is an entire scientific discipline on its own, and methods for doing surveys are changing rapidly.  In the last decade, this change has included enormous growth in internet surveys (including web surveys on mobile devices), the revival of postal mail surveys, and, perhaps most importantly, increased mixing of survey modes to overcome the growing reluctance of sample members to respond. Substantial growth in our knowledge of best practices for conducting mail, internet, and mixed-mode surveys has occurred in tandem with these changes. This course will provide new and updated information about best practices for designing and conducting internet, mail, and mixed-mode surveys.

The course begins with a discussion of fundamental concepts from the science of survey methodology.  Students will gain an understanding of the multiple sources of survey error that must be minimized to achieve quality results.  The course then takes a very practical turn, focusing on how the various sources of survey error can be minimized through best practices for writing questions; visual design of questions (drawing on concepts from the vision sciences); putting individual questions together into a formatted questionnaire; designing web surveys; fielding surveys and encouraging response by mail, web, or in a mixed-mode design; and mixing multiple modes to minimize error.

The course is made up of a mixture of PowerPoint presentation, discussion, and activities built around real-world survey examples and case studies.  Participants will get the chance to apply what they are learning in activities and will have ample opportunity to ask questions during the course (or during breaks) and to discuss the challenges they face with the instructor and other participants.  Participants will receive a copy of the course slides and of the text “Internet, Mail, and Mixed-Mode Surveys: The Tailored Design Method” by Don A. Dillman, Jolene D. Smyth, and Leah Melani Christian (4th Edition, 2014).


Designing, Managing, and Analyzing Multi-Site Evaluations

Instructor: Dr. Debra J. Rog

Description: Guidance on how to carry out multi-site evaluations is scarce, and what is available tends to focus on quantitative data collection and analysis, usually treating diverse sites in a uniform manner. This course will present instruction on designing, managing, and analyzing multi-site studies, focusing on how designs must differ depending on the specifics of the situation, e.g., central evaluator control vs. interactive collaboration; driven by research vs. program interests; planned and prospective vs. retrospective; varied vs. standardized sites; exploratory vs. confirmatory purpose; and data that are exclusively quantitative vs. qualitative vs. a mixture. Topics include stakeholder involvement, collaborative design, maintaining integrity/quality in data, monitoring and technical assistance, data submission, communication and group process, cross-site synthesis and analysis, and cross-site reporting and dissemination. Practical strategies learned through first-hand experience as well as from review of other studies will be shared. Teaching will include large- and small-group discussions, and students will work together on several problems. Detailed course materials are provided. Text provided: Herrell, J.M. & R.B. Straw, Conducting Multiple Site Evaluations in Real-World Settings, New Directions for Evaluation #94 (Jossey-Bass, 2002).

Prerequisites: Understanding of evaluation and research design.


Sampling: Basic Methods for Probability and Non-Probability Samples

Instructor: Dr. Gary T. Henry

Description: Careful use of sampling methods can save resources and often increase the validity of evaluation findings. This seminar will focus on the following: (a) The Basics: defining sample, sampling and validity, probability and non-probability samples, and when not to sample; (b) Error and Sampling: study logic and sources of error, target population and sampling frame; (c) Probability Sampling Methods: simple random sampling, systematic sampling, stratified sampling, cluster sampling, and multi-stage sampling; (d) Making Choices before, during, and after sampling; and (e) Sampling Issues. Many examples will be used to illustrate these topics and participants will have the opportunity to work with case exercises. The instructor’s text Practical Sampling (Sage, 1990) will be provided as part of the course fee in addition to take-home class work materials.
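To make one of the probability sampling methods above concrete, the sketch below shows proportionate stratified random sampling in Python. It is an illustrative example only, not course material: the frame, strata, sample size, and function name are all invented for this sketch.

    # Illustrative sketch of proportionate stratified random sampling.
    # All names and numbers here are hypothetical.
    import random
    from collections import defaultdict

    def stratified_sample(frame, stratum_of, n, seed=42):
        """Draw a proportionate stratified random sample of size n from frame."""
        rng = random.Random(seed)
        strata = defaultdict(list)
        for unit in frame:
            strata[stratum_of(unit)].append(unit)
        sample = []
        for label, units in strata.items():
            # Allocate draws to each stratum in proportion to its size.
            k = round(n * len(units) / len(frame))
            sample.extend(rng.sample(units, min(k, len(units))))
        return sample

    # Example: 1,000 program participants in three regions, sample of 100.
    frame = list(enumerate(["North"] * 500 + ["South"] * 300 + ["West"] * 200))
    picked = stratified_sample(frame, stratum_of=lambda unit: unit[1], n=100)
    print(len(picked))  # ~100, allocated roughly 50/30/20 across regions

Proportionate allocation keeps each stratum's share of the sample equal to its share of the frame; a disproportionate design would simply replace that allocation rule.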


Outcome and Impact Assessment

Instructor: Dr. Mark W. Lipsey

Description: Valid assessment of the outcomes or impact of a social program is among the most challenging evaluation tasks, but also one of the most important. This course will review monitoring and tracking approaches to assessing outcomes as well as the experimental and quasi-experimental methods that are the foundation for contemporary impact evaluation. Attention will also be given to issues related to the measurement of outcomes, ensuring detection of meaningful program effects, and interpreting the magnitude of effects. Emphasis will mainly be on the logic of outcome evaluation and the conceptual and methodological nature of the approaches, including research design and associated analysis issues. Nonetheless, some familiarity with social science methods and statistical analysis is necessary to effectively engage the topics covered in this course. Participants in this class will each receive one copy of the Rossi et al. text, Evaluation: A Systematic Approach, 7th Edition (Sage, 2003).
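As a small illustration of what “interpreting the magnitude of effects” can involve, the sketch below computes a standardized mean difference (Cohen's d) in Python. The outcome scores are invented and the function is hypothetical; it is not material from the course.

    # Hypothetical illustration of a standardized mean difference (Cohen's d),
    # one common way to express the magnitude of a program effect.
    from statistics import mean, stdev

    def cohens_d(treated, comparison):
        """Difference in group means divided by the pooled standard deviation."""
        n_t, n_c = len(treated), len(comparison)
        pooled_var = ((n_t - 1) * stdev(treated) ** 2 +
                      (n_c - 1) * stdev(comparison) ** 2) / (n_t + n_c - 2)
        return (mean(treated) - mean(comparison)) / pooled_var ** 0.5

    # Invented outcome scores for a treated group and a comparison group.
    treated = [72, 75, 78, 80, 74, 77, 79, 81]
    comparison = [70, 71, 73, 69, 72, 74, 68, 71]
    print(round(cohens_d(treated, comparison), 2))  # > 0.8, i.e., large by Cohen's benchmarks

Reporting effects on a standardized scale like this is one way to judge whether a detectable effect is also substantively meaningful.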

Prerequisites: At least some background in social science methods and statistical analysis or direct experience with outcome measurement and impact assessment designs.


Qualitative Evaluation Methods

Instructor: Dr. Michael Quinn Patton

Description: Qualitative inquiries use in-depth interviews, focus groups, observational methods, document analysis, and case studies to provide rich descriptions of people, programs, and community processes. To be credible and useful, the unique sampling, design, and analysis approaches of qualitative methods must be understood and used. Qualitative data can be used for various purposes including evaluating individualized outcomes, capturing program processes, exploring a new area of interest (e.g., to identify the unknown variables one might want to measure in greater depth/breadth), identifying unanticipated consequences and side effects, supporting participatory evaluations, assessing quality, and humanizing evaluations by portraying the people and stories behind the numbers. This class will cover the basics of qualitative evaluation, including design, case selection (purposeful sampling), data collection techniques, and beginning analysis. Ways of increasing the rigor and credibility of qualitative evaluations will be examined. Mixed methods approaches will be included. Alternative qualitative strategies and new, innovative directions will complete the course. The strengths and weaknesses of various qualitative methods will be identified.  Exercises will provide experience in applying qualitative methods and analysis in evaluations. Individuals enrolled in this class will each receive one copy of Dr. Patton’s text, Qualitative Research and Evaluation Methods (4th Edition, Sage, 2015).


Using Non-experimental Designs for Impact Evaluation

Instructor: Dr. Gary T. Henry

Description: In the past few years, there have been very exciting developments in approaches to causal inference that have expanded our knowledge and toolkit for conducting impact evaluations. Evaluators, statisticians, and social scientists have focused a great deal of attention on causal inference, the benefits and drawbacks of random assignment studies, and alternative designs for estimating program impacts. For this workshop, we will have three goals:

  • to understand a general theory of causal inference that covers both random assignment and observational studies, including quasi-experimental and non-experimental studies;
  • to identify the assumptions needed to infer causality in evaluations; and
  • to describe, compare, and contrast six promising alternatives to random assignment studies for inferring causality, including the requirements for implementing these designs, the strengths and weaknesses of each, and examples from evaluations where these designs have been applied.

The six alternative designs to be described and discussed are: regression discontinuity; propensity score matching; instrumental variables; fixed effects (within-unit variance); difference-in-differences; and interrupted time series. Also, current findings concerning the accuracy of these designs relative to random assignment studies from “within study” assessments of bias will be presented and the implications for practice discussed.

Prerequisites: This class assumes some familiarity with research design, threats to validity, impact evaluations, and multivariate regression.
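As a concrete illustration of one of the six designs named above, the following is a minimal difference-in-differences sketch in Python. It uses invented group means and the simplest two-group, two-period estimator; it is an orienting example only, not material from the course.

    # Minimal two-group, two-period difference-in-differences sketch.
    # All numbers are invented; real applications estimate this in a
    # regression framework with appropriate standard errors.

    def diff_in_diff(treat_pre, treat_post, control_pre, control_post):
        """Change in the treated group minus change in the comparison group."""
        return (treat_post - treat_pre) - (control_post - control_pre)

    # Hypothetical mean outcomes before and after a program.
    effect = diff_in_diff(treat_pre=50.0, treat_post=60.0,
                          control_pre=48.0, control_post=52.0)
    print(effect)  # 6.0: estimated impact under the parallel-trends assumption

The comparison group's change stands in for what would have happened to the treated group without the program, which is why the parallel-trends assumption is central to this design.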


Using Program Theory and Logic Models in Evaluation

Instructor: Dr. Patricia Rogers

Description: It is now commonplace to use program theory, or logic models, in evaluation as a means to explain how a program is understood to contribute to its intended or observed outcomes. However, this does not mean that they are always used appropriately or to the best effect. At their best, logic models can provide conceptual clarity, motivate staff, and focus evaluations. At their worst, they can divert time and attention from other critical evaluation activities, provide an invalid or misleading picture of the program, and discourage critical investigation of causal pathways and unintended outcomes. This course focuses on developing useful logic models and using them effectively to guide evaluation while avoiding some of the most common traps. It begins with the assumption that participants already know something about logic models and program theory* but come with different understandings of terminology and options. Application exercises are used throughout the course to demonstrate concepts and techniques: (a) ways to use logic models to positive advantage (e.g., to identify criteria, develop questions, identify data sources and bases of comparison); (b) ways they are used with negative results (e.g., focusing only on intended outcomes, ignoring differential effects for client subgroups, seeking only evidence that confirms the theory); and (c) strategies to avoid traps (e.g., differentiated theory, market segmentation, competitive elaboration of alternative hypotheses). Participants receive the instructor’s co-authored text, Purposeful Program Theory (Jossey-Bass, 2011).

*Note: Prior to attendance, those with no previous experience with program theory should work through the University of Wisconsin Extension’s course ‘Enhancing Program Performance with Logic Models’, available at no cost at www.uwex.edu/ces/lmcourse/
