Evaluation Theory, Design, and Methods Courses
- Case Studies in Evaluation
- Conducting Successful Evaluation Surveys
- Designing, Managing, and Analyzing Multi-Site Evaluations
- Outcome and Impact Assessment
- Qualitative Evaluation Methods
- Quantitative Evaluation Methods
- Sampling: Basic Methods for Probability and Non-Probability Samples
- Using Non-experimental Designs for Impact Evaluation
- Using Research, Program Theory, & Logic Models to Design and Evaluate Programs
- Using Program Theory and Logic Models in Evaluation
Case Studies in Evaluation
Instructor: Delwyn Goodrick, PhD
Description: Case study approaches are widely used in program evaluation. They facilitate an understanding of the way in which context mediates the influence of program and project interventions. While case study designs are often adopted to describe or depict program processes, their capacity to illuminate depth and detail can also contribute to an understanding of the mechanisms responsible for program outcomes.
The literature on case study is impressive, but there remains tension in perspectives about what constitutes good case study practice in evaluation. This leads to substantive differences in the way case study is conceived and practiced within the evaluation profession. This workshop aims to disentangle the discussions and debates, and highlight the central principles critical to effective case study practice and reporting.
This two-day workshop will explore case study design, analysis, and representation. The workshop will address case study topics through brief lecture presentations, small-group discussion, and workshop activities built around realistic case study scenarios. Participants will be encouraged to examine the conceptual underpinnings, defining features, and practices involved in doing case studies in evaluation contexts. Discussion of the ethical principles underpinning case study will be integrated throughout the workshop.
Specific topics to be addressed over the two days include:
- The utility of case studies in evaluation, and circumstances in which case studies may not be appropriate
- Evaluation questions that are suitable for a case study approach
- Selecting the unit of analysis in case study
- Design frameworks in case studies – single and multiple case study; the intrinsic and instrumental case
- The use of mixed methods in case study approaches – sequential and concurrent designs
- Developing case study protocols and case study guides
- Analyzing case study materials – within case and cross case analysis, matrix and template displays that facilitate analysis
- Principles and protocols for effective team work in multiple case study approaches
- Transferability/generalizability of case studies
- Validity and trustworthiness of case studies
- Synthesizing case materials
- Issues of representation of the case and cases in reporting
Detailed course notes will be provided to all participants, and practice examples will be referenced over the two days.
Recommended text: Yin, R.K., Applications of Case Study Research (Sage, 2012).
Conducting Successful Evaluation Surveys
Instructor: Jolene D. Smyth, PhD
Description: The success of many evaluation projects depends on the quality of the survey data collected. In the last decade, sample members have become increasingly reluctant to respond, especially in evaluation contexts. In response to these challenges and to technological innovation, methods for doing surveys are changing rapidly. This course will provide new and cutting-edge information about best practices for designing and conducting internet, mail, and mixed-mode surveys.
Students will gain an understanding of the multiple sources of survey error and how to identify and fix commonly occurring survey issues. The course will cover writing questions; visual design of questions (drawing on concepts from the vision sciences); putting individual questions together into a formatted questionnaire; designing web surveys; designing for multiple modes; and fielding surveys and encouraging response by mail, web, or in a mixed-mode design.
The course is made up of a mixture of PowerPoint presentation, discussion, and activities built around real-world survey examples and case studies. Participants will apply what they are learning in activities and will have ample opportunity to ask questions during the course (or during breaks) and to discuss the survey challenges they face with the instructor and other participants. Participants will receive a copy of the course slides and of the text Internet, Mail, and Mixed-Mode Surveys: The Tailored Design Method by Don A. Dillman, Jolene D. Smyth, and Leah Melani Christian (4th Edition, 2014).
Designing, Managing, and Analyzing Multi-Site Evaluations
Instructor: Debra J. Rog, PhD
Description: Guidance on how to carry out multi-site evaluations is scarce, and what is available tends to focus on quantitative data collection and analysis, usually treating diverse sites in a uniform manner. This course will provide instruction on designing, managing, and analyzing multi-site studies, with a focus on how requirements differ with the specifics of the situation, e.g., central evaluator control vs. interactive collaboration; driven by research vs. program interests; planned and prospective vs. retrospective; varied vs. standardized sites; exploratory vs. confirmatory purpose; and data that are exclusively quantitative vs. qualitative vs. a mixture. Topics include stakeholder involvement, collaborative design, maintaining integrity/quality in data, monitoring and technical assistance, data submission, communication and group process, cross-site synthesis and analysis, and cross-site reporting and dissemination. Practical strategies learned through first-hand experience as well as from review of other studies will be shared. Teaching will include large- and small-group discussions, and students will work together on several problems. Detailed course materials are provided.
Prerequisites: Understanding of evaluation and research design.
Outcome and Impact Assessment
Instructor: Mark W. Lipsey, PhD
Description: Valid assessment of the outcomes or impact of a social program is among the most challenging evaluation tasks, but also one of the most important. This course will review monitoring and tracking approaches to assessing outcomes as well as the experimental and quasi-experimental methods that are the foundation for contemporary impact evaluation. Attention will also be given to issues related to the measurement of outcomes, ensuring detection of meaningful program effects, and interpreting the magnitude of effects. Emphasis will mainly be on the logic of outcome evaluation and the conceptual and methodological nature of the approaches, including research design and associated analysis issues. Nonetheless, some familiarity with social science methods and statistical analysis is necessary to effectively engage the topics covered in this course.
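Interpreting the magnitude of effects, one of the topics named above, is commonly done with a standardized mean difference (Cohen's d). The following minimal Python sketch is for illustration only and is not course material; the outcome scores are invented:

```python
from statistics import mean, stdev
from math import sqrt

def standardized_mean_difference(treatment, control):
    """Cohen's d: mean difference scaled by the pooled standard deviation."""
    n1, n2 = len(treatment), len(control)
    s1, s2 = stdev(treatment), stdev(control)
    pooled = sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (mean(treatment) - mean(control)) / pooled

# Hypothetical outcome scores for a program group and a comparison group
d = standardized_mean_difference([12, 14, 15, 16, 18], [10, 11, 13, 14, 12])
```

Expressing effects on this scale lets evaluators compare the size of a program effect across different outcome measures.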
Prerequisites: At least some background in social science methods and statistical analysis or direct experience with outcome measurement and impact assessment designs.
Qualitative Evaluation Methods
Instructor: Michael Quinn Patton, PhD
Description: Qualitative inquiries use in-depth interviews, focus groups, observational methods, document analysis, and case studies to provide rich descriptions of people, programs, and community processes. To be credible and useful, the unique sampling, design, and analysis approaches of qualitative methods must be understood and used. Qualitative data can be used for various purposes including evaluating individualized outcomes, capturing program processes, exploring a new area of interest (e.g., to identify the unknown variables one might want to measure in greater depth/breadth), identifying unanticipated consequences and side effects, supporting participatory evaluations, assessing quality, and humanizing evaluations by portraying the people and stories behind the numbers. This class will cover the basics of qualitative evaluation, including design, case selection (purposeful sampling), data collection techniques, and beginning analysis. Ways of increasing the rigor and credibility of qualitative evaluations will be examined. Mixed methods approaches will be included. Alternative qualitative strategies and new, innovative directions will complete the course. The strengths and weaknesses of various qualitative methods will be identified. Exercises will provide experience in applying qualitative methods and analysis in evaluations. Individuals enrolled in this class will each receive one copy of Dr. Patton’s text Qualitative Research and Evaluation Methods (4th Edition, Sage, 2015).
Quantitative Evaluation Methods
Instructor: Emily E. Tanner-Smith, PhD
Description: This course will introduce a range of basic quantitative social science research methods applicable to the evaluation of programs. This foundational course introduces quantitative methods that are developed more fully in other TEI courses, and it is designed to ensure basic familiarity with a range of social science research methods and concepts.
Topics will include validity, sampling methods, measurement considerations, survey and interview techniques, observational and correlational designs, and experimental and quasi-experimental designs. This course is for those who want to update their existing knowledge and skills and will serve as an introduction for those new to the topic.
Sampling: Basic Methods for Probability and Non-Probability Samples
Instructor: Gary T. Henry, PhD
Description: Careful use of sampling methods can save resources and often increase the validity of evaluation findings. This seminar will focus on the following: (a) The Basics: defining sample, sampling and validity, probability and non-probability samples, and when not to sample; (b) Error and Sampling: study logic and sources of error, target population and sampling frame; (c) Probability Sampling Methods: simple random sampling, systematic sampling, stratified sampling, cluster sampling, and multi-stage sampling; (d) Making Choices before, during, and after sampling; and (e) Sampling Issues. Many examples will be used to illustrate these topics and participants will have the opportunity to work with case exercises.
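To give a flavor of two of the probability sampling methods named above, here is a minimal Python sketch; the sampling frame, strata, and function names are invented for illustration and are not drawn from the seminar materials:

```python
import random

def simple_random_sample(frame, n, seed=0):
    """Draw n units from the sampling frame without replacement."""
    rng = random.Random(seed)
    return rng.sample(frame, n)

def stratified_sample(frame, strata_key, n_per_stratum, seed=0):
    """Draw an equal-size simple random sample within each stratum."""
    rng = random.Random(seed)
    strata = {}
    for unit in frame:
        strata.setdefault(strata_key(unit), []).append(unit)
    return {s: rng.sample(units, min(n_per_stratum, len(units)))
            for s, units in strata.items()}

# Hypothetical frame of (id, region) units
frame = [(i, "urban" if i % 3 else "rural") for i in range(30)]
srs = simple_random_sample(frame, 5)
strat = stratified_sample(frame, lambda u: u[1], 5)
```

Stratifying guarantees representation from each subgroup, which a simple random sample of the same total size cannot promise.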
Using Non-experimental Designs for Impact Evaluation
Instructor: Gary T. Henry, PhD
Description: In the past few years, there have been very exciting developments in approaches to causal inference that have expanded our knowledge and toolkit for conducting impact evaluations. Evaluators, statisticians, and social scientists have focused a great deal of attention on causal inference, the benefits and drawbacks of random assignment studies, and alternative designs for estimating program impacts. For this workshop, we will have three goals:
- to understand a general theory of causal inference that covers both random assignment and observational studies, including quasi-experimental and non-experimental studies;
- to identify the assumptions needed to infer causality in evaluations; and
- to describe, compare, and contrast six promising alternatives to random assignment studies for inferring causality, including the requirements for implementing these designs, the strengths and weaknesses of each, and examples from evaluations where these designs have been applied.
The six alternative designs to be described and discussed are: regression discontinuity; propensity score matching; instrumental variables; fixed effects (within-unit variance); difference-in-differences; and comparative interrupted time series. Also, current findings concerning the accuracy of these designs relative to random assignment studies from “within study” assessments of bias will be presented and the implications for practice discussed.
Prerequisites: This class assumes some familiarity with research design, threats to validity, impact evaluations, and multivariate regression.
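The logic of one of these designs, difference-in-differences, fits in a few lines. This toy Python computation uses invented numbers and is not an example from the workshop:

```python
# Difference-in-differences: the impact estimate is the change in the
# treatment group minus the change in the comparison group, which nets
# out any trend common to both groups.

def diff_in_diff(treat_pre, treat_post, comp_pre, comp_post):
    return (treat_post - treat_pre) - (comp_post - comp_pre)

# Hypothetical mean outcomes before and after a program
impact = diff_in_diff(treat_pre=50.0, treat_post=58.0,
                      comp_pre=49.0, comp_post=52.0)
# (58 - 50) - (52 - 49) = 8 - 3 = 5
```

The key identifying assumption is "parallel trends": absent the program, both groups would have changed by the same amount.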
Using Research, Program Theory, & Logic Models to Design and Evaluate Programs
Instructor: Stewart I. Donaldson, PhD
Description: It is now commonplace to use research, program theory, and logic models in evaluation practice. They are often used to help design effective programs, and at other times as a means to explain how a program is understood to contribute to its intended or observed outcomes. However, this does not mean that they are always used appropriately or to the best effect. At their best, prior research, program theories, and logic models can provide an evidence base to guide action, provide conceptual clarity, motivate staff, and focus design and evaluations. At their worst, they can divert time and attention from other critical evaluation activities, provide an invalid or misleading picture of the program, and discourage critical investigation of causal pathways and unintended outcomes. This course focuses on developing useful evidence-based program theories and logic models, and using them effectively to guide evaluation and avoid some of the most common traps. Application exercises are used throughout the course to demonstrate concepts and techniques: (a) using social science theory and research, program theories, and logic models to positive advantage; (b) formulating and prioritizing key evaluation questions; (c) gathering credible and actionable evidence; (d) understanding and communicating ways they are used with negative results; and (e) developing strategies to avoid traps.
Recommended Book: Program Theory-Driven Evaluation Science: Strategies and Applications (Psychology Press).
Students may also be interested in: Credible and Actionable Evidence: The Foundation for Rigorous and Influential Evaluations (Sage).
Using Program Theory and Logic Models in Evaluation
Instructor: Patricia Rogers, PhD
Description: It is now commonplace to use program theory, or logic models, in evaluation as a means to explain how a program is understood to contribute to its intended or observed outcomes. However, this does not mean that they are always used appropriately or to the best effect. At their best, logic models can provide conceptual clarity, motivate staff, and focus evaluations. At their worst, they can divert time and attention from other critical evaluation activities, provide an invalid or misleading picture of the program, and discourage critical investigation of causal pathways and unintended outcomes. This course focuses on developing useful logic models, and using them effectively to guide evaluation and avoid some of the most common traps. It begins with the assumption that participants already know something about logic models and program theory but come with different understandings of terminology and options. Application exercises are used throughout the course for demonstration of concepts and techniques: (a) as ways to use logic models to positive advantage (e.g., to identify criteria, develop questions, identify data sources and bases of comparison); (b) ways they are used with negative results (e.g., focusing only on intended outcomes, ignoring differential effects for client subgroups, seeking only evidence that confirms the theory); and (c) strategies to avoid traps (e.g., differentiated theory, market segmentation, competitive elaboration of alternative hypotheses). Participants receive the instructor’s co-authored text, Purposeful Program Theory: Effective Use of Theories of Change and Logic Models (Jossey-Bass: Wiley).
Prerequisites: Prior to attendance, those with no previous experience with program theory should work through the University of Wisconsin Extension’s course ‘Enhancing Program Performance with Logic Models’, which is available online at no cost.