July 10-12

Basics of Program Evaluation

(Previously taught as Foundations of Evaluation: Theory, Method, and Practice)

Instructor: Stewart I. Donaldson, PhD

Description: With an emphasis on constructing a sound foundational knowledge base, this course is designed to provide an overview of both past and contemporary perspectives on evaluation theory, method, and practice. Course topics include, but are not limited to, basic evaluation concepts and definitions; the view of evaluation as a transdiscipline; the logic of evaluation; an overview of the history of the field; distinctions between evaluation and basic and applied social science research; evaluation-specific methods; reasons and motives for conducting evaluation; central types and purposes of evaluation; objectivity, bias, design sensitivity, and validity; the function of program theory and logic models in evaluation; evaluator roles; core competencies required for conducting high-quality, professional evaluation; audiences and users of evaluation; alternative evaluation models and approaches; the political nature of evaluation and its implications for practice; professional standards and codes of conduct; and emerging and enduring issues in evaluation theory, method, and practice. Although the major focus of the course is program evaluation in multiple settings (e.g., public health, education, human and social services, and international development), examples from personnel evaluation, product evaluation, organizational evaluation, and systems evaluation will also be used to illustrate foundational concepts. The course will conclude with how to plan, design, and conduct ethical, high-quality program evaluations using a contingency-based and contextually/culturally responsive approach that accounts for evaluation purposes, resources (e.g., time, budget, expertise), uses and users, competing demands, and other relevant contingencies. Throughout the course, active learning is emphasized; the instructional format therefore consists of mini-presentations, breakout room discussions, and application exercises. Audiences for this course include those who are familiar with social science research but unfamiliar with program evaluation, as well as evaluators who wish to review current theories, methods, and practices.

Recommended Text: Donaldson, S. I. (2021). Introduction to Theory-Driven Program Evaluation: Culturally Responsive and Strengths-Focused Applications.  New York, NY: Routledge.


July 10

Culture and Evaluation

Instructor: Leona Ba, EdD

Description: This course will provide participants with the opportunity to learn and apply a step-by-step approach to conducting culturally responsive evaluations. It will use theory-driven evaluation as a framework because this framework ensures that evaluation is integrated into the design of programs. More specifically, it will follow the three-step Culturally Responsive Theory-Driven Evaluation model proposed by Bledsoe and Donaldson (2015):

  1. Develop program impact theory
  2. Formulate and prioritize evaluation questions
  3. Answer evaluation questions

Upon registration, participants will receive a copy of the book chapter discussing this model.

During the workshop, participants will reflect on their own cultural self-awareness, a prerequisite for conducting culturally responsive evaluations. In addition, they will explore strategies for applying cultural responsiveness to evaluation practice using examples from the instructor’s first-hand experience and other program evaluations. They will receive a package of useful handouts, as well as a list of selected resources.

Prerequisites: Understanding of evaluation and research design.

This course uses some material from Bledsoe, K., & Donaldson, S. I. (2015). Culturally responsive theory-driven evaluation. In S. Hood, R. Hopson, & H. Frierson (Eds.), Continuing the journey to reposition culture and cultural context in evaluation theory and practice (pp. 3-27). Charlotte, NC: Information Age Publishing, Inc.


July 10-11

Needs Assessment

Instructor: Ryan Watkins, PhD

Description: The initial phase of a project or program is among the most critical in determining its long-term success. Needs assessments support this initial phase of project development with proven approaches to gathering information and making justifiable decisions. In this two-day course, you will learn how needs assessment tools and techniques help you identify, analyze, prioritize, and accomplish the results you really want to achieve. Filled with practical strategies, tools, and guides, the workshop covers both large-scale, formal needs assessments and less formal assessments that guide daily decisions. The workshop blends rigorous methods with realistic tools that can help you make informed and reasoned decisions. Together, these methods and tools offer a comprehensive, yet realistic, approach to identifying needs and selecting among alternative paths forward.

In this course, we will focus on the pragmatic application of many needs assessment tools, giving participants the opportunity to practice their skills while learning how needs assessment techniques can improve the achievement of desired results. With participants from a variety of sectors and organizational roles, the workshop will illustrate how needs assessments can be of value in a variety of operational, capacity development, and staff learning functions.


July 10-14

Applied Statistics for Evaluators

Instructor: Theodore H. Poister, PhD

Description: In this class, students will become familiar with a set of statistical tools that are often used in program evaluation, with a strong emphasis on appropriate application of techniques and interpretation of results. It is designed to “demystify” statistics and clarify how and when to use particular techniques. While the principal concern is practical application in program evaluations rather than the mathematical underpinnings of the procedures, a number of formulas and computations are covered to help students understand the logic of how the statistics work. Topics include an introduction to data analysis; simple descriptive statistics; examination of statistical relationships; the basics of statistical inference from sample data; two-sample t-tests, chi-square, and associated measures; analysis of variance; and an introduction to simple and multiple regression analysis.

Students will learn how to generate a wide variety of tables and graphs for presenting results, and a premium will be placed on clear presentation and interpretation of results. This “hands-on” class is conducted in a computer lab in which each participant has a computer for running statistical procedures on a wide range of real-world data sets using SPSS software; no prior knowledge of statistics or SPSS is required. While this is an introductory course, it can also serve as a refresher for those with some training in statistics, and an “eye opener” for evaluators who are working with statistics now but are not comfortable with when and how they should be used.
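
The class itself is taught hands-on in SPSS, but the logic of the techniques is tool-agnostic. As an illustrative sketch only (invented data, not course material), the same two-sample t-test and simple regression could be run in Python with NumPy and SciPy:

# Illustration only: the course uses SPSS; this sketch shows the same logic of a
# two-sample t-test and a simple regression in Python, on invented data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical outcome scores for a program group and a comparison group.
program = rng.normal(loc=75, scale=10, size=40)
comparison = rng.normal(loc=70, scale=10, size=40)

# Two-sample t-test: is the difference between group means statistically significant?
ttest = stats.ttest_ind(program, comparison)
print(f"t = {ttest.statistic:.2f}, p = {ttest.pvalue:.3f}")

# Simple regression: does hours of service received predict the outcome?
hours = rng.uniform(0, 20, size=80)
outcome = 60 + 1.2 * hours + rng.normal(0, 8, size=80)
fit = stats.linregress(hours, outcome)
print(f"outcome = {fit.intercept:.1f} + {fit.slope:.2f}*hours, "
      f"R^2 = {fit.rvalue**2:.2f}, p = {fit.pvalue:.3f}")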


July 11-12

Conducting Successful Evaluation Surveys

Instructor: Jolene D. Smyth, PhD

Description: The success of many evaluation projects depends on the quality of the survey data collected. In the last decade, sample members have become increasingly reluctant to respond, especially in evaluation contexts. In response to these challenges and to technological innovation, methods for doing surveys are changing rapidly. This course will provide new and cutting-edge information about best practices for designing and conducting internet, mail, and mixed-mode surveys.

Students will gain an understanding of the multiple sources of survey error and how to identify and fix commonly occurring survey issues. The course will cover writing questions; visual design of questions (drawing on concepts from the vision sciences); putting individual questions together into a formatted questionnaire; designing web surveys; designing for multiple modes; and fielding surveys and encouraging response by mail, web, or in a mixed-mode design.

The course is made up of a mixture of PowerPoint presentation, discussion, and activities built around real-world survey examples and case studies. Participants will apply what they are learning in activities and will have ample opportunity to ask questions during the course (or during breaks) and to discuss the survey challenges they face with the instructor and other participants.

Recommended text: Internet, Mail, and Mixed-Mode Surveys: The Tailored Design Method by Don A. Dillman, Jolene D. Smyth, and Leah Melani Christian (4th Edition, 2014).


July 11-12

Presenting Data Effectively: Practical Methods for Improving Evaluation Communication

Instructor: Stephanie Evergreen, PhD

Description: Crystal-clear charts and graphs are valuable: they save an audience’s mental energy, keep a reader engaged, and make you look smart. In this workshop, attendees will learn the science behind presenting data effectively. We will go behind the scenes in Excel and discuss how each part of a visualization can be modified to best tell the story in a particular dataset. We will discuss how to choose the best chart type given audience needs, cognitive capacity, and the story that needs to be told about the data, including both quantitative and qualitative visualizations. We will walk step by step through how to create newer types of data visualizations and how to manipulate the default settings to customize graphs so that they have a more powerful impact. Attendees will build with a prepared spreadsheet to learn the secrets to becoming an Excel dataviz ninja. Attendees will get hands-on practice with direct, practical steps that can be implemented immediately after the workshop to clarify data presentation and support clearer decision-making. Full of guidelines and examples, this workshop will leave you better able to package your data so it reflects your smart, professional work.

Note: Attendees are strongly encouraged to maximize the workshop experience by bringing a slideshow that contains graphs currently under construction. Attendees should bring their own laptops loaded with Microsoft Excel. No tablets or smartphones. PCs preferred; Macs okay.

On the second day of the workshop, Dr. Stephanie Evergreen will lead attendees step by step through how to manipulate Excel into making impactful charts and graphs, using data sets distributed to the audience. Audience members will leave the session with more in-depth knowledge of how to craft effective data displays. Completing the session moves one to Excel Ninja Level 10.
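
The workshop itself works entirely in Excel; purely as an aside, the same decluttering principles (remove chart junk, label values directly, reserve one emphasis color) can be sketched with Python and matplotlib on made-up data:

# Aside for illustration: the workshop uses Excel, but these decluttering moves
# (no spines, no gridlines, direct labels, one emphasis color) carry over to any
# charting tool. Data are invented.
import matplotlib.pyplot as plt

programs = ["Program A", "Program B", "Program C", "Program D"]
satisfaction = [62, 74, 88, 55]
# Gray everything except the standout value we want the reader to see first.
colors = ["#1f77b4" if s == max(satisfaction) else "#bbbbbb" for s in satisfaction]

fig, ax = plt.subplots(figsize=(6, 3))
bars = ax.barh(programs, satisfaction, color=colors)

# Strip default clutter: spines, ticks, and the x-axis labels.
for spine in ax.spines.values():
    spine.set_visible(False)
ax.tick_params(left=False, bottom=False, labelbottom=False)

# Label values directly on the bars instead of relying on an axis.
for bar, value in zip(bars, satisfaction):
    ax.text(value + 1, bar.get_y() + bar.get_height() / 2, f"{value}%", va="center")

ax.set_title("Participant satisfaction by program (hypothetical data)", loc="left")
plt.tight_layout()
plt.show()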

Attendees will learn:

  1. Visual processing theory and why it is relevant for evaluators
  2. How to apply graphic design best practices and visual processing theory to enhance data visualizations with simple, immediately implementable steps
  3. Which chart type to use, when
  4. How to construct data visualizations and other evaluation communication to best tell the story in the data
  5. Alternative methods for reporting

Workshop attendees will leave with helpful handouts and a copy of the instructor’s book, Effective Data Visualization.

Registrants should regularly develop graphs, slideshows, technical reports, and other written communication for evaluation work and be familiar with the navigational and layout tools available in common software programs such as Microsoft Office.


July 12-13

Policy Analysis, Implementation and Evaluation

Instructor: Doreen Cavanaugh, PhD

Description: Policy drives the decisions and actions that shape our world and affect the wellbeing of individuals around the globe. It forms the foundation of every intervention, and yet its underlying assumptions and values are often not thoroughly examined in many evaluations. In this course, students will explore the policy development process, study the theoretical basis of policy, and examine the logical sequence by which a policy intervention is intended to bring about change. Participants will explore several models of policy analysis, including the institutional model, process model, and rational model.

Participants will experience a range of policy evaluation methods to systematically investigate the effectiveness of policy interventions, implementation, and processes, and to determine their merit, worth, or value in terms of improving the social and economic conditions of different stakeholders. The course will differentiate evaluation from monitoring and address several barriers to effective policy evaluation, including goal specification and goal change, measurement, targets, efficiency and effectiveness, values, politics, and increasing expectations. The course will present models from a range of policy domains. At the beginning of the two-day course, participants will select a policy from their own work to use as an example throughout the class. Participants will develop the components of a policy analysis and design a policy evaluation.


July 13-15

Evaluation Research Methods: A Survey of Quantitative and Qualitative Approaches

Instructor: David B. Wilson, PhD

Description: This course will introduce a range of basic quantitative and qualitative social science research methods that are applicable to the evaluation of various programs. This is a foundational course that introduces methods developed more fully in other TEI courses and serves as a critical course designed to ensure a basic familiarity with a range of social science research methods and concepts.

Topics will include observational and qualitative methods, survey and interview (structured and unstructured) techniques, experimental and quasi-experimental designs, and sampling methods. This course is for those who want to update their existing knowledge and skills and will serve as an introduction for those new to the topic.

Recommended text: Creswell, J. Research Design (Sage, 2017).


July 13-14

Introduction to Cost-Benefit and Cost-Effectiveness Analysis

Instructor: Robert D. Shand, PhD

Description: The tools and techniques of cost-benefit and cost-effectiveness analysis will be presented. The goal of the course is to provide analysts with the skills to interpret cost-benefit and cost-effectiveness analyses. Content includes identification and measurement of costs using the ingredients method; how to specify effectiveness; shadow pricing for benefits using revealed preference and contingent valuation methods; discounting; and calculation of cost-effectiveness ratios, net present value, benefit-cost ratios, and internal rates of return. Sensitivity testing and uncertainty will also be addressed. Participants will work in groups to assess the costs, effects, and benefits of selected case studies drawn from a range of policy fields (e.g., health, education, environmental sciences).
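
For orientation only (invented numbers, not course material), the core calculations named above, discounting, net present value, a benefit-cost ratio, and a cost-effectiveness ratio, can be sketched in a few lines of Python:

# Hypothetical illustration of the core cost-benefit arithmetic: discounting,
# net present value (NPV), a benefit-cost ratio, and a cost-effectiveness ratio.
# All figures are invented for the example.

def present_value(flows, rate):
    """Discount a list of yearly amounts (year 0 first) back to today."""
    return sum(amount / (1 + rate) ** year for year, amount in enumerate(flows))

discount_rate = 0.03                  # assumed 3% social discount rate
costs = [100_000, 20_000, 20_000]     # program costs in years 0-2
benefits = [0, 60_000, 90_000]        # monetized benefits in years 0-2

pv_costs = present_value(costs, discount_rate)
pv_benefits = present_value(benefits, discount_rate)

npv = pv_benefits - pv_costs          # net present value
bc_ratio = pv_benefits / pv_costs     # benefit-cost ratio

# Cost-effectiveness ratio: cost per unit of effect (e.g., per participant
# reaching a target), used when benefits are not monetized.
units_of_effect = 120                 # hypothetical effect count
ce_ratio = pv_costs / units_of_effect

print(f"NPV = {npv:,.0f}   B/C = {bc_ratio:.2f}   cost per unit of effect = {ce_ratio:,.0f}")

The internal rate of return mentioned in the description is simply the discount rate at which this NPV falls to zero.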


July 13

Effective Reporting Strategies for Evaluators

Instructor: Kathryn Newcomer, PhD

Description: The use and usefulness of evaluation work are highly affected by the effectiveness of reporting strategies and tools. Care in crafting both the style and substance of findings and recommendations is critical to ensure that stakeholders pay attention to the message. Skill in presenting sufficient information, yet not overwhelming the audience, is essential to raise the likelihood that potential users of the information will be convinced of both the relevance and the validity of the data. This course will provide guidance and practical tips on reporting evaluation findings. Attention will be given to the selection of appropriate reporting strategies/formats for different audiences and to the preparation of effective executive summaries; clear analytical summaries of quantitative and qualitative data; user-friendly tables and figures; discussions of limitations to measurement validity, generalizability, causal inference, statistical conclusion validity, and data reliability; and useful recommendations.


July 14-15

Monitoring and Evaluation: Frameworks and Fundamentals

Instructor: Ann Doucette, PhD

Description: The overall goal of monitoring and evaluation (M&E) is the assessment of program progress to optimize outcomes and impact, that is, program results. While M&E components overlap, each has distinct characteristics. Monitoring activities systematically observe (formally and informally) assumed indicators of favorable results, while evaluation activities build on monitoring indicator data to assess intervention/program effectiveness, the adequacy of program impact pathways, the likelihood of program sustainability, program strengths and weaknesses, the value, merit, and worth of the initiative, and the like. The increased emphasis on effectively managing toward favorable results demands a more comprehensive M&E approach in order to identify whether programs are favorably on track or whether improved program strategies and mid-course corrections are needed.

The two-day, interactive course will cover the following:

  • M&E introduction and overview
  • Defining the purpose and scope of M&E
  • Engaging stakeholders and establishing an evaluative climate
    • The role and effect of partnership and boundary spanners, policy, and advocacy
  • Identifying and supporting needed capabilities
  • M&E frameworks – agreement on M&E targets
    • Performance and Results-Based M&E approaches
  • Connecting program design and M&E frameworks
    • Comparisons – Is a counterfactual necessary?
    • Contribution versus attribution
  • Identification of key performance indicators (KPIs)
    • Addressing uncertainties and complexity
  • Data: collection and methods
    • Establishing indicator baselines (addressing the challenges of baseline estimates)
    • What data exists? What data/information needs to be collected?
  • Measuring progress and success – contextualizing outcomes and setting targets
    • Time to expectancy – what can be achieved by the program?
  • Using and reporting M&E findings
  • Sustaining M&E culture

The course focuses on practical application. Course participants will gain a comprehensive understanding of M&E frameworks and fundamentals, M&E tools, and practice approaches. Case examples will be used to illustrate the M&E process. Course participants are encouraged to submit their own case examples prior to the course for inclusion in the course discussion. The course is purposefully geared for evaluators working in developing and developed countries; national and international agencies, organizations, and NGOs; and national, state, provincial, and county governments.

Familiarity with evaluation is helpful, but not required, for this course.


July 14-15

Mixed-Methods Evaluations: Integrating Qualitative and Quantitative Approaches

Instructor: Debra J. Rog, PhD

Description: Evaluators are frequently in evaluation situations in which they are collecting data through multiple methods, often both qualitative and quantitative.  Too often, however, these study components are conducted and reported independently, and do not maximize the explanation building that can occur through their integration.

The purpose of this course is to sensitize evaluators to the opportunities in their work for designing and implementing mixed methods, and to help them be more intentional in the ways they design their studies to incorporate both qualitative and quantitative approaches. The course will begin with an overview of the issues involved with mixed-methods research, highlighting both the accolades and the criticisms of integrating approaches. The course will then focus on the research questions and evaluation situations that are conducive to mixed methods, and on the variety of designs that are possible (e.g., parallel mixed methods that occur at the same time and are integrated in their inference; sequential designs in which one method follows another chronologically, either confirming or disconfirming the findings, or providing further explanation). A key focus of the course will be on strategies for implementing mixed-methods designs, as well as analyzing and reporting data, using examples from the instructor’s work and those offered by course participants. The course will be highly interactive, with ample time for participants to work on ways of applying the course to their own work. Participants will work in small groups on an example that will carry through the two days of the course.

Participants will be sent materials prior to the course as a foundation for the method.

Prerequisites: Background in evaluation is useful and desirable.


July 15

Intermediate Cost-Benefit and Cost-Effectiveness Analysis

Instructor: Joseph Cordes, PhD

Description: The Intermediate Cost-Benefit Analysis course provides a more advanced and detailed review of the principles of social cost and social benefit estimation than is provided in TEI’s Introduction to Cost-Benefit and Cost-Effectiveness Analysis. Working with the instructor, students will undertake hands-on estimation of the costs and benefits of actual programs in the computer lab. The objective is to develop the ability both to critically evaluate and use cost-benefit analyses of programs in the public and nonprofit sectors, and to use basic cost-benefit analysis tools to actively undertake such analyses. Topics covered in the course will include:

I. Principles of Social Cost and Social Benefit Estimation

  1. Social Cost Estimation: (a) Components (capital, operating, administrative); (b) Budgetary and social opportunity cost
  2. Social Benefit Estimation: (a) Social vs. private benefits; (b) Revealed benefit measures (price/cost changes in the primary market, price/cost changes in analogous markets, benefits inferred from market trade-offs, and costs/damages avoided as benefit measures)
  3. Stated Preference Measures: Inferring benefits from survey data
  4. Benefit/Cost Transfer: Borrowing estimates of benefits and costs from elsewhere
  5. Timing of Benefits and Costs: (a) Discounting and net present value; (b) Dealing with inflation; (c) Choosing a discount rate
  6. Presenting Results: (a) Sensitivity analysis (partial sensitivity analysis, best/worst case scenarios, break-even analysis, and Monte Carlo analysis); (b) Present value of net social benefits; (c) Benefit-cost ratio; (d) Internal rate of return

II. Social Cost and Social Benefit Estimation in Practice

The use of the above principles of cost and benefit estimation will be illustrated using data drawn from several actual benefit-cost analyses of real programs. The cases will be chosen to illustrate the application of the benefit/cost estimation principles in the case of social programs, health programs, and environmental programs. Working with the instructor in the computer lab, students will create a benefit-cost analysis template and then use that template to estimate social benefits and social costs and to present a benefit-cost bottom line.
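
By way of illustration only (hypothetical inputs; the in-class work builds a spreadsheet template rather than code), a Monte Carlo sensitivity analysis of a net-present-value bottom line might be sketched in Python like this:

# Illustrative Monte Carlo sensitivity analysis for a benefit-cost bottom line.
# All inputs are invented; in class the same idea is worked in a spreadsheet.
import numpy as np

rng = np.random.default_rng(42)
n_draws = 10_000
discount_rate = 0.03

# Uncertain inputs: annual benefits and annual operating costs over 5 years.
annual_benefit = rng.normal(loc=80_000, scale=15_000, size=n_draws)
annual_cost = rng.normal(loc=50_000, scale=5_000, size=n_draws)
startup_cost = 60_000  # assumed known with certainty

# Present-value factor for a 5-year stream starting in year 1.
years = np.arange(1, 6)
annuity_factor = np.sum(1 / (1 + discount_rate) ** years)

npv = (annual_benefit - annual_cost) * annuity_factor - startup_cost

print(f"Mean NPV: {npv.mean():,.0f}")
print(f"90% interval: {np.percentile(npv, 5):,.0f} to {np.percentile(npv, 95):,.0f}")
print(f"Probability NPV > 0: {(npv > 0).mean():.1%}")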

Prerequisites: This is an intermediate-level course. Participants are assumed to have some knowledge of and/or experience with cost-benefit and/or cost-effectiveness analysis, equivalent to the TEI course Introduction to Cost-Benefit and Cost-Effectiveness Analysis.


July 17-19

Linking Evaluation Questions to Analysis Techniques

Instructor: Melvin M. Mark, PhD

Description: Statistics are a mainstay in the toolkit of program and policy evaluators. Human memory being what it is, however, even evaluators with reasonable statistical training often forget the basics over the years. And the basics aren’t always enough. If evaluators are going to make sensible use of consultants, communicate effectively with funders, and understand others’ evaluation reports, then they often need at least a conceptual understanding of relatively complex, recently developed statistical techniques. The purposes of this course are: to link common evaluation questions with appropriate statistical procedures; to offer a strong conceptual grounding in several important statistical procedures; and to describe how to interpret the results from the statistics in ways that are principled and will be persuasive to intended audiences. The general format for the class will be to start with an evaluation question and then discuss the choice and interpretation of the most-suited statistical test(s). Emphasis will be on creating a basic understanding of what statistical procedures do, when to use them, and why, and then on how to learn more from the data. Little attention is given to equations or computer programs; the emphasis is instead on conceptual understanding and practical choices. Within a framework of common evaluation questions, statistical procedures and principled data inquiry will be explored.

(A) More fundamental topics to be covered include (1) basic data quality checks and basic exploratory data analysis procedures, (2) basic descriptive statistics, (3) the core functions of inferential statistics (why we use them), (4) common inferential statistics, including t-tests, the correlation coefficient, and chi square, and (5) the fundamentals of regression analysis.

(B) For certain types of evaluation questions, more complex statistical techniques need to be considered. More complex techniques to be discussed (again, at a conceptual level) include (1) structural equation modeling, (2) multi-level modeling, and (3) cluster analysis and other classification techniques.

(C) Examples of methods for learning from data, i.e., for “snooping” with validity, making new discoveries in a principled way, and reporting findings more persuasively, will include (1) planned and unplanned tests of moderation, (2) graphical methods for unequal treatment effects, (3) use of previously discussed techniques such as clustering, (4) identifying and describing converging patterns of evidence, and (5) iterating between findings and explanations.
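
For a concrete flavor of topic (C)(1), a planned test of moderation can be framed as a regression with an interaction term. The sketch below uses invented data and the pandas and statsmodels libraries; it is an illustration, not part of the course materials:

# Hypothetical illustration of testing moderation: does the treatment effect
# depend on a participant characteristic? Implemented as an interaction term in
# an ordinary least squares regression. Data are invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 300
treatment = rng.integers(0, 2, size=n)        # 0 = comparison, 1 = program
baseline_need = rng.normal(0, 1, size=n)      # hypothetical moderator

# Simulate an outcome in which the program works better for higher-need participants.
outcome = (50 + 3 * treatment + 2 * baseline_need
           + 4 * treatment * baseline_need + rng.normal(0, 5, size=n))

df = pd.DataFrame({"outcome": outcome, "treatment": treatment,
                   "baseline_need": baseline_need})

# "treatment * baseline_need" expands to both main effects plus their interaction;
# a reliable interaction coefficient is evidence of moderation.
model = smf.ols("outcome ~ treatment * baseline_need", data=df).fit()
print(model.summary().tables[1])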

Each participant will receive a set of readings and current support materials.

Prerequisites: Familiarity with basic statistics.


July 17-18

Qualitative Evaluation Methods

Instructor: Michael Quinn Patton, PhD

Description: Qualitative inquiries use in-depth interviews, focus groups, observational methods, document analysis, and case studies to provide rich descriptions of people, programs, and community processes. To be credible and useful, the unique sampling, design, and analysis approaches of qualitative methods must be understood and used. Qualitative data can be used for various purposes, including evaluating individualized outcomes, capturing program processes, exploring a new area of interest (e.g., to identify the unknown variables one might want to measure in greater depth/breadth), identifying unanticipated consequences and side effects, supporting participatory evaluations, assessing quality, and humanizing evaluations by portraying the people and stories behind the numbers. This class will cover the basics of qualitative evaluation, including design, case selection (purposeful sampling), data collection techniques, and beginning analysis. Ways of increasing the rigor and credibility of qualitative evaluations will be examined. Mixed-methods approaches will be included. Alternative qualitative strategies and new, innovative directions will complete the course. The strengths and weaknesses of various qualitative methods will be identified. Exercises will provide experience in applying qualitative methods and analysis in evaluations. The course will utilize Dr. Patton’s text Qualitative Research and Evaluation Methods (Sage, 2015, 4th Edition).


July 17-19

Outcome and Impact Assessment

Instructor: Melvin Mark, Ph.D.

Description: Valid assessment of the outcomes or impact of a social program is among the most challenging evaluation tasks, but also one of the most important. This course will review monitoring and tracking approaches to assessing outcomes as well as the experimental and quasi-experimental methods that are the foundation for contemporary impact evaluation. Attention will also be given to issues related to the measurement of outcomes, ensuring detection of meaningful program effects, and interpreting the magnitude of effects. Emphasis will mainly be on the logic of outcome evaluation and the conceptual and methodological nature of the approaches, including research design and associated analysis issues. Nonetheless, some familiarity with social science methods and statistical analysis is necessary to effectively engage the topics covered in this course.

Prerequisites: At least some background in social science methods and statistical analysis or direct experience with outcome measurement and impact assessment designs.


July 19-20

Developmental Evaluation: Systems and Complexity

(Formerly taught as: Alternative Evaluation Designs: Implications from Systems Thinking and Complexity Theory)

Instructor: Michael Quinn Patton, PhD

Description: The field of evaluation already has a rich variety of contrasting models, competing purposes, alternative methods, and divergent techniques that can be applied to projects and organizational innovations that vary in scope, comprehensiveness, and complexity. The challenge, then, is to match the evaluation to the nature of the initiative being evaluated. This means that we need options beyond the traditional approaches (e.g., linear logic models, experimental designs, pre-post tests) when faced with systems change dynamics and initiatives that display the characteristics of emergent complexity. Important complexity concepts with implications for evaluation include uncertainty, nonlinearity, emergence, adaptation, dynamical interactions, and co-evolution.

Developmental Evaluation supports innovation development to guide adaptation to emergent and dynamic realities in complex environments. Innovations can take the form of new projects, programs, products, organizational changes, policy reforms, and system interventions. A complex system is characterized by a large number of interacting and interdependent elements with no central control. Patterns of change emerge from rapid, real-time interactions that generate learning, evolution, and development, if one is paying attention and knows how to observe and capture the important and emergent patterns. Complex environments for social interventions and innovations are those in which what to do to solve problems is uncertain and key stakeholders are in conflict about how to proceed.

Developmental Evaluation involves real-time feedback about what is emerging in complex dynamic systems as innovators seek to bring about systems change. Participants will learn the unique niche of developmental evaluation and what perspectives such as Systems Thinking and Complex Nonlinear Dynamics can offer for alternative evaluation approaches. The course will utilize the instructor’s book Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use (Guilford, 2010).


July 19-20

Qualitative Data Analysis

Instructor: Patricia Rogers, PhD

Description: Many evaluators find it challenging to analyze textual, visual, and aural data from interviews, diaries, observations, and open-ended questionnaire items in ways that are rigorous but practical within the time and staffing constraints of real evaluation. Analysis of qualitative data can range from simple enumeration and illustrative use to more detailed analysis requiring more expertise and time. In this class, participants will work through a structured approach to analyzing qualitative data based on an iterative process of considering the purpose of the analysis, reviewing suitable options, and working through interpretations. Techniques include grouping, summarizing, finding patterns, and discovering, developing, and testing relationships. The session will address practical and ethical issues in analyzing and reporting qualitative data, particularly who participates in interpretation, how confidentiality can be maintained, how analysis can be tracked and checked, and standards for good practice in qualitative data analysis. Hands-on exercises for individuals and small groups will be used throughout the class. Manual analysis of data will be used in exercises, and participants will also be introduced to NVivo and other computer packages that assist analysis.

Recommended Text: Qualitative Data Analysis by Miles, Huberman & Saldaña (Sage, 2014).


July 19-20

Case Studies in Evaluation

Instructor: Delwyn Goodrick, PhD

Description: Case study approaches are widely used in program evaluation. They facilitate an understanding of the way in which context mediates the influence of program and project interventions. While case study designs are often adopted to describe or depict program processes, their capacity to illuminate depth and detail can also contribute to an understanding of the mechanisms responsible for program outcomes.

The literature on case study is impressive, but there remains tension in perspectives about what constitutes good case study practice in evaluation. This leads to substantive differences in the way case study is conceived and practiced within the evaluation profession.  This workshop aims to disentangle the discussions and debates, and highlight the central principles critical to effective case study practice and reporting.

This two-day workshop will explore case study design, analysis, and representation. The workshop will address case study topics through brief lecture presentations, small group discussions, and workshop activities with realistic case study scenarios. Participants will be encouraged to examine the conceptual underpinnings, defining features, and practices involved in doing case studies in evaluation contexts. Discussion of the ethical principles underpinning case study will be integrated throughout the workshop.

Specific topics to be addressed over the two days include:

  • The utility of case studies in evaluation; circumstances in which case studies may not be appropriate
  • Evaluation questions that are suitable for a case study approach
  • Selecting the unit of analysis in case study
  • Design frameworks in case studies – single and multiple case study; the intrinsic and instrumental case
  • The use of mixed methods in case study approaches – sequential and concurrent designs
  • Developing case study protocols and case study guides
  • Analyzing case study materials – within case and cross case analysis, matrix and template displays that facilitate analysis
  • Principles and protocols for effective team work in multiple case study approaches
  • Transferability/generalizability of case studies
  • Validity and trustworthiness of case studies
  • Synthesizing case materials
  • Issues of representation of the case and cases in reporting

Detailed course notes will be provided to all participants, and practice examples will be referenced over the two days.

Recommended text: Yin, R. K. Applications of Case Study Research (Sage, 2012).


July 20-21

Strategy Mapping

Instructor: John Bryson, PhD

Description: The world is often a muddled, complicated, dynamic place in which it seems as if everything connects to everything else–and that is the problem! The connections can be problematic because, while we know things are connected, sometimes we do not know how, or else there are so many connections we cannot comprehend them all. Alternatively, we may not realize how connected things are and our actions lead to unforeseen and unhappy consequences. Either way, we would benefit from an approach that helps us strategize, problem solve, manage conflict, and design evaluations that help us understand how connected the world is, what the effects of those connections are, and what might be done to change some of the connections and their effects.

Visual strategy mapping (ViSM) is a simple and useful technique for addressing situations where thinking–as an individual or as a group–matters. ViSM is a technique for linking strategic thinking, acting, and learning; helping make sense of complex problems; communicating to oneself and others what might be done about them; and also managing the inevitable conflicts that arise.

ViSM makes it possible to articulate a large number of ideas and their interconnections in such a way that people can know what to do in an area of concern, how to do it, and why. The technique is useful for formulating and implementing mission, goals, and strategies and for being clear about how to evaluate strategies. The bottom line is: ViSM is one of the most powerful strategic management tools in existence. ViSM is what to do when thinking matters!

When can mapping help? There are a number of situations that are tailor-made for mapping. Mapping is particularly useful when:

  • Effective strategies need to be developed
  • Persuasive arguments are needed
  • Effective and logical communication is essential
  • Effective understanding and management of conflict are needed
  • It is vital that a situation be understood better as a prelude to any action
  • Organizational or strategic logic needs to be clarified in order to design useful evaluations

These situations are not meant to be mutually exclusive. Often they overlap in practice. In addition, mapping is very helpful for creating business models and balanced scorecards and dashboards. Visual strategy maps are related to logic models, as both are word-and-arrow diagrams, but are more tied to goals, strategies, and actions and are more careful about articulating causal connections.

Objectives (Strategy Mapping)

At the end of the course, participants will:

  • Understand the theory of mapping
  • Know the difference between action-oriented strategy maps, business model maps, and balanced scorecard maps
  • Be able to create action-oriented strategy maps for individuals – that is, either for oneself or by interviewing another person
  • Be able to create action-oriented maps for groups
  • Be able to create a business model map linking competencies and distinctive competencies to goals and critical success factors
  • Know how to design and manage change processes in which mapping is prominent
  • Have an action plan for an individual project


July 20-21

Using Non-experimental Designs for Impact Evaluation

Instructor: Gary T. Henry, PhD

Description: In the past few years, there have been very exciting developments in approaches to causal inference that have expanded our knowledge and toolkit for conducting impact evaluations. Evaluators, statisticians, and social scientists have focused a great deal of attention on causal inference, the benefits and drawbacks of random assignment studies, and alternative designs for estimating program impacts. For this workshop, we will have three goals:

  • to understand a general theory of causal inference that covers both random assignment and observational studies, including quasi-experimental and non-experimental studies;
  • to identify the assumptions needed to infer causality in evaluations; and
  • to describe, compare, and contrast six promising alternatives to random assignment studies for inferring causality, including the requirements for implementing these designs, the strengths and weaknesses of each, and examples from evaluations where these designs have been applied.

The six alternative designs to be described and discussed are: regression discontinuity; propensity score matching; instrumental variables; fixed effects (within-unit variance); difference-in-differences; and comparative interrupted time series. Also, current findings concerning the accuracy of these designs relative to random assignment studies, drawn from “within study” assessments of bias, will be presented and the implications for practice discussed.

Prerequisites: This class assumes some familiarity with research design, threats to validity, impact evaluations, and multivariate regression.
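
To give a flavor of one of the six designs, the sketch below computes a basic two-period difference-in-differences estimate from the four group means, using invented data in Python; it is an illustration, not course material:

# Hypothetical two-period difference-in-differences (DiD): compare the
# before-after change in a treated group to the before-after change in a
# comparison group. Data are invented for illustration.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
n = 200

df = pd.DataFrame({
    "treated": np.repeat([0, 1], n // 2),   # comparison vs. treated units
    "post": np.tile([0, 1], n // 2),        # before vs. after the program
})
# Simulate an outcome with a common time trend (+2) and a true program effect of +5.
df["outcome"] = (30 + 3 * df["treated"] + 2 * df["post"]
                 + 5 * df["treated"] * df["post"] + rng.normal(0, 2, size=n))

means = df.groupby(["treated", "post"])["outcome"].mean()
did = (means.loc[(1, 1)] - means.loc[(1, 0)]) - (means.loc[(0, 1)] - means.loc[(0, 0)])
print(f"Difference-in-differences estimate: {did:.2f}")

Regressing the outcome on treated, post, and their interaction yields the same estimate along with a standard error.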


July 21-22

Intermediate Qualitative Data Analysis

Instructor: Delwyn Goodrick, PhD

Description: Data analysis involves creativity, sensitivity, and rigor. In its most basic form, qualitative data analysis involves some sort of labeling, coding, and clustering in order to make sense of data collected from evaluation fieldwork, interviews, and/or document analysis. This intermediate-level workshop builds on the basic coding and categorizing familiar to most evaluators and extends the array of strategies available to support rigorous interpretations. The workshop presents an array of approaches to support the analysis of qualitative data, with an emphasis on procedures for the analysis of interview data. Strategies such as thematic analysis, pattern matching, template analysis, process tracing, schema analysis, and qualitative comparative analysis are outlined and illustrated with reference to examples from evaluation and from a range of disciplines, including sociology, education, political science, and psychology. The core emphasis in the workshop is creating awareness of heuristics that support selection and application of analytic techniques that match the purpose of the evaluation, the type of data, and practical considerations such as resource constraints. While a brief overview of qualitative analysis software is provided, the workshop focuses on analysis using manual methods. A range of activities to support critical thinking and application of principles is integrated within the workshop program.

Qualitative data analysis and writing go hand in hand. In the second part of the workshop, strategies for transforming analysis through processes of description, interpretation, and judgment will be presented. These issues are particularly important in the assessment of the credibility of qualitative evidence by evaluation audiences. Issues of quality, including validity, trustworthiness, and authenticity of qualitative data, are integrated throughout the workshop.

Specific issues to be addressed:

  • What are the implications of an evaluator’s worldview for selection of qualitative data analysis (QDA) strategies?
  • Are there analytic options that are best suited to particular kinds of qualitative data?
  • How can participant experiences be portrayed through QDA without fracturing the data through formal coding?
  • What types of analysis may be appropriate for particular types of evaluation (program theory, realist, transformative)?
  • What strategies can be used to address interpretive dissent when working in evaluation teams?
  • What are some ways that qualitative and quantitative findings can be integrated in an evaluation report?
  • How can I sell the value of qualitative evidence to evaluation audiences?

Participants interested in continuing their learning beyond this workshop may wish to purchase Qualitative Data Analysis: Practical Strategies by Patricia Bazeley (Sage, 2013).

Prerequisites: This is an intermediate-level course. Participants are assumed to have some knowledge of and/or experience with qualitative data.


July 21-22

Using Research, Program Theory, & Logic Models to Design and Evaluate Programs

Instructor: Stewart I. Donaldson, PhD

Description: It is now commonplace to use research, program theory, and logic models in evaluation practice. They are often used to help design effective programs, and at other times as a means to explain how a program is understood to contribute to its intended or observed outcomes. However, this does not mean that they are always used appropriately or to the best effect. At their best, prior research, program theories, and logic models can provide an evidence base to guide action, bring conceptual clarity, motivate staff, and focus design and evaluations. At their worst, they can divert time and attention from other critical evaluation activities, provide an invalid or misleading picture of the program, and discourage critical investigation of causal pathways and unintended outcomes. This course focuses on developing useful evidence-based program theories and logic models, and on using them effectively to guide evaluation and avoid some of the most common traps. Application exercises are used throughout the course to demonstrate concepts and techniques: (a) ways to use social science theory and research, program theories, and logic models to positive advantage; (b) how to formulate and prioritize key evaluation questions; (c) how to gather credible and actionable evidence; (d) how to recognize and communicate the ways they can be used with negative results; and (e) strategies for avoiding traps.

Recommended Book: Donaldson, S. I. (2021). Introduction to Theory-Driven Program Evaluation: Culturally Responsive and Strengths-Focused Applications.  New York, NY: Routledge.

Students may also be interested in: Credible and Actionable Evidence: The Foundation for Rigorous and Influential Evaluations (Sage).

Prerequisites: None


July 21-22

Evaluability Assessment

Instructor: Debra J. Rog, PhD

Description: Increasingly, both public and private funders are looking to evaluation not only as a tool for holding interventions accountable, but also as a way to add to our evidence base on what works in particular fields. With scarce evaluation resources, however, funders are interested in targeting those resources in the most judicious fashion and with the highest yield. Evaluability assessment (EA) is a tool that can inform decisions on whether a program or initiative is suitable for evaluation and on the type of evaluation that would be most feasible, credible, and useful.

This course will provide students with the background, knowledge, and skills needed to conduct an evaluability assessment. Using materials and data from actual EA studies and programs, students will participate in the various stages of the method, including the assessment of the logic of a program’s design and the consistency of its implementation; the examination of the availability, quality, and appropriateness of existing measurement and data capacities; the analysis of the plausibility that the program/initiative can achieve its goals; and the assessment of appropriate options for either evaluating the program, improving the program design/implementation, or strengthening the measurement. The development and analysis of logic models will be stressed, and an emphasis will be placed on the variety of products that can emerge from the process.

Students will be sent several articles prior to the course as a foundation for the method.

Prerequisites: Background in evaluation is useful and desirable, as is familiarity with conducting program level site visits.

Contact Us

The Evaluators’ Institute

tei@cgu.edu