Courses Offered
- Applied Measurement for Evaluation
- Applying an Equity Lens to Visualizing and Communicating Data
- Basics of Program Evaluation: Culturally Responsive and Strengths-Focused Applications
- Blue Marble Evaluation
- Creating and Implementing Successful Evaluation Surveys
- Culture, Equity, and Evaluation
- Developmental Evaluation
- Evaluability Assessment
- Evaluating Training Programs and MEL (Monitoring, Evaluation, Learning) Initiatives
- Evaluation Design: Alignment with Evaluation Objectives
- Evaluation Research Methods: A Survey of Quantitative & Qualitative Approaches
- How to Enhance the Learning Function of Evaluation: Principles and Strategies
- Informing Practice Using Evaluation Models
- Intermediate Qualitative Data Analysis
- Introduction to Cost-Benefit and Cost-Effectiveness Analysis
- Introduction to Data Analysis for Evaluators and Applied Researchers
- Introduction to Data Visualization
- Introduction to Machine Learning for Evaluators
- Introduction to R Programming for Data Analysis and Visualization
- Managing for Success: Planning, Implementation, and Reporting
- Mixed Methods Evaluations: Integrating Qualitative and Quantitative Approaches
- Monitoring and Evaluation: Frameworks and Fundamentals
- Outcome and Impact Evaluation
- Policy Analysis, Implementation, and Evaluation
- Policy Design and Evaluation Across Cultures
- Principles-Focused Evaluation
- Qualitative Methods
- Quantitative Methods
- Strategic Planning with Evaluation in Mind
- Systems-based Culturally Responsive Evaluation (SysCRE)
- Using Research, Program Theory, and Logic Models to Design and Evaluate Programs
- Utilization-Focused Evaluation
- Utilizing Culturally Responsive and Racially Equitable Evaluation
- Working with Diverse Stakeholders: Appreciative, Strengths-based, and Culturally Responsive Approaches
- Sponsored Evaluation Development
Course Descriptions
TEI 300: Applied Measurement for Evaluation
Instructor: Ann Doucette, PhD
Description: Successful evaluation depends on our ability to generate evidence attesting to the feasibility, relevance, and/or effectiveness of the interventions, services, or products we study. While theory guides our designs and how we organize our work, it is measurement that provides the evidence we use in making judgments about the quality of what we evaluate. Measurement, whether it results from self-report surveys, interviews/focus groups, observation, document review, or administrative data, must be systematic, replicable, interpretable, reliable, and valid. While hard sciences such as physics and engineering have advanced precise and accurate measurement (e.g., weight, length, mass, volume), the measurement used in evaluation studies is often imprecise and characterized by considerable error.
The quality of the inferences made in evaluation studies is directly related to the quality of the measurement on which we base our judgments. Judgments that an intervention is ineffective may be flawed – the reflection of measures that are imprecise and insensitive to the characteristics we chose to evaluate. Evaluation often attempts to compensate for imprecise measurement with increasingly sophisticated statistical procedures to manipulate data, and the emphasis on statistical analysis all too often obscures the important characteristics of the measures we choose. This class will cover:
- Assessing measurement precision: Examining the precision of measures in relation to the degree of accuracy needed for what is being evaluated. Issues to be addressed include measurement/item bias, the sensitivity of measures in terms of developmental and cultural issues, scientific soundness (reliability, validity, error, etc.), and the ability of the measure to detect change over time.
- Quantification: Measurement is essentially the assignment of numbers to what is observed (directly or inferentially). We will examine decisions about how we quantify observations and the implications these decisions have for using the resulting data, as well as for the objectivity and certainty we bring to the judgments made in our evaluations. This section of the course focuses on the quality of response options and coding categories – do response options/coding categories segment the respondent sample in meaningful and useful ways?
- Issues and Considerations – Using existing measures versus developing your own measures: What to look for and how to assess whether existing measures are suitable for your evaluation project will be examined. Issues associated with the development and use of new measures will be addressed in terms of how to establish sound psychometric properties, and what cautionary statements should accompany interpretation and evaluation findings using these new measures.
- Criteria for choosing measures: Assessing the adequacy of measures in terms of the characteristics of measurement – choosing measures that fit your evaluation theory and evaluation focus (exploration, needs assessment, level of implementation, process, impact and outcome). Measurement feasibility, practicability, and relevance will be examined. Various measurement techniques will be examined in terms of precision and adequacy, as well as the implications of using screening, broad-range, and peaked tests.
- Error-influences on measurement precision: The characteristics of various measurement techniques, assessment conditions (setting, respondent interest, etc.), and evaluator characteristics will be addressed.
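To make the reliability and error discussion above concrete, the classical test theory decomposition (a standard result, included here for orientation rather than taken from the course materials) expresses an observed score as a true score plus error:

\[
X = T + E, \qquad \rho_{XX'} = \frac{\sigma_T^2}{\sigma_X^2} = \frac{\sigma_T^2}{\sigma_T^2 + \sigma_E^2}
\]

so larger error variance \(\sigma_E^2\) directly lowers reliability \(\rho_{XX'}\) and, with it, the precision of the judgments an evaluation can support.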
Recommended Audience: This course would be of interest and benefit to anyone using quantitative (e.g., surveys) or qualitative (e.g., interviews, focus groups) measurement in their evaluations.
The course focuses heavily on the application of measurement, and the effects of sound versus poorly developed or inappropriately used measures on evaluation results. The course covers traditional measurement topics (reliability, validity, dimensionality, sensitivity to change, etc.) but emphasizes how these topics affect our evaluations, not the mathematical algorithms.
TEI 301: Basics of Program Evaluation: Culturally Responsive and Strengths-Focused Applications
Instructor: Stewart I. Donaldson, PhD
Description: With an emphasis on constructing a sound foundational knowledge base guided by the American Evaluation Association (AEA) evaluator competencies and public statement on cultural competence in evaluation, this course is designed to provide an overview of both past and contemporary perspectives on evaluation theory, method, and practice. Course topics include, but are not limited to, basic evaluation concepts and definitions; the view of evaluation as transdisciplinary; the logic of evaluation; an overview of the history of the field; distinctions between evaluation and basic and applied social science research; evaluation-specific methods; reasons and motives for conducting evaluation; central types and purposes of evaluation; objectivity, bias, design sensitivity, and validity; the function of program theory and logic models in evaluation; evaluator roles; core competencies required for conducting high quality, professional evaluation; audiences and users of evaluation; alternative evaluation models and approaches; the political nature of evaluation and its implications for practice; professional standards and codes of conduct; culturally responsive and strengths-focused applications; and emerging and enduring issues in evaluation theory, method, and practice.
Although the major focus of the course is program evaluation in multiple settings (e.g., public health, education, human and social services, and international development), examples from personnel evaluation, product evaluation, organizational evaluation, and systems evaluation also will be used to illustrate foundational concepts. The course will conclude with how to plan, design, and conduct ethical and high-quality program evaluations using a contingency-based and contextually/culturally responsive approach, including evaluation purposes, resources (e.g., time, budget, expertise), uses and users, competing demands, and other relevant contingencies. Throughout the course, active learning is emphasized and, therefore, the instructional format consists of mini-presentations, breakout room discussions, and application exercises.
Recommended Text: Donaldson, S. I. (2021). Introduction to theory-driven program evaluation: Culturally responsive and strengths-focused applications. Routledge.
Recommended Audience: Audiences for this course include those who have familiarity with social science research but are unfamiliar with program evaluation, and evaluators who wish to review current theories, methods, and practices.
TEI 331: Applying an Equity Lens to Visualizing and Communicating Data
Instructor: Alice Feng
Description: Data visualization can be a powerful means of communicating the insights found in data and analyses. However, it is important to not just stop at creating technically correct charts and graphs – data visualizations must also be designed with an equity lens in mind so that they do not perpetuate biases, stereotypes, or other kinds of harm.
In this intermediate-level class, participants will learn about considerations surrounding the use of language, color, ordering, icons, and more when it comes to applying an equity lens to their visualizations along with strategies to incorporate empathy into the way they work with and communicate data. This class will be taught seminar-style, so participants are expected to bring with them a data visualization project to work on and be prepared to actively discuss how to apply the equity principles presented to their work.
Recommended Audience: This course is designed for evaluators who have a mastery of the basics of data visualization including an understanding of data encodings, pre-attentive attributes, data types, chart types, and how to use different chart types appropriately. Students should also have experience making charts and/or maps using a tool of their choosing.
TEI 329: Blue Marble Evaluation
Instructor: Michael Quinn Patton, PhD
Description: Blue Marble refers to the iconic image of the Earth from space without borders or boundaries, a whole Earth perspective. Blue Marble Evaluation consists of principles and criteria for evaluating transformational initiatives aimed at a more equitable and sustainable world.
We humans are using our planet’s resources, and polluting and warming it, in ways that are unsustainable. Many people, organizations, and networks are working to ensure the future is more sustainable and equitable. Blue Marble evaluators enter the fray by helping design, implement, and evaluate transformational initiatives based on a theory of transformation. Blue Marble evaluation is utilization-focused, developmental, and principles-based in providing ongoing feedback for adaptation and enhanced systems transformation impact.
Incorporating the Blue Marble perspective means looking beyond nation-state boundaries and across sector and issue silos to connect the global and local, connect the human and ecological, and connect evaluative thinking and methods with those trying to bring about global systems transformation. Forecasts for the future of humanity run the gamut from doom-and-gloom to utopia. Evaluation as a transdisciplinary, global profession has much to offer in navigating the risks and opportunities that arise as global change initiatives and interventions are designed and undertaken to ensure a more sustainable and equitable future. This workshop will provide a framework and tools (a thoughtkit) for evaluating global systems transformation.
Recommended Text: Patton, M. (2019). Blue Marble Evaluation: Premises and Principles. Guilford Press.
Recommended Audience: This course is suitable for new and experienced evaluators who work with innovative initiatives of all kinds at any level anywhere in the world.
TEI 302: Creating and Implementing Successful Evaluation Surveys
Instructor: Jason T. Siegel, PhD
Description: The success of many evaluation projects depends on the quality of survey data collected. In the last decade, sample members have become increasingly reluctant to respond, especially in evaluation contexts. In response to these challenges and to technological innovation, methods for doing surveys are changing rapidly. This course will provide new and cutting-edge information about best practices for designing and conducting surveys.
Students will gain an understanding of the multiple sources of survey error and how to identify and fix commonly occurring survey issues. The course will cover writing questions; visual design of questions (drawing on concepts from the vision sciences); question ordering; increasing effortful responding; and increasing response rates.
The course is made up of a mixture of PowerPoint presentations, discussions, and activities built around real-world survey examples and case studies. Participants will apply what they are learning in activities and will have ample opportunity to ask questions during the course (or during breaks) and to discuss the survey challenges they face with the instructor and other participants.
Recommended Text: Dillman, D. A., Smyth, J. D., & Christian, L. M. (2014). Internet, mail, and mixed-mode surveys: The tailored design method (4th ed.). Wiley.
Recommended Audience: This course will be of interest to anyone using or planning to use surveys in their evaluations.
TEI 303: Culture, Equity, and Evaluation
Instructor: Leona Ba, EdD
Description: This course will provide participants with the opportunity to learn and apply a step-by-step approach to conducting culturally responsive and equitable evaluations, which requires integrating diversity, inclusion, and equity principles into all phases of program design and evaluation. The course will use Theory-Driven Evaluation as a framework because it ensures that evaluation is integrated into the design of programs. More specifically, it will follow the three-step Culturally Responsive Theory-Driven Evaluation model proposed by Bledsoe and Donaldson (2015):
- Develop program impact theory
- Formulate and prioritize evaluation questions
- Answer evaluation questions
During the workshop, participants will reflect on their own cultural self-awareness, a prerequisite for conducting culturally responsive and equitable evaluations. In addition, they will explore strategies for applying cultural responsiveness and equity to evaluation practice using examples from the instructor’s first-hand experience and other program evaluations. They will receive a package of useful handouts, as well as a list of selected resources.
Recommended Text: Bledsoe, K., & Donaldson, S. I. (2015). Culturally responsive theory-driven evaluation. In S. Hood, R. Hopson, & H. Frierson (Eds.), Continuing the journey to reposition culture and cultural context in evaluation theory and practice (pp. 3-27). Information Age Publishing, Inc.
Recommended Audience: This course is recommended for commissioners or practitioners who wish to ensure their evaluations are culturally responsive and equitable.
TEI 304: Developmental Evaluation
Instructor: Michael Quinn Patton, PhD
Description: Developmental Evaluation (DE) supports those involved in social change innovation by guiding adaptation to emergent and dynamic realities in complex environments. Innovations can take the form of new projects, programs, products, organizational changes, policy reforms, and system interventions. A complex system is characterized by a large number of interacting and interdependent elements in which there is no central control. Patterns of change emerge from rapid, real-time interactions that generate learning, evolution, and development – if one is paying attention and knows how to observe and capture the important and emergent patterns. Complex environments for social interventions and innovations are those in which what to do to solve problems is uncertain and key stakeholders are in conflict about how to proceed.
The COVID-19 pandemic significantly increased the use of DE, as programs around the world had to pivot and adapt to the turbulence of responding to efforts to control the pandemic. This led to innovations and new directions in DE as it served to guide adaptations to the challenges of the pandemic. This course includes those new applications and directions.
The field of evaluation already has a rich variety of contrasting models, competing purposes, alternative methods, and divergent techniques that can be applied to projects and organizational innovations that vary in scope, comprehensiveness, and complexity. The challenge, then, is to match evaluation to the nature of the initiative being evaluated. This means that we need to have options beyond the traditional approaches (e.g., linear logic models, experimental designs, pre-post tests) when faced with systems change dynamics and complex change initiatives. Developmental Evaluation involves real-time feedback about what is emerging in complex dynamic systems as innovators seek to bring about systems change. Participants will learn the unique niche of developmental evaluation, different kinds of DE, and what perspectives such as Systems Thinking and Complex Nonlinear Dynamics can offer in applying DE. The course will include opportunities for participants to discuss and get consultation on their own evaluation work.
Learning Outcomes: Participants will know (1) the niche, nature, and purpose of developmental evaluation; and (2) the evaluation criteria for conducting a developmental evaluation.
Recommended Text: Patton, M. (2010). Developmental evaluation: Applying complexity concepts to enhance innovation and use. Guilford Press.
Recommended Audience: This course is suitable for new and experienced evaluators who work with innovative initiatives of all kinds at any level anywhere in the world.
TEI 305: Evaluability Assessment
Instructor: Debra J. Rog, PhD
Description: Increasingly, both public and private funders are looking to evaluation not only as a tool for determining the accountability of interventions but also as a way to add to our evidence base on what works in particular fields. With scarce evaluation resources, however, funders are interested in targeting those resources in the most judicious fashion and with the highest yield. Evaluability assessment is a tool that can inform decisions on whether a program or initiative is suitable for an evaluation and the type of evaluation that would be most feasible, credible, and useful.
Recommended Audience: This course is suitable for new and experienced evaluators responsible for evaluating programs and initiatives.
TEI 306: Evaluating Training Programs and MEL (Monitoring, Evaluation, Learning) Initiatives
Instructor: Ann Doucette, PhD
Description: Many of our social programs focus on providing added information, building awareness, changing attitudes, and influencing behavioral change to mitigate adversity. This type of effort is often referred to as training – teaching meaningful competencies for a specific purpose and filling knowledge, skill, and capacity gaps. While we use training and capacity building interchangeably, when we speak of training we frequently focus beyond the training target and the delivery mechanisms to how training contributes to desired outcomes.
This course examines training within the sphere of demonstrated capacities, and now includes a focus on MEL (monitoring, evaluation, learning): what changes do we expect from training – change in capacity for what? What is it that we want individuals to do differently? What makes training work? How will such changes affect the participating individuals, their social networks, and organizational and system levels? The evaluation of training programs, especially behavioral application of content, organizational benefits from training, and MEL efforts, continues to be an evaluation challenge.
Today’s training targets are wide-ranging – from efforts to increase individual knowledge and skill to program portfolio and organizational-level MEL efforts. While training targets tend to be more clearly defined at the individual and program levels, the “L” in the MEL equation is often overlooked and simply assumed to be a product of M&E activity, with little direction on how best to evaluate MEL effectiveness. Training approaches are extensive, including classroom-type presentations, online platform lectures, self-directed online study courses, online tutorials and distance coaching components, supportive technical assistance, and so forth. Evaluation approaches must be sufficiently facile to accommodate training targets and modalities, as well as the individual, organizational, and system outcomes/impacts that result from such efforts.
A number of training evaluation models will be introduced. The Kirkpatrick (1959, 1976) training model has been a longstanding evaluation approach; however, it is not without criticism or suggested modification. The course provides an overview of multiple training program evaluation frameworks: 1) the Kirkpatrick four-level model – reaction to training content; learning (knowledge and skill acquisition); behavioral application of training; and benefits at the organizational/societal level; 2) Brinkerhoff (2006) – systems and case study approach; 3) Phillips (1996) – return on investment; 4) Kaufman (1995) – training resources and delivery; 5) Anderson (2007) – training alignment with strategic goals; and 6) the Concerns-Based Adoption Model (CBAM), a diagnostic approach that assesses stages of participant concern about how training will affect individual performance and how training will be configured and practiced within the larger community. Comparisons across models will be introduced, illustrating the advantages and disadvantages of each.
The course is designed to be interactive and to provide a practical approach for those planning (leading or commissioning training or capacity building/development evaluations), implementing, conducting, or managing such evaluations. The course covers an overview of training evaluation models; pre-training assessment and training program expectations; training evaluation planning; development of key indicators, metrics, and measures; training evaluation designs; data collection – instrumentation, administration, and data quality; reporting progress, change, and results; and disseminating findings and recommendations – knowledge management resulting from training initiatives. Case examples are included throughout the course to illustrate the content. Day 3 of this course will focus on MEL – differentiating M&E (producing data) from MEL (learning from data); linking MEL initiatives with training evaluation models for more rigorous evaluation of MEL efforts; and using MEL as a case example in developing and applying an evaluation plan that investigates outcome/impact at individual and collective (organization, system, etc.) levels. Measurement, methodology, and design issues, challenges in conducting such evaluations, and strategies for mitigating those challenges are highlighted.
Recommended Audience: Familiarity with evaluation is helpful, but not required, for this course.
TEI 333: Evaluation Design: Alignment with Evaluation Objectives
Instructor: Ann Doucette, PhD
Description: Design is essentially the structure, the recipe, used to assess program/intervention outcomes. This course focuses on design decisions and their alignment with evaluation questions, the precision and strength of outcome evidence needed from the evaluation, the resources available for the evaluation, and practical considerations in conducting the evaluation study. Design choice speaks to validity – the evaluator’s ability to draw conclusions about the cause-and-effect relationship or association between the program/intervention and outcomes (internal validity), and to generalize likely outcomes to broader samples/populations (external validity). As Cook and Campbell (1979) assert, there is no single best design approach. Designs are grouped into three primary categories – experimental, quasi-experimental, and non-experimental – with a range of choices within each category. Traditionally, experimental designs have been characterized as the “gold standard,” a decidedly biased representation of them as the “best,” when in fact design choice should be informed by the evaluation questions to be addressed, the precision needed in outcome estimates, and practical considerations. All design choices, whether experimental, quasi-experimental, or non-experimental, have limitations and practical considerations in terms of their use in evaluation studies.
The course will cover the following design categories, highlighting advantages and disadvantages of each; identifying when best to use specific design types; and will provide case examples of each.
Experimental Designs: completely randomized, randomized block, post-test only control group
Quasi-experimental Designs: non-random, pre-existing, non-equivalent groups
- Propensity score matching – statistically matching program and comparison groups
- Regression discontinuity – using a cutoff score to identify program and non-program groups
- Natural experiments – difference-in-difference (program and comparison groups; a worked formula follows the design list below)
- Interrupted time series
Correlation – Ex Post Facto Designs: Identification of conditions that have occurred or are present, and investigation of the presumed cause – its association with a previously implemented program
Non-experimental Designs: no manipulation of independent variables (program versus non-program)
- Cross-sectional, panel studies
- Observational
- Single variable
- Correlational – relationship between two variables, but no control over possible confounding factors
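As a worked illustration of the difference-in-difference option listed above (standard notation, not specific to the course materials), the estimate compares the change over time in the program group with the change over time in the comparison group:

\[
\widehat{DiD} = \left(\bar{Y}_{\text{program, post}} - \bar{Y}_{\text{program, pre}}\right) - \left(\bar{Y}_{\text{comparison, post}} - \bar{Y}_{\text{comparison, pre}}\right)
\]

Under the assumption that the two groups would have followed parallel trends in the absence of the program, this double difference isolates the program’s contribution to the outcome.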
The course is intentionally interactive. Participants will work with case materials, identifying design types; selecting designs that are best aligned with evaluation questions; building a rationale for the strength of evidence design choices yield; and characterizing the pros and cons of design choices.
Participants will be sent materials and resources prior to the course.
Recommended Audience: The course is geared to individuals having familiarity with evaluation or applied research.
TEI 307: Evaluation Research Methods: A Survey of Quantitative & Qualitative Approaches
Instructor: Jason T. Siegel, PhD
Description: This course will introduce a range of basic quantitative and qualitative social science research methods that apply to evaluating various programs. This foundational course introduces methods developed more fully in other TEI courses and serves as a critical course designed to ensure a basic familiarity with a range of social science research methods and concepts. Topics will include qualitative research with a special emphasis on focus groups and interviews, experimental design, quasi-experimental design, and survey research methods.
Recommended Text: There are no recommended textbooks, but there will be optional readings available on the course website before the start of the course.
Recommended Audience: This course is suitable for those who want to update their existing knowledge and skills, and will serve as an introduction for those new to the topic.
TEI 308: How to Enhance the Learning Function of Evaluation: Principles and Strategies
Instructors: J. Bradley Cousins, PhD and Jill A. Chouinard, PhD
Description: Historically, organizations have conducted and used evaluation to meet internal and external accountability demands with approaches focused on impact assessment and value for money. In practice, rigid focus on accountability-oriented objectives can lead to evaluation outcomes that are at best symbolic. Yet we know from research that evaluations which contribute significantly to learning about program functioning and context tend to leverage higher degrees of evaluation use and provide more credible, actionable outcomes. They can be used to improve the effectiveness and enhance the sustainability of interventions, for example.
This two-day course situates learning-oriented evaluations within the organizational landscape of evaluation options. The focus is on the value of the learning function of evaluation and practical strategies to enhance it. Participants can expect to:
1. Develop knowledge, skills, and strategies to plan useful learning-oriented evaluations in the context of traditional domestic and international development interventions.
2. Understand how collaborative approaches to evaluation (CAE) and culturally responsive evaluation (CRE) can be integrated in the context of results-based approaches.
3. Grasp evaluation’s potential to leverage planned learning and program improvement through organizational evaluation policy reform and the development of evaluation capacity building (ECB) strategies.
This course will be run with a mix of instructor input and opportunities for participants to apply what they have learned in practical activities (e.g., case analyses). Practical resources will be made available.
Recommended Audience: This course is open to new and experienced evaluators looking to augment their working knowledge of program evaluation logic and methods.
TEI 309: Informing Practice using Evaluation Models
Instructor: Melvin Mark, PhD
Description: Evaluators who are not aware of the contemporary and historical aspects of the profession “are doomed to repeat past mistakes and, equally debilitating, will fail to sustain and build on past successes” (Madaus, Scriven, & Stufflebeam, 1983). “Evaluation theories are like military strategy and tactics; methods are like military weapons and logistics. The good commander needs to know strategy and tactics to deploy weapons properly or to organize logistics in different situations. The good evaluator needs theories for the same reasons in choosing and deploying methods” (Shadish, Cook, & Leviton, 1991).
These quotes from Madaus et al. (1983) and Shadish et al. (1991) provide the perfect rationale for why the serious evaluator should be concerned with models and theories of evaluation. The primary purpose of this class is to overview major streams of evaluation theories (or models), and to consider their implications for practice. Topics include: (1) why evaluation theories matter, (2) frameworks for classifying different theories, (3) in-depth examination of several major theories, (4) identification of key issues on which evaluation theories and models differ, (5) benefits and risks of relying heavily on any one theory, and (6) tools and skills that can help you in picking and choosing from and combining across different theoretical perspectives in planning an evaluation in a specific context. The overarching theme will be on practice implications, that is, on what difference it would make for practice to follow one theory or some other.
Theories to be discussed will be ones that have had a significant impact on the evaluation field, that offer perspectives with major implications for practice, and that represent important and different streams of theory and practice. Case examples from the past will be used to illustrate key aspects of each theory’s approach to practice.
Participants will be asked to use the theories to question their own and others’ practices, and to consider what characteristics of evaluations will help increase their potential for use.
Recommended Audience: The instructor’s assumption will be that most people attending the session may have some general familiarity with the work of a few evaluation theorists, but will not themselves be scholars of evaluation theory. At the same time, the course should be useful, whatever one’s level of familiarity with evaluation theory.
TEI 310: Intermediate Qualitative Data Analysis
Instructor: Delwyn Goodrick, PhD
Description: Data analysis involves creativity, sensitivity, and rigor. In its most basic form, qualitative data analysis involves some sort of labeling, coding, and clustering in order to make sense of data collected from evaluation fieldwork, interviews, and/or document analysis. This intermediate-level workshop builds on the basic coding and categorizing familiar to most evaluators, and extends the array of strategies available to support rigorous interpretations. The workshop presents an array of approaches to support the analysis of qualitative data, with an emphasis on procedures for the analysis of interview data. Strategies such as thematic analysis, pattern matching, template analysis, process tracing, schema analysis, and qualitative comparative analysis are outlined and illustrated with reference to examples from evaluation and from a range of disciplines, including sociology, education, political science, and psychology.
The core emphasis in the workshop is creating awareness of heuristics that support selection and application of appropriate analytic techniques that match the purpose of the evaluation, type of data, and practical considerations such as resource constraints. While a brief overview of qualitative analysis software is provided, the structure of the workshop focuses on analysis using manual methods. A range of activities to support critical thinking and application of principles is integrated within the workshop program. Qualitative data analysis and writing go hand in hand. In the second part of the workshop strategies for transforming analysis through processes of description, interpretation and judgment will be presented. These issues are particularly important in the assessment of the credibility of qualitative evidence by evaluation audiences. Issues of quality, including validity, trustworthiness and authenticity of qualitative data are integrated throughout the workshop.
Specific issues to be addressed:
- What are the implications of an evaluator’s worldview for selection of qualitative data analysis (QDA) strategies?
- Are there analytic options that are best suited to particular kinds of qualitative data?
- How can participant experiences be portrayed through QDA without fracturing the data through formal coding?
- What types of analysis may be appropriate for particular types of evaluation (program theory, realist, transformative)?
- What strategies can be used to address interpretive dissent when working in evaluation teams?
- What are some ways that qualitative and quantitative findings can be integrated in an evaluation report?
- How can I sell the value of qualitative evidence to evaluation audiences?
Recommended Text: Bazeley, P. (2013). Qualitative Data Analysis: Practical Strategies. Sage.
Recommended Audience: This course is best suited for evaluators with some experience of basic coding processes who are looking to extend their analysis toolkit.
TEI 311: Introduction to Cost-Benefit and Cost-Effectiveness Analysis
Instructor: Robert D. Shand, PhD
Description: The tools and techniques of cost-benefit and cost-effectiveness analysis will be presented. The goal of the course is to provide analysts with the skills to interpret cost-benefit and cost-effectiveness analyses. Content includes identification and measurement of costs using the ingredients method; how to specify effectiveness; shadow pricing for benefits using revealed preference and contingent valuation methods; discounting; and calculation of cost-effectiveness ratios, net present value, cost-benefit ratios, and internal rates of return. Sensitivity testing and uncertainty will also be addressed. Individuals will work in groups to assess various costs, effects, and benefits applicable to selected case studies drawn from across policy fields (e.g., health, education, environmental sciences).
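For orientation, the core discounting calculations the course covers take a standard form (generic formulas, not course-specific notation):

\[
NPV = \sum_{t=0}^{T} \frac{B_t - C_t}{(1 + r)^t}, \qquad
\text{benefit-cost ratio} = \frac{\sum_t B_t/(1+r)^t}{\sum_t C_t/(1+r)^t}, \qquad
\text{CE ratio} = \frac{\text{total cost}}{\text{units of effectiveness}}
\]

where \(B_t\) and \(C_t\) are the benefits and costs in year \(t\) and \(r\) is the discount rate; the internal rate of return is the value of \(r\) at which the NPV equals zero.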
Recommended Text: Levin, H. M., McEwan, P. J., Belfield, C. R., Bowden, A. B., & Shand, R. D. (2017). Economic evaluation in education: Cost-effectiveness and benefit-cost analysis (3rd ed.). SAGE.
Recommended Audience: This course is best suited for entry-level and mid-career evaluators with some background and experience in impact evaluation looking to complement these skills with economic evaluation methods.
TEI 312: Introduction to Data Analysis for Evaluators and Applied Researchers
Instructor: Dale Berger, PhD
Description: In this course we will introduce and review basic data analysis tools and concepts commonly used in applied research and evaluation. The focus will be on fundamental concepts that are needed to guide decisions for appropriate data analyses, interpretations, and presentations. The goal of the course is to help participants avoid errors and improve skills as data analysts, communicators of statistical findings, and consumers of data analyses.
Topics include data screening and cleaning, selecting appropriate methods for analysis, detecting statistical pitfalls and dealing with them, avoiding silly statistical mistakes, interpreting statistical output, and presenting findings to lay and professional audiences. Examples will include applications of basic distributions and statistical tests (e.g., z, t, chi-square, correlation, regression).
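As a small illustration of the kinds of tests the course reviews, here is a sketch in Python on invented data (the variable names and numbers are hypothetical, and the course itself is not tied to any particular software):

```python
# Illustrative only: a two-sample t-test and a correlation on made-up evaluation data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
program_scores = rng.normal(loc=72, scale=10, size=40)     # hypothetical program group
comparison_scores = rng.normal(loc=68, scale=10, size=40)  # hypothetical comparison group

# Two-sample t-test: is the mean difference larger than chance alone would suggest?
t_stat, p_value = stats.ttest_ind(program_scores, comparison_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# Correlation: does a hypothetical "dosage" (sessions attended) track the outcome?
dosage = rng.integers(1, 20, size=40)
outcome = 60 + 0.8 * dosage + rng.normal(scale=5, size=40)
r, p = stats.pearsonr(dosage, outcome)
print(f"r = {r:.2f}, p = {p:.3f}")
```

Reading and presenting results like these, and recognizing when such tests are or are not appropriate, is the kind of judgment the course aims to build.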
Recommended Audience: This course is especially suited for entry-level evaluators looking to develop their expertise with the foundational logic and methods of data analysis. Mid-level professionals seeking a refresher and greater facility with data analysis will also find this course helpful.
TEI 314: Introduction to Data Visualization
Instructor: Alice Feng
Description: In today’s increasingly data-driven world, the ability to clearly communicate the insights in one’s data is more important than ever. Data visualizations can help make data and analyses more easily understood, accessible, and impactful to broader audiences.
In this introductory course, participants will learn the fundamentals of creating effective data visualizations, including how to identify interesting stories in their data, how to choose appropriate chart forms to convey that story, and how to finesse the design of their charts to maximize the impact of the message being conveyed. This course will be interactive and hands-on, with opportunities to practice creating charts using DataWrapper or a tool of the participant’s choosing. Ultimately, participants will create a visualization using their own data that applies the concepts covered in this course.
Recommended Audience: This course is designed for evaluators who have some experience developing graphs, visual aids, and reports for evaluation work, but no formal knowledge of data visualization concepts. Familiarity with data analysis is recommended but not required.
TEI 332: Introduction to Machine Learning for Evaluators
Instructor: Peter York
Description: There is a growing demand from public and private policymakers and funders to apply big data science and machine learning for evaluation. The demand is growing due to public awareness of how the private sector uses machine learning algorithms to create on-demand tools that cost-effectively augment human planning, assessment, prediction, and decision-making. In fact, government agencies like the National Science Foundation and the U.S. Department of Health and Human Services are currently using big data science and machine learning to evaluate their impact. When applied correctly, machine learning algorithms can significantly reduce the cost and time of conducting evaluations, including producing on-demand quasi-experimental actionable evidence on an ongoing basis.
In this introductory course, participants will learn the fundamentals of integrating the theory, methods, and machine learning algorithms of big data science into their evaluation approach. This will include an introduction to Bayesian theory, machine learning algorithms, predictive and prescriptive analytics, causal modeling, and addressing selection and algorithmic bias. The course will guide participants through an interactive step-by-step process of building evaluation models using primary and secondary datasets. This will include (1) finding and assessing the quality of existing data; (2) cleaning and preparing the data; (3) framing and aligning the data to your theory of change or logic model; (4) staging the evaluation to mitigate selection bias; (5) training machine learning algorithms to find and evaluate naturally occurring counterfactual experiments in history; and (6) evaluating and addressing the level and types of algorithmic bias in the results. This course will introduce machine learning algorithms for structured (quantitative, ordinal, and categorical) and unstructured (qualitative text) data modeling, including how to train machine learning algorithms to support conducting a mixed methods evaluation. For text analytics, participants will learn about natural language processing (NLP) algorithms that are used to improve the breadth and depth of qualitative analyses while significantly reducing the time it takes. The course will use an open-source, no-cost, no-code (knowledge of R or Python is not required) visual-based analytics platform – KNIME – and will introduce participants to its suite of analytic tools and machine learning algorithms.
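To make the counterfactual-matching idea concrete, here is a minimal Python sketch of one common approach (a propensity-score match on invented data). It is purely illustrative: the course itself uses the no-code KNIME platform, and the variable names and numbers below are hypothetical.

```python
# Illustrative sketch: use a machine-learned propensity model to find naturally
# occurring comparison cases, then estimate a program effect from the matched groups.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))           # case characteristics (hypothetical)
treated = rng.integers(0, 2, size=500)  # 1 = received the program, 0 = did not
outcome = X[:, 0] + 2 * treated + rng.normal(size=500)

# 1. Model the probability of receiving the program (the propensity score).
ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]

# 2. Match each treated case to the nearest untreated case on the propensity score.
nn = NearestNeighbors(n_neighbors=1).fit(ps[treated == 0].reshape(-1, 1))
_, idx = nn.kneighbors(ps[treated == 1].reshape(-1, 1))

# 3. Compare outcomes across the matched groups (a simple effect estimate).
effect = outcome[treated == 1].mean() - outcome[treated == 0][idx.ravel()].mean()
print(f"Matched estimate of the program effect: {effect:.2f}")
```

Selection bias and algorithmic bias checks, which the course treats in depth, would sit on top of a workflow like this.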
Recommended Audience: This course is best suited for mid to late-career evaluators with experience conducting quantitative and mixed methods evaluations, especially preparing and analyzing primary and secondary datasets using analytic software packages like SPSS, SAS, and Stata.
TEI 313: Introduction to R Programming for Data Analysis and Visualization
Instructor: David Wilson, PhD
Description: This course will introduce you to the R programming language for data analysis and data visualization. It covers importing data into R, basic data manipulation and clean-up, common graphing methods, and basic statistical analyses such as t-tests, chi-square, ANOVA, and regression, as well as standard descriptive statistics. The course will use the RStudio interface for R and will introduce RMarkdown for enhancing analysis replicability and documentation. The course focuses on the programming language and assumes you are already familiar with basic statistical methods.
Note: Attendees should bring their own laptops loaded with R and RStudio to class each day.
Recommended Audience: This is best suited to program evaluators with at least some prior data analysis experience using software other than R, such as SPSS.
TEI 315: Managing for Success: Planning, Implementation, and Reporting
Instructor: Tiffany Berry, PhD
Description: Program evaluations are often complex, challenging, multi-faceted endeavors that require evaluators to juggle stakeholder interests, funder requirements, data collection logistics, and their internal teams. Fortunately, many of these challenges can be minimized with effective evaluation management. In this interactive workshop, we provide tools, resources, and strategies that intentionally build evaluators’ project management toolkit so that evaluators can manage their evaluations successfully.
During Day 1, using case studies, mini-lectures, and group discussions we explore traditional evaluation management practices focusing on the processes and logistics of how to manage an evaluation team and the entire evaluation process from project initiation and contracting through final reporting. To reinforce and practice the content covered, participants will also engage in a variety of simulation exercises that explore how evaluation managers effectively mitigate challenges as they inevitably arise during an evaluation.
During Day 2, we continue to build participants’ evaluation management toolkit by introducing four essential, experience-tested strategies that will elevate all participants’ project management game. That is, effective evaluation management is more than a series of steps or procedures to follow; it requires a deep understanding of (1) the competencies you and your team bring to the evaluation; (2) the extent to which you are responsive to program context; (3) how you collaborate with stakeholders throughout the evaluation process; and (4) how you use strategic reporting. Through interactive activities, we’ll explore these strategies (and the interconnections among them) as well as discuss how they help evaluators “manage for success.” Throughout our discussion, we’ll also encourage participants to think critically about how each strategy facilitates evaluation management and/or prevents mismanagement. Across both days, there will be ample opportunities to share your own perspective, ask relevant questions, and apply content covered to your own work.
Recommended Audience: This course is best suited for novice and mid-level professionals seeking to strategically build project management skills in the evaluation context.
TEI 316: Mixed-Methods Evaluations: Integrating Qualitative and Quantitative Approaches
Instructor: Debra J. Rog, PhD
Description: Evaluators are frequently in evaluation situations in which they are collecting data through multiple methods, often both qualitative and quantitative. Too often, however, these study components are conducted and reported independently, and do not maximize the explanation building that can occur through their integration.
The purpose of this course is to sensitize evaluators to the opportunities in their work for designing and implementing mixed methods, and to be more intentional in the ways that they design their studies to incorporate both qualitative and quantitative approaches. The course will begin with an overview of the issues involved with mixed-methods research, highlighting the accolades and the criticisms of integrating approaches. The course will then focus on the research questions and evaluation situations that are conducive for mixed-methods, and the variety of designs that are possible (e.g., parallel mixed methods that occur at the same time and are integrated in their inference; sequential designs in which one method follows another chronologically, either confirming or disconfirming the findings, or providing further explanation). A key focus of the course will be on strategies for implementing mixed-methods designs, as well as analyzing and reporting data, using examples from the instructor’s work and those offered by course participants. The course will be highly interactive, with ample time for participants to work on ways of applying the course to their own work. Participants will work in small groups on an example that will carry through the two days of the course.
Participants will be sent materials prior to the course as a foundation for the method.
Recommended Audience: The course is best suited for evaluators who have some prior experience in conducting evaluations, but have not had formal training in designing, conducting, and analyzing mixed methods studies.
TEI 317: Monitoring and Evaluation: Frameworks and Fundamentals
Instructor: Ann Doucette, PhD
Description: The overall goal of Monitoring and Evaluation (M&E) is the assessment of program progress to optimize outcome and impact – program results. While M&E components overlap, each has distinct characteristics. Monitoring activities systematically observe (formally and informally) assumed indicators of favorable results, while evaluation activities build on monitoring indicator data to assess intervention/program effectiveness, the adequacy of program impact pathways, the likelihood of program sustainability, the presence of program strengths and weaknesses, the value, merit, and worth of the initiative, and the like. The increased emphasis on effectively managing toward favorable results demands a more comprehensive M&E approach in order to identify whether programs are favorably on track, or whether improved program strategies and mid-course corrections are needed.
The two-day interactive course will cover the following:
- M&E introduction and overview
- Defining the purpose and scope of M&E
- Engaging stakeholders and establishing an evaluative climate
- The role and effect of partnership and boundary spanners, policy, and advocacy
- Identifying and supporting needed capabilities
- M&E frameworks – agreement on M&E targets
- Performance and Results-Based M&E approaches
- Connecting program design and M&E frameworks
- Comparisons – Is a counterfactual necessary?
- Contribution versus attribution
- Identification of key performance indicators (KPIs)
- Addressing uncertainties and complexity
- Data: collection and methods
- Establishing indicator baselines (addressing the challenges of baseline estimates)
- What data exists? What data/information needs to be collected?
- Measuring progress and success – contextualizing outcomes and setting targets
- Time to expectancy – what can be achieved by the program?
- Using and reporting M&E findings
- Sustaining M&E culture
The course focuses on practical application. Course participants will gain a comprehensive understanding of M&E frameworks and fundamentals, M&E tools, and practice approaches. Case examples will be used to illustrate the M&E process. Course participants are encouraged to submit their own case examples prior to the course for inclusion in the course discussion. The course is purposefully geared for evaluators working in developing and developed countries; national and international agencies, organizations, and NGOs; and national, state, provincial, and county governments.
Recommended audience: Familiarity with evaluation is helpful, but not required, for this course.
TEI 318: Outcome and Impact Evaluation
Instructor: Melvin M. Mark, PhD
Description: Valid assessment of the outcomes or impact of a social program is among the most challenging evaluation tasks, but also one of the most important. Multiple approaches exist for tracking or detecting a program’s outcomes, and multiple methods and designs exist for trying to estimate a program’s impact. This course will overview alternative approaches that may be more appropriate under different conditions. This includes monitoring approaches based on a small-t theory of the program’s chain of outcomes, as well as approaches to use when the complexity of the situation precludes placing one’s confidence in such a theory of the program. Considerable attention will be given to the experimental and quasi-experimental methods that are the foundation for much of contemporary impact evaluation. Related topics, including issues in the measurement of outcomes, ensuring detection of meaningful program effects, and interpreting the magnitude of effects, will be covered, some briefly. Emphasis will primarily be conceptual, focusing on the logic of outcome and impact evaluation, the appropriateness of different approaches under different circumstances, and the conceptual and methodological nature of the approaches. Nonetheless, we’ll cover key statistical analysis methods for impact evaluation.
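As one example of the effect-magnitude topic mentioned above, a commonly reported standardized effect size is Cohen's d (a standard definition, not drawn from the course materials):

\[
d = \frac{\bar{Y}_{\text{program}} - \bar{Y}_{\text{comparison}}}{s_{\text{pooled}}}, \qquad
s_{\text{pooled}} = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}}
\]

which expresses the difference between group means in standard-deviation units and helps in judging whether a statistically detectable effect is also practically meaningful.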
Recommended Audience: This course is best suited for mid-career evaluators. Some familiarity with program evaluation, research methods, and statistical analysis is necessary to effectively engage in the various topics that are covered.
TEI 330: Policy Analysis, Implementation, and Evaluation
Instructor: Doreen Cavanaugh, PhD
Description: Policy drives the decisions and actions that shape our world and affect the wellbeing of individuals around the globe. It forms the foundation of every intervention, and yet the underlying assumptions and values are often not thoroughly examined in many evaluations. In this course, students will explore the policy development process, study the theoretical basis of policy and examine the logical sequence by which a policy intervention is to bring about change through program implementation.
Participants will explore a range of policy evaluation methods to systematically investigate the effectiveness of policy interventions, implementation and processes, and to determine their merit, worth or value in terms of improving the social and economic conditions of stakeholders. The course will differentiate evaluation from monitoring and address several barriers to effective policy evaluation including goal specification and goal change, measurement, targets, efficiency and effectiveness, values, politics, and conflicting expectations. The course will present models from a range of policy domains. At the beginning of the 2-day course, participants will select a policy area from their own work to apply and use as an example throughout the class. Participants will develop the components of a policy analysis and design a policy evaluation.
Recommended Audience: This course is best suited to professionals interested in evaluating policies and programs supported, in full or in part, by public funding (international, national, state, or local), by private resources (international, national, or local), or by a combination of public and private sources.
TEI 319: Policy Design and Evaluation Across Cultures
Instructor: Robert Klitgaard, PhD
Description: Policy design and evaluation share the task of assessing what treatments (policies, programs) work well, for whom, and in what settings. Two big challenges emerge: inference and extrapolation. Inference refers to estimating how treatments affect outcomes, other things remaining equal. Extrapolation refers to transporting those estimates to other cultures. This course reviews both challenges and suggests practical ways forward. The highly interactive pedagogy presents analytical material and case studies from around the world. Of particular interest is a field-tested method for combining generic international expertise with local knowledge, with the goal of creative, evidence-based policy design. Participants will have the chance to apply the ideas to an issue important to them.
Recommended Audience: People who use the results of evaluations in the design of effective and equitable public policies, especially when the policies apply to diverse cultural settings.
TEI 320: Principles-Focused Evaluation
Instructor: Michael Quinn Patton, PhD
Description: Principles-driven leaders engage in principles-based initiatives that call for principles-focused evaluation, which makes principles the focus of the evaluation. Three questions frame the inquiry: (1) To what extent and in what ways are the principles meaningful to those meant to be guided by them? (2) If meaningful, to what extent and in what ways are the principles adhered to? (3) If adhered to, to what extent and in what ways do the principles guide results? The course will present and explain the GUIDE approach to developing and evaluating principles. GUIDE calls for principles to be directive, useful, inspiring, adaptable to contexts, and evaluable. Examples of principles-focused initiatives and corresponding principles-focused evaluations will be shared. This innovative approach to evaluation is on the leading edge of the field and is attracting attention around the world as a way of engaging with change and transformation in complex dynamic systems.
Learning Outcomes: Participants will know (1) the niche, nature, and purpose of principles-focused evaluation; (2) the evaluation criteria for conducting a principles-focused evaluation; and (3) the GUIDE framework for principles-focused evaluation.
Recommended Audience: Familiarity with evaluation is helpful, but not required, for this course.
TEI 321: Qualitative Methods
Instructor: Michael Quinn Patton, PhD
Description: Qualitative inquiries use in-depth interviews, focus groups, observational methods, document analysis, and case studies to provide rich descriptions of people, programs, and community processes. To be credible and useful, the unique sampling, design, and analysis approaches of qualitative methods must be understood and used. Qualitative data can be used for various purposes, including evaluating individualized outcomes, capturing program processes, exploring a new area of interest (e.g., to identify the unknown variables one might want to measure in greater depth/breadth), identifying unanticipated consequences and side effects, supporting participatory evaluations, assessing quality, and humanizing evaluations by portraying the people and stories behind the numbers. This class will cover the basics of qualitative evaluation, including design, case selection (purposeful sampling), data collection techniques, and beginning analysis. Ways of increasing the rigor and credibility of qualitative evaluations will be examined. Mixed methods approaches will be included. Alternative qualitative strategies and new, innovative directions will complete the course. The strengths and weaknesses of various qualitative methods will be identified. Exercises will provide experience in applying qualitative methods and analysis in evaluations.
Recommended Text: Patton, M. Q. (2015). Qualitative research and evaluation methods (4th ed.). Sage.
Recommended Audience: This course is best suited for entry-level evaluators looking to develop their knowledge of qualitative evaluation methods. Mid-level professionals seeking a refresher on the basics of qualitative evaluation will also find this course helpful.
TEI 328: Quantitative Methods
Instructor: Emily E. Tanner-Smith, PhD
Description: This course will introduce a range of basic quantitative social science research designs and methods that are applicable to the evaluation of programs. It is a foundational course that introduces basic quantitative methods developed more fully in other TEI courses and is designed to ensure a basic familiarity with a range of social science research designs and concepts.
Recommended Audience: This course is best suited for entry-level evaluators looking to develop their knowledge of quantitative evaluation designs and methods. Mid-level professionals seeking a refresher on the basics of quantitative evaluation designs will also find this course helpful.
TEI 322: Strategic Planning with Evaluation in Mind
Instructor: John Bryson, PhD
Description: Strategic planning is becoming a common practice for governments, nonprofit organizations, businesses, and collaborations. The severe stresses – along with the many opportunities – facing these entities make strategic planning more important and necessary than ever. For strategic planning to be truly effective, it should include systematic learning informed by evaluation. If that happens, the chances of mission fulfillment and long-term organizational survival are also enhanced. In other words, thinking, acting, and learning strategically and evaluatively are necessary complements.
This course presents a pragmatic approach to strategic planning based on John Bryson’s best-selling and award-winning book, Strategic Planning for Public and Nonprofit Organizations, Fifth Edition (Jossey-Bass, 2018). The course examines the theory and practice of strategic planning and management with an emphasis on practical approaches to identifying and effectively addressing organizational challenges – and doing so in a way that makes systematic learning and evaluation possible. The approach engages evaluators much earlier in the process of organizational and programmatic design and change than is usual.
The following topics are covered through a mixture of mini-lectures, case analyses, individual and small group exercises, and plenary discussions:
- Understanding why strategic planning has become so important
- Understanding what strategic planning is – and is not
- Gaining knowledge of the range of different strategic planning approaches
- Understanding the Strategy Change Cycle (Prof. Bryson’s preferred approach)
- Gaining experience with key strategic planning tools and techniques, including stakeholder analysis, SWOT analyses, and causal mapping for purposes of understanding issues, developing strategies, and conducting evaluations
- Knowing how to appropriately design formative, summative, and developmental evaluations of strategic planning processes, missions, strategies, and organizational performance
- Knowing what it takes to initiate strategic planning successfully
- Understanding the importance of leadership of many kinds for strategic planning success
- Understanding what can be institutionalized
- Making sure ongoing strategic planning, acting, learning, and evaluation are linked
Recommended Audience: The course is suitable for anyone wanting to know more about strategic planning theory and practice, including leaders, managers, board members, policymakers, and, of course, evaluators. Evaluation topics will include approaches to evaluating strategic planning processes for organizations and coalitions, missions, strategies, strategic plans, and performance.
TEI 323: Systems-based Culturally Responsive Evaluation (SysCRE)
Instructor: Wanda Casillas
Description: Culturally Responsive Evaluation (CRE) is often described as a way of thinking, a stance taken, or an emerging approach to evaluation that centers culture and context in all steps of an evaluation process. As an evaluation approach, CRE is often used in service of promoting equitable outcomes across many sectors such as education, health, social services, etc. However, large-scale social problems require evaluation and applied research strategies that can further our thinking about complex issues and equip us to engage with the complex and layered contextual factors that impact equity.
CRE is an essential tool in a practitioner’s toolkit when evaluating large-scale systems change efforts that emphasize equity, and CRE married with relevant and overlapping systems principles leads to a robust evaluation and applied research practice. In this course, we will engage with a core set of CRE and systems principles to anchor evaluation practice in an approach that identifies and addresses important cultural and contextual systems in which evaluations and their stakeholders are embedded.
The first day of the workshop will focus on establishing a foundation of important historical underpinnings, concepts, and tenets of CRE and systems approaches and engage with exemplars of SysCRE practice to operationalize these concepts. On Days 2 and 3 of the workshop, we will simulate a step-wise SysCRE design using a case study and other interactive exercises to inform personal and professional practices and support group learning.
Recommended Audience: This course is best suited for early- to mid-level evaluators who have familiarity with evaluation designs and theoretical approaches.
TEI 324: Using Research, Program Theory, and Logic Models to Design and Evaluate Programs
Instructor: Stewart I. Donaldson, PhD
Description: It is now commonplace to use research, program theory, and logic models in evaluation practice. They are often used to help design effective programs, and at other times as a means to explain how a program is understood to contribute to its intended or observed outcomes. However, this does not mean that they are always used appropriately or to the best effect. At their best, prior research, program theories, and logic models can provide an evidence base to guide action, bring conceptual clarity, motivate staff, and focus designs and evaluations. At their worst, they can divert time and attention from other critical evaluation activities, provide an invalid or misleading picture of the program, and discourage critical investigation of causal pathways and unintended outcomes. This course will focus on developing useful, evidence-based program theories and logic models and using them effectively to guide evaluation while avoiding some of the most common traps. Application exercises are used throughout the course to demonstrate concepts and techniques: (a) ways to use social science theory and research, program theories, and logic models to positive advantage; (b) how to formulate and prioritize key evaluation questions; (c) how to gather credible and actionable evidence; (d) how to recognize and communicate the ways these tools are used with negative results; and (e) strategies to avoid those traps.
Recommended Text: Donaldson, S. I. (2021). Introduction to theory-driven program evaluation: Culturally responsive and strengths-focused applications. Routledge.
Students may also be interested in Donaldson, S. I., Christie, C. A., & Mark, M. M. (Eds.). (2014). Credible and actionable evidence: The foundation for rigorous and influential evaluations. Sage.
Recommended Audience: Audiences for this course include those who have familiarity and some experience in evaluation practice, and who want to explore using stakeholder and research-informed program theories and logic models to guide the design and evaluation of programs.
TEI 325: Utilization-Focused Evaluation
Instructor: Michael Quinn Patton, PhD
Description: Utilization-Focused Evaluation begins with the premise that evaluations should be judged by their utility and actual use; therefore, evaluators should facilitate the evaluation process and design any evaluation with careful consideration of how everything that is done, from beginning to end, will affect use. Use concerns how real people in the real world apply evaluation findings and experience the evaluation process. Therefore, the focus in utilization-focused evaluation is on the intended use by intended users.
Utilization-focused evaluation is a process for helping primary intended users select the most appropriate content, model, methods, theory, and uses for their particular situation. Situational responsiveness guides the interactive process between the evaluator and primary intended users. Psychology of use undergirds and informs utilization-focused evaluation: intended users are more likely to use evaluations if they understand and feel ownership of the evaluation process and findings; they are more likely to understand and feel ownership if they’ve been actively involved; by actively involving primary intended users, the evaluator is training users in use, preparing the groundwork for use, and reinforcing the intended utility of the evaluation every step along the way.
Participants will learn:
- Key factors in doing useful evaluations, common barriers to use, and how to overcome those barriers.
- Implications of focusing an evaluation on the intended use by intended users.
- Options for evaluation design and methods based on situational responsiveness, adaptability, and creativity.
- Ways of building evaluation into the programming process to increase use.
Recommended Text: Patton, M. Q., & Campbell-Patton, C. E. (2022). Utilization-focused evaluation (5th ed.). Sage.
Recommended Audience: This course is suitable for new and experienced evaluators who work closely with the primary intended users of their evaluations.
TEI 326: Utilizing Culturally Responsive and Racially Equitable Evaluation
Instructors: Tracy Hilliard, PhD, Ebony Reddock, PhD, & Kristine Andrews, PhD
Description: The field of evaluation is being challenged to utilize a process that considers who is being evaluated and who is conducting the evaluation. MPHI has worked to develop useful frameworks, tools, and approaches that evaluators can use to consider the ways that race and culture might influence an evaluation process; this work has resulted in a framework for conducting evaluation using a culturally responsive and racial equity lens.
This workshop focuses on the practical use of a racial equity lens when conducting evaluation. The framework holds that culture and race are important considerations when conducting an evaluation because both critical and substantive nuances are often missed, ignored, and/or misinterpreted when an evaluator is not aware of the culture of those being evaluated. Participants will be provided with a Template for Analyzing Programs through a Culturally Responsive and Racial Equity Lens, designed to focus deliberately on an evaluation process that takes race, culture, equity, and community context into consideration.
Presenters will also share a “How-to Process” focused on the cultural competencies of individuals conducting evaluations, how such competencies might be improved, and strategies for doing so. This “How-to Process” grew out of work to develop a self-assessment instrument for evaluators, is based primarily on the cultural-proficiency literature, and relates specifically to components of the template. Participants will have the opportunity to engage in small-group exercises to apply the concepts contained in the template to real-world evaluation processes. Based on these experiences, participants will gain practical knowledge on the use of the lens.
Recommended Audience: This course is designed for evaluators at any level who are interested in furthering their understanding of culturally responsive, racially equitable evaluation and its practical applications.
TEI 327: Working with Diverse Stakeholders: Appreciative, Strengths-based, and Culturally Responsive Approaches
Instructor: Stewart I. Donaldson, PhD
Description: Working with diverse stakeholders in a participatory evaluation can be both enriching and challenging. Exemplary evaluations often result when this interactive process goes well; when it does not, evaluations can seriously underperform, become too time-consuming and costly, go unused, and/or be draining and conflictual for both the evaluator and the stakeholders. Guided by the American Evaluation Association (AEA) evaluator interpersonal competency domain and public statement on cultural competence in evaluation, this course is designed to provide participants with frameworks, strategies, and activities to improve the experience of diverse stakeholders and evaluators throughout the various phases of an evaluation. For example, the course will explore effective ways of engaging diverse stakeholders during the contracting phase (including negotiating the scope of work and budget); when developing logic models and theories of change; while evaluation questions are being formulated and prioritized; during the evaluation design, data collection, data analysis, and justifying-conclusions phases; and when working to ensure use and share lessons learned. Participants will be encouraged to develop their evaluation facilitation skills in line with the goals of the AEA Interpersonal Competency Domain. The interpersonally competent evaluator:
5.1 – Fosters positive relationships for professional practice and evaluation use.
5.2 – Listens to understand and engage different perspectives.
5.3 – Facilitates shared decision-making for evaluation.
5.4 – Builds trust throughout the evaluation.
5.5 – Attends to the ways power and privilege affect evaluation practice.
5.6 – Communicates in meaningful ways that enhance the effectiveness of the evaluation.
5.7 – Facilitates constructive and culturally responsive interaction throughout the evaluation.
5.8 – Manages conflicts constructively.
Throughout the course, active learning is emphasized and, therefore, the instructional format consists of mini-presentations, breakout room discussions, and application exercises.
Recommended Text: Donaldson, S. I. (2021). Introduction to theory-driven program evaluation: Culturally responsive and strengths-focused applications. Routledge.
Recommended Audience: Audiences for this course include those who have familiarity with participatory evaluation approaches and wish to explore ways to more effectively engage diverse stakeholders and improve the implementation of evaluation designs.
TEI 401: Sponsored Evaluation Development
Customized courses are offered at the request of a sponsoring company or organization to support their evaluation capacity development needs and to develop their evaluation skillsets; enrollment in a customized course is limited to identified individuals from the sponsoring organization. The content and learning outcomes for a customized course are developed in collaboration between TEI faculty and the sponsoring organization. As part of the course, participants engage in case studies and other interactive exercises specific to the sponsoring organization’s context and needs. Topics frequently covered in customized courses include mixed methods research, applied measurement for evaluation, and using research, program theory, and logic models to design and evaluate programs.