Courses Offered

    • Course Descriptions

      TEI 340: AI-Powered Program Evaluation: From Administrative Data to Automated Insights

      Instructor: Peter York

      Description: This three-day hands-on course teaches evaluators to leverage artificial intelligence and machine learning to transform program administrative data into automated evaluation systems. Using KNIME, a free visual-based analytics platform that requires no coding experience, participants will learn how to implement causal modeling techniques that significantly reduce evaluation timeframes while increasing quasi-experimental rigor, insight depth, and actionability.

      Through guided exercises with sample datasets, participants will be provided with the foundational tools and techniques for the complete workflow of modern AI-powered evaluation, including:

      • Cleaning and transforming both structured (numeric, ordinal, and categorical data) and unstructured (text) program data
      • Using large language models for qualitative analysis
      • Training machine learning algorithms to conduct causal modeling that identifies and evaluates natural experiments within historical program data.

      Key learning outcomes include: converting program administrative data into evaluation-ready formats; transforming qualitative/text data into structured insights using AI (Large Language Models); implementing causal modeling to evaluate the outcomes of programs on different types of beneficiaries; and automating these processes.

      The workshop includes in-depth, hands-on practice with KNIME, working with sample datasets to build actual evaluation models with automation. Participants will leave with foundational skills to begin applying these techniques in their evaluation work.
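
      To make the causal-modeling idea concrete, here is a minimal, hypothetical Python sketch of one common approach (propensity-score matching on administrative data). It is not the instructor's method and not a KNIME workflow (KNIME builds the equivalent pipeline from visual nodes), and the file and column names are invented for illustration.

      # Minimal, illustrative sketch only (not course material): estimating a program
      # effect from administrative data via propensity-score matching.
      import pandas as pd
      from sklearn.linear_model import LogisticRegression
      from sklearn.neighbors import NearestNeighbors

      df = pd.read_csv("program_admin_data.csv")                 # hypothetical extract
      covariates = ["age", "baseline_score", "prior_services"]   # hypothetical columns

      # 1. Model the probability of receiving the program ("treatment") from covariates.
      ps_model = LogisticRegression(max_iter=1000).fit(df[covariates], df["treated"])
      df["pscore"] = ps_model.predict_proba(df[covariates])[:, 1]

      # 2. Match each treated case to its nearest untreated neighbor on the propensity score.
      treated, control = df[df["treated"] == 1], df[df["treated"] == 0]
      nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
      _, idx = nn.kneighbors(treated[["pscore"]])
      matched_control = control.iloc[idx.ravel()]

      # 3. The average difference in outcomes across matched pairs approximates the
      #    effect of the program on those who received it.
      att = (treated["outcome"].to_numpy() - matched_control["outcome"].to_numpy()).mean()
      print(f"Estimated effect on treated (matched difference in means): {att:.3f}")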

      Recommended Audience: This advanced course is designed for experienced evaluators who want to modernize their practice with AI and machine learning. Participants should have experience conducting mixed-methods evaluations and working with program administrative data. While no coding experience is required, comfort with data analysis software (e.g., SPSS, SAS, Stata, Excel) is ideal. Participants must bring a laptop capable of running KNIME (which can be downloaded and installed free of charge).

      Prerequisites: Participants must download and install KNIME before the first session. Sample datasets and workflows will be provided.

      TEI Certificate: This course fulfills the following requirements:


      TEI 300: Applied Measurement for Evaluation

      Instructor: Ann Doucette, PhD

      Description: Successful evaluation depends on our ability to generate evidence attesting to the feasibility, relevance, and effectiveness of the interventions, services, or products we study. While theory guides our designs and how we organize our work, it is measurement that provides the evidence we use in making judgments about the quality of what we evaluate. Measurement, whether it results from self-report surveys, interviews/focus groups, observation, document review, or administrative data, must be systematic, replicable, interpretable, reliable, and valid. While hard sciences such as physics and engineering have advanced precise and accurate measurement, the measurement used in evaluation studies is often imprecise and characterized by considerable error.

      The quality of the inferences made in evaluation studies is directly related to the quality of the measurement on which we base our judgments. Judgments that interventions are ineffective may be flawed – the reflection of measures that are imprecise and not sensitive to the characteristics we choose to evaluate. Evaluation often attempts to compensate for imprecise measurement with increasingly sophisticated statistical procedures to manipulate data. The emphasis on statistical analysis all too often obscures the important characteristics of the measures we choose. This course will cover:

      • Assessing measurement precision: Examining the precision of measures in relation to the degree of accuracy that is needed for what is being evaluated.
      • Quantification: Do response options/coding categories segment the respondent sample in meaningful and useful ways?
      • Issues and considerations for using existing measures versus developing your own measures
      • Criteria for choosing measures
      • Balancing measurement precision and error
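
      As a concrete illustration of the precision-versus-error trade-off covered above, the short sketch below (a generic classical test theory example with invented numbers, not course content) shows how a measure's reliability translates into a standard error of measurement and an uncertainty band around any observed score.

      # Illustrative sketch with invented numbers (assumes classical test theory):
      # how a scale's reliability translates into uncertainty around an observed score.
      import math

      reliability = 0.80     # hypothetical reliability coefficient of a survey scale
      sd = 10.0              # hypothetical standard deviation of scale scores
      observed_score = 62.0  # hypothetical respondent score

      # Standard error of measurement: SEM = SD * sqrt(1 - reliability)
      sem = sd * math.sqrt(1 - reliability)

      # Approximate 95% band around the observed score (observed +/- 1.96 * SEM)
      low, high = observed_score - 1.96 * sem, observed_score + 1.96 * sem
      print(f"SEM = {sem:.2f}; a score of {observed_score} is consistent with roughly {low:.1f} to {high:.1f}")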

      Recommended Audience: This course would be of interest and benefit to anyone using quantitative (e.g., surveys) or qualitative (e.g., interviews, focus groups) measurement in their evaluations.

      TEI Certificate: This course fulfills the following requirements:


      TEI 331: Applying an Equity Lens to Visualizing and Communicating Data

      Instructor: Alice Feng

      Description: Data visualization can be a powerful means of communicating the insights found in data and analyses. However, it is important not to stop at creating technically correct charts and graphs – data visualizations must also be designed with an equity lens in mind so that they do not perpetuate biases, stereotypes, or other kinds of harm, and are accessible to all audiences.

      The first half of this intermediate-level class will cover considerations surrounding the use of language, color, ordering, icons, and more when applying an equity lens to the way data is visualized, along with strategies for incorporating empathy into how we work with and communicate data. The second half will focus on issues of accessibility, including topics such as font selection, color contrast, plain language, and alt text.

      Recommended Audience: This course is designed for evaluators who have a mastery of the basics of data visualization, including an understanding of data encodings, pre-attentive attributes, data types, chart types, and how to use different chart types appropriately. Students should also have experience making charts and/or maps using a tool of their choosing.

      TEI Certificate: This course fulfills the following requirements:


      TEI 337: Applying Appreciative Inquiry and Positive Psychology to Improve Your Evaluation Practice

      Instructor: Stewart I. Donaldson, PhD & Tessie Catsambas

      Description: Early work in embedding Appreciative Inquiry/Appreciative Evaluation is known for its intentionality in crafting compelling questions about successful experiences, inviting affirming multi-stakeholder engagement, generating insightful stories about lived experience, and grounding the evaluation in a compelling vision of the future. Twenty years later, research in positive psychology has shown that the contribution of Appreciative Evaluation and other positive psychology tools centers on the attention they offer to the intrapersonal experience and user experience (UX) of those engaged in evaluation – evaluation managers, evaluators, participants, commissioners, and other interested parties. When individuals engaged in evaluation have a productive and constructive experience, they are more likely to use and embrace evaluation as an opportunity for reflection, learning, improvement, and transformation.

      Regardless of the evaluation design and methods selected, Appreciative Evaluation is an excellent addition to your evaluation toolkit that will help you enhance effectiveness, cultural competence, ethics, and equity in evaluation practice. A growing body of research in positive psychology helps us understand the impact of embedding appreciative evaluation into any design and method. In the course you will learn about specific activities and tools that you can apply to improve your evaluation and applied research practice. These activities and tools can be used across the various steps and stages of an evaluation, including to: engage stakeholders; develop theories of change, program theories, and logic models; formulate key evaluation questions; prepare the data collection and analysis plan; communicate findings; promote evaluation use and influence; and build evaluation capacity.

      Recommended Audience: Audiences for this course include those who have familiarity with evaluation and who would like to learn more about ways of applying appreciative inquiry and positive psychology in their work and life.

      TEI Certificate: This course fulfills the following requirements:


      TEI 336: Artificial Intelligence for Equity and Justice in Evaluation: Bridging Technology and Practice 

      Instructor: Jennifer P. Villalobos

      Description: Navigating the intersection of artificial intelligence (AI) and evaluation practice presents unique challenges and opportunities, especially when focusing on equity and social justice. This course aims to demystify AI by demonstrating that emerging technological tools can be strategically leveraged to enhance the assessment of programs aimed at social betterment, ensuring that evaluations are methodologically sound and ethically aligned with principles of equity and justice.

      Participants will be introduced to various AI methodologies and their applications to assist in everything from literature searches and evaluation design to data analyses and reporting. The course will critically examine the use of AI in evaluation settings, emphasizing the importance of culturally responsive and equity-focused approaches. We will explore both the potential and the pitfalls of overreliance on AI and address the complexity of its use in dynamic and diverse contexts.

      Related topics such as social justice practice standards, culturally responsive practices, data integrity, algorithmic bias, and the interpretation of AI-generated data will be discussed.

      Recommended Materials: Most of the tools used in this course are free to download and use, and will be shared with you during the course by your instructor. Participants are encouraged to download ChatGPT-4 before the start of the course.

      Recommended Audience: This course is ideal for evaluators at all stages of their careers, from emerging to seasoned professionals, who are interested in harnessing AI to foster social justice and equity through their evaluation work. Familiarity with basic evaluation principles and research methods and a willingness to engage with AI’s technical aspects will enhance the learning experience.

      TEI Certificate: This course fulfills the following requirements:


      TEI 301: Basics of Program Evaluation: Strengths-Informed and Cross-Cultural Applications

      Instructor: Stewart I. Donaldson, PhD

      Description: With an emphasis on constructing a sound foundational knowledge base guided by the American Evaluation Association (AEA) evaluator competencies and public statement on cultural competence in evaluation, this course is designed to provide an overview of both past and contemporary perspectives on evaluation theory, method, and practice. Course topics include, but are not limited to, basic evaluation concepts and definitions; the view of evaluation as transdisciplinary; the logic of evaluation; an overview of the history of the field; distinctions between evaluation and basic and applied social science research; evaluation-specific methods; reasons and motives for conducting evaluation; central types and purposes of evaluation; objectivity, bias, design sensitivity, and validity; the function of program theory and logic models in evaluation; evaluator roles; core competencies required for conducting high quality, professional evaluation; audiences and users of evaluation; alternative evaluation models and approaches; the political nature of evaluation and its implications for practice; professional standards and codes of conduct; strengths-informed and cross-cultural applications; and emerging and enduring issues in evaluation theory, method, and practice.

      Although the major focus of the course is program evaluation in multiple settings (e.g., public health, education, human and social services, and international development), examples from personnel evaluation, product evaluation, organizational evaluation, and systems evaluation also will be used to illustrate foundational concepts. The course will conclude with how to plan, design, and conduct ethical and high-quality program evaluations using a contingency-based and contextually/culturally responsive approach, including evaluation purposes, resources (e.g., time, budget, expertise), uses and users, competing demands, and other relevant contingencies.

      Recommended Text: Donaldson, S. I. (2022). Introduction to theory-driven program evaluation: Culturally responsive and strengths-focused applications. New York: Routledge.

      Recommended Audience: Audiences for this course include those who have familiarity with social science research but are unfamiliar with program evaluation, and evaluators who wish to review current theories, methods, and practices.

      TEI Certificate: This course fulfills the following requirements:


      TEI 329: Blue Marble Evaluation

      Instructor: Michael Quinn Patton, PhD

      Description: Blue Marble refers to the iconic image of the Earth from space without borders or boundaries, a whole Earth perspective. Blue Marble Evaluation consists of principles and criteria for evaluating transformational initiatives aimed at a more equitable and sustainable world.

      We humans are using our planet’s resources, and polluting and warming it, in ways that are unsustainable. Many people, organizations, and networks are working to ensure the future is more sustainable and equitable. Blue Marble evaluators enter the fray by helping design, implement, and evaluate transformational initiatives based on a theory of transformation. Blue Marble evaluation is utilization-focused, developmental, and principles-based in providing ongoing feedback for adaptation and enhanced systems transformation impact.

      Incorporating the Blue Marble perspective means looking beyond nation-state boundaries and across sector and issue silos to connect the global and local, connect the human and ecological, and connect evaluative thinking and methods with those trying to bring about global systems transformation. Forecasts for the future of humanity run the gamut from doom-and-gloom to utopia. Evaluation as a transdisciplinary, global profession has much to offer in navigating the risks and opportunities that arise as global change initiatives and interventions are designed and undertaken to ensure a more sustainable and equitable future. This workshop will provide a framework and tools (a thoughtkit) for evaluating global systems transformation.

      Recommended Text: Patton, M. (2019). Blue Marble Evaluation: Premises and Principles. Guilford Press.

      Recommended Audience: This course is suitable for new and experienced evaluators who work with innovative initiatives of all kinds at any level anywhere in the world.

      TEI Certificate: This course fulfills the following requirements:


      TEI 338: Case Studies in Evaluation

      Instructor: Delwyn Goodrick, PhD

      Description: Case study approaches are widely used in program evaluation. They facilitate an understanding of the way in which context influences program and project interventions. While case study designs are often adopted to describe or depict program processes, they can also contribute to an understanding of the mechanisms responsible for program outcomes.

      The literature on case studies is impressive, but there remains tension in descriptions about what constitutes good case study practice in evaluation. This leads to substantive differences in the way case studies are designed and undertaken. This course aims to disentangle the discussions and debates and highlight the central principles critical to effective case study practice and reporting.

      In this two-day class participants will be encouraged to examine the conceptual underpinnings, defining features, and practices involved in implementing a case study design in evaluation contexts.

      Specific topics to be addressed over the two days include:

      • The utility of case studies in evaluation and circumstances where case studies may not be appropriate
      • Evaluation questions that are suitable for a case study approach
      • Selecting the unit of analysis in the case study
      • Design frameworks in case studies
      • Developing case study protocols and case study guides
      • Analyzing case study materials including case and cross-case analysis and matrix and template displays
      • Transferability/generalizability of case studies
      • Synthesizing case materials
      • Issues of representation in reporting

      Recommended Audience: This course is appropriate for those who have completed some evaluation projects, and who are keen to plan an evaluation using case studies as part of the design or as the core design framework.

      TEI Certificate: This course fulfills the following requirements:


      TEI 302: Creating and Implementing Successful Evaluation Surveys

      Instructor: Jason T. Siegel, PhD

      Description: The success of many evaluation projects depends on the quality of survey data collected. In the last decade, sample members have become increasingly reluctant to respond, especially in evaluation contexts. In response to these challenges and to technological innovation, methods for doing surveys are changing rapidly. This course will provide new and cutting-edge information about best practices for designing and conducting surveys.

      Students will gain an understanding of the multiple sources of survey error and how to identify and fix commonly occurring survey issues. The course will cover writing questions; visual design of questions (drawing on concepts from the vision sciences); question ordering; increasing effortful responding; and increasing response rates.

      The course is made up of a mixture of PowerPoint presentations, discussions, and activities built around real-world survey examples and case studies. Participants will apply what they are learning in activities and will have ample opportunity to ask questions during the course (or during breaks) and to discuss the survey challenges they face with the instructor and other participants.

      Recommended Text: Dillman, D. A., Smyth, J. D., & Christian, L. M. (2014). Internet, mail, and mixed-mode surveys: The tailored design method (4th ed.). Wiley.

      Recommended Audience: This course will be of interest to anyone using or planning to use surveys in their evaluations.

      TEI Certificate: This course fulfills the following requirements:


      TEI 303: Culture, Equity, and Evaluation

      Instructor: Leona Ba, EdD

      Description: This course will provide participants with the opportunity to learn and apply a step-by-step approach on how to conduct culturally responsive and equitable evaluations, which require integrating diversity, inclusion, and equity principles into all phases of program design and evaluation. The course will use Theory-Driven Evaluation as a framework because it ensures that evaluation is integrated into the design of programs. More specifically, it will follow the three-step Culturally Responsive Theory-Driven Evaluation model proposed by Bledsoe and Donaldson (2015):

      • Develop program impact theory
      • Formulate and prioritize evaluation questions
      • Answer evaluation questions

      During the workshop, participants will reflect on their own cultural self-awareness, a prerequisite for conducting culturally responsive and equitable evaluations. In addition, they will explore strategies for applying cultural responsiveness and equity to evaluation practice using examples from the instructor’s first-hand experience and other program evaluations. They will receive a package of useful handouts, as well as a list of selected resources.

      Recommended Text: Bledsoe, K., & Donaldson, S. I. (2015). Culturally responsive theory-driven evaluation. In S. Hood, R. Hopson, & H. Frierson (Eds.), Continuing the journey to reposition culture and cultural context in evaluation theory and practice (pp. 3-27). Information Age Publishing, Inc.

      Recommended Audience: This course is recommended for commissioners or practitioners who wish to ensure their evaluations are culturally responsive and equitable.

      TEI Certificate: This course fulfills the following requirements:


      TEI 341: Developing Ethical Leadership in Evaluation in an Era of Change

      Instructor: Jennifer P. Villalobos, PhD

      Description: In our rapidly changing world, the ability to lead and adapt is fundamental to ensuring high-quality evaluation practices and achieving transformative impacts. This course caters to current and aspiring leaders in the field of evaluation, focusing on the essential skills needed to navigate and effectively drive change. Participants will delve into ethical leadership practices essential for these times, exploring change readiness, the critical role of followership in leadership, reducing evaluator bias, practical goal-setting, reflective practice, and the importance of incorporating systems and culture in evaluator decision-making. Additionally, attendees will learn how storytelling can powerfully convey visions, engage interest holders, and advance social justice within evaluation processes.

      Key components of the course include:

      • Tools and strategies for effectively managing change, employing an interdisciplinary approach to understand and influence the dynamics of change.
      • Development of a ‘change mindset,’ enabling leaders to reframe challenges and redefine problems in ways that promote ethical and inclusive outcomes.
      • Techniques for leading oneself and interest holders through the change journey, emphasizing ethical decision-making aligned with social justice principles.
      • Commitment to reflective practice, encouraging ongoing refinement and enhancement of leadership approaches in response to a constantly evolving evaluation landscape.

      Through interactive activities, evidence-based strategies, and practical exercises, this workshop equips participants with the skills necessary to lead both ethically and effectively, ensuring that changes within evaluations and communities are positive and impactful.

      Recommended Audience: This workshop is ideally suited for both emerging and experienced evaluators who currently hold or aspire to leadership roles within various evaluation contexts, including evaluation contracts, firms, and organizations. It is specifically designed for those committed to leading ethically and effectively and driving transformative change in their fields. A foundational understanding of the evaluation process is recommended.

      TEI Certificate: This course fulfills the following requirements:


      TEI 304: Developmental Evaluation

      Instructor: Michael Quinn Patton, PhD

      Description: Developmental Evaluation (DE) supports those involved in social change innovation by guiding adaptation to emergent and dynamic realities in complex environments. Innovations can take the form of new projects, programs, products, organizational changes, policy reforms, and system interventions. A complex system is characterized by a large number of interacting and interdependent elements in which there is no central control. Patterns of change emerge from rapid, real-time interactions that generate learning, evolution, and development – if one is paying attention and knows how to observe and capture the important and emergent patterns. Complex environments for social interventions and innovations are those in which what to do to solve problems is uncertain and key stakeholders are in conflict about how to proceed.

      The COVID-19 pandemic significantly increased the use of DE as programs around the world had to pivot and adapt to the turbulence created by efforts to control the pandemic. This led to innovations and new directions in DE as it served to guide adaptations to the challenges of the pandemic. This course includes those new applications and directions.

      The field of evaluation already has a rich variety of contrasting models, competing purposes, alternative methods, and divergent techniques that can be applied to projects and organizational innovations that vary in scope, comprehensiveness, and complexity. The challenge, then, is to match evaluation to the nature of the initiative being evaluated. This means that we need to have options beyond the traditional approaches when faced with systems change dynamics and complex change initiatives. Participants will learn the unique niche of developmental evaluation, different kinds of DE, and what perspectives such as Systems Thinking and Complex Nonlinear Dynamics can offer in applying DE.

      Learning Outcomes: Participants will know (1) the niche, nature, and purpose of developmental evaluation; and (2) the evaluation criteria for conducting a developmental evaluation.

      Recommended Text: Patton, M. (2010). Developmental evaluation: Applying complexity concepts to enhance innovation and use. Guilford Press.

      Recommended Audience: This course is suitable for new and experienced evaluators who work with innovative initiatives of all kinds at any level anywhere in the world.

      TEI Certificate: This course fulfills the following requirements:


      TEI 305: Evaluability Assessment

      Instructor: Debra J. Rog, PhD

      Description: Evaluability assessment (EA) is a key tool that evaluators have at their disposal, but too often is left unused. However, in recent years, both public and private funders have been increasingly supporting EAs as the first step in evaluating programs. An EA offers an opportunity to determine if a program is ready for an evaluation, and if not, what can be done to improve its readiness.  EA is one of the few systematic tools used for evaluation planning, helping ground the evaluation in the reality of the program, ensuring it is focused on the right questions, engaging interested parties in the process, and ensuring that the appropriate design is implemented at the right time in the program process.

      EA helps to focus evaluation on programs that are designed and implemented with plausibility to achieve their outcomes, and consequently, can provide for wiser investment and use of evaluation funding. In addition, beyond being used as a tool for assessing a program’s readiness for evaluation, EA can be used to help develop programs, select sites for multi-site evaluations, provide quick information on a program, and serve other purposes.

      Following this two-day course, students will leave with the ability to conduct an EA on their own. The course includes hands-on learning through an exercise following each step of the method. Having conducted over 100 EAs, Dr. Rog will draw upon her experiences and examples to bring the method to life as the class learns the steps of designing an EA, implementing it, and analyzing and reporting the results.

      Recommended Audience: This course is suitable for new and experienced evaluators responsible for evaluating programs and initiatives.

      TEI Certificate: This course fulfills the following requirements:


      TEI 306: Evaluating Training Programs and MEL (Monitoring, Evaluation, Learning) Initiatives

      Instructor: Ann Doucette, PhD

      Description: Many of our social programs focus on knowledge acquisition, increasing and building constructive awareness, changing attitudes, and influencing and promoting behavioral change to optimize the experience of program/intervention participants. This type of effort is collectively referred to as training – advancing meaningful competencies for a specific purpose – filling knowledge, skill, and capacity gaps to achieve favorable improvement and/or progress.

      This course examines training within the sphere of demonstrated capacities (knowledge gained, attitudinal change, behavioral intent and demonstration, and institutional/organizational benefits). What makes training work? How will such changes affect the participating individuals and their social networks? What is the impact of training/capacity building at the organizational and system levels? The evaluation of training programs, especially behavioral application of content, and of the organizational benefits from these efforts continues to be a significant evaluation challenge.

      The course is interactive and provides a practical approach for planning, implementing, conducting, or managing such evaluations. The course covers an overview of training evaluation models; pre-training assessment and training program expectations; training evaluation planning; development of key indicators, metrics, and measures; training evaluation designs; data collection – instrumentation, administration, and data quality; and reporting progress, change at the individual and institutional levels, and results. The course addresses institutional outcomes of training-related efforts – knowledge management and MEL (monitoring, evaluation, and learning) initiatives. Case examples are included throughout the course to illustrate the course content. The challenges in evaluating training, capacity building, knowledge management, and MEL efforts, and strategies for mitigating these challenges, are highlighted.

      Recommended Audience: Familiarity with evaluation is helpful, but not required, for this course.

      TEI Certificate: This course fulfills the following requirements:


      TEI 333: Evaluation Design: Alignment with Evaluation Objectives

      Instructor: Ann Doucette, PhD

      Description: Design is essentially the structure, the recipe that is used to assess program/intervention outcomes. This course focuses on design decisions and their alignment with evaluation questions; the precision and strength of outcome evidence needed from the evaluation; the resources that are available for the evaluation; as well as practical considerations in conducting the evaluation study. Design choice speaks to validity – the evaluator’s ability to draw conclusions about the cause and effect or association between the program/intervention and outcomes (internal validity), and to generalize likely outcomes to broader samples/populations (external validity). As Cook and Campbell (1979) assert, there is no single best design approach. Designs are grouped into three primary categories – experimental, quasi-experimental, and non-experimental – with a range of choices within each category. Traditionally, experimental designs have been characterized as the “gold standard,” a decidedly biased representation of them as the best. While experimental designs continue to be characterized as the gold standard, they are not automatically appropriate for all evaluations. Design choice should be informed by the evaluation questions to be addressed and the precision needed in outcome estimates (evidence), along with the practical considerations of implementing the design. Design choices, whether experimental, quasi-experimental, or non-experimental, have limitations and practical considerations in terms of their use in evaluation studies.

      The course covers the design categories noted above, highlights advantages and disadvantages of each, and identifies when best to use specific design approaches, as well as building a rationale for selecting a particular design approach. International and domestic case examples will be used throughout the course.

      Recommended Audience: The course is geared to individuals having familiarity with evaluation or applied research.

      TEI Certificate: This course fulfills the following requirements:


      TEI 307: Evaluation Research Methods: A Survey of Quantitative & Qualitative Approaches

      Instructor: Jason T. Siegel, PhD

      Description: This course will introduce a range of basic quantitative and qualitative social science research methods that apply to evaluating various programs. This foundational course introduces methods developed more fully in other TEI courses and serves as a critical course designed to ensure a basic familiarity with a range of social science research methods and concepts. Topics will include qualitative research with a special emphasis on focus groups and interviews, experimental design, quasi-experimental design, and survey research methods.

      Recommended Text: There are no recommended textbooks, but there will be optional readings available on the course website before the start of the course.

      Recommended Audience: This course is suitable for those who want to update their existing knowledge and skills, and will serve as an introduction for those new to the topic.

      TEI Certificate: This course fulfills the following requirements:


      TEI 308: How to Enhance the Learning Function of Evaluation: Principles and Strategies

      Instructors: J. Bradley Cousins, PhD and Jill A. Chouinard, PhD

      Description: Historically, organizations have conducted and used evaluation to meet internal and external accountability demands with approaches focused on impact assessment and value for money. In practice, rigid focus on accountability-oriented objectives can lead to evaluation outcomes that are at best symbolic. Yet we know from research that evaluations which contribute significantly to learning about program functioning and context tend to leverage higher degrees of evaluation use and provide more credible, actionable outcomes. They can be used to improve the effectiveness and enhance the sustainability of interventions, for example.

      This two-day course situates learning-oriented evaluations within the organizational landscape of evaluation options. The focus is on the value of the learning function of evaluation and practical strategies to enhance it. Participants can expect to:

      • Develop knowledge, skills, and strategies to plan useful learning-oriented evaluations in the context of traditional domestic and international development interventions.
      • Understand how a range of evaluation approaches privilege learning about programs, the contexts within which they operate, and evaluation itself. Examples include collaborative approaches to evaluation (CAE) and culturally responsive evaluation (CRE).
      • Grasp evaluation’s potential to leverage planned learning and program improvement through organizational evaluation policy reform and the development of evaluation capacity building (ECB) strategies.

      This course will be run with a mix of instructor input and opportunities for participants to apply what they have learned in practical activities (e.g., case analyses). Practical resources will be made available.

      Recommended Audience: The course is open to new and experienced evaluators looking to augment their working knowledge of program evaluation logic and methods.

      TEI Certificate: This course fulfills the following requirements:


      TEI 309: Informing Practice using Evaluation Models

      Instructor: Melvin Mark, PhD

      Description: Evaluators who are not aware of the contemporary and historical aspects of the profession “are doomed to repeat past mistakes and, equally debilitating, will fail to sustain and build on past successes” (Madaus, Scriven & Stufflebeam, 1983). “Evaluation theories are like military strategy and tactics; methods are like military weapons and logistics. The good commander needs to know strategy and tactics to deploy weapons properly or to organize logistics in different situations. The good evaluator needs theories for the same reasons in choosing and deploying methods” (Shadish, Cook & Leviton, 1991).

      These quotes provide a rationale for why the serious evaluator should know about models and theories of evaluation. The primary purpose of this class is to overview major streams of evaluation theories (or models), and to consider their implications for practice. Topics include: (1) why evaluation theories matter, (2) frameworks describing the overall lay of the land of evaluation theory, (3) in-depth examination of several major theories, (4) identification of key issues on which evaluation theories and models differ, (5) benefits and risks of relying heavily on any one theory, and (6) tools and skills that can help you in picking and choosing from, and combining across, different theoretical perspectives for a particular evaluation in a specific context. The overarching theme is practice implications, that is, what difference it would make to follow one theory or another.

      The theories to be discussed have had a significant impact on the evaluation field. They offer perspectives with major implications for practice and represent different, important streams of evaluation. Case examples will be used to illustrate key aspects of each theory’s approach to practice, and class exercises will ask participants to apply the theories.

      Recommended Audience: The instructor’s assumption will be that most people attending the session may have some general familiarity with the work of a few evaluation theorists, but will not themselves be scholars of evaluation theory. At the same time, the course should be useful, whatever one’s level of familiarity with evaluation theory.

      TEI Certificate: This course fulfills the following requirements:


      TEI 310: Intermediate Qualitative Data Analysis

      Instructor: Delwyn Goodrick, PhD

      Description: Data analysis involves creativity, sensitivity and rigor. In its most basic form qualitative data analysis involves some sort of labeling, coding and clustering in order to make sense of data collected from evaluation fieldwork, interviews, and/or document analysis. This intermediate level workshop builds on basic coding and categorizing familiar to most evaluators, and extends the array of strategies available to support rigorous interpretations. This workshop presents an array of approaches to support the analysis of qualitative data with an emphasis on procedures for the analysis of interview data. Strategies such as enumerative and interpretive content analysis, thematic analysis, narrative analysis, and the framework method of analysis are presented and illustrated with reference to examples from evaluation and from a range of disciplines, including sociology, education, political science and psychology.

      The core emphasis in the workshop is creating awareness of heuristics that support selection and application of appropriate analytic techniques that match the purpose of the evaluation, type of data, and practical considerations such as resource constraints. While a brief overview of qualitative analysis software is provided, the structure of the workshop focuses on analysis using manual methods.

      Qualitative data analysis and writing go hand in hand. In the second part of the workshop strategies for transforming analysis through processes of description, interpretation and judgment will be presented. These issues are particularly important in the assessment of the credibility of qualitative evidence by evaluation audiences. Issues of quality, including validity, trustworthiness and authenticity of qualitative data are integrated throughout the workshop.

      Recommended Text: Bazeley, P. (2020). Qualitative data analysis: Practical strategies (2nd ed.). Sage.

      Recommended Audience: This course is best suited for evaluators with some experience of basic coding processes who are looking to extend their toolkit of options for qualitative data analysis.

      TEI Certificate: This course fulfills the following requirements:


      TEI 334: Introduction to Chat GPT4 for Evaluation and Evaluation Capacity Building

      Instructor: Robert Klitgaard, PhD

      Description: AI tools such as ChatGPT-4 have the potential to transform evaluation. In this interactive workshop, we’ll see how ChatGPT-4 can be your tutor on theoretical and practical aspects of evaluation. We’ll also see how ChatGPT-4 can help you do a literature review and summarize technical articles; explore alternative perspectives and hypotheses; create a teaching case; assist with data analysis and presentation; edit your writing; and design training programs in evaluation for different audiences. Finally, we’ll apply ChatGPT-4 to practical tasks such as proposal writing, fundraising, and counseling us on career choices.

      Recommended Materials: Participants should have ChatGPT-4 ready to use from the beginning of the course.

      Recommended Audience: This course is suitable for new and experienced evaluators who work with innovative initiatives of all kinds at any level anywhere in the world.

      TEI Certificate: This course fulfills the following requirements:


      TEI 311: Introduction to Cost-Benefit and Cost-Effectiveness Analysis

      Instructor: Robert D. Shand, PhD

      Description: The tools and techniques of cost-benefit and cost-effectiveness analysis will be presented. The goal of the course is to provide analysts with the skills to interpret cost-benefit and cost-effectiveness analyses. Content includes identification and measurement of costs using the ingredients method; how to specify effectiveness; shadow pricing for benefits using revealed preference and contingent valuation methods; discounting; and calculation of cost-effectiveness ratios, net present value, cost-benefit ratios, and internal rates of return. Sensitivity testing and uncertainty will also be addressed. Individuals will work in groups to assess the costs, effects, and benefits applicable to selected case studies drawn from a range of policy fields (e.g., health, education, environmental sciences).
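
      To ground the terms above, here is a short, generic Python sketch with invented numbers. It is not course material and it simplifies real practice (no ingredients-method costing or sensitivity analysis), but it shows the basic arithmetic of discounting, net present value, the benefit-cost ratio, a cost-effectiveness ratio, and an internal rate of return found by simple bisection.

      # Illustrative sketch with invented numbers (not course material): the core
      # arithmetic of economic evaluation.

      def present_value(flows, rate):
          """Discount a list of yearly flows (year 0 first) back to present value."""
          return sum(f / (1 + rate) ** t for t, f in enumerate(flows))

      costs = [100_000, 20_000, 20_000]   # hypothetical program costs by year
      benefits = [0, 60_000, 90_000]      # hypothetical monetized benefits by year
      effect = 150                        # hypothetical effect units (e.g., graduates)
      rate = 0.03                         # assumed discount rate

      pv_costs = present_value(costs, rate)
      pv_benefits = present_value(benefits, rate)

      npv = pv_benefits - pv_costs
      bcr = pv_benefits / pv_costs        # benefit-cost ratio
      ce_ratio = pv_costs / effect        # cost per unit of effect

      # Internal rate of return: the discount rate at which the NPV of net flows is zero.
      net = [b - c for b, c in zip(benefits, costs)]
      lo_r, hi_r = 0.0, 1.0
      for _ in range(100):
          mid = (lo_r + hi_r) / 2
          if present_value(net, mid) > 0:
              lo_r = mid
          else:
              hi_r = mid
      irr = (lo_r + hi_r) / 2

      print(f"NPV = {npv:,.0f}, B/C ratio = {bcr:.2f}, cost per effect unit = {ce_ratio:,.0f}, IRR = {irr:.1%}")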

      Recommended Text: Levin, H. M., McEwan, P. J., Belfield, C. R., Bowden, A. B., & Shand, R. D. (2017). Economic evaluation in education: Cost-effectiveness and benefit-cost analysis (3rd ed.). SAGE.

      Recommended Audience: This course is best suited for entry-level and mid-career evaluators with some background and experience in impact evaluation looking to complement these skills with economic evaluation methods.

      TEI Certificate: This course fulfills the following requirements:


      TEI 312: Introduction to Data Analysis for Evaluators and Applied Researchers

      Instructor: P. Wesley Schultz

      Description: In this course we will introduce and review basic data analysis tools and concepts commonly used in applied research and evaluation. The focus will be on fundamental concepts that are needed to guide decisions for appropriate data analyses, interpretations, and presentations. The goal of the course is to help participants avoid errors and improve skills as data analysts, communicators of statistical findings, and consumers of data analyses.

      Topics include data screening and cleaning, selecting appropriate methods for analysis, detecting statistical pitfalls and dealing with them, avoiding silly statistical mistakes, interpreting statistical output, and presenting findings to lay and professional audiences. Examples will include applications of basic distributions and statistical tests (e.g., z, t, chi-square, correlation, regression).
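
      For readers unfamiliar with these tests, the brief sketch below (simulated data, not course material) shows what a few of them look like in practice using Python's scipy.stats; the course itself focuses on concepts and interpretation rather than any particular software.

      # Illustrative sketch with simulated data (not course material): a few of the
      # basic tests named above, using scipy.stats.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(42)
      group_a = rng.normal(50, 10, 80)    # e.g., comparison-group scores
      group_b = rng.normal(55, 10, 80)    # e.g., program-group scores

      # Independent-samples t-test: do the group means differ?
      t_stat, t_p = stats.ttest_ind(group_a, group_b)

      # Chi-square test of independence on a 2x2 table of counts.
      table = np.array([[30, 50], [45, 35]])
      chi2, chi_p, dof, _ = stats.chi2_contingency(table)

      # Pearson correlation and a simple linear regression between two variables.
      x = rng.normal(0, 1, 80)
      y = 2 * x + rng.normal(0, 1, 80)
      r, r_p = stats.pearsonr(x, y)
      slope, intercept, r_value, reg_p, se = stats.linregress(x, y)

      print(f"t = {t_stat:.2f} (p = {t_p:.3f}); chi-square = {chi2:.2f} (p = {chi_p:.3f})")
      print(f"r = {r:.2f}; regression slope = {slope:.2f} (p = {reg_p:.3f})")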

      Recommended Audience: This course is especially suited for entry-level evaluators looking to develop their expertise with the foundational logic and methods of data analysis. Mid-level professionals seeking a refresher and greater facility with data analysis will also find this course helpful.

      TEI Certificate: This course fulfills the following requirements:


      TEI 314: Introduction to Data Visualization

      Instructor: Alice Feng

      Description: In today’s increasingly data-driven world, the ability to clearly communicate the insights in one’s data is more important than ever. Data visualizations can help make data and analyses more easily understood, accessible, and impactful to broader audiences.

      In this introductory course, participants will learn the fundamentals of creating effective data visualizations, including how to identify interesting stories in their data, how to choose appropriate chart forms to convey that story, and how to finesse the design of their charts to maximize the impact of the message being conveyed. This course will be interactive and hands-on, with opportunities to practice creating charts using DataWrapper or a tool of the participant’s choosing. Ultimately, participants will create a visualization using their own data that applies the concepts covered in this course.

      Recommended Audience: This course is designed for evaluators who have some experience developing graphs, visual aids, and reports for evaluation work, but no formal knowledge of data visualization concepts. Familiarity with data analysis is recommended but not required.

      TEI Certificate: This course fulfills the following requirements:


      TEI 332: Introduction to Machine Learning for Evaluators

      Instructor: Peter York

      Description: There is a growing demand from public and private policymakers and funders to apply big data science and machine learning for evaluation. The demand is growing due to public awareness of how the private sector uses machine learning algorithms to create on-demand tools that cost-effectively augment human planning, assessment, prediction, and decision-making. In fact, government agencies like the National Science Foundation and the U.S. Department of Health and Human Services are currently using big data science and machine learning to evaluate their impact. When applied correctly, machine learning algorithms can significantly reduce the cost and time of conducting evaluations, including producing on-demand quasi-experimental actionable evidence on an ongoing basis.

      In this introductory course, participants will learn the fundamentals of integrating the theory, methods, and machine learning algorithms of big data science into their evaluation approach. This will include an introduction to Bayesian theory, machine learning algorithms, predictive and prescriptive analytics, causal modeling, and addressing selection and algorithmic bias. The course will guide participants through an interactive step-by-step process of building evaluation models using primary and secondary datasets. This course will introduce machine learning algorithms for structured (quantitative, ordinal, and categorical) and unstructured (qualitative text) data modeling, including how to train machine learning algorithms to support conducting a mixed methods evaluation. For text analytics, participants will learn about natural language processing (NLP) algorithms that are used to improve the breadth and depth of qualitative analyses while significantly reducing the time it takes. The course will use an open-source, no-cost, no-code (knowledge of R or Python is not required) visual-based analytics platform – KNIME – and will introduce participants to its suite of analytic tools and machine learning algorithms.
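
      As a point of reference, the toy Python sketch below illustrates the kind of text-classification step such an NLP workflow automates: learning themes from a handful of hand-labeled case notes and then tagging new, unlabeled text. It is illustrative only, with invented data; in the course itself this is done through KNIME’s visual nodes rather than code.

      # Illustrative sketch only (not KNIME and not course material): training a simple
      # text classifier to tag open-ended responses with a theme.
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.linear_model import LogisticRegression
      from sklearn.pipeline import make_pipeline

      # Hypothetical hand-labeled case notes used to train the model.
      notes = [
          "Client found stable housing after the referral",
          "Missed several appointments due to transportation issues",
          "Completed the job-readiness workshop and started interviews",
          "Struggled to attend sessions because of childcare barriers",
      ]
      labels = ["housing", "access_barrier", "employment", "access_barrier"]

      model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
      model.fit(notes, labels)

      # Apply the trained model to new, unlabeled text.
      new_notes = ["Could not reach the office because the bus route was cut"]
      print(model.predict(new_notes))   # prints the predicted theme label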

      Recommended Audience: This course is best suited for mid to late-career evaluators with experience conducting quantitative and mixed methods evaluations, especially preparing and analyzing primary and secondary datasets using analytic software packages like SPSS, SAS, and Stata.

      TEI Certificate: This course fulfills the following requirements:


      TEI 313: Introduction to R Programming for Data Analysis and Visualization

      Instructor: David Wilson, PhD

      Description: This course will introduce you to the R programming language for data analysis and data visualization. The course will introduce you to importing data into R, basic data manipulations and clean-up, common graphing methods, and basic statistical analyses such as t-tests, chi-square, ANOVA, and regression, as well as standard descriptive statistics. The course will use the RStudio interface for R and will introduce you to using RMarkdown for enhancing analysis replicability and documentation. The course will focus on the programming language and assumes you are already familiar with basic statistical methods.

      Note: Attendees should bring their own laptops loaded with R and RStudio to class each day.

      Recommended Audience: This course is best suited to program evaluators with at least some prior data analysis experience using software other than R, such as SPSS.

      TEI Certificate: This course fulfills the following requirements:


      TEI 339: Longitudinal Evaluation Design: Building and Maintaining Participant Commitment

      Instructor: Anna Woodcock, PhD

      Description: Are you ready to take your longitudinal evaluation projects to the next level? This immersive 2-day course is designed for professional evaluators who want to harness the power of longitudinal research while overcoming its most persistent challenge: participant attrition.

      Over the course of this class, you’ll learn how to implement the Tailored Panel Management (TPM) approach, a proven method inspired by psychological research, to maximize participant retention and ensure the integrity of your findings. Through real-world examples and case studies from over a decade of research, you’ll discover actionable strategies for recruitment, retention, and engagement that will transform the way you approach longitudinal studies.

      What You’ll Gain:

      • Expert insights on the critical role of participant commitment in longitudinal evaluations.
      • Practical tools and techniques for reducing attrition and maintaining data reliability.
      • A deep dive into the TPM approach, focusing on the “4 C’s” – Compensation, Communication, Consistency, and Credibility – to foster long-term participant engagement.

      Recommended Text: Estrada, M., Woodcock, A., & Schultz, P. W. (2014). Tailored panel management: A theory-based approach to building and maintaining participant commitment to a longitudinal study. Evaluation Review, 38, 3-28. doi: 10.1177/0193841X14524956

      Recommended Audience: This workshop is ideal for new and seasoned evaluators looking to design and manage longitudinal evaluations in their professional practice. Whether you’re just starting out or have years of experience, this course will equip you with the skills to achieve more reliable, generalizable, and impactful results.

      TEI Certificate: This course fulfills the following requirements:

       


      TEI 315: Managing for Success: Planning, Implementation, and Reporting

      Instructor: Tiffany Berry, Ph.D.

      Description: Program evaluations are often complex, challenging, multi-faceted endeavors that require evaluators to juggle stakeholder interests, funder requirements, data collection logistics, and their internal teams. Fortunately, many of these challenges can be minimized with effective evaluation management. In this interactive workshop, we provide tools, resources, and strategies that intentionally build evaluators’ project management toolkit so that evaluators can manage their evaluations successfully.

      During Day 1, using case studies, mini-lectures, and group discussions we explore traditional evaluation management practices focusing on the processes and logistics of how to manage an evaluation team and the entire evaluation process from project initiation and contracting through final reporting.

      During Day 2, we continue to build participants’ evaluation management toolkit by introducing four essential, experience-tested strategies that will elevate all participants’ project management game. That is, effective evaluation management is more than a series of steps or procedures to follow; it requires a deep understanding of (1) the competencies you and your team bring to the evaluation; (2) the extent to which you are responsive to program context; (3) how you collaborate with stakeholders throughout the evaluation process; and (4) how you use strategic reporting. Throughout our discussion, we’ll also encourage participants to think critically about how each strategy facilitates evaluation management and/or prevents mismanagement. Across both days, there will be ample opportunities to share your own perspective, ask relevant questions, and apply content covered to your own work.

      Recommended Audience: This course is best suited for novice and mid-level professionals seeking to strategically build project management skills in the evaluation context.

      TEI Certificate: This course fulfills the following requirements:


      TEI 316: Mixed-Methods Evaluations: Integrating Qualitative and Quantitative Approaches

      Instructor: Debra J. Rog, PhD

      Description: Evaluators are frequently in evaluation situations in which they are collecting data through multiple methods, often both qualitative and quantitative. Too often, however, these study components are conducted and reported independently, and do not maximize the explanation building that can occur through their integration.

      The purpose of this course is to sensitize evaluators to the opportunities in their work for designing and implementing mixed methods, and to be more intentional in the ways that they design and implement their studies to incorporate both qualitative and quantitative approaches. The course will begin with an overview of the issues involved with mixed-methods research, highlighting the accolades and the criticisms of integrating approaches. The course will then focus on the research questions and evaluation situations that are conducive for mixed-methods, and the variety of designs that are possible (e.g., parallel mixed methods that occur at the same time and are integrated in their inference; sequential designs in which one method follows another chronologically, either confirming or disconfirming the findings, or providing further explanation). A key focus of the course will be on strategies for implementing mixed-methods designs, as well as analyzing and reporting data, using examples from the instructor’s work and those offered by course participants. The course will be highly interactive, with ample time for participants to discuss how the course can be applied to their own work. Participants will work in small groups on an example that will carry through the three days of the course.

      Participants will be sent materials prior to the course as a foundation for the method.

      Recommended Audience: The course is best suited for evaluators who have some prior experience in conducting evaluations, but have not had formal training in designing, conducting, and analyzing mixed methods studies.

      TEI Certificate: This course fulfills the following requirements:


      TEI 317: Monitoring and Evaluation: Frameworks and Fundamentals

      Instructor: Ann Doucette, PhD

      Description: The overall goal of Monitoring and Evaluation (M&E) is the assessment of program progress to optimize outcome and impact – program results. While M&E components overlap, there are distinct characteristics of each. Monitoring activities systematically observe (formally and informally) assumed indicators of favorable results, while evaluation activities build on monitoring indicator data to assess intervention/program effectiveness, the adequacy of program impact pathways, the likelihood of program sustainability, the presence of program strengths and weaknesses, the value, merit, and worth of the initiative, and the like. The increased emphasis on effectively managing toward favorable results demands a more comprehensive M&E approach in order to identify whether programs are favorably on track, or whether improved program strategies and mid-course corrections are needed.

      This interactive two-day course focuses on practical application and will cover: the purpose and scope of M&E; engaging stakeholders and establishing an evaluative climate; connecting program design and M&E frameworks; performance and results-based M&E approaches; data collection and methods; measuring program progress and success; and sustaining an M&E culture.

      Course participants will gain a comprehensive understanding of M&E frameworks and fundamentals, M&E tools, and practice approaches. Case examples will be used to illustrate the M&E process. The course is purposefully geared for evaluators working in developing and developed countries; national and international agencies, organizations, and NGOs; and national, state, provincial, and county governments.

      Recommended Audience: Familiarity with evaluation is helpful, but not required, for this course.

      TEI Certificate: This course fulfills the following requirements:


      TEI 318: Outcome and Impact Evaluation

      Instructor: Melvin M. Mark, Ph.D.

      Description: Valid assessment of the outcomes or impact of a social program is among the most challenging evaluation tasks, but also one of the most important. Multiple approaches exist for tracking or detecting a program’s outcomes, and multiple methods and designs exist for estimating a program’s impact. This course will provide an overview of alternative approaches that may be more appropriate under different conditions. These include monitoring approaches based on a small-t theory of the program’s chain of outcomes, as well as approaches to use when the complexity of the situation precludes placing one’s confidence in such a theory of the program. Considerable attention will be given to the experimental and quasi-experimental methods that are the foundation for much of contemporary impact evaluation. Related topics, including issues in the measurement of outcomes, ensuring detection of meaningful program effects, and interpreting the magnitude of effects, will be covered, some briefly. Emphasis will be primarily conceptual, focusing on the logic of outcome and impact evaluation, the appropriateness of different approaches under different circumstances, and the conceptual and methodological nature of the approaches. Nonetheless, we’ll cover key statistical analysis methods for impact evaluation.

      Recommended Audience: This course is best suited for mid-career evaluators. Some familiarity with program evaluation, research methods, and statistical analysis is necessary to effectively engage in the various topics that are covered.

      TEI Certificate: This course fulfills the following requirements:


      TEI 330: Policy Analysis, Implementation, and Evaluation

      Instructor: Doreen Cavanaugh, PhD

      Description: Policy drives the decisions and actions that shape our world and affect the wellbeing of individuals around the globe. It forms the foundation of every intervention, and yet its underlying assumptions and values are often not thoroughly examined in many evaluations. In this course, students will explore the policy development process, study the theoretical basis of policy, and examine the logical sequence by which a policy intervention is intended to bring about change through program implementation.

      Participants will explore a range of policy evaluation methods to systematically investigate the effectiveness of policy interventions, implementation, and processes, and to determine their merit, worth, or value in terms of improving the social and economic conditions of stakeholders. The course will differentiate evaluation from monitoring and address several barriers to effective policy evaluation, including goal specification and goal change, measurement, targets, efficiency and effectiveness, values, politics, and conflicting expectations. The course will present models from a range of policy domains. At the beginning of the two-day course, participants will select a policy area from their own work to use as an example throughout the class. Participants will develop the components of a policy analysis and design a policy evaluation.

      Recommended Audience: This course is best suited to professionals interested in evaluating policies and programs supported, in full or in part, by international, national, state, or local public funding; by international, national, or local private resources; or by combinations of support from multiple public and private sources.

      TEI Certificate: This course fulfills the following requirements:


      TEI 319: Policy Design and Evaluation Across Cultures

      Instructor: Robert Klitgaard, PhD

      Description: Policy design and evaluation share the task of assessing what treatments (policies, programs) work well, for whom, and in what settings. Two big challenges emerge: inference and extrapolation. Inference refers to estimating how treatments affect outcomes, other things remaining equal. Extrapolation refers to transporting those estimates to other cultures. This course reviews both challenges and suggests practical ways forward. The highly interactive pedagogy presents analytical material and case studies from around the world. Of particular interest is a field-tested method for combining generic international expertise with local knowledge, with the goal of creative, evidence-based policy design. Participants will have the chance to apply the ideas to an issue important to them.

      Recommended Audience: People who use the results of evaluations in the design of effective and equitable public policies, especially when the policies apply to diverse cultural settings.

      TEI Certificate: This course fulfills the following requirements:


      TEI 320: Principles-Focused Evaluation

      Instructor: Michael Quinn Patton, PhD

      Description: Principles-driven leaders engage in principles-based initiatives that call for principles-focused evaluation, an approach that makes principles the focus of the evaluation. Three questions guide the evaluation: (1) To what extent and in what ways are the principles meaningful to those meant to be guided by them? (2) If meaningful, to what extent and in what ways are the principles adhered to? (3) If adhered to, to what extent and in what ways do the principles guide results? The course will present and explain the GUIDE approach to developing and evaluating principles. GUIDE calls for principles to be directive, useful, inspiring, adaptable to contexts, and evaluable. Examples of principles-focused initiatives and corresponding principles-focused evaluations will be shared. This innovative approach to evaluation is on the leading edge of the field and is attracting attention around the world as a way of engaging with change and transformation in complex dynamic systems.

      Learning Outcomes: Participants will know (1) the niche, nature, and purpose of principles-focused evaluation; (2) the evaluation criteria for conducting a principles-focused evaluation; and (3) the GUIDE framework for principles-focused evaluation.

      Recommended Audience: Familiarity with evaluation is helpful, but not required, for this course.

      TEI Certificate: This course fulfills the following requirements:


      TEI 321: Qualitative Methods

      Instructor: Michael Quinn Patton, PhD

      Description: Qualitative inquiries use in-depth interviews, focus groups, observational methods, document analysis, and case studies to provide rich descriptions of people, programs, and community processes. To be credible and useful, the unique sampling, design, and analysis approaches of qualitative methods must be understood and used. Qualitative data can be used for various purposes, including evaluating individualized outcomes, capturing program processes, exploring a new area of interest (e.g., identifying the unknown variables one might want to measure in greater depth/breadth), identifying unanticipated consequences and side effects, supporting participatory evaluations, assessing quality, and humanizing evaluations by portraying the people and stories behind the numbers. This class will cover the basics of qualitative evaluation, including design, case selection (purposeful sampling), data collection techniques, and beginning analysis. Ways of increasing the rigor and credibility of qualitative evaluations will be examined. Mixed-methods approaches will be included. Alternative qualitative strategies and new, innovative directions will complete the course. The strengths and weaknesses of various qualitative methods will be identified. Exercises will provide experience in applying qualitative methods and analysis in evaluations.

      Recommended Text: Patton, M. (2015). Qualitative research and evaluation methods (4th ed.). Sage.

      Recommended Audience: This course is best suited for entry-level evaluators looking to develop their knowledge of qualitative evaluation methods. Mid-level professionals seeking a refresher on the basics of qualitative evaluation will also find this course helpful.

      TEI Certificate: This course fulfills the following requirements:


      TEI 328: Quantitative Methods

      Instructor: Emily E. Tanner-Smith, PhD

      Description: The goal of this course is to provide an introduction to basic quantitative social science research methods that are applicable to the evaluation of programs. This foundational course introduces basic quantitative methods that are developed more fully in other TEI courses and is designed to ensure basic familiarity with a range of social science research methods and concepts.

      Topics covered in the course will include experimental and quasi-experimental designs, observational and correlational designs, validity, sampling methods, measurement considerations, and survey and interview techniques. This course is for those who want to update their existing knowledge about these quantitative designs and methods, but can also serve as an introduction for those new to program evaluation.

      Recommended Audience: This course is best suited for entry-level evaluators looking to develop their knowledge of quantitative evaluation designs and methods. Mid-level professionals seeking a refresher on the basics of quantitative evaluation designs will also find this course helpful.

      TEI Certificate: This course fulfills the following requirements:


      TEI 335: Sampling

      Instructor: Ann Doucette, PhD

      Description: Sampling – Who is included in the evaluation study? Who is not? How do we select individuals for inclusion? How might the sampling choices we make affect our evaluations?

      It is seldom possible to include the entire population of interest in our evaluations. The feasibility of doing so is limited by evaluation resources (budget, staffing, time), practical considerations (outreach to all individuals who might benefit from the program), and the fact that participation in evaluation studies is voluntary – while the program may be of interest, not everyone wants to answer our questions or fill out surveys. Instead, we typically focus our evaluation efforts on a subset of the population – a sample.

      The content focus of this course includes: a) defining the sampling frame – linkage to the evaluation questions to be addressed; b) sampling methods – probability (random, systematic, cluster, stratified) and non-probability (convenience, quota, snowball, judgmental) samples, and multi-phase/stage sampling; c) sample size and sampling error; d) response rates; e) interpreting evaluation results using samples – how sampling strategies affect evaluation validity; and f) sampling issues and considerations. Case examples will be provided.

      The course focuses on practical application of sampling strategies, and the effects of sound versus inappropriate sampling approaches on conclusions drawn from evaluation results. The course is intentionally interactive, with opportunity to work in small groups on case exercises.

      Recommended Audience: This course would be of interest and benefit to anyone conducting or mandating evaluations, as well as to anyone who might wonder why political and opinion polls sometimes go awry.

      TEI Certificate: This course fulfills the following requirements:


      TEI 322: Strategic Planning with Evaluation in Mind

      Instructor: John Bryson, PhD

      Description: Strategic planning is becoming a common practice for governments, nonprofit organizations, businesses, and collaborations. The severe stresses facing these entities make strategic planning more important and necessary than ever. For strategic planning to be truly effective, it should include systematic learning informed by evaluation. When that happens, the chances of mission fulfillment and long-term organizational survival are also enhanced. In other words, thinking, acting, and learning strategically and evaluatively are necessary complements.

      This course examines the theory and practice of strategic planning and management with an emphasis on practical approaches to identifying and effectively addressing organizational challenges – and doing so in a way that makes systematic learning and evaluation possible. The approach engages evaluators much earlier in the process of organizational and programmatic design and change than is usual.

      The following topics are covered:

      • Understanding why strategic planning has become so important
      • Understanding what strategic planning is – and is not
      • Gaining knowledge of the range of different strategic planning approaches
      • Understanding the Strategy Change Cycle
      • Gaining experience with key strategic planning tools and techniques, including stakeholder analysis, SWOT analyses, and causal mapping for purposes of understanding issues, developing strategies, and conducting evaluations
      • Knowing how to appropriately design formative, summative, and developmental evaluations of strategic planning processes, missions, strategies, and organizational performance

      Recommended Audience: The course is suitable for anyone wanting to know more about strategic planning theory and practice, including leaders, managers, board members, policymakers, and, of course, evaluators. Evaluation topics will include approaches to evaluating strategic planning processes for organizations and coalitions, missions, strategies, strategic plans, and performance.

      TEI Certificate: This course fulfills the following requirements:


      TEI 323: Systems-based Culturally Responsive Evaluation (SysCRE)

      Instructor: Wanda Casillas

      Description: Culturally Responsive Evaluation (CRE) is often described as a way of thinking, a stance taken, or an emerging approach to evaluation that centers culture and context in all steps of an evaluation process. As an evaluation approach, CRE is often used in service of promoting equitable outcomes across many sectors, such as education, health, and social services. However, large-scale social problems require evaluation and applied research strategies that can further our thinking about complex issues and equip us to engage with the complex and layered contextual factors that impact equity.

      CRE is an essential tool in a practitioner’s toolkit when evaluating large-scale systems change efforts that emphasize equity, and CRE married with relevant and overlapping systems principles leads to a robust evaluation and applied research practice. In this course, we will engage with a core set of CRE and systems principles to anchor evaluation practice in an approach that identifies and addresses the important cultural and contextual systems in which evaluations and their stakeholders are embedded.

      The first day of the workshop will focus on establishing a foundation of important historical underpinnings, concepts, and tenets of CRE and systems approaches and engage with exemplars of SysCRE practice to operationalize these concepts. On Days 2 and 3 of the workshop, we will simulate a step-wise SysCRE design using a case study and other interactive exercises to inform personal and professional practices and support group learning.

      Recommended Audience: This course is best suited for early- to mid-level evaluators who have familiarity with evaluation designs and theoretical approaches.

      TEI Certificate: This course fulfills the following requirements:


      TEI 324: Using Logic Models, Program Theory, Research and ChatGPT to Design and Evaluate Programs

      Instructor: Stewart I. Donaldson, PhD

      Description: It is now commonplace to use logic models, program theories, prior research, and ChatGPT in evaluation practice. These tools are often used to help design effective programs, and at other times as a means to explain how a program is understood to contribute to its intended or observed outcomes. However, this does not mean that they are always used appropriately or to best effect. At their best, logic models, program theories, prior research, and ChatGPT can help provide an evidence base to guide action, bring conceptual clarity, motivate staff, and focus designs and evaluations. At their worst, they can divert time and attention from other critical evaluation activities, provide an invalid or misleading picture of the program, and discourage critical investigation of causal pathways and unintended outcomes. This course will focus on developing useful evidence-based logic models and program theories and using them effectively to guide evaluation while avoiding some of the most common traps. Application exercises are used throughout the course to demonstrate concepts and techniques: (a) ways to use logic models, program theories, theory and research, and ChatGPT to positive advantage; (b) how to formulate and prioritize key evaluation questions; (c) how to gather credible and actionable evidence; (d) how to understand and communicate the ways these tools are used with negative results; and (e) strategies to avoid traps.

      Recommended Text: Donaldson, S. I. (2022). Introduction to theory-driven program evaluation: Culturally responsive and strengths-focused applications. Routledge.

      Students may also be interested in Donaldson, S. I., Christie, C. A., & Mark, M. M. (Eds.). (2015). Credible and actionable evidence: The foundation for rigorous and influential evaluations. Sage.

      Recommended Audience: Audiences for this course include those who have familiarity and some experience in evaluation practice, and who want to explore using interest holder and research-informed program theories and logic models to guide the design and evaluation of programs.

      TEI Certificate: This course fulfills the following requirements:


      TEI 325: Utilization-Focused Evaluation

      Instructor: Michael Quinn Patton, PhD

      Description: Utilization-Focused Evaluation begins with the premise that evaluations should be judged by their utility and actual use; therefore, evaluators should facilitate the evaluation process and design any evaluation with careful consideration of how everything that is done, from beginning to end, will affect use. Use concerns how real people in the real world apply evaluation findings and experience the evaluation process. Therefore, the focus in utilization-focused evaluation is on the intended use by intended users.

      Utilization-focused evaluation is a process for helping primary intended users select the most appropriate content, model, methods, theory, and uses for their particular situation. Situational responsiveness guides the interactive process between the evaluator and primary intended users. Psychology of use undergirds and informs utilization-focused evaluation: intended users are more likely to use evaluations if they understand and feel ownership of the evaluation process and findings; they are more likely to understand and feel ownership if they’ve been actively involved; by actively involving primary intended users, the evaluator is training users in use, preparing the groundwork for use, and reinforcing the intended utility of the evaluation every step along the way.

      Participants will learn:

      • Key factors in doing useful evaluations, common barriers to use, and how to overcome those barriers.
      • Implications of focusing an evaluation on the intended use by intended users.
      • Options for evaluation design and methods based on situational responsiveness, adaptability, and creativity.
      • Ways of building evaluation into the programming process to increase use.

      Recommended Text: Patton, M. Q., & Campbell-Patton, C. E. (2022). Utilization-focused evaluation (5th ed.). Sage.

      Recommended Audience: This course is suitable for new and experienced evaluators who work closely with the primary intended users of their evaluations.

      TEI Certificate: This course fulfills the following requirements:


      TEI 326: Utilizing Culturally Responsive and Racially Equitable Evaluation

      Instructors: Tracy Hilliard, PhD, Kantahyanee Murray, PhD, LaShaune Johnson, PhD, & Ashley Barnes

      Description: The field of evaluation is being challenged to utilize a process that considers both who is being evaluated and who is conducting the evaluation. MPHI has worked to develop useful frameworks, tools, and approaches that evaluators can use to consider the ways that race and culture might influence an evaluation process; this work has resulted in a framework for conducting evaluation using a culturally responsive and racial equity lens.

      This workshop focuses on the practical use of a racial equity lens when conducting evaluation. The framework holds that culture and race are important considerations when conducting an evaluation because critical and substantive nuances are often missed, ignored, and/or misinterpreted when an evaluator is not aware of the culture of those being evaluated. Participants will be provided with a Template for Analyzing Programs through a Culturally Responsive and Racial Equity Lens, designed to focus deliberately on an evaluation process that takes race, culture, equity, and community context into consideration.

      Presenters will also share a “How-to Process” focused on the cultural competencies of individuals conducting evaluations, how such competencies might be improved, and strategies for doing so. This “How-to Process” grew out of work on developing a self-assessment instrument for evaluators, is based primarily on the cultural-proficiencies literature, and relates specifically to components of the template. Participants will have the opportunity to engage in small-group exercises to apply the concepts contained in the template to real-world evaluation processes. Based on these experiences, participants will gain practical knowledge of the use of the lens.

      Recommended Audience: This course is designed for evaluators at any level who are interested in furthering their understanding of culturally responsive, racially equitable evaluation and its practical applications.

      TEI Certificate: This course fulfills the following requirements:


      TEI 327: Working with Evaluation Stakeholders

      Instructor: John Bryson, PhD

      Description: The purpose of this course is to help participants understand and use stakeholder identification, analysis, and influence techniques to produce a credible evaluation that enhances primary intended use by primary intended users.  We will explore the analytic, managerial, political, and ethical challenges of taking stakeholders seriously in evaluations of programs, projects, and other evaluands. The focus will always be on how to address stakeholder interests and concerns in such a way that credible evaluations are created that increase use by primary intended users.

      Specifically, the course objective is to help participants understand how to design an evaluation process that: makes prominent use of stakeholder identification, analysis, and influence techniques; involves appropriate stakeholders in appropriate ways; and produces a credible, useful evaluation for primary intended users.

      The course is designed to achieve the objective by:

      • providing a systematic approach to thinking about stakeholders, including how to think about representation issues and how to identify and prioritize stakeholders
      • helping participants gain skill in the use of specific stakeholder analysis techniques
      • providing tools for responding to stakeholder expectations and addressing stakeholder needs
      • offering advice on how to improve evaluation process management through use of stakeholder identification, analysis, and influence techniques

      Recommended Audience: Audiences for this course include those who wish to explore ways to more effectively understand and engage stakeholders and to improve the design and implementation of evaluations.

      TEI Certificate: This course fulfills the following requirements:


      TEI 401: Sponsored Evaluation Development

      Customized courses are offered at the request of a sponsoring company or organization to support its evaluation capacity development needs and to develop its staff’s evaluation skill sets; enrollment in a customized course is limited to identified individuals from the sponsoring organization. The content and learning outcomes for a customized course are developed in collaboration between TEI faculty and the sponsoring organization. As part of the course, participants engage in case studies and other interactive exercises specific to the sponsoring organization’s context and needs. Topics frequently covered in customized courses include mixed methods research, applied measurement for evaluation, and using research, program theory, and logic models to design and evaluate programs.