Courses Offered

    • Course Descriptions

      TEI 300: Applied Measurement for Evaluation

      Instructor: Ann Doucette, PhD

      Description:

      Successful evaluation depends on our ability to generate evidence attesting to the feasibility, relevance, and/or effectiveness of the interventions, services, or products we study. While theory guides our designs and how we organize our work, it is measurement that provides the evidence we use in making judgments about the quality of what we evaluate. Measurement, whether it results from self-report surveys, interviews/focus groups, observation, document review, or administrative data, must be systematic, replicable, interpretable, reliable, and valid. While hard sciences such as physics and engineering have advanced precise and accurate measurement (e.g., weight, length, mass, volume), the measurement used in evaluation studies is often imprecise and characterized by considerable error.

      The quality of the inferences made in evaluation studies is directly related to the quality of the measurement on which we base our judgments. Judgments that an intervention is ineffective may be flawed, reflecting measures that are imprecise and insensitive to the characteristics we choose to evaluate. Evaluation attempts to compensate for imprecise measurement with increasingly sophisticated statistical procedures to manipulate data. This emphasis on statistical analysis all too often obscures the important characteristics of the measures we choose. This course will cover:

      • Assessing measurement precision: Examining the precision of measures in relation to the degree of accuracy needed for what is being evaluated.
      • Quantification: Do response options/coding categories segment the respondent sample in meaningful and useful ways?
      • Issues and considerations for using existing measures versus developing your own measures
      • Criteria for choosing measures
      • Balancing measurement precision and error

      Recommended Audience: This course would be of interest and benefit to anyone using quantitative (e.g., surveys) or qualitative (e.g., interviews, focus groups) measurement in their evaluations.


      TEI 331: Applying an Equity Lens to Visualizing and Communicating Data

      Instructor: Alice Feng

      Description:

      Data visualization can be a powerful means of communicating the insights found in data and analyses. However, it is important not to stop at creating technically correct charts and graphs: data visualizations must also be designed with an equity lens so that they do not perpetuate biases, stereotypes, or other kinds of harm, and so that they are accessible to all audiences.

      The first half of this intermediate-level class will cover considerations surrounding the use of language, color, ordering, icons, and more when applying an equity lens to the way data is visualized, along with strategies for incorporating empathy into how we work with and communicate data. The second half will focus on issues of accessibility, including font selection, color contrast, plain language, and alt text.

      Recommended Audience: This course is designed for evaluators who have mastered the basics of data visualization, including an understanding of data encodings, pre-attentive attributes, data types, chart types, and how to use different chart types appropriately. Students should also have experience making charts and/or maps using a tool of their choosing.
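
      A taste of the accessibility material covered in the second half: the sketch below implements the WCAG 2.x contrast-ratio check that underlies the color-contrast topic. It is a minimal Python illustration with a hypothetical color pair, not course material.

        def relative_luminance(hex_color):
            """WCAG 2.x relative luminance of an sRGB color such as '#767676'."""
            linear = []
            for i in (1, 3, 5):
                c = int(hex_color[i:i + 2], 16) / 255
                # Linearize each sRGB channel before weighting.
                linear.append(c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4)
            r, g, b = linear
            return 0.2126 * r + 0.7152 * g + 0.0722 * b

        def contrast_ratio(fg, bg):
            """Contrast ratio from 1:1 to 21:1; WCAG AA asks for at least 4.5:1 on normal text."""
            hi, lo = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
            return (hi + 0.05) / (lo + 0.05)

        # Hypothetical gray chart label on a white background: borderline AA.
        print(f"{contrast_ratio('#767676', '#ffffff'):.2f}")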


      TEI 337: Applying Appreciative Inquiry and Positive Psychology to Improve Your Evaluation Practice

      Instructor: Stewart I. Donaldson, PhD

      Description: Early work embedding Appreciative Inquiry in evaluation (Appreciative Evaluation) is known for its intentionality in crafting compelling questions about successful experiences, inviting affirming multi-stakeholder engagement, generating insightful stories about lived experience, and grounding the evaluation in a compelling vision of the future. Twenty years later, research in positive psychology has shown that the contribution of Appreciative Evaluation and other positive psychology tools centers on the attention they offer to the intrapersonal experience and user experience (UX) of those engaged in evaluation: evaluation managers, evaluators, participants, commissioners, and other interested parties. When individuals engaged in evaluation have a productive and constructive experience, they are more likely to use and embrace evaluation as an opportunity for reflection, learning, improvement, and transformation.

      Regardless of the evaluation design and methods selected, Appreciative Evaluation is an excellent addition to your evaluation toolkit that will help you enhance effectiveness, cultural competence, ethics, and equity in evaluation practice. A growing body of research in positive psychology helps us understand the impact of embedding appreciative evaluation into any design and method. In this course you will learn about specific activities and tools that you can apply to improve your evaluation and applied research practice. These activities and tools can be used across the various steps and stages of an evaluation, including to:

      • Engage stakeholders
      • Develop theories of change, program theories, and logic models
      • Formulate key evaluation questions
      • Prepare the data collection and analysis plan
      • Communicate findings
      • Promote evaluation use and influence
      • Build evaluation capacity

      Recommended Audience: Audiences for this course include those who have familiarity with evaluation and would like to learn more about ways of applying appreciative inquiry and positive psychology in their work and life.


      TEI 336: Artificial Intelligence for Equity and Justice in Evaluation: Bridging Technology and Practice 

      Instructor: Jennifer P. Villalobos

      Description: Navigating the intersection of artificial intelligence (AI) and evaluation practice presents unique challenges and opportunities, especially when focusing on equity and social justice. This course aims to demystify AI by demonstrating that emerging technological tools can be strategically leveraged to enhance the assessment of programs aimed at social betterment, ensuring that evaluations are methodologically sound and ethically aligned with principles of equity and justice.

      Participants will be introduced to various AI methodologies and their applications to assist in everything from literature searches and evaluation design to data analyses and reporting. The course will critically examine the use of AI in evaluation settings, emphasizing the importance of culturally responsive and equity-focused approaches. We will explore both the potential and the pitfalls of overreliance on AI and address the complexity of its use in dynamic and diverse contexts.

      Related topics such as social justice practice standards, culturally responsive practices, data integrity, algorithmic bias, and the interpretation of AI-generated data will be discussed.

      Recommended Materials: Most of the tools used in this course are free to download and use, and will be shared with you during the course by your instructor. Participants are encouraged to have ChatGPT-4 set up and ready to use before the start of the course.

      Recommended Audience: This course is ideal for evaluators at all stages of their careers, from emerging to seasoned professionals, who are interested in harnessing AI to foster social justice and equity through their evaluation work. Familiarity with basic evaluation principles and research methods and a willingness to engage with AI’s technical aspects will enhance the learning experience.


      TEI 301: Basics of Program Evaluation: Strengths-Informed and Cross-Cultural Applications

      Instructor: Stewart I. Donaldson, PhD

      Description: With an emphasis on constructing a sound foundational knowledge base guided by the American Evaluation Association (AEA) evaluator competencies and public statement on cultural competence in evaluation, this course is designed to provide an overview of both past and contemporary perspectives on evaluation theory, method, and practice. Course topics include, but are not limited to, basic evaluation concepts and definitions; the view of evaluation as transdisciplinary; the logic of evaluation; an overview of the history of the field; distinctions between evaluation and basic and applied social science research; evaluation-specific methods; reasons and motives for conducting evaluation; central types and purposes of evaluation; objectivity, bias, design sensitivity, and validity; the function of program theory and logic models in evaluation; evaluator roles; core competencies required for conducting high quality, professional evaluation; audiences and users of evaluation; alternative evaluation models and approaches; the political nature of evaluation and its implications for practice; professional standards and codes of conduct; strengths-informed and cross-cultural applications; and emerging and enduring issues in evaluation theory, method, and practice.

      Although the major focus of the course is program evaluation in multiple settings (e.g., public health, education, human and social services, and international development), examples from personnel evaluation, product evaluation, organizational evaluation, and systems evaluation also will be used to illustrate foundational concepts. The course will conclude with how to plan, design, and conduct ethical and high-quality program evaluations using a contingency-based and contextually/culturally responsive approach, including evaluation purposes, resources (e.g., time, budget, expertise), uses and users, competing demands, and other relevant contingencies. Throughout the course, active learning is emphasized and, therefore, the instructional format consists of mini-presentations, breakout room discussions, and application exercises.

      Recommended Text: Donaldson, S. I. (2022). Introduction to theory-driven program evaluation: Culturally responsive and strengths-focused applications. New York: Routledge.

      Recommended Audience: Audiences for this course include those who have familiarity with social science research but are unfamiliar with program evaluation, and evaluators who wish to review current theories, methods, and practices.


      TEI 329: Blue Marble Evaluation

      Instructor: Michael Quinn Patton, PhD

      Description: Blue Marble refers to the iconic image of the Earth from space without borders or boundaries, a whole Earth perspective. Blue Marble Evaluation consists of principles and criteria for evaluating transformational initiatives aimed at a more equitable and sustainable world.

      We humans are using our planet’s resources, and polluting and warming it, in ways that are unsustainable. Many people, organizations, and networks are working to ensure the future is more sustainable and equitable. Blue Marble evaluators enter the fray by helping design, implement, and evaluate transformational initiatives based on a theory of transformation. Blue Marble evaluation is utilization-focused, developmental, and principles-based in providing ongoing feedback for adaptation and enhanced systems transformation impact.

      Incorporating the Blue Marble perspective means looking beyond nation-state boundaries and across sector and issue silos to connect the global and local, connect the human and ecological, and connect evaluative thinking and methods with those trying to bring about global systems transformation. Forecasts for the future of humanity run the gamut from doom-and-gloom to utopia. Evaluation as a transdisciplinary, global profession has much to offer in navigating the risks and opportunities that arise as global change initiatives and interventions are designed and undertaken to ensure a more sustainable and equitable future. This workshop will provide a framework and tools (a thoughtkit) for evaluating global systems transformation.

      Recommended Text: Patton, M. (2019). Blue Marble Evaluation: Premises and Principles. Guilford Press.

      Recommended Audience: This course is suitable for new and experienced evaluators who work with innovative initiatives of all kinds at any level anywhere in the world.


      TEI 302: Creating and Implementing Successful Evaluation Surveys

      Instructor: Jason T. Siegel, PhD

      Description: The success of many evaluation projects depends on the quality of survey data collected. In the last decade, sample members have become increasingly reluctant to respond, especially in evaluation contexts. In response to these challenges and to technological innovation, methods for doing surveys are changing rapidly. This course will provide new and cutting-edge information about best practices for designing and conducting surveys.

      Students will gain an understanding of the multiple sources of survey error and how to identify and fix commonly occurring survey issues. The course will cover writing questions; visual design of questions (drawing on concepts from the vision sciences); question ordering; increasing effortful responding; and increasing response rates.

      The course is made up of a mixture of PowerPoint presentations, discussions, and activities built around real-world survey examples and case studies. Participants will apply what they are learning in activities and will have ample opportunity to ask questions during the course (or during breaks) and to discuss the survey challenges they face with the instructor and other participants.

      Recommended Text: Dillman, D. A., Smyth, J. D., & Christian, L. M. (2014). Internet, mail, and mixed-mode surveys: The tailored design method (4th ed.). Wiley.

      Recommended Audience: This course will be of interest to anyone using or planning to use surveys in their evaluations.


      TEI 303: Culture, Equity, and Evaluation

      Instructor: Leona Ba, EdD

      Description: This course will provide participants with the opportunity to learn and apply a step-by-step approach to conducting culturally responsive and equitable evaluations, which require integrating diversity, inclusion, and equity principles into all phases of program design and evaluation. The course will use Theory-Driven Evaluation as a framework because it ensures that evaluation is integrated into the design of programs. More specifically, it will follow the three-step Culturally Responsive Theory-Driven Evaluation model proposed by Bledsoe and Donaldson (2015):

      • Develop program impact theory
      • Formulate and prioritize evaluation questions
      • Answer evaluation questions

      During the workshop, participants will reflect on their own cultural self-awareness, a prerequisite for conducting culturally responsive and equitable evaluations. In addition, they will explore strategies for applying cultural responsiveness and equity to evaluation practice using examples from the instructor’s first-hand experience and other program evaluations. They will receive a package of useful handouts, as well as a list of selected resources.

      Recommended Text: Bledsoe, K., & Donaldson, S. I. (2015). Culturally responsive theory-driven evaluation. In S. Hood, R. Hopson, & H. Frierson (Eds.), Continuing the journey to reposition culture and cultural context in evaluation theory and practice (pp. 3-27). Information Age Publishing, Inc.

      Recommended Audience: This course is recommended for commissioners or practitioners who wish to ensure their evaluations are culturally responsive and equitable.


      TEI 304: Developmental Evaluation

      Instructor: Michael Quinn Patton, PhD

      Description:

      Developmental Evaluation (DE) supports those involved in social change innovation by guiding adaptation to emergent and dynamic realities in complex environments. Innovations can take the form of new projects, programs, products, organizational changes, policy reforms, and system interventions. A complex system is characterized by a large number of interacting and interdependent elements in which there is no central control. Patterns of change emerge from rapid, real-time interactions that generate learning, evolution, and development, if one is paying attention and knows how to observe and capture the important and emergent patterns. Complex environments for social interventions and innovations are those in which what to do to solve problems is uncertain and key stakeholders are in conflict about how to proceed.

      The COVID-19 pandemic significantly increased the use of DE as programs around the world had to pivot and adapt amid the turbulence of pandemic response. This led to innovations and new directions in DE as it guided adaptations to the challenges of the pandemic. This course includes those new applications and directions.

      The field of evaluation already has a rich variety of contrasting models, competing purposes, alternative methods, and divergent techniques that can be applied to projects and organizational innovations that vary in scope, comprehensiveness, and complexity. The challenge, then, is to match the evaluation to the nature of the initiative being evaluated. This means that we need options beyond the traditional approaches when faced with systems change dynamics and complex change initiatives. Developmental Evaluation involves real-time feedback about what is emerging in complex dynamic systems as innovators seek to bring about systems change. Participants will learn the unique niche of developmental evaluation, different kinds of DE, and what perspectives such as Systems Thinking and Complex Nonlinear Dynamics can offer in applying DE.

      Learning Outcomes: Participants will know (1) the niche, nature, and purpose of developmental evaluation; and (2) the evaluation criteria for conducting a developmental evaluation.

      Recommended Text: Patton, M. (2010). Developmental evaluation: Applying complexity concepts to enhance innovation and use. Guilford Press.

      Recommended Audience: This course is suitable for new and experienced evaluators who work with innovative initiatives of all kinds at any level anywhere in the world.


      TEI 305: Evaluability Assessment

      Instructor: Debra J. Rog, PhD

      Description:

      Evaluability assessment (EA) is a key tool that evaluators have at their disposal, but it is too often left unused. In recent years, however, both public and private funders have increasingly supported EAs as the first step in evaluating programs. An EA offers an opportunity to determine whether a program is ready for an evaluation and, if not, what can be done to improve its readiness. EA is one of the few systematic tools for evaluation planning: it helps ground the evaluation in the reality of the program, ensures the evaluation is focused on the right questions, engages interested parties in the process, and ensures that the appropriate design is implemented at the right time in the program process.

      EA helps focus evaluation on programs that are designed and implemented with a plausible path to achieving their outcomes and, consequently, can provide for wiser investment and use of evaluation funding. Beyond assessing a program’s readiness for evaluation, EA can also be used to help develop programs, select sites for multi-site evaluations, provide quick information on a program, and serve other purposes.

      This two-day course equips students to conduct an EA on their own. The course offers hands-on learning, with an exercise following each step of the method. Having conducted over 100 EAs, Dr. Rog will draw upon her experiences and examples to bring the method to life as the class learns the steps of designing an EA, implementing it, and analyzing and reporting the results.

      Recommended Audience: This course is suitable for new and experienced evaluators responsible for evaluating programs and initiatives.


      TEI 306: Evaluating Training Programs and MEL (Monitoring, Evaluation, Learning) Initiatives

      Instructor: Ann Doucette, PhD

      Description: Many of our social programs focus on providing added information, building awareness, changing attitudes, and influencing behavioral change to mitigate adversity. This type of effort is often referred to as training: teaching meaningful competencies for a specific purpose to fill knowledge, skill, and capacity gaps.

      This course examines training within the sphere of demonstrated capacities. What makes training work? How will such changes affect the participating individuals, their social networks, and their organizations and systems? The evaluation of training programs, especially of the behavioral application of content and the organizational benefits of training, continues to be an evaluation challenge.

      The course is designed to be interactive and to provide a practical approach for those planning, implementing, conducting, or managing such evaluations, including those leading or commissioning training or capacity-building evaluations. The course covers an overview of training evaluation models; pre-training assessment and training program expectations; training evaluation planning; development of key indicators, metrics, and measures; training evaluation designs; data collection, including instrumentation, administration, and data quality; reporting progress, change, and results; and disseminating findings and recommendations, including the knowledge management resulting from training initiatives. Case examples are included throughout to illustrate the course content. Measurement, methodology, and design issues, challenges in conducting such evaluations, and strategies for mitigating those challenges are highlighted.

      Recommended Audience: Familiarity with evaluation is helpful, but not required, for this course.


      TEI 333: Evaluation Design: Alignment with Evaluation Objectives

      Instructor: Ann Doucette, PhD

      Description: Design is essentially the structure, the recipe, used to assess program/intervention outcomes. This course focuses on design decisions and their alignment with evaluation questions, the precision and strength of outcome evidence needed from the evaluation, the resources available for the evaluation, and practical considerations in conducting the evaluation study. Design choice speaks to validity: the evaluator’s ability to draw conclusions about cause and effect or association between the program/intervention and outcomes (internal validity), and to generalize likely outcomes to broader samples and populations (external validity). As Cook and Campbell (1979) assert, there is no single best design approach. Designs are grouped into three primary categories, experimental, quasi-experimental, and non-experimental, with a range of choices within each. Traditionally, experimental designs have been characterized as the “gold standard,” a decidedly biased portrayal of them as the “best,” when in fact design choice should be informed by the evaluation questions to be addressed and the precision needed in outcome estimates, along with practical considerations. All design choices, whether experimental, quasi-experimental, or non-experimental, have limitations and practical considerations in terms of their use in evaluation studies.

      The course will cover the following design categories, highlighting advantages and disadvantages of each; identifying when best to use specific design types; and will provide case examples of each.

      Experimental Designs: completely randomized, randomized block, post-test-only control group

      Quasi-experimental Designs: non-random, pre-existing, non-equivalent groups

        • Propensity-score matching (statistically matching program and comparison groups)
        • Regression discontinuity: using a cutoff score to identify program and non-program groups
        • Natural experiments: difference-in-difference comparisons of program and comparison groups (a worked sketch follows this section)
        • Interrupted time series

      Correlational (Ex Post Facto) Designs: Identification of conditions that have already occurred or are present, investigating the presumed cause through its association with a previously implemented program

      Non-experimental Designs: no manipulation of independent variables (program versus non-program)

        • Cross-sectional, panel studies
        • Observational
        • Single variable
        • Correlational – relationship between two variables, but no control over possible confounding factors
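
      To make one of the quasi-experimental options above concrete, the sketch below computes a difference-in-difference estimate in Python. The group means are invented for illustration and are not course data.

        # Invented outcome means: program and comparison groups, before and after.
        program_before, program_after = 42.0, 55.0
        comparison_before, comparison_after = 40.0, 46.0

        # Difference-in-difference: the program group's change minus the
        # comparison group's change, netting out the shared time trend.
        did = (program_after - program_before) - (comparison_after - comparison_before)
        print(f"Estimated program effect: {did:.1f}")  # (55 - 42) - (46 - 40) = 7.0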

      The course is intentionally interactive. Participants will work with case materials, identifying design types; selecting designs that are best aligned with evaluation questions; building a rationale for the strength of evidence design choices yield; and characterizing the pros and cons of design choices.

      Participants will be sent materials and resources prior to the course.

      Recommended Audience: The course is geared to individuals having familiarity with evaluation or applied research.


      TEI 307: Evaluation Research Methods: A Survey of Quantitative & Qualitative Approaches

      Instructor: Jason T. Siegel, PhD

      Description: This course will introduce a range of basic quantitative and qualitative social science research methods that apply to evaluating various programs. This foundational course introduces methods developed more fully in other TEI courses and serves as a critical course designed to ensure a basic familiarity with a range of social science research methods and concepts. Topics will include qualitative research with a special emphasis on focus groups and interviews, experimental design, quasi-experimental design, and survey research methods.

      Recommended Text: There are no recommended textbooks, but there will be optional readings available on the course website before the start of the course.

      Recommended Audience: This course is suitable for those who want to update their existing knowledge and skills, and will serve as an introduction for those new to the topic.


      TEI 308: How to Enhance the Learning Function of Evaluation: Principles and Strategies

      Instructors: J. Bradley Cousins, PhD and Jill A. Chouinard, PhD

      Description: Historically, organizations have conducted and used evaluation to meet internal and external accountability demands with approaches focused on impact assessment and value for money. In practice, rigid focus on accountability-oriented objectives can lead to evaluation outcomes that are at best symbolic. Yet we know from research that evaluations which contribute significantly to learning about program functioning and context tend to leverage higher degrees of evaluation use and provide more credible, actionable outcomes. They can be used to improve the effectiveness and enhance the sustainability of interventions, for example.

      This two-day course situates learning-oriented evaluations within the organizational landscape of evaluation options. The focus is on the value of the learning function of evaluation and practical strategies to enhance it. Participants can expect to:

      • Develop knowledge, skills, and strategies to plan useful learning-oriented evaluations in the context of traditional domestic and international development interventions.
      • Understand how a range of evaluation approaches privilege learning about programs, the contexts within which they operate, and evaluation itself. Examples include collaborative approaches to evaluation (CAE) and culturally responsive evaluation (CRE).
      • Grasp evaluation’s potential to leverage planned learning and program improvement through organizational evaluation policy reform and the development of evaluation capacity building (ECB) strategies.

      This course will be run with a mix of instructor input and opportunities for participants to apply what they have learned in practical activities (e.g., case analyses). Practical resources will be made available.

      Recommended Audience: The course is open to new and experienced evaluators looking to augment their working knowledge of program evaluation logic and methods.


      TEI 309: Informing Practice using Evaluation Models

      Instructor: Melvin Mark, PhD

      Description:

      Evaluators who are not aware of the contemporary and historical aspects of the profession “are doomed to repeat past mistakes and, equally debilitating, will fail to sustain and build on past successes” (Madaus, Scriven & Stufflebeam, 1983). “Evaluation theories are like military strategy and tactics; methods are like military weapons and logistics. The good commander needs to know strategy and tactics to deploy weapons properly or to organize logistics in different situations. The good evaluator needs theories for the same reasons in choosing and deploying methods” (Shadish, Cook & Leviton, 1991).

      These quotes provide a rationale for why the serious evaluator should know about models and theories of evaluation. The primary purpose of this class is to overview major streams of evaluation theories (or models) and to consider their implications for practice. Topics include: (1) why evaluation theories matter, (2) frameworks describing the overall lay of the land of evaluation theory, (3) in-depth examination of several major theories, (4) identification of key issues on which evaluation theories and models differ, (5) benefits and risks of relying heavily on any one theory, and (6) tools and skills that can help you pick and choose from, and combine across, different theoretical perspectives for a particular evaluation in a specific context. The overarching theme is practice implications, that is, what difference it would make to follow one theory or another.

      The theories to be discussed have had a significant impact on the evaluation field. They offer perspectives with major implications for practice and represent different, important streams of evaluation. Case examples will be used to illustrate key aspects of each theory’s approach to practice, and class exercises will ask participants to apply the theories.

      Recommended Audience: The instructor’s assumption will be that most people attending the session may have some general familiarity with the work of a few evaluation theorists, but will not themselves be scholars of evaluation theory. At the same time, the course should be useful, whatever one’s level of familiarity with evaluation theory.


      TEI 310: Intermediate Qualitative Data Analysis

      Instructor: Delwyn Goodrick, PhD

      Description: Data analysis involves creativity, sensitivity, and rigor. In its most basic form, qualitative data analysis involves some form of labeling, coding, and clustering in order to make sense of data collected from evaluation fieldwork, interviews, and/or document analysis. This intermediate-level workshop builds on the basic coding and categorizing familiar to most evaluators and extends the array of strategies available to support rigorous interpretations, with an emphasis on procedures for the analysis of interview data. Strategies such as enumerative and interpretive content analysis, thematic analysis, narrative analysis, and the framework method of analysis are presented and illustrated with examples from evaluation and from a range of disciplines, including sociology, education, political science, and psychology.

      The course also provides a brief overview of qualitative comparative analysis (QCA) and the Qualitative Impact Protocol (QuIP) as tools to support inferences about the contribution of programs to outcomes.

      The core emphasis in the workshop is creating awareness of heuristics that support selection and application of appropriate analytic techniques that match the purpose of the evaluation, type of data, and practical considerations such as resource constraints. While a brief overview of qualitative analysis software is provided, the structure of the workshop focuses on analysis using manual methods. A range of activities to support critical thinking and application of principles is integrated within the workshop program.

      Qualitative data analysis and writing go hand in hand. In the second part of the workshop strategies for transforming analysis through processes of description, interpretation and judgment will be presented. These issues are particularly important in the assessment of the credibility of qualitative evidence by evaluation audiences. Issues of quality, including validity, trustworthiness and authenticity of qualitative data are integrated throughout the workshop.

      Specific issues to be addressed:

      • What are the implications of an evaluator’s worldview for selection of qualitative data analysis (QDA) strategies?
      • Are there analytic options that are best suited to particular kinds of qualitative data?
      • How can participant experiences be portrayed through QDA without fracturing the data through formal coding?
      • What types of analysis may be appropriate for particular types of evaluation (program theory, realist, transformative)?
      • What strategies can be used to address interpretive dissent when working in evaluation teams?
      • What are some ways that qualitative and quantitative findings can be integrated in an evaluation report?
      • How can I sell the value of qualitative evidence to evaluation audiences?

      Recommended Text: Bazeley, P. (2020). Qualitative data analysis: Practical strategies (2nd ed.). Sage.

      Recommended Audience: This course is best suited for evaluators with some experience of basic coding processes who are looking to extend their toolkit of options for qualitative data analysis.


      TEI 334: Introduction to ChatGPT-4 for Evaluation and Evaluation Capacity Building

      Instructor: Robert Klitgaard, PhD

      Description: AI tools such as ChatGPT-4 have the potential to transform evaluation. In this interactive workshop, we’ll see how ChatGPT-4 can be your tutor on theoretical and practical aspects of evaluation. We’ll also see how ChatGPT-4 can help you conduct a literature review and summarize technical articles; explore alternative perspectives and hypotheses; create a teaching case; assist with data analysis and presentation; edit your writing; and design training programs in evaluation for different audiences. Finally, we’ll apply ChatGPT-4 to practical questions like proposal writing, fundraising, and career counseling.

      Recommended Materials: Participants should have ChatGPT-4 ready to use from the beginning of the course.

      Recommended Audience: This course is suitable for new and experienced evaluators who work with innovative initiatives of all kinds at any level anywhere in the world.


      TEI 311: Introduction to Cost-Benefit and Cost-Effectiveness Analysis

      Instructor: Robert D. Shand, PhD

      Description: The tools and techniques of cost-benefit and cost-effectiveness analysis will be presented. The goal of the course is to provide analysts with the skills to interpret cost-benefit and cost-effectiveness analyses. Content includes identification and measurement of costs using the ingredients method; how to specify effectiveness; shadow pricing for benefits using revealed preference and contingent valuation methods; discounting; and calculation of cost-effectiveness ratios, net present value, cost-benefit ratios, and internal rates of return. Sensitivity testing and uncertainty will also be addressed. Individuals will work in groups to assess the costs, effects, and benefits applicable to selected case studies drawn from a range of policy fields (e.g., health, education, environmental sciences).
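
      For a flavor of the calculations listed above, the sketch below computes net present value and a benefit-cost ratio in Python from invented cash flows; the discount rate and all figures are hypothetical, not course data.

        # Hypothetical program: per-year costs and benefits, discounted at rate r.
        costs    = [100_000, 20_000, 20_000]   # years 0, 1, 2 (invented figures)
        benefits = [0, 80_000, 90_000]
        r = 0.05                               # assumed annual discount rate

        pv_costs    = sum(c / (1 + r) ** t for t, c in enumerate(costs))
        pv_benefits = sum(b / (1 + r) ** t for t, b in enumerate(benefits))

        npv = pv_benefits - pv_costs           # net present value
        bcr = pv_benefits / pv_costs           # benefit-cost ratio
        print(f"NPV: {npv:,.0f}  BCR: {bcr:.2f}")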

      Recommended Text: Levin, H. M., McEwan, P. J., Belfield, C. R., Bowden, A. B., & Shand, R. D. (2017). Economic evaluation in education: Cost-effectiveness and benefit-cost analysis (3rd ed.). SAGE.

      Recommended Audience: This course is best suited for entry-level and mid-career evaluators with some background and experience in impact evaluation looking to complement these skills with economic evaluation methods.


      TEI 312: Introduction to Data Analysis for Evaluators and Applied Researchers

      Instructor: P. Wesley Schultz

      Description: In this course we will introduce and review basic data analysis tools and concepts commonly used in applied research and evaluation. The focus will be on fundamental concepts that are needed to guide decisions for appropriate data analyses, interpretations, and presentations. The goal of the course is to help participants avoid errors and improve skills as data analysts, communicators of statistical findings, and consumers of data analyses.

      Topics include data screening and cleaning, selecting appropriate methods for analysis, detecting statistical pitfalls and dealing with them, avoiding silly statistical mistakes, interpreting statistical output, and presenting findings to lay and professional audiences. Examples will include applications of basic distributions and statistical tests (e.g., z, t, chi-square, correlation, regression).
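
      As a small illustration of one such test, the Python sketch below runs an independent-samples t-test on invented scores; scipy is one common tool choice here, not necessarily the one used in class.

        from scipy import stats

        # Invented outcome scores for a program group and a comparison group.
        program    = [72, 85, 78, 90, 66, 81, 77]
        comparison = [70, 74, 69, 80, 65, 72, 71]

        # Independent-samples t-test: is the difference in means larger than
        # chance variation alone would suggest?
        t_stat, p_value = stats.ttest_ind(program, comparison)
        print(f"t = {t_stat:.2f}, p = {p_value:.3f}")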

      Recommended Audience: This course is especially suited for entry-level evaluators looking to develop their expertise with the foundational logic and methods of data analysis. Mid-level professionals seeking a refresher and greater facility with data analysis will also find this course helpful.


      TEI 314: Introduction to Data Visualization

      Instructor: Alice Feng

      Description: In today’s increasingly data-driven world, the ability to clearly communicate the insights in one’s data is more important than ever. Data visualizations can help make data and analyses more easily understood, accessible, and impactful to broader audiences.

      In this introductory course, participants will learn the fundamentals of creating effective data visualizations, including how to identify interesting stories in their data, how to choose appropriate chart forms to convey that story, and how to finesse the design of their charts to maximize the impact of the message being conveyed. This course will be interactive and hands-on, with opportunities to practice creating charts using Datawrapper or a tool of the participant’s choosing. Ultimately, participants will create a visualization using their own data that applies the concepts covered in this course.

      Recommended Audience: This course is designed for evaluators who have some experience developing graphs, visual aids, and reports for evaluation work, but no formal knowledge of data visualization concepts. Familiarity with data analysis is recommended but not required.


      TEI 332: Introduction to Machine Learning for Evaluators

      Instructor: Peter York

      Description: There is a growing demand from public and private policymakers and funders to apply big data science and machine learning for evaluation. The demand is growing due to public awareness of how the private sector uses machine learning algorithms to create on-demand tools that cost-effectively augment human planning, assessment, prediction, and decision-making. In fact, government agencies like the National Science Foundation and the U.S. Department of Health and Human Services are currently using big data science and machine learning to evaluate their impact. When applied correctly, machine learning algorithms can significantly reduce the cost and time of conducting evaluations, including producing on-demand quasi-experimental actionable evidence on an ongoing basis.

      In this introductory course, participants will learn the fundamentals of integrating the theory, methods, and machine learning algorithms of big data science into their evaluation approach. This will include an introduction to Bayesian theory, machine learning algorithms, predictive and prescriptive analytics, causal modeling, and addressing selection and algorithmic bias. The course will guide participants through an interactive step-by-step process of building evaluation models using primary and secondary datasets. This will include (1) finding and assessing the quality of existing data; (2) cleaning and preparing the data; (3) framing and aligning the data to your theory of change or logic model; (4) staging the evaluation to mitigate selection bias; (5) training machine learning algorithms to find and evaluate naturally occurring counterfactual experiments in history; and (6) evaluating and addressing the level and types of algorithmic bias in the results.

      The course will introduce machine learning algorithms for structured (quantitative, ordinal, and categorical) and unstructured (qualitative text) data modeling, including how to train machine learning algorithms to support conducting a mixed-methods evaluation. For text analytics, participants will learn about natural language processing (NLP) algorithms that are used to improve the breadth and depth of qualitative analyses while significantly reducing the time they take. The course will use an open-source, no-cost, no-code (knowledge of R or Python is not required), visual-based analytics platform, KNIME, and will introduce participants to its suite of analytic tools and machine learning algorithms.
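
      As a toy illustration of step (5), the Python sketch below pairs each treated case with the untreated case closest in modeled treatment probability, one simple way to construct counterfactual comparisons. The data are invented, and the course itself uses KNIME’s no-code tooling rather than Python.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)

        # Invented data: two covariates, a treatment flag, and an outcome with
        # a built-in treatment effect of 2.0.
        X = rng.normal(size=(200, 2))
        treated = (X[:, 0] + rng.normal(size=200)) > 0
        y = 2.0 * treated + X[:, 1] + rng.normal(size=200)

        # Model who received treatment (propensity scores), then pair each
        # treated case with the untreated case closest in score.
        ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]
        controls = np.where(~treated)[0]
        effects = [y[i] - y[controls[np.argmin(np.abs(ps[controls] - ps[i]))]]
                   for i in np.where(treated)[0]]
        print(f"Matched estimate of the treatment effect: {np.mean(effects):.2f}")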

      Recommended Audience: This course is best suited for mid to late-career evaluators with experience conducting quantitative and mixed methods evaluations, especially preparing and analyzing primary and secondary datasets using analytic software packages like SPSS, SAS, and Stata.


      TEI 313: Introduction to R Programming for Data Analysis and Visualization

      Instructor: David Wilson, PhD

      Description: This course will introduce you to the R programming language for data analysis and data visualization. It covers importing data into R, basic data manipulation and clean-up, common graphing methods, and basic statistical analyses such as t-tests, chi-square, ANOVA, and regression, as well as standard descriptive statistics. The course will use the RStudio interface for R and will introduce RMarkdown for enhancing the replicability and documentation of analyses. The focus is on the programming language itself; the course assumes you are already familiar with basic statistical methods.

      Note: Attendees should bring their own laptops loaded with R and RStudio to class each day.

      Recommended Audience: This course is best suited to program evaluators with at least some prior data analysis experience using software other than R, such as SPSS.


      TEI 315: Managing for Success: Planning, Implementation, and Reporting

      Instructor: Tiffany Berry, PhD

      Description: Program evaluations are often complex, challenging, multi-faceted endeavors that require evaluators to juggle stakeholder interests, funder requirements, data collection logistics, and their internal teams. Fortunately, many of these challenges can be minimized with effective evaluation management. In this interactive workshop, we provide tools, resources, and strategies that intentionally build evaluators’ project management toolkit so that evaluators can manage their evaluations successfully.

      During Day 1, using case studies, mini-lectures, and group discussions, we explore traditional evaluation management practices, focusing on the processes and logistics of managing an evaluation team and the entire evaluation process, from project initiation and contracting through final reporting. To reinforce and practice the content covered, participants will also engage in a variety of simulation exercises that explore how evaluation managers effectively mitigate challenges as they inevitably arise during an evaluation.

      During Day 2, we continue to build participants’ evaluation management toolkit by introducing four essential, experience-tested strategies that will elevate all participants’ project management game. Effective evaluation management is more than a series of steps or procedures to follow; it requires a deep understanding of (1) the competencies you and your team bring to the evaluation, (2) the extent to which you are responsive to program context, (3) how you collaborate with stakeholders throughout the evaluation process, and (4) how you use strategic reporting. Through interactive activities, we’ll explore these strategies (and the interconnections among them) and discuss how they help evaluators “manage for success.” We’ll also encourage participants to think critically about how each strategy facilitates evaluation management and/or prevents mismanagement. Across both days, there will be ample opportunities to share your own perspective, ask relevant questions, and apply the content covered to your own work.

      Recommended Audience: This course is best suited for novice and mid-level professionals seeking to strategically build project management skills in the evaluation context.


      TEI 316: Mixed-Methods Evaluations: Integrating Qualitative and Quantitative Approaches

      Instructor: Debra J. Rog, PhD

      Description: Evaluators are frequently in evaluation situations in which they are collecting data through multiple methods, often both qualitative and quantitative. Too often, however, these study components are conducted and reported independently, and do not maximize the explanation building that can occur through their integration.

      The purpose of this course is to sensitize evaluators to the opportunities in their work for designing and implementing mixed methods, and to be more intentional in the ways that they design and implement their studies to incorporate both qualitative and quantitative approaches. The course will begin with an overview of the issues involved with mixed-methods research, highlighting the accolades and the criticisms of integrating approaches. The course will then focus on the research questions and evaluation situations that are conducive for mixed-methods, and the variety of designs that are possible (e.g., parallel mixed methods that occur at the same time and are integrated in their inference; sequential designs in which one method follows another chronologically, either confirming or disconfirming the findings, or providing further explanation). A key focus of the course will be on strategies for implementing mixed-methods designs, as well as analyzing and reporting data, using examples from the instructor’s work and those offered by course participants. The course will be highly interactive, with ample time for participants to discuss how the course can be applied to their own work. Participants will work in small groups on an example that will carry through the three days of the course.

      Participants will be sent materials prior to the course as a foundation for the method.

      Recommended Audience: The course is best suited for evaluators who have some prior experience in conducting evaluations, but have not had formal training in designing, conducting, and analyzing mixed methods studies.


      TEI 317: Monitoring and Evaluation: Frameworks and Fundamentals

      Instructor: Ann Doucette, PhD

      Description:

      The overall goal of Monitoring and Evaluation (M&E) is the assessment of program progress in order to optimize outcomes and impact: program results. While M&E components overlap, each has distinct characteristics. Monitoring activities systematically observe (formally and informally) assumed indicators of favorable results, while evaluation activities build on monitoring indicator data to assess intervention/program effectiveness, the adequacy of program impact pathways, the likelihood of program sustainability, the presence of program strengths and weaknesses, the value, merit, and worth of the initiative, and the like. The increased emphasis on effectively managing toward favorable results demands a more comprehensive M&E approach in order to identify whether programs are favorably on track or whether improved program strategies and mid-course corrections are needed.

      This interactive two-day course focuses on practical application and will cover: the purpose and scope of M&E; engaging stakeholders and establishing an evaluative climate; connecting program design and M&E frameworks; performance and results-based M&E approaches; data collection and methods; measuring program progress and success; and sustaining an M&E culture.

      Course participants will gain a comprehensive understanding of M&E frameworks and fundamentals, M&E tools, and practice approaches. Case examples will be used to illustrate the M&E process. The course is purposefully geared toward evaluators working in developing and developed countries; national and international agencies, organizations, and NGOs; and national, state, provincial, and county governments.

      Recommended audience: Familiarity with evaluation is helpful, but not required, for this course.


      TEI 318: Outcome and Impact Evaluation

      Instructor: Melvin M. Mark, PhD

      Description: Valid assessment of the outcomes or impact of a social program is among the most challenging evaluation tasks, but also one of the most important. Multiple approaches exist for tracking or detecting a program’s outcomes, and multiple methods and designs exist for trying to estimate a program’s impact. This course will overview alternative approaches that may be more appropriate under different conditions. This includes monitoring approaches based on a small-t theory of the program’s chain of outcomes, as well as approaches to use when the complexity of the situation precludes placing one’s confidence in such a theory of the program. Considerable attention will be given to the experimental and quasi-experimental methods that are the foundation for much of contemporary impact evaluation. Related topics, including issues in the measurement of outcomes, ensuring detection of meaningful program effects, and interpreting the magnitude of effects, will be covered, some briefly. Emphasis will primarily be conceptual, focusing on the logic of outcome and impact evaluation, the appropriateness of different approaches under different circumstances, and the conceptual and methodological nature of the approaches. Nonetheless, we’ll cover key statistical analysis methods for impact evaluation.
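
      One topic named above, interpreting the magnitude of effects, often comes down to a standardized effect size. The Python sketch below computes Cohen’s d from invented scores; it is an illustration of the idea, not course material.

        import statistics

        # Invented outcome scores for program and comparison groups.
        program    = [78, 85, 90, 72, 88, 81]
        comparison = [70, 75, 72, 68, 74, 71]

        # Cohen's d: the mean difference scaled by the pooled standard deviation,
        # a common yardstick for judging whether an effect is small or large.
        n1, n2 = len(program), len(comparison)
        s1, s2 = statistics.stdev(program), statistics.stdev(comparison)
        pooled = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
        d = (statistics.mean(program) - statistics.mean(comparison)) / pooled
        print(f"Cohen's d = {d:.2f}")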

      Recommended Audience: This course is best suited for mid-career evaluators. Some familiarity with program evaluation, research methods, and statistical analysis is necessary to effectively engage in the various topics that are covered.


      TEI 330: Policy Analysis, Implementation, and Evaluation

      Instructor: Doreen Cavanaugh, PhD

      Description: Policy drives the decisions and actions that shape our world and affect the wellbeing of individuals around the globe. It forms the foundation of every intervention, and yet its underlying assumptions and values are often not thoroughly examined in many evaluations. In this course, students will explore the policy development process, study the theoretical basis of policy, and examine the logical sequence by which a policy intervention is intended to bring about change through program implementation.

      Participants will explore a range of policy evaluation methods to systematically investigate the effectiveness of policy interventions, implementation, and processes, and to determine their merit, worth, or value in terms of improving the social and economic conditions of stakeholders. The course will differentiate evaluation from monitoring and address several barriers to effective policy evaluation, including goal specification and goal change, measurement, targets, efficiency and effectiveness, values, politics, and conflicting expectations. The course will present models from a range of policy domains. At the beginning of the two-day course, participants will select a policy area from their own work to use as a running example throughout the class. Participants will develop the components of a policy analysis and design a policy evaluation.

      Recommended Audience: This course is best suited to professionals interested in evaluating policies and programs supported in full or in part by public funding (international, national, state, or local), by private resources, or by combinations of public and private sources.


      TEI 319: Policy Design and Evaluation Across Cultures

      Instructor: Robert Klitgaard, PhD

      Description: Policy design and evaluation share the task of assessing what treatments (policies, programs) work well, for whom, and in what settings. Two big challenges emerge: inference and extrapolation. Inference refers to estimating how treatments affect outcomes, other things remaining equal. Extrapolation refers to transporting those estimates to other cultures. This course reviews both challenges and suggests practical ways forward. The highly interactive pedagogy presents analytical material and case studies from around the world. Of particular interest is a field-tested method for combining generic international expertise with local knowledge, with the goal of creative, evidence-based policy design. Participants will have the chance to apply the ideas to an issue important to them.

      Recommended Audience: People who use the results of evaluations in the design of effective and equitable public policies, especially when the policies apply to diverse cultural settings.


      TEI 320: Principles-Focused Evaluation

      Instructor: Michael Quinn Patton, PhD

      Description: Principles-driven leaders engage in principles-based initiatives that call for principles-focused evaluation, which makes principles the focus of the evaluation. Three questions drive the evaluation: (1) To what extent and in what ways are the principles meaningful to those meant to be guided by them? (2) If meaningful, to what extent and in what ways are the principles adhered to? (3) If adhered to, to what extent and in what ways do the principles guide results? The course will present and explain the GUIDE approach to developing and evaluating principles. GUIDE calls for principles to be directive, useful, inspiring, adaptable to contexts, and evaluable. Examples of principles-focused initiatives and corresponding principles-focused evaluations will be shared. This innovative approach to evaluation is on the leading edge of the field and is attracting attention around the world as a way of engaging with change and transformation in complex dynamic systems.

      Learning Outcomes: Participants will know (1) the niche, nature, and purpose of principles-focused evaluation; (2) the evaluation criteria for conducting a principles-focused evaluation; and (3) the GUIDE framework for principles-focused evaluation.

      Recommended Audience: Familiarity with evaluation is helpful, but not required, for this course.


      TEI 321: Qualitative Methods

      Instructor: Michael Quinn Patton, PhD

      Description: Qualitative inquiries use in-depth interviews, focus groups, observational methods, document analysis, and case studies to provide rich descriptions of people, programs, and community processes. To be credible and useful, the unique sampling, design, and analysis approaches of qualitative methods must be understood and used. Qualitative data can be used for various purposes, including evaluating individualized outcomes, capturing program processes, exploring a new area of interest (e.g., to identify the unknown variables one might want to measure in greater depth/breadth), identifying unanticipated consequences and side effects, supporting participatory evaluations, assessing quality, and humanizing evaluations by portraying the people and stories behind the numbers. This class will cover the basics of qualitative evaluation, including design, case selection (purposeful sampling), data collection techniques, and beginning analysis. Ways of increasing the rigor and credibility of qualitative evaluations will be examined. Mixed-methods approaches will be included. Alternative qualitative strategies and new, innovative directions will complete the course. The strengths and weaknesses of various qualitative methods will be identified. Exercises will provide experience in applying qualitative methods and analysis in evaluations.

      Recommended Text: Patton, M. (2015). Qualitative research and evaluation methods (4th ed.). Sage.

      Recommended Audience: This course is best suited for entry-level evaluators looking to develop their knowledge of qualitative evaluation methods. Mid-level professionals seeking a refresher on the basics of qualitative evaluation will also find this course helpful.


      TEI 328: Quantitative Methods

      Instructor: Emily E. Tanner-Smith, PhD

      Description: This course will introduce a range of basic quantitative social science research designs and methods that are applicable to the evaluation of programs. This is a foundational course that introduces basic quantitative methods developed more fully in other TEI courses and serves as a critical course designed to ensure a basic familiarity with a range of social science research designs and concepts.

      Recommended Audience: This course is best suited for entry-level evaluators looking to develop their knowledge of quantitative evaluation designs and methods. Mid-level professionals seeking a refresher on the basics of quantitative evaluation designs will also find this course helpful.


      TEI 335: Sampling

      Instructor: Ann Doucette, PhD

      Description: Sampling – Who is included in the evaluation study? Who is not? How do we select individuals for inclusion? How might the sampling choices we make affect our evaluations?

      It is seldom possible to include the entire population of interest in our evaluations. The feasibility of doing so is limited by evaluation resources (budget, staffing, time), practical considerations (reaching all individuals who might benefit from the program), and the fact that participation in evaluation studies is voluntary – while the program may be of interest, not everyone wants to answer our questions or fill out surveys. Instead, we typically focus our evaluation efforts on a subset of the population – a sample.

      The content focus of this course includes:

      • defining the sampling frame and its linkage to the evaluation questions to be addressed
      • sampling methods – probability (random, systematic, cluster, stratified) and non-probability (convenience, quota, snowball, judgmental) samples, as well as multi-phase/multi-stage sampling
      • sample size and sampling error
      • response rates
      • interpreting evaluation results from samples – how sampling strategies affect evaluation validity
      • sampling issues and considerations

      Case examples will be provided.
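      To make one of these distinctions concrete, here is a minimal Python sketch (illustrative only, not course material; the population, regions, and sample sizes are all hypothetical) contrasting a simple random sample with a proportionally stratified sample, followed by a rough margin-of-error calculation:

          import math
          import random

          random.seed(42)  # for a reproducible example

          # Hypothetical population: 1,000 participants, 80% urban / 20% rural.
          population = [{"id": i, "region": "urban" if i < 800 else "rural"}
                        for i in range(1000)]

          # Simple random sample: everyone has an equal chance of selection,
          # but a small sample may under- or over-represent the rural minority.
          srs = random.sample(population, 50)

          # Stratified sample: draw within each region in proportion to its
          # size, guaranteeing both strata appear in population proportion.
          stratified = []
          for region in ("urban", "rural"):
              stratum = [p for p in population if p["region"] == region]
              n = round(50 * len(stratum) / len(population))  # proportional allocation
              stratified.extend(random.sample(stratum, n))

          print(sum(p["region"] == "rural" for p in srs))         # varies by seed
          print(sum(p["region"] == "rural" for p in stratified))  # always 10

          # Rough 95% margin of error for a proportion from a simple random
          # sample: 1.96 * sqrt(p * (1 - p) / n), widest at p = 0.5.
          print(f"{1.96 * math.sqrt(0.5 * 0.5 / 50):.1%}")  # about 13.9% at n = 50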

      The course focuses on the practical application of sampling strategies and on the effects of sound versus inappropriate sampling approaches on conclusions drawn from evaluation results. It is intentionally interactive, with opportunities to work in small groups on case exercises.

      Recommended Audience: This course would be of interest and benefit to anyone conducting or mandating evaluations, as well as to anyone who might wonder why political and opinion polls sometimes go awry.


      TEI 322: Strategic Planning with Evaluation in Mind

      Instructor: John Bryson, PhD

      Description: Strategic planning is becoming a common practice for governments, nonprofit organizations, businesses, and collaborations. The severe stresses – along with the many opportunities – facing these entities make strategic planning more important and necessary than ever. For strategic planning to be truly effective, it should include systematic learning informed by evaluation. When that happens, the chances of mission fulfillment and long-term organizational survival are also enhanced. In other words, thinking, acting, and learning strategically and evaluatively are necessary complements.

      This course presents a pragmatic approach to strategic planning based on John Bryson’s best-selling and award-winning book, Strategic Planning for Public and Nonprofit Organizations, Fifth Edition (Jossey-Bass, 2018). The course examines the theory and practice of strategic planning and management with an emphasis on practical approaches to identifying and effectively addressing organizational challenges – and doing so in a way that makes systematic learning and evaluation possible. The approach engages evaluators much earlier in the process of organizational and programmatic design and change than is usual.

      The following topics are covered through a mixture of mini-lectures, case analyses, individual and small group exercises, and plenary discussions:

      • Understanding why strategic planning has become so important
      • Understanding what strategic planning is – and is not
      • Gaining knowledge of the range of different strategic planning approaches
      • Understanding the Strategy Change Cycle (Prof. Bryson’s preferred approach)
      • Gaining experience with key strategic planning tools and techniques, including stakeholder analysis, SWOT analyses, and causal mapping for purposes of understanding issues, developing strategies, and conducting evaluations
      • Knowing how to appropriately design formative, summative, and developmental evaluations of strategic planning processes, missions, strategies, and organizational performance
      • Knowing what it takes to initiate strategic planning successfully
      • Understanding the importance of leadership of many kinds for strategic planning success
      • Understanding what can be institutionalized
      • Making sure ongoing strategic planning, acting, learning, and evaluation are linked

      Recommended Audience: The course is suitable for anyone wanting to know more about strategic planning theory and practice, including leaders, managers, board members, policymakers, and, of course, evaluators. Evaluation topics will include approaches to evaluating strategic planning processes for organizations and coalitions, missions, strategies, strategic plans, and performance.


      TEI 323: Systems-based Culturally Responsive Evaluation (SysCRE)

      Instructor: Wanda Casillas

      Description: Culturally Responsive Evaluation (CRE) is often described as a way of thinking, a stance taken, or an emerging approach to evaluation that centers culture and context in all steps of an evaluation process. As an evaluation approach, CRE is often used in service of promoting equitable outcomes across many sectors, such as education, health, and social services. However, large-scale social problems require evaluation and applied research strategies that can further our thinking about complex issues and equip us to engage with the complex and layered contextual factors that affect equity.

      CRE is an essential tool in a practitioner’s toolkit when evaluating large-scale systems change efforts that emphasize equity; married with relevant and overlapping systems principles, it leads to a robust evaluation and applied research practice. In this course, we will engage with a core set of CRE and systems principles to anchor evaluation practice in an approach that identifies and addresses the important cultural and contextual systems in which evaluations and their stakeholders are embedded.

      The first day of the workshop will focus on establishing a foundation of important historical underpinnings, concepts, and tenets of CRE and systems approaches and engage with exemplars of SysCRE practice to operationalize these concepts. On Days 2 and 3 of the workshop, we will simulate a step-wise SysCRE design using a case study and other interactive exercises to inform personal and professional practices and support group learning.

      Recommended Audience: This course is best suited for early- to mid-level evaluators who have familiarity with evaluation designs and theoretical approaches.


      TEI 324: Using Research, Program Theory, and Logic Models to Design and Evaluate Programs

      Instructor: Stewart I. Donaldson, PhD

      Description: It is now commonplace to use research, program theory, and logic models in evaluation practice. They are often used to help design effective programs, and at other times to explain how a program is understood to contribute to its intended or observed outcomes. However, this does not mean they are always used appropriately or to best effect. At their best, prior research, program theories, and logic models can provide an evidence base to guide action, bring conceptual clarity, motivate staff, and focus designs and evaluations. At their worst, they can divert time and attention from other critical evaluation activities, provide an invalid or misleading picture of the program, and discourage critical investigation of causal pathways and unintended outcomes. This course will focus on developing useful evidence-based program theories and logic models and using them effectively to guide evaluation while avoiding some of the most common traps. Application exercises are used throughout the course to demonstrate concepts and techniques: (a) ways to use social science theory and research, program theories, and logic models to positive advantage; (b) how to formulate and prioritize key evaluation questions; (c) how to gather credible and actionable evidence; (d) how to recognize and communicate the ways these tools are used with negative results; and (e) strategies to avoid such traps.

      Recommended Text: Donaldson, S. I. (2021). Introduction to theory-driven program evaluation: Culturally responsive and strengths-focused applications. Routledge.

      Students may also be interested in Donaldson, S. I., Christie, C. A., & Mark, M. M. (Eds.). (2014). Credible and actionable evidence: The foundation for rigorous and influential evaluations. Sage.

      Recommended Audience: Audiences for this course include those who have familiarity with and some experience in evaluation practice, and who want to explore using stakeholder- and research-informed program theories and logic models to guide the design and evaluation of programs.


      TEI 325: Utilization-Focused Evaluation

      Instructor: Michael Quinn Patton, PhD

      Description: Utilization-Focused Evaluation begins with the premise that evaluations should be judged by their utility and actual use; therefore, evaluators should facilitate the evaluation process and design any evaluation with careful consideration of how everything that is done, from beginning to end, will affect use. Use concerns how real people in the real world apply evaluation findings and experience the evaluation process. Therefore, the focus in utilization-focused evaluation is on the intended use by intended users.

      Utilization-focused evaluation is a process for helping primary intended users select the most appropriate content, model, methods, theory, and uses for their particular situation. Situational responsiveness guides the interactive process between the evaluator and primary intended users. Psychology of use undergirds and informs utilization-focused evaluation: intended users are more likely to use evaluations if they understand and feel ownership of the evaluation process and findings; they are more likely to understand and feel ownership if they’ve been actively involved; by actively involving primary intended users, the evaluator is training users in use, preparing the groundwork for use, and reinforcing the intended utility of the evaluation every step along the way.

      Participants will learn:

      • Key factors in doing useful evaluations, common barriers to use, and how to overcome those barriers.
      • Implications of focusing an evaluation on the intended use by intended users.
      • Options for evaluation design and methods based on situational responsiveness, adaptability, and creativity.
      • Ways of building evaluation into the programming process to increase use.

      Recommended Text: Patton, M. Q., & Campbell-Patton, C. E. (2022). Utilization-focused evaluation (5th ed.). Sage.

      Recommended Audience: This course is suitable for new and experienced evaluators who work closely with the primary intended users of their evaluations.


      TEI 326: Utilizing Culturally Responsive and Racially Equitable Evaluation

      Instructors: Tracy Hilliard, PhD, Kantahyanee Murray, PhD, LaShaune Johnson, PhD, & Ashley Barnes

      Description: The field of evaluation is being challenged to use a process that considers both who is being evaluated and who is conducting the evaluation. MPHI has worked to develop useful frameworks, tools, and approaches evaluators can use to consider the ways that race and culture might influence an evaluation process; this work has resulted in a framework for conducting evaluation using a culturally responsive and racial equity lens.

      This workshop focuses on the practical use of a racial equity lens when conducting evaluation. The framework holds that culture and race are important considerations in any evaluation, because critical and substantive nuances are often missed, ignored, and/or misinterpreted when an evaluator is not aware of the culture of those being evaluated. Participants will be provided with a Template for Analyzing Programs through a Culturally Responsive and Racial Equity Lens, designed to focus deliberately on an evaluation process that takes race, culture, equity, and community context into consideration.

      Presenters will also share a “How-to Process” focused on the cultural competencies of individuals conducting evaluations, how such competencies might be improved, and strategies for doing so. This “How-to Process” grew out of work to develop a self-assessment instrument for evaluators; it is based primarily on the cultural-proficiency literature and relates specifically to components of the template. Participants will have the opportunity to engage in small-group exercises to apply the concepts contained in the template to real-world evaluation processes. Based on these experiences, participants will gain practical knowledge on the use of the lens.

      Recommended Audience: This course is designed for evaluators at any level who are interested in furthering their understanding of culturally responsive, racially equitable evaluation and its practical applications.


      TEI 327: Working with Evaluation Stakeholders

      Instructor: John Bryson, PhD

      Description:

      The purpose of this course is to help participants understand and use stakeholder identification, analysis, and influence techniques to produce a credible evaluation that enhances intended use by primary intended users.

      We will explore the analytic, managerial, political, and ethical challenges of taking stakeholders seriously in evaluations of programs, projects, and other evaluands. The focus will always be on how to address stakeholder interests and concerns in such a way that credible evaluations are created that increase use by primary intended users.

      Specifically, the course objective is to help participants understand how to design an evaluation process that:

      • makes prominent use of stakeholder identification, analysis, and influence techniques
      • involves appropriate stakeholders in appropriate ways
      • produces a credible, useful evaluation for primary intended users

      The course is designed to achieve the objective by:

      • providing a systematic approach to thinking about stakeholders, including how to think about representation issues
      • offering ways to identify who key stakeholders are and how to prioritize stakeholders in general
      • helping participants gain skill in the use of specific stakeholder identification and analysis techniques
      • providing tools that help participants understand the possibilities for dealing with different stakeholder expectations and addressing different stakeholder needs
      • offering advice on how to improve evaluation process management through wise use of stakeholder identification, analysis, and influence techniques

      Recommended Audience: Audiences for this course include those who wish to explore ways to more effectively understand and engage stakeholders and to improve the design and implementation of evaluations.


      TEI 401: Sponsored Evaluation Development

      Customized courses are offered at the request of a sponsoring company or organization to support their evaluation capacity development needs/to develop their evaluation skillsets; enrollment for a customized course is limited to identified individuals from the sponsoring company. The content and learning outcomes for a customized course are developed in collaboration between TEI faculty and the sponsoring organization. As part of the course, participants engage in case studies and other interactive exercises specific to the sponsoring organization’s context/needs. Topics frequently covered in customized courses include mixed methods research, applied measurement for evaluation, and using research, program theory, and logic models to design and evaluate programs.