Courses Offered

Course Descriptions

Applied Measurement for Evaluation

Instructor: Ann Doucette, PhD

Description: Successful evaluation depends on our ability to generate evidence attesting to the feasibility, relevance, and/or effectiveness of the interventions, services, or products we study. While theory guides our designs and how we organize our work, it is measurement that provides the evidence we use in making judgments about the quality of what we evaluate. Measurement, whether it results from self-report surveys, interviews/focus groups, observation, document review, or administrative data, must be systematic, replicable, interpretable, reliable, and valid. While hard sciences such as physics and engineering have advanced precise and accurate measurement (e.g., weight, length, mass, volume), the measurement used in evaluation studies is often imprecise and characterized by considerable error. The quality of the inferences made in evaluation studies is directly related to the quality of the measurement on which we base our judgments. Judgments that an intervention is ineffective may be flawed – the reflection of measures that are imprecise and not sensitive to the characteristics we choose to evaluate. Evaluation often attempts to compensate for imprecise measurement with increasingly sophisticated statistical procedures to manipulate data. This emphasis on statistical analysis all too often obscures the important characteristics of the measures we choose. The class will cover:

  • Assessing measurement precision: Examining the precision of measures in relation to the degree of accuracy needed for what is being evaluated. Issues to be addressed include measurement/item bias, the sensitivity of measures in terms of developmental and cultural issues, scientific soundness (reliability, validity, error, etc.), and the ability of the measure to detect change over time.
  • Quantification: Measurement is essentially assigning numbers to what is observed (directly or inferentially). Decisions about how we quantify observations, and the implications these decisions have for using the resulting data and for the objectivity and certainty we bring to the judgments made in our evaluations, will be examined. This section of the course will focus on the quality of response options and coding categories – do response options/coding categories segment the respondent sample in meaningful and useful ways?
  • Issues and Considerations – using existing measures versus developing your own measures: What to look for and how to assess whether existing measures are suitable for your evaluation project will be examined. Issues associated with the development and use of new measures will be addressed in terms of how to establish sound psychometric properties and what cautionary statements should accompany interpretation of evaluation findings based on these new measures.
  • Criteria for choosing measures: Assessing the adequacy of measures in terms of the characteristics of measurement – choosing measures that fit your evaluation theory and evaluation focus (exploration, needs assessment, level of implementation, process, impact and outcome). Measurement feasibility, practicability and relevance will be examined. Various measurement techniques will be examined in terms of precision and adequacy, as well as the implications of using screening, broad-range, and peaked tests.
  • Error-influences on measurement precision: The characteristics of various measurement techniques, assessment conditions (setting, respondent interest, etc.), and evaluator characteristics will be addressed.

Recommended Audience: This course would be of interest and benefit to anyone using quantitative (e.g., surveys) or qualitative (e.g., interviews, focus groups) measurement in their evaluations.

The course focuses heavily on the application of measurement and on the effects of sound versus poorly developed or inappropriately used measures on evaluation results. The course covers traditional measurement topics (reliability, validity, dimensionality, sensitivity to change, etc.) but emphasizes how these topics affect our evaluations, not the mathematical algorithms.
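
For readers who would like a concrete picture of what a reliability statistic looks like in practice, the short sketch below computes Cronbach's alpha, one common internal-consistency coefficient, for a hypothetical five-item scale. The data and the choice of coefficient are illustrative assumptions, not course materials; the course itself emphasizes interpretation and application rather than the underlying formulas.

  # Illustrative sketch only: a hypothetical five-item survey scale scored by
  # six respondents. Cronbach's alpha is one common internal-consistency
  # (reliability) coefficient; values near 1 indicate highly consistent items.
  import numpy as np

  items = np.array([      # rows = respondents, columns = item ratings (1-5)
      [4, 4, 3, 4, 5],
      [2, 3, 2, 2, 3],
      [5, 4, 5, 5, 4],
      [3, 3, 3, 2, 3],
      [4, 5, 4, 4, 4],
      [1, 2, 1, 2, 2],
  ])

  k = items.shape[1]                          # number of items
  item_vars = items.var(axis=0, ddof=1)       # sample variance of each item
  total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale score
  alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)

  print(f"Cronbach's alpha = {alpha:.2f}")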


Basics of Program Evaluation

Instructor: Stewart I. Donaldson, PhD

Description: With an emphasis on constructing a sound foundational knowledge base guided by the new AEA evaluator competencies, this course is designed to provide an overview of both past and contemporary perspectives on evaluation theory, method, and practice. Course topics include, but are not limited to, basic evaluation concepts and definitions; the view of evaluation as transdisciplinary; the logic of evaluation; an overview of the history of the field; distinctions between evaluation and basic and applied social science research; evaluation-specific methods; reasons and motives for conducting evaluation; central types and purposes of evaluation; objectivity, bias, design sensitivity, and validity; the function of program theory and logic models in evaluation; evaluator roles; core competencies required for conducting high quality, professional evaluation; audiences and users of evaluation; alternative evaluation models and approaches; the political nature of evaluation and its implications for practice; professional standards and codes of conduct; and emerging and enduring issues in evaluation theory, method, and practice. Although the major focus of the course is program evaluation in multiple settings (e.g., public health, education, human and social services, and international development), examples from personnel evaluation, product evaluation, organizational evaluation, and systems evaluation also will be used to illustrate foundational concepts. The course will conclude with how to plan, design, and conduct ethical and high quality program evaluations using a contingency-based and contextually/culturally responsive approach, including evaluation purposes, resources (e.g., time, budget, expertise), uses and users, competing demands, and other relevant contingencies. Throughout the course, active learning is emphasized and, therefore, the instructional format consists of mini-presentations, breakout room discussions and application exercises.

Recommended Text: Donaldson, S. I. (2021). Introduction to Theory-Driven Program Evaluation: Culturally Responsive and Strengths-Focused Applications. New York, NY: Routledge.

Recommended Audience: Audiences for this course include those who have familiarity with social science research but are unfamiliar with program evaluation, and evaluators who wish to review current theories, methods, and practices.


Culture and Evaluation

Instructor: Leona Ba, EdD

Description: This course will provide participants with the opportunity to learn and apply a step-by-step approach to conducting culturally responsive evaluations. It will use theory-driven evaluation as a framework because this framework ensures that evaluation is integrated into the design of programs. More specifically, it will follow the three-step Culturally Responsive Theory-Driven Evaluation model proposed by Bledsoe and Donaldson (2015):

  1. Develop program impact theory
  2. Formulate and prioritize evaluation questions
  3. Answer evaluation questions

Upon registration, participants will receive a copy of the book chapter discussing this model.

During the workshop, participants will reflect on their own cultural self-awareness, a prerequisite for conducting culturally responsive evaluations. In addition, they will explore strategies for applying cultural responsiveness to evaluation practice using examples from the instructor’s first-hand experience and other program evaluations. They will receive a package of useful handouts, as well as a list of selected resources.

This course uses some material from Bledsoe, K., & Donaldson, S. I. (2015). Culturally responsive theory-driven evaluation. In S. Hood, R. Hopson, & H. Frierson (Eds.), Continuing the journey to reposition culture and cultural context in evaluation theory and practice (pp. 3-27). Charlotte, NC: Information Age Publishing.

Recommended Audience: This course is recommended for commissioners or practitioners who wish to ensure their evaluations are culturally responsive.


How to Enhance the Learning Function of Evaluation: Principles and Strategies

Instructors: J. Bradley Cousins, PhD and Jill A. Chouinard, PhD

Description: Historically, organizations have conducted and used evaluation to meet internal and external accountability demands, with approaches focused on impact assessment and value for money. In practice, a rigid focus on accountability-oriented objectives can lead to evaluation outcomes that are at best symbolic. Yet we know from research that evaluations that contribute significantly to learning about program functioning and context tend to leverage higher degrees of evaluation use and provide more credible, actionable outcomes. They can be used, for example, to improve the effectiveness and enhance the sustainability of interventions.

This two-day course situates learning-oriented evaluations within the organizational landscape of evaluation options. The focus is on the value of the learning function of evaluation and practical strategies to enhance it. Participants can expect to:

1. Develop knowledge, skills, and strategies to plan useful learning-oriented evaluations in the context of traditional domestic and international development interventions.
2. Understand how collaborative approaches to evaluation (CAE) and culturally responsive evaluation (CRE) can be integrated in the context of results-based approaches.
3. Grasp evaluation’s potential to leverage planned learning and program improvement through organizational evaluation policy reform and the development of evaluation capacity building (ECB) strategies.

This course will be run with a mix of instructor input and opportunities for participants to apply what they have learned in practical activities (e.g., case analyses). Practical resources will be made available.

Recommended Audience: This course is open to new and experienced evaluators looking to augment their working knowledge of program evaluation logic and methods.


Intermediate Qualitative Data Analysis

Instructor: Delwyn Goodrick, PhD

Description: Data analysis involves creativity, sensitivity, and rigor. In its most basic form, qualitative data analysis involves some sort of labeling, coding, and clustering in order to make sense of data collected from evaluation fieldwork, interviews, and/or document analysis. This intermediate-level workshop builds on the basic coding and categorizing familiar to most evaluators and extends the array of strategies available to support rigorous interpretations. The workshop presents an array of approaches to support the analysis of qualitative data, with an emphasis on procedures for the analysis of interview data. Strategies such as thematic analysis, pattern matching, template analysis, process tracing, schema analysis, and qualitative comparative analysis are outlined and illustrated with reference to examples from evaluation and from a range of disciplines, including sociology, education, political science, and psychology. The core emphasis in the workshop is creating awareness of heuristics that support selection and application of analytic techniques that match the purpose of the evaluation, the type of data, and practical considerations such as resource constraints. While a brief overview of qualitative analysis software is provided, the workshop focuses on analysis using manual methods. A range of activities to support critical thinking and application of principles is integrated within the workshop program. Qualitative data analysis and writing go hand in hand. In the second part of the workshop, strategies for transforming analysis through processes of description, interpretation, and judgment will be presented. These issues are particularly important in the assessment of the credibility of qualitative evidence by evaluation audiences. Issues of quality, including validity, trustworthiness, and authenticity of qualitative data, are integrated throughout the workshop.

Specific issues to be addressed:

  • What are the implications of an evaluator’s worldview for selection of qualitative data analysis (QDA) strategies?
  • Are there analytic options that are best suited to particular kinds of qualitative data?
  • How can participant experiences be portrayed through QDA without fracturing the data through formal coding?
  • What types of analysis may be appropriate for particular types of evaluation (program theory, realist, transformative)?
  • What strategies can be used to address interpretive dissent when working in evaluation teams?
  • What are some ways that qualitative and quantitative findings can be integrated in an evaluation report?
  • How can I sell the value of qualitative evidence to evaluation audiences?

Participants interested in continuing their learning beyond this workshop may wish to purchase Qualitative Data Analysis: Practical Strategies by Patricia Bazeley (Sage, 2013).

Recommended Audience: This course is best suited for evaluators with some experience of basic coding processes who are looking to extend their analysis toolkit.


Introduction to Cost-Benefit and Cost-Effectiveness Analysis

Instructor: Robert D. Shand, PhD

Description: The tools and techniques of cost-benefit and cost-effectiveness analysis will be presented. The goal of the course is to provide analysts with the skills to interpret cost-benefit and cost-effectiveness analyses. Content includes identification and measurement of costs using the ingredients method; how to specify effectiveness; shadow pricing for benefits using revealed preference and contingent valuation methods; discounting; and calculation of cost-effectiveness ratios, net present value, cost-benefit ratios, and internal rates of return. Sensitivity testing and uncertainty will also be addressed. Individuals will work in groups to assess the costs, effects, and benefits applicable to selected case studies drawn from across policy fields (e.g., health, education, environmental sciences).
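
As a rough illustration of the calculations the course names (discounting, net present value, cost-benefit and cost-effectiveness ratios), here is a minimal sketch using hypothetical program costs and benefits and an assumed 3% discount rate; it is not drawn from the course case studies.

  # Illustrative sketch only: hypothetical yearly costs and monetized benefits
  # for a three-year program, discounted at an assumed 3% annual rate.

  def present_value(amounts, rate):
      """Discount a stream of yearly amounts (year 0 first) to present value."""
      return sum(a / (1 + rate) ** t for t, a in enumerate(amounts))

  rate = 0.03                           # assumed annual discount rate
  costs = [50_000, 20_000, 20_000]      # hypothetical program costs by year
  benefits = [0, 45_000, 60_000]        # hypothetical monetized benefits by year
  effect_units = 120                    # hypothetical unmonetized effect (e.g., graduates)

  pv_costs = present_value(costs, rate)
  pv_benefits = present_value(benefits, rate)

  npv = pv_benefits - pv_costs          # net present value
  bc_ratio = pv_benefits / pv_costs     # benefit-cost (cost-benefit) ratio
  ce_ratio = pv_costs / effect_units    # cost per unit of effect

  print(f"NPV = {npv:,.0f}; B/C = {bc_ratio:.2f}; cost per effect unit = {ce_ratio:,.0f}")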

Recommended Audience: This course is best suited for entry-level and mid-career evaluators with some background and experience in impact evaluation looking to complement these skills with economic evaluation methods.


Introduction to Data Analysis for Evaluators and Applied Researchers

Instructor: Dale Berger, Ph.D.

Description: In this course we will introduce and review basic data analysis tools and concepts commonly used in applied research and evaluation. The focus will be on fundamental concepts that are needed to guide decisions for appropriate data analyses, interpretations, and presentations. The goal of the course is to help participants avoid errors and improve skills as data analysts, communicators of statistical findings, and consumers of data analyses.

Topics include data screening and cleaning, selecting appropriate methods for analysis, detecting statistical pitfalls and dealing with them, avoiding silly statistical mistakes, interpreting statistical output, and presenting findings to lay and professional audiences. Examples will include applications of basic distributions and statistical tests (e.g., z, t, chi-square, correlation, regression).
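
To make this list concrete, the sketch below runs a few of the named tests (t, chi-square, correlation, simple regression) on fabricated toy data using scipy; the variable names, values, and 2x2 table are hypothetical, not examples from the course.

  # Illustrative sketch only: fabricated toy data for a few of the basic tests
  # the course reviews; the results carry no substantive meaning.
  import numpy as np
  from scipy import stats

  rng = np.random.default_rng(0)
  pretest = rng.normal(50, 10, 40)            # hypothetical pretest scores
  posttest = pretest + rng.normal(3, 5, 40)   # hypothetical posttest scores
  comparison = rng.normal(50, 10, 40)         # hypothetical comparison-group scores

  t_stat, p_t = stats.ttest_ind(posttest, comparison)              # independent-samples t-test
  r, p_r = stats.pearsonr(pretest, posttest)                       # Pearson correlation
  reg = stats.linregress(pretest, posttest)                        # simple linear regression
  chi2, p_c, dof, _ = stats.chi2_contingency([[30, 10], [22, 18]]) # chi-square test on a 2x2 table

  print(f"t = {t_stat:.2f}, r = {r:.2f}, slope = {reg.slope:.2f}, chi-square = {chi2:.2f}")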

Recommended Audience: This course is especially suited for entry-level evaluators looking to develop their expertise with the foundational logic and methods of data analysis. Mid-level professionals seeking a refresher and greater facility with data analysis will also find this course helpful.


Managing for Success: Planning, Implementation, and Reporting

Instructor: Tiffany Berry, Ph.D.

Description: Program evaluations are often complex, challenging, multi-faceted endeavors that require evaluators to juggle stakeholder interests, funder requirements, data collection logistics, and their internal teams. Fortunately, many of these challenges can be minimized with effective evaluation management. In this interactive workshop, we provide tools, resources, and strategies that intentionally build evaluators’ project management toolkit so that evaluators can manage their evaluations successfully. During Day 1, using case studies, mini-lectures, and group discussions, we explore traditional evaluation management practices, focusing on the processes and logistics of how to manage an evaluation team and the entire evaluation process from project initiation and contracting through final reporting. To reinforce and practice the content covered, participants will also engage in a variety of simulation exercises that explore how evaluation managers effectively mitigate challenges as they inevitably arise during an evaluation.

During Day 2, we continue to build participants’ evaluation management toolkit by introducing four essential, experience-tested strategies that will elevate all participants’ project management game. That is, effective evaluation management is more than a series of steps or procedures to follow; it requires a deep understanding of (1) the competencies you and your team bring to the evaluation, (2) the extent to which you are responsive to program context, (3) how you collaborate with stakeholders throughout the evaluation process, and (4) how you use strategic reporting. Through interactive activities, we’ll explore these strategies (and the interconnections among them) as well as discuss how they help evaluators “manage for success.” Throughout our discussion, we’ll also encourage participants to think critically about how each strategy facilitates evaluation management and/or prevents mismanagement. Across both days, there will be ample opportunities to share your own perspective, ask relevant questions, and apply the content covered to your own work.

Recommended Audience: This course is best suited for novice and mid-level professionals seeking to strategically build project management skills in the evaluation context.


Mixed-Methods Evaluations: Integrating Qualitative and Quantitative Approaches

Instructor: Debra J. Rog, PhD

Description: Evaluators frequently find themselves in situations in which they are collecting data through multiple methods, often both qualitative and quantitative. Too often, however, these study components are conducted and reported independently and do not maximize the explanation building that can occur through their integration.

The purpose of this course is to sensitize evaluators to the opportunities in their work for designing and implementing mixed methods, and to help them be more intentional in the ways that they design their studies to incorporate both qualitative and quantitative approaches. The course will begin with an overview of the issues involved with mixed-methods research, highlighting both the accolades and the criticisms of integrating approaches. The course will then focus on the research questions and evaluation situations that are conducive to mixed methods, and the variety of designs that are possible (e.g., parallel mixed methods that occur at the same time and are integrated in their inference; sequential designs in which one method follows another chronologically, either confirming or disconfirming the findings or providing further explanation). A key focus of the course will be on strategies for implementing mixed-methods designs, as well as analyzing and reporting data, using examples from the instructor’s work and those offered by course participants. The course will be highly interactive, with ample time for participants to work on ways of applying the course content to their own work. Participants will work in small groups on an example that will carry through the two days of the course.

Participants will be sent materials prior to the course as a foundation for the method.

Recommended Audience: The course is best suited for evaluators who have some prior experience in conducting evaluations, but have not had formal training in designing, conducting, and analyzing mixed methods studies.


Outcome and Impact Evaluation

Instructor: Melvin Mark, Ph.D.

Description: Valid assessment of the outcomes or impact of a social program is among the most challenging evaluation tasks, but also one of the most important. Multiple approaches exist for tracking or detecting a program’s outcomes, and multiple methods and designs exist for trying to estimate a program’s impact. This course will overview alternative approaches that may be more appropriate under different conditions, including monitoring approaches based on a small-t theory of the program’s chain of outcomes, as well as approaches to use when the complexity of the situation precludes placing one’s confidence in such a theory of the program. Considerable attention will be given to the experimental and quasi-experimental methods that are the foundation of contemporary impact evaluation. Related topics, including issues in the measurement of outcomes, ensuring detection of meaningful program effects, and interpreting the magnitude of effects, will be covered. The emphasis will be primarily conceptual, focusing on the logic of outcome evaluation and the conceptual and methodological nature of the approaches. Nonetheless, we’ll cover key statistical analysis methods for impact evaluation.

Recommended Audience: This course is best suited for mid-career evaluators. Familiarity with program evaluation, research methods, and statistical analysis is necessary to effectively engage in the topics covered.


Qualitative Methods

Instructor: Michael Quinn Patton, PhD

Description: Qualitative inquiries use in-depth interviews, focus groups, observational methods, document analysis, and case studies to provide rich descriptions of people, programs, and community processes. To be credible and useful, the unique sampling, design, and analysis approaches of qualitative methods must be understood and used. Qualitative data can be used for various purposes, including evaluating individualized outcomes, capturing program processes, exploring a new area of interest (e.g., to identify the unknown variables one might want to measure in greater depth/breadth), identifying unanticipated consequences and side effects, supporting participatory evaluations, assessing quality, and humanizing evaluations by portraying the people and stories behind the numbers. This class will cover the basics of qualitative evaluation, including design, case selection (purposeful sampling), data collection techniques, and beginning analysis. Ways of increasing the rigor and credibility of qualitative evaluations will be examined. Mixed-methods approaches will be included. Alternative qualitative strategies and new, innovative directions will complete the course. The strengths and weaknesses of various qualitative methods will be identified. Exercises will provide experience in applying qualitative methods and analysis in evaluations. The course will utilize content from Dr. Patton’s text, Qualitative Research and Evaluation Methods (4th ed., Sage, 2015).

Recommended Audience: This course is best suited for entry-level evaluators looking to develop their knowledge of qualitative evaluation methods. Mid-level professionals seeking a refresher on the basics of qualitative evaluation will also find this course helpful.


Systems-based Culturally Responsive Evaluation (SysCRE)

Instructor: Wanda Casillas

Description: Culturally Responsive Evaluation (CRE) is often described as a way of thinking, a stance taken, or an emerging approach to evaluation that centers culture and context in all steps of an evaluation process. As an evaluation approach, CRE is often used in service of promoting equitable outcomes across many sectors, such as education, health, and social services. However, large-scale social problems require evaluation and applied research strategies that can further our thinking about complex issues and equip us to engage with the complex and layered contextual factors that impact equity.

CRE is an essential tool in a practitioner’s toolkit when evaluating large-scale systems change efforts that emphasize equity, and CRE married with relevant and overlapping systems principles leads to a robust evaluation and applied research practice. In this course, we will engage with a core set of CRE and systems principles to anchor evaluation practice in an approach that identifies and addresses important cultural and contextual systems in which evaluations and their stakeholders are embedded.

The first day of the workshop will focus on establishing a foundation of important historical underpinnings, concepts, and tenets of CRE and systems approaches and engage with exemplars of SysCRE practice to operationalize these concepts. On Days 2 and 3 of the workshop, we will simulate a step-wise SysCRE design using a case study and other interactive exercises to inform personal and professional practices and support group learning.

Recommended Audience: This course is best suited for early- to mid-level evaluators who have familiarity with evaluation designs and theoretical approaches.


Using Research, Program Theory, and Logic Models to Design and Evaluate Programs

Instructor: Stewart I. Donaldson, PhD

Description: It is now commonplace to use research, program theory, and logic models in evaluation practice. They are often used to help design effective programs, and at other times as a means to explain how a program is understood to contribute to its intended or observed outcomes. However, this does not mean that they are always used appropriately or to the best effect. At their best, prior research, program theories, and logic models can provide an evidence base to guide action, bring conceptual clarity, motivate staff, and focus designs and evaluations. At their worst, they can divert time and attention from other critical evaluation activities, provide an invalid or misleading picture of the program, and discourage critical investigation of causal pathways and unintended outcomes. This course focuses on developing useful evidence-based program theories and logic models, and on using them effectively to guide evaluation while avoiding some of the most common traps. Application exercises are used throughout the course to demonstrate concepts and techniques: (a) ways to use social science theory and research, program theories, and logic models to positive advantage; (b) how to formulate and prioritize key evaluation questions; (c) how to gather credible and actionable evidence; (d) how to recognize and communicate the ways these tools can be used with negative results; and (e) strategies for avoiding common traps.

Recommended Text: Donaldson, S. I. (2021). Introduction to Theory-Driven Program Evaluation: Culturally Responsive and Strengths-Focused Applications.  New York, NY: Routledge.

Students may also be interested in: Credible and Actionable Evidence: The Foundation for Rigorous and Influential Evaluations (Sage).

Recommended Audience: Audiences for this course include those who have familiarity and some experience in evaluation practice, and who want to explore using stakeholder and research-informed program theories and logic models to guide the design and evaluation of programs.