Evaluation Approaches and Techniques Courses
- Comparative Effectiveness: Balancing Design with Quality Evidence
- Developmental Evaluation: Systems and Complexity
- Evaluability Assessment
- Evaluating Resource Allocation in Complex Environments
- Evaluating Training Programs: Frameworks and Fundamentals
- Internal Evaluation: Building Organizations from Within
- Linking Evaluation Questions to Analysis Techniques
- Measuring Performance and Managing for Results in Government and Nonprofit Organizations
- Mixed-Methods Evaluations: Integrating Qualitative and Quantitative Approaches
- Participatory Evaluation: Frameworks, Approaches, Appropriateness and Challenges
- Policy Analysis, Implementation and Evaluation
- Policy Evaluation and Analysis
- Practical Strategies for Improving Collaborative Approaches to Evaluation
- Utilization-Focused Evaluation
Comparative Effectiveness: Balancing Design with Quality Evidence
Instructor: Ann Doucette, PhD
Description: Evidence is the foundation on which we make judgments, decisions, and policy. Gathering evidence can be a challenging and time-intensive process. Although there are many approaches to gathering evidence, randomized clinical trials (RCTs) have remained the “gold standard” for establishing effectiveness, impact, and causality, even though strong proponents of RCTs acknowledge that they are not the only valid method, nor necessarily the optimal approach to gathering evidence. RCTs can be costly in terms of time and resources; can raise ethical concerns about excluding individuals from treatments or interventions from which they might benefit; and can be inappropriate if the intervention is not sufficiently and stably implemented, or if the program/service is so complex that such a design would be challenging at best and unlikely to yield ecologically valid results.
Comparative effectiveness (CE) has emerged as an accepted approach to gathering evidence for healthcare decision-making and policymaking. CE arose from worldwide concern about rising healthcare costs and variability in healthcare quality, and from a more immediate need for evidence of effective healthcare. RCTs, while yielding strong evidence, are time-intensive and pose significant delays in providing data on which to base timely policy and care decisions. CE offered a new approach to gathering objective evidence, emphasizing how rigorous evaluation of the data yielded across existing studies (qualitative and quantitative) can answer the questions of what works, for whom, and under what conditions. Essentially, CE is a rigorous evaluation of the impact of various intervention options, based on the existing studies available for specific populations. The CE evaluation of existing studies focuses not only on the benefits and risks of various interventions, but can also incorporate their associated costs. CE takes advantage of both quantitative and qualitative methods, using a standardized protocol to judge the strength of the evidence provided by existing studies and to synthesize it.
The basic CE questions are: Is the available evidence good enough to support high-stakes decisions? If we rely solely on RCTs, do we risk dismissing available non-RCT evidence as an insufficient basis for policy decisions? Will sufficient evidence be available to decision-makers when they need it? What alternatives can ensure that rigorous findings are available to decision-makers when they need to act? CE has become an accepted alternative to RCTs in medicine and health. While the CE approach has focused on medical interventions, it has potential for human and social interventions implemented in other areas (education, justice, environment, etc.).
This course will provide an overview of CE from an international perspective (U.S., U.K., Canada, France, Germany, Turkey), illustrating how different countries have defined and established CE frameworks; how data are gathered, analyzed, and used in healthcare decision-making; and how information is disseminated and whether it leads to change in healthcare delivery. Though CE has been targeted toward enhancing the impact of healthcare interventions, this course will consistently focus on whether and how CE (definition, methods, analytical models, dissemination strategies, etc.) can be applied to other human and social program areas (education, justice, poverty, environment, etc.).
No prerequisites are required for this one-day course.
Developmental Evaluation: Systems and Complexity
(Formerly taught as: Alternative Evaluation Designs: Implications from Systems Thinking and Complexity Theory)
Instructor: Michael Quinn Patton, PhD
Description: The field of evaluation already has a rich variety of contrasting models, competing purposes, alternative methods, and divergent techniques that can be applied to projects and organizational innovations that vary in scope, comprehensiveness, and complexity. The challenge, then, is to match the evaluation to the nature of the initiative being evaluated. This means that we need options beyond the traditional approaches (e.g., linear logic models, experimental designs, pre-post tests) when faced with systems change dynamics and initiatives that display the characteristics of emergent complexity. Important complexity concepts with implications for evaluation include uncertainty, nonlinearity, emergence, adaptation, dynamical interactions, and co-evolution.
Developmental Evaluation supports innovation development to guide adaptation to emergent and dynamic realities in complex environments. Innovations can take the form of new projects, programs, products, organizational changes, policy reforms, and system interventions. A complex system is characterized by a large number of interacting and interdependent elements in which there is no central control. Patterns of change emerge from rapid, real-time interactions that generate learning, evolution, and development – if one is paying attention and knows how to observe and capture the important and emergent patterns. Complex environments for social interventions and innovations are those in which what to do to solve problems is uncertain and key stakeholders are in conflict about how to proceed.
Developmental Evaluation involves real time feedback about what is emerging in complex dynamic systems as innovators seek to bring about systems change. Participants will learn the unique niche of developmental evaluation and what perspectives such as Systems Thinking and Complex Nonlinear Dynamics can offer for alternative evaluation approaches. The course will utilize the instructor’s book: Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use (Guilford, 2010).
Evaluability Assessment
Instructor: Debra J. Rog, PhD
Description: Increasingly, both public and private funders are looking to evaluation not only as a tool for determining the accountability of interventions, but also to add to our evidence base on what works in particular fields. With scarce evaluation resources, however, funders are interested in targeting those resources in the most judicious fashion and with the highest yield. Evaluability assessment is a tool that can inform decisions on whether a program or initiative is suitable for an evaluation and the type of evaluation that would be most feasible, credible, and useful.
This course will provide students with the background, knowledge, and skills needed to conduct an evaluability assessment. Using materials and data from actual EA studies and programs, students will participate in the various stages of the method, including the assessment of the logic of a program’s design and the consistency of its implementation; the examination of the availability, quality, and appropriateness of existing measurement and data capacities; the analysis of the plausibility that the program/initiative can achieve its goals; and the assessment of appropriate options for either evaluating the program, improving the program design/implementation, or strengthening the measurement. The development and analysis of logic models will be stressed, and an emphasis will be placed on the variety of products that can emerge from the process.
Students will be sent several articles prior to the course as a foundation for the method.
Prerequisites: Background in evaluation is useful and desirable, as is familiarity with conducting program level site visits.
Evaluating Resource Allocation in Complex Environments
Instructor: Doreen Cavanaugh, PhD
Description: Evaluators are increasingly asked to examine the efficiency as well as the effectiveness of programs and interventions. This course puts systems change under a microscope by examining three essential infrastructure elements of successful program implementation – collaboration, leadership, and resource allocation – and the methods used to evaluate them.
- Collaboration – Programs often seek to achieve both efficiency and effectiveness by improving collaboration across participating stakeholders. This course will look at different types of collaboration, and ways to evaluate the impact of partnerships and collaboration on project/program outcomes.
- Leadership – Collaborative frameworks yield new styles of leadership, the effect of which needs to be taken into account in evaluating a system. This course will provide participants with an understanding of different leadership styles, linking the style to the project/program objectives, with an emphasis on methods of evaluating the effect of leadership on intermediate and long-term project/program outcomes.
- Resource Allocation – In changing systems, human and financial resources are often reallocated. This course examines the role of resource allocation in project/program outcomes and how to evaluate the resulting effects of resource reallocation on systems change and project/program outcomes. Participants will learn how to use a method of tracking, called resource mapping, to determine whether the resources allocated are best used for achieving a program’s stated goals and objectives.
- Methods – Resource maps help decision-makers identify gaps, inefficiencies, overlaps, and opportunities for collaboration with all participating partners. Evaluators can use this information to identify which resources might be combined in pooled, braided, or blended arrangements that assure optimal outcomes for projects and/or programs.
On Day 1, participants will use examples from their own experience to apply the essential infrastructure elements of collaboration, leadership, and resource allocation to a real-life evaluation situation.
Day 2 will focus on ways to evaluate the contributions of collaboration, leadership, and resource allocation strategies to systems change goals, outcomes, and impact.
Evaluating Training Programs: Frameworks and Fundamentals
Instructor: Ann Doucette, PhD
Description: The evaluation of training programs typically emphasizes participants’ initial acceptance of and reaction to training content; learning, knowledge, and skill acquisition; participant performance and behavioral application of training; and benefits at the organizational and societal levels that result from training participation. The evaluation of training programs, especially the behavioral application of content and organizational benefits from training, continues to be an evaluation challenge. Today’s training approaches are wide-ranging, including classroom-type presentations, self-directed online study courses, online tutorials and coaching components, supportive technical assistance, and so forth. Evaluation approaches must be sufficiently flexible to accommodate these training modalities and the individual and organizational outcomes that result from training efforts.
The Kirkpatrick (1959, 1976) training model has been a longstanding evaluation approach; however, it is not without criticism or suggested modification. The course provides an overview of two training program evaluation frameworks: 1) the Kirkpatrick model and modifications, which emphasizes participant reaction, learning, behavioral application and organizational benefits, and 2) the Concerns-based Adoption Model (CBAM), a diagnostic approach that assesses stages of participant concern about how training will affect individual job performance, describes how training will be configured and practiced within the workplace, and gauges the actual level of training use.
The course is designed to be interactive and to provide a practical approach for planning (those leading or commissioning training evaluations), implementing, conducting or managing training evaluations. The course covers an overview of training evaluation models; pre-training assessment and training program expectations; training evaluation planning; development of key indicators, metrics and measures; training evaluation design; data collection – instrumentation and administration, data quality; reporting progress, change, results; and, disseminating findings and recommendations – knowledge management resulting from training initiatives. Case examples will be used throughout the course to illustrate course content.
Internal Evaluation: Building Organizations from Within
Instructor: Arnold Love, PhD
Description: Internal evaluations are conducted by an organization’s own staff members rather than by outside evaluators. Internal evaluators have the enormous advantage of an insider’s knowledge so they can rapidly focus evaluations on areas managers and staff know are important, develop systems that spot problems before they occur, constantly evaluate ways to improve service delivery processes, strengthen accountability for results, and build organizational learning that empowers staff and program participants alike.
This course begins with the fundamentals of designing and managing effective internal evaluation, including an examination of internal evaluation with its advantages and disadvantages, understanding internal evaluation within the organizational context, recognizing both positive and potentially negative roles for internal evaluators, defining the tasks of managers and evaluators, identifying the major steps in the internal evaluation process, strategies for selecting the right internal evaluation tools, and key methods for making information essential for decision making available to management, staff, board members, and program participants.
The second day will focus on practical ways of designing and managing internal evaluations that make a difference, including: methods for reducing the potential for bias and threats to validity, practical steps for organizing the internal evaluation function, outlining the specific skills the internal evaluator needs, strategies to build internal evaluation capacity in your organization, and ways for building links between internal evaluation and organizational development. Teaching will be interactive, combining presentations with opportunities for participation and discussion. Time will be set aside on the second day for an in-depth discussion of key issues and concerns raised by participants. The instructor’s book on Internal Evaluation: Building Organizations from Within (Sage) is provided with other resource materials.
Linking Evaluation Questions to Analysis Techniques
Instructor: Melvin M. Mark, PhD
Description: Statistics are a mainstay in the toolkit of program and policy evaluators. Human memory being what it is, however, even evaluators with reasonable statistical training often forget the basics over the years. And the basics aren’t always enough. If evaluators are going to make sensible use of consultants, communicate effectively with funders, and understand others’ evaluation reports, they often need at least a conceptual understanding of relatively complex, recently developed statistical techniques. The purposes of this course are: to link common evaluation questions with appropriate statistical procedures; to offer a strong conceptual grounding in several important statistical procedures; and to describe how to interpret statistical results in ways that are principled and persuasive to intended audiences. The general format for the class will be to start with an evaluation question and then discuss the choice and interpretation of the best-suited statistical test(s). Emphasis will be on creating a basic understanding of what statistical procedures do, when to use them, and why, and then on how to learn more from the data. Little attention is given to equations or computer programs; the emphasis is instead on conceptual understanding and practical choices. Within a framework of common evaluation questions, statistical procedures and principled data inquiry will be explored.
(A) More fundamental topics to be covered include (1) basic data quality checks and basic exploratory data analysis procedures, (2) basic descriptive statistics, (3) the core functions of inferential statistics (why we use them), (4) common inferential statistics, including t-tests, the correlation coefficient, and chi-square, and (5) the fundamentals of regression analysis.
(B) For certain types of evaluation questions, more complex statistical techniques need to be considered. More complex techniques to be discussed (again, at a conceptual level) include (1) structural equation modeling, (2) multi-level modeling, and (3) cluster analysis and other classification techniques.
(C) Examples of methods for learning from data (i.e., for “snooping” with validity, making principled new discoveries, and reporting findings more persuasively) will include (1) planned and unplanned tests of moderation, (2) graphical methods for unequal treatment effects, (3) use of previously discussed techniques such as clustering, (4) identifying and describing converging patterns of evidence, and (5) iterating between findings and explanations.
Each participant will receive a set of readings and current support materials.
Prerequisites: Familiarity with basic statistics.
Measuring Performance and Managing for Results in Government and Nonprofit Organizations
Instructor: Theodore H. Poister, PhD
Description: A commitment to performance measurement has become pervasive throughout government, the nonprofit sector, foundations, and other nongovernmental organizations in response to demands for increased accountability, pressures for improved quality and customer service, and mandates to “do more with less,” as well as the drive to strengthen the capacity for results-oriented management among professional public and nonprofit administrators.
While the idea of setting goals, identifying and monitoring measures of success in achieving them, and using the resulting performance information in a variety of decision venues might appear to be a straightforward process, a myriad of conceptual, political, managerial, cultural, psychological, and organizational constraints – as well as serious methodological issues – make this a very challenging enterprise. This course presents a step-by-step process for designing and implementing effective performance management systems in public and nonprofit agencies, with an emphasis on maximizing their effectiveness in improving organizational and program performance. The focus is on the interplay between performance measurement and management, as well as the relationships among performance measurement, program evaluation, and evidence-based policy. All topics are illustrated with examples from a wide variety of program areas, including those drawn from the instructor’s experience in local government services, child support enforcement, public health, nursing regulation, and transportation.
Day 1 overviews the basics of performance measurement and looks at frameworks for identifying outcomes and other dimensions of performance, data sources and the definition of performance indicators, and criteria for systematically evaluating the usefulness of potential indicators. Day 2 looks at the analysis and reporting of performance information and its incorporation in a number of critical management processes such as strategic planning, results based budgeting, program management and evaluation, quality improvement, performance contracting and grants management, stakeholder engagement, and the management of employees and organizations. The course concludes with a discussion of the “process side” of the design and implementation of performance measures and discusses strategies for building effective performance management systems.
The text, Managing and Measuring Performance in Public and Nonprofit Organizations by Theodore H. Poister, Maria P. Aristigueta, and Jeremy Hall, 2nd Edition (Jossey-Bass, 2015), case studies, and other materials are provided.
Mixed-Methods Evaluations: Integrating Qualitative and Quantitative Approaches
Instructor: Debra J. Rog, PhD
Description: Evaluators frequently find themselves in evaluation situations in which they are collecting data through multiple methods, often both qualitative and quantitative. Too often, however, these study components are conducted and reported independently, and do not maximize the explanation building that can occur through their integration.
The purpose of this course is to sensitize evaluators to the opportunities in their work for designing and implementing mixed methods, and to be more intentional in the ways that they design their studies to incorporate both qualitative and quantitative approaches. The course will begin with an overview of the issues involved with mixed-methods research, highlighting the accolades and the criticisms of integrating approaches. The course will then focus on the research questions and evaluation situations that are conducive for mixed-methods, and the variety of designs that are possible (e.g., parallel mixed methods that occur at the same time and are integrated in their inference; sequential designs in which one method follows another chronologically, either confirming or disconfirming the findings, or providing further explanation). A key focus of the course will be on strategies for implementing mixed-methods designs, as well as analyzing and reporting data, using examples from the instructor’s work and those offered by course participants. The course will be highly interactive, with ample time for participants to work on ways of applying the course to their own work. Participants will work in small groups on an example that will carry through the two days of the course.
Participants will be sent materials prior to the course as a foundation for the method.
Prerequisites: Background in evaluation is useful and desirable.
Participatory Evaluation: Frameworks, Approaches, Appropriateness and Challenges
Instructor: Ann Doucette, PhD
Description: Participatory evaluation builds on a sense of active construction and ownership, shared by evaluators and stakeholders, regarding the evaluation process, what is learned through the evaluation, and what actions might be taken as a result. This course focuses on the practical aspects of participatory evaluation (improved utilization of evaluation, enhanced decision-making relevance, etc.) as well as its transformative features (empowerment of program participants, activation for social change, etc.). The course covers the principles of participatory evaluation; decision processes for determining whether participatory evaluation is an appropriate approach; the role of the evaluator; stakeholder selection procedures; what to look for in, and how to build, the evaluation capacities of a participatory evaluation team; managing power, status differentials, and conflicts; and the advantages and disadvantages of conducting participatory evaluations. The course emphasizes practical application and incorporates small-group exercises, using hypothetical and actual case examples from international and U.S. evaluations.
Course participants are encouraged to submit their own participatory evaluation case examples, or questions about participatory evaluation, prior to the course for inclusion in the course discussions. The course is purposefully geared toward practicing evaluators, those teaching evaluation, and those overseeing and/or commissioning evaluation.
Policy Analysis, Implementation and Evaluation
Instructor: Doreen Cavanaugh, PhD
Description: Policy drives the decisions and actions that shape our world and affect the wellbeing of individuals around the globe. It forms the foundation of every intervention, and yet the underlying assumptions and values are often not thoroughly examined in many evaluations. In this course students will explore the policy development process, study the theoretical basis of policy and examine the logical sequence by which a policy intervention is to bring about change. Participants will explore several models of policy analysis including the institutional model, process model and rational model.
Participants will experience a range of policy evaluation methods to systematically investigate the effectiveness of policy interventions, implementation and processes, and to determine their merit, worth or value in terms of improving the social and economic conditions of different stakeholders. The course will differentiate evaluation from monitoring and address several barriers to effective policy evaluation including: goal specification and goal change, measurement, targets, efficiency and effectiveness, values, politics, increasing expectations. The course will present models from a range of policy domains. At the beginning of the 2-day course, participants will select a policy from their own work to apply and use as an example throughout the class. Participants will develop the components of a policy analysis and design a policy evaluation.
Policy Evaluation and Analysis
Instructor: Gary T. Henry, PhD
Description: Policy evaluation and analysis produce evidence intended to influence policymaking. Just as there are many types of evaluation, policy analysis is conducted in different ways and for different purposes. One type of policy analysis – scientific policy analysis – has much in common with policy evaluation. Both usually involve an independent assessment of the social problem that is to be addressed through government action and an assessment of the costs and consequences of relevant policy alternatives. Another type of policy analysis is labeled professional and is intended to have a more direct short-term influence on policy, often using data from previous evaluations and extrapolating results to a new setting. Advocacy policy analysis selectively uses data to make a case for a pre-determined policy position.
This course will explore the types of policy analysis and the types of evaluation that are most likely to be influential in the policy process. Participants will develop major components of a professional policy analysis and design a policy evaluation. In addition, the class will focus on the development of a communication strategy for a policy evaluation.
Practical Strategies for Improving Collaborative Approaches to Evaluation
Instructor: J. Bradley Cousins, PhD
Description: Evaluators necessarily interact with members of the program community or stakeholders when planning and implementing an evaluation. In collaborative approaches to evaluation (CAE), evaluators actually work with stakeholders to jointly produce evaluative knowledge. Many approaches qualify, but common examples include participatory evaluation, empowerment evaluation, and culturally responsive evaluation. Even well-known approaches such as utilization-focused evaluation and contribution analysis are, at some level, collaborative. What do evaluators need to know about planning and conducting CAE?
In this two-day course, participants will learn when to use CAE, how to use the approach, and what it can accomplish. The course will be structured by a set of recently developed and validated ‘evidence-based principles for CAE’ (Cousins, 2020; Shulha et al., 2016). The principles are comprehensive and cover common issues and challenges associated with CAE context, purposes, process implementation, and consequences. Although there exists a range of applications of the principles, they are primarily intended to guide CAE practice.
The course is open to new and experienced evaluators looking to augment their working knowledge of program evaluation logic and methods. The course will be run with a mix of instructor input and opportunities for participants to apply what they have learned in practical activities. Practical resources, including a CAE handbook, will be made available.
Cousins, J. B., Whitmore, E., & Shulha, L. M. (2013). Arguments for a common set of principles for collaborative inquiry in evaluation. American Journal of Evaluation, 34(1), 7-22. doi: 10.1177/1098214012464037
Shulha, L. M., Whitmore, E., Cousins, J. B., Gilbert, N., & Al Hudib, H. (2016). Introducing evidence-based principles to guide collaborative approaches to evaluation: Results of an empirical process. American Journal of Evaluation, 37(2), 193-215. doi: 10.1177/1098214015615230
Cousins, J. B. (Ed.). (2020). Collaborative approaches to evaluation: Principles in use. Thousand Oaks, CA: SAGE.
Utilization-Focused Evaluation
Instructor: Michael Quinn Patton, PhD
Description: Utilization-Focused Evaluation begins with the premise that evaluations should be judged by their utility and actual use; therefore, evaluators should facilitate the evaluation process and design any evaluation with careful consideration of how everything that is done, from beginning to end, will affect use. Use concerns how real people in the real world apply evaluation findings and experience the evaluation process. Therefore, the focus in utilization-focused evaluation is on intended use by intended users.
Utilization-focused evaluation is a process for helping primary intended users select the most appropriate content, model, methods, theory, and uses for their particular situation. Situational responsiveness guides the interactive process between evaluator and primary intended users. A psychology of use undergirds and informs utilization-focused evaluation: intended users are more likely to use evaluations if they understand and feel ownership of the evaluation process and findings; they are more likely to understand and feel ownership if they’ve been actively involved; by actively involving primary intended users, the evaluator is training users in use, preparing the groundwork for use, and reinforcing the intended utility of the evaluation every step along the way.
Participants will learn:
- Key factors in doing useful evaluations, common barriers to use, and how to overcome those barriers.
- Implications of focusing an evaluation on intended use by intended users.
- Options for evaluation design and methods based on situational responsiveness, adaptability and creativity.
- Ways of building evaluation into the programming process to increase use.
The course will utilize the instructor’s text: Utilization-Focused Evaluation, 4th Ed. (Sage, 2008).