February 26-27

Evaluation Management

Instructor: Tessie Catsambas, MPP

Description: The purpose of this course is to provide new and experienced evaluation professionals and funders with strategies, tools and skills to: (1) develop realistic evaluation plans; (2) negotiate needed adjustments when issues arise; (3) organize and manage evaluation teams; (4) monitor evaluation activities and budgets; (5) protect evaluation independence and rigor while responding to client needs; and (6) ensure the quality of evaluation products and briefings.

Evaluation managers have a complex job. They oversee the evaluation process and are responsible for safeguarding methodological integrity and for managing evaluation activities and budgets. In many cases they must also manage people, including clients, various stakeholders, and other evaluation team members. Evaluation managers shoulder the responsibility for the success of the evaluation, frequently dealing with unexpected challenges and making decisions that influence the quality and usefulness of the evaluation.

Against a backdrop of demanding technical requirements and a dynamic political environment, the goal of evaluation management is to develop valid and useful measurement information and findings with the available resources and time, and to ensure the quality of the process, products, and services included in the contract. Management decisions influence methodological decisions and vice versa, as the choice of method has cost implications.

The course methodology will be experiential and didactic, drawing on participants’ experience and engaging them with diverse material. It will include paper and online tools for managing teams, work products, and clients; an in-class simulation game with expert judges; case examples; readings; and a master checklist of processes and sample forms for organizing and managing an evaluation effectively. At the end of this training, participants will be prepared to follow a systematic process, with supporting tools, for commissioning and managing evaluations, and will feel more confident leading evaluation teams and negotiating with clients and evaluators for better evaluations.


February 28-March 1

Introduction to Cost-Benefit and Cost-Effectiveness Analysis

Instructor: Robert D. Shand, PhD

Description: The tools and techniques of cost-benefit and cost-effectiveness analysis will be presented. The goal of the course is to provide analysts with the skills to interpret cost-benefit and cost-effectiveness analyses. Content includes identification and measurement of costs using the ingredients method; specification of effectiveness; shadow pricing for benefits using revealed preference and contingent valuation methods; discounting; and calculation of cost-effectiveness ratios, net present value, cost-benefit ratios, and internal rates of return. Sensitivity testing and uncertainty will also be addressed. Participants will work in groups to assess the costs, effects, and benefits applicable to case studies selected from across policy fields (e.g., health, education, environmental sciences).
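The summary measures named above follow directly from standard discounting formulas. As a minimal sketch, the snippet below computes net present value, the benefit-cost ratio, an internal rate of return (via bisection), and a cost-effectiveness ratio for an invented program; all figures are illustrative assumptions, not course data.

```python
# Illustrative calculation of standard cost-benefit summary measures.
# All program figures below are invented for demonstration.

def npv(rate, cash_flows):
    """Net present value of annual cash flows; cash_flows[0] occurs now (year 0)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-7):
    """Internal rate of return by bisection: the discount rate at which
    NPV is zero (assumes an investment-type stream with one sign change)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Invented program: $100 cost up front, $40 in benefits per year for 4 years
costs = [100, 0, 0, 0, 0]
benefits = [0, 40, 40, 40, 40]
net = [b - c for b, c in zip(benefits, costs)]
rate = 0.05  # chosen social discount rate (an assumption)

pv_benefits = npv(rate, benefits)
pv_costs = npv(rate, costs)
print(f"Net present value: {npv(rate, net):.2f}")          # PV(benefits) - PV(costs)
print(f"Benefit-cost ratio: {pv_benefits / pv_costs:.2f}")
print(f"Internal rate of return: {irr(net):.2%}")

# Cost-effectiveness ratio: cost per unit of effect when benefits
# are not monetized (e.g., cost per case averted; effect count invented)
effects = 12
print(f"Cost-effectiveness: {pv_costs / effects:.2f} per case averted")
```

A project passes the cost-benefit test when NPV is positive, equivalently when the benefit-cost ratio exceeds one or the IRR exceeds the chosen discount rate.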


February 28-March 1

Implementation Analysis for Feedback on Program Progress and Results

Instructor: Arnold Love, PhD

Description: Many programs do not achieve intended outcomes because of how they are implemented. Thus, implementation analysis (IA) is very important for policy and funding decisions. IA fills the methodological gap between outcome evaluations that treat a program as a “black box” and process evaluations that present a flood of descriptive data. IA provides essential feedback on the “critical ingredients” of a program, and helps drive change through an understanding of factors affecting implementation and short-term results. Topics include: importance of IA; conceptual and theoretical foundations of IA; how IA drives change and complements other program evaluation approaches; major models of IA and their strengths/weaknesses; how to build an IA framework and select appropriate IA methods; concrete examples of how IA can keep programs on-track, spot problems early, enhance outcomes, and strengthen collaborative ventures; and suggestions for employing IA in your organization. Detailed course materials and in-class exercises are provided.


March 2

Intermediate Cost-Benefit and Cost-Effectiveness Analysis

Instructor: Joseph Cordes, PhD

Description: The Intermediate Cost-Benefit Analysis course provides a more advanced and detailed review of the principles of social cost and social benefit estimation than is provided in TEI’s Introduction to Cost-Benefit and Cost-Effectiveness Analysis. Working with the instructor, students will undertake hands-on estimation of the costs and benefits of actual programs in the computer lab. The objective is to develop the ability both to critically evaluate and use cost-benefit analyses of programs in the public and nonprofit sectors, and to use basic cost-benefit analysis tools to actively undertake such analyses. Topics covered in the course will include:

I. Principles of Social Cost and Social Benefit Estimation

  1. Social Cost Estimation: (a) components (capital, operating, administrative); (b) budgetary and social opportunity cost
  2. Social Benefit Estimation: (a) social vs. private benefits; (b) revealed benefit measures (price/cost changes in the primary market, price/cost changes in analogous markets, benefits inferred from market trade-offs, and costs/damages avoided as benefit measures)
  3. Stated Preference Measures: inferring benefits from survey data
  4. Benefit/Cost Transfer: borrowing estimates of benefits and costs from elsewhere
  5. Timing of Benefits and Costs: (a) discounting and net present value; (b) dealing with inflation; (c) choosing a discount rate
  6. Presenting Results: (a) sensitivity analysis (partial sensitivity analysis, best/worst case scenarios, break-even analysis, and Monte Carlo analysis); (b) present value of net social benefits; (c) benefit-cost ratio; (d) internal rate of return

II. Social Cost and Social Benefit Estimation in Practice

The use of the above principles of cost and benefit estimation will be illustrated using data drawn from several actual benefit-cost analyses of real programs. The cases will be chosen to illustrate the application of benefit/cost estimation principles to social, health, and environmental programs. Working with the instructor in the computer lab, students will create a benefit-cost analysis template and then use that template to estimate social benefits and social costs and to present a benefit-cost bottom line.
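The Monte Carlo sensitivity analysis listed under Presenting Results can be sketched briefly: draw each uncertain parameter from an assumed distribution, recompute the bottom line many times, and summarize the resulting spread. The distributions and figures below are invented for illustration; this is not the lab template used in the course.

```python
# Minimal sketch of Monte Carlo sensitivity analysis for a
# benefit-cost bottom line. All parameters are invented assumptions.
import random

def npv(rate, cash_flows):
    """Net present value; cash_flows[0] occurs now (year 0)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

random.seed(1)  # fixed seed so the run is reproducible
trials = 10_000
results = []
for _ in range(trials):
    # Invented uncertainty: annual benefit ~ Normal(40, 8),
    # up-front cost ~ Uniform(90, 110); 4-year horizon, 5% discount rate
    benefit = random.gauss(40, 8)
    cost = random.uniform(90, 110)
    results.append(npv(0.05, [-cost] + [benefit] * 4))

results.sort()
mean_npv = sum(results) / trials
share_positive = sum(r > 0 for r in results) / trials
print(f"Mean NPV: {mean_npv:.1f}")
print(f"Share of trials with NPV > 0: {share_positive:.0%}")
print(f"90% interval: [{results[int(0.05 * trials)]:.1f}, "
      f"{results[int(0.95 * trials)]:.1f}]")
```

Reporting the share of trials with a positive NPV, alongside an interval rather than a single point estimate, conveys how robust the bottom line is to the stated uncertainty.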

Prerequisites: This is an intermediate level course. Participants are assumed to have knowledge of or experience with cost-benefit and/or cost-effectiveness analysis equivalent to the TEI course Introduction to Cost-Benefit and Cost-Effectiveness Analysis.


March 2-3

Evaluating Training Programs: Frameworks and Fundamentals

Instructor: Ann Doucette, PhD

Description: The evaluation of training programs typically emphasizes participants’ initial acceptance of and reaction to training content; learning, knowledge, and skill acquisition; participant performance and behavioral application of training; and benefits at the organizational and societal levels that result from training participation. The evaluation of training programs, especially of the behavioral application of content and of organizational benefits from training, continues to be an evaluation challenge. Today’s training approaches are wide-ranging, including classroom-type presentations, self-directed online study courses, online tutorials and coaching components, supportive technical assistance, and so forth. Evaluation approaches must be sufficiently flexible to accommodate these training modalities and the individual and organizational outcomes that result from training efforts.

The Kirkpatrick (1959, 1976) training model has been a longstanding evaluation approach; however, it is not without criticism or suggested modification. The course provides an overview of two training program evaluation frameworks: 1) the Kirkpatrick model and its modifications, which emphasize participant reaction, learning, behavioral application, and organizational benefits; and 2) the Concerns-Based Adoption Model (CBAM), a diagnostic approach that assesses stages of participant concern about how training will affect individual job performance, describes how training will be configured and practiced within the workplace, and gauges the actual level of training use.

The course is designed to be interactive and to provide a practical approach for those planning, commissioning, implementing, conducting, or managing training evaluations. The course covers an overview of training evaluation models; pre-training assessment and training program expectations; training evaluation planning; development of key indicators, metrics, and measures; training evaluation design; data collection (instrumentation and administration, data quality); reporting progress, change, and results; and disseminating findings and recommendations, including knowledge management resulting from training initiatives. Case examples will be used throughout the course to illustrate course content.


March 5

Using Program Evaluation in Nonprofit Environments

Instructor: Kathryn Newcomer, PhD

Description: Funders and oversight boards typically need data on the results obtained by the programs they fund. Within foundations, program officers want information about grantees, and about the lines of effort they fund, to guide planning and the future allocation of resources. Executive officers and members of the boards that oversee nonprofit service providers also want to know what works and what does not. This class provides the background that program officers and overseers need to understand how evaluation can serve their information needs and how to assess the quality of the evidence they receive.

Drawing upon cases from foundations and nonprofits, the session will help attendees:

  • Clarify where to start in using evaluation to improve nonprofit social service programs
  • Learn what/who drives program evaluation and performance measurement in public and nonprofit service providers
  • Explore uses of evaluation and outcomes assessment in the non-profit sector
  • Understand how to frame useful scopes of work (SOWs) and requests for proposals (RFPs) for evaluations and performance measurement systems
  • Identify and apply relevant criteria in choosing contractors and consultants to provide evaluation assistance
  • Discuss challenges to measurement of social service outcomes
  • Understand what questions to ask of internal evaluation staff and outside consultants about the quality of their work

March 6

Utilization-Focused Evaluation

Instructor: Michael Quinn Patton, PhD

Description: Utilization-Focused Evaluation begins with the premise that evaluations should be judged by their utility and actual use; therefore, evaluators should facilitate the evaluation process and design any evaluation with careful consideration of how everything that is done, from beginning to end, will affect use. Use concerns how real people in the real world apply evaluation findings and experience the evaluation process.  Therefore, the focus in utilization-focused evaluation is on intended use by intended users.

Utilization-focused evaluation is a process for helping primary intended users select the most appropriate content, model, methods, theory, and uses for their particular situation.  Situational responsiveness guides the interactive process between evaluator and primary intended users.  A psychology of use undergirds and informs utilization-focused evaluation:  intended users are more likely to use evaluations if they understand and feel ownership of the evaluation process and findings; they are more likely to understand and feel ownership if they’ve been actively involved; by actively involving primary intended users, the evaluator is training users in use, preparing the groundwork for use, and reinforcing the intended utility of the evaluation every step along the way.

Participants will learn:

  • Key factors in doing useful evaluations, common barriers to use, and how to overcome those barriers.
  • Implications of focusing an evaluation on intended use by intended users.
  • Options for evaluation design and methods based on situational responsiveness, adaptability and creativity.
  • Ways of building evaluation into the programming process to increase use.

The course will utilize the instructor’s text: Utilization-Focused Evaluation, 4th ed. (Sage, 2008).


March 7

Effective Reporting Strategies for Evaluators

Instructor: Kathryn Newcomer, PhD

Description: The use and usefulness of evaluation work are highly affected by the effectiveness of reporting strategies and tools. Care in crafting both the style and substance of findings and recommendations is critical to ensure that stakeholders pay attention to the message. Skill in presenting sufficient information without overwhelming the audience is essential to raise the likelihood that potential users will be convinced of both the relevance and the validity of the data. This course will provide guidance and practical tips on reporting evaluation findings. Attention will be given to the selection of appropriate reporting strategies and formats for different audiences and to the preparation of effective executive summaries; clear analytical summaries of quantitative and qualitative data; user-friendly tables and figures; discussions of limitations to measurement validity, generalizability, causal inference, statistical conclusion validity, and data reliability; and useful recommendations.


March 7-8

Working with Evaluation Stakeholders

Instructor: John Bryson, PhD

Description: Working with stakeholders is a fact of life for evaluators. That interaction can be productive and beneficial, leading to evaluation studies that inform decisions and produce positive outcomes for decision makers and program recipients. Or it can be draining and conflictual for both the evaluator and the stakeholders, leading to studies that are misguided, cost too much, take too long, never get used, or never get done at all. This makes stakeholder engagement a critically important topic for evaluators to explore. This course focuses on strategies and techniques for identifying the stakeholders who can and will be most beneficial to the achievement of study goals and for building a productive working relationship with them. Stakeholder characteristics such as knowledge of the program, power and ability to influence, and willingness to participate will be analyzed, and strategies and techniques will be presented for successfully engaging stakeholders in effective collaboration. Detailed course materials, case examples, and readings are provided to illuminate course content and extend its long-term usefulness.


March 7-8

Developmental Evaluation: Systems and Complexity

(Formerly taught as: Alternative Evaluation Designs: Implications from Systems Thinking and Complexity Theory)

Instructor: Michael Quinn Patton, PhD

Description: The field of evaluation already has a rich variety of contrasting models, competing purposes, alternative methods, and divergent techniques that can be applied to projects and organizational innovations that vary in scope, comprehensiveness, and complexity. The challenge, then, is to match the evaluation to the nature of the initiative being evaluated. This means that we need options beyond the traditional approaches (e.g., linear logic models, experimental designs, pre-post tests) when faced with systems-change dynamics and initiatives that display the characteristics of emergent complexity. Important complexity concepts with implications for evaluation include uncertainty, nonlinearity, emergence, adaptation, dynamical interactions, and co-evolution.

Developmental Evaluation supports innovation development to guide adaptation to emergent and dynamic realities in complex environments. Innovations can take the form of new projects, programs, products, organizational changes, policy reforms, and system interventions. A complex system is characterized by a large number of interacting and interdependent elements with no central control. Patterns of change emerge from rapid, real-time interactions that generate learning, evolution, and development, if one is paying attention and knows how to observe and capture the important emergent patterns. Complex environments for social interventions and innovations are those in which how to solve problems is uncertain and key stakeholders are in conflict about how to proceed.

Developmental Evaluation involves real-time feedback about what is emerging in complex dynamic systems as innovators seek to bring about systems change. Participants will learn the unique niche of developmental evaluation and what perspectives such as systems thinking and complex nonlinear dynamics can offer for alternative evaluation approaches. The course will utilize the instructor’s book: Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use (Guilford, 2010).


March 8-10

Outcome and Impact Assessment

Instructor: Melvin Mark, PhD

Description: Valid assessment of the outcomes or impact of a social program is among the most challenging evaluation tasks, but also one of the most important. This course will review monitoring and tracking approaches to assessing outcomes as well as the experimental and quasi-experimental methods that are the foundation for contemporary impact evaluation. Attention will also be given to issues related to the measurement of outcomes, ensuring detection of meaningful program effects, and interpreting the magnitude of effects. Emphasis will mainly be on the logic of outcome evaluation and the conceptual and methodological nature of the approaches, including research design and associated analysis issues. Nonetheless, some familiarity with social science methods and statistical analysis is necessary to effectively engage the topics covered in this course.

Prerequisites: At least some background in social science methods and statistical analysis or direct experience with outcome measurement and impact assessment designs.


March 9-10

Mixed-Methods Evaluations: Integrating Qualitative and Quantitative Approaches

Instructor: Debra J. Rog, PhD

Description: Evaluators frequently find themselves in situations in which they are collecting data through multiple methods, often both qualitative and quantitative. Too often, however, these study components are conducted and reported independently and fail to take advantage of the explanation building that can occur through their integration.

The purpose of this course is to sensitize evaluators to the opportunities in their work for designing and implementing mixed methods, and to help them be more intentional in the ways they design their studies to incorporate both qualitative and quantitative approaches. The course will begin with an overview of the issues involved in mixed-methods research, highlighting both the accolades and the criticisms of integrating approaches. The course will then focus on the research questions and evaluation situations that are conducive to mixed methods, and on the variety of designs that are possible (e.g., parallel designs in which the methods occur at the same time and are integrated in their inference; sequential designs in which one method follows another chronologically, either confirming or disconfirming the findings or providing further explanation). A key focus of the course will be strategies for implementing mixed-methods designs, as well as analyzing and reporting data, using examples from the instructor’s work and those offered by course participants. The course will be highly interactive, with ample time for participants to work on ways of applying the course to their own work. Participants will work in small groups on an example that will carry through the two days of the course.

Participants will be sent materials prior to the course as a foundation for the method.

Prerequisites: Background in evaluation is useful and desirable.


March 9-10

Policy Analysis, Implementation and Evaluation

Instructor: Doreen Cavanaugh, PhD

Description: Policy drives the decisions and actions that shape our world and affect the wellbeing of individuals around the globe. It forms the foundation of every intervention, and yet its underlying assumptions and values are often not thoroughly examined in evaluations. In this course participants will explore the policy development process, study the theoretical basis of policy, and examine the logical sequence by which a policy intervention is intended to bring about change. Participants will explore several models of policy analysis, including the institutional, process, and rational models.

Participants will experience a range of policy evaluation methods for systematically investigating the effectiveness of policy interventions, implementation, and processes, and for determining their merit, worth, or value in terms of improving the social and economic conditions of different stakeholders. The course will differentiate evaluation from monitoring and address several barriers to effective policy evaluation, including goal specification and goal change, measurement, targets, efficiency and effectiveness, values, politics, and rising expectations. The course will present models from a range of policy domains. At the beginning of the two-day course, participants will select a policy from their own work to use as a running example throughout the class. Participants will develop the components of a policy analysis and design a policy evaluation.

Contact Us

The Evaluators’ Institute

tei@cgu.edu