February 26-27

Project Management and Oversight for Evaluators

Instructor: Tessie Catsambas, MPP

Description: The purpose of this course is to provide new and experienced evaluation professionals and funders with strategies, tools and skills to: (1) develop realistic evaluation plans; (2) negotiate needed adjustments when issues arise; (3) organize and manage evaluation teams; (4) monitor evaluation activities and budgets; (5) protect evaluation independence and rigor while responding to client needs; and (6) ensure the quality of evaluation products and briefings.

Evaluation managers have a complex job: they oversee the evaluation process and are responsible for safeguarding methodological integrity, evaluation activities, and budgets. In many cases they must also manage people, including clients, various stakeholders, and other evaluation team members. Evaluation managers shoulder responsibility for the success of the evaluation, frequently dealing with unexpected challenges and making decisions that influence its quality and usefulness.

Against a backdrop of demanding technical requirements and a dynamic political environment, the goal of evaluation management is to produce, with available resources and time, valid and useful measurement information and findings, and to ensure the quality of the process, products, and services included in the contract. Management decisions influence methodological decisions and vice versa, as method choice has cost implications.

The course methodology will be experiential and didactic, drawing on participants’ experience and engaging them with diverse material. It will include paper and online tools for managing teams, work products, and clients; an in-class simulation game with expert judges; case examples; readings; and a master checklist of processes and sample forms to organize and manage an evaluation effectively. At the end of this training, participants will be prepared to follow a systematic process, with support tools, for commissioning and managing evaluations, and will feel more confident leading evaluation teams and negotiating with clients and evaluators for better evaluations.


February 26-27

Needs Assessment

Instructor: Ryan Watkins, PhD

Description: The initial phase of a project or program is among the most critical in determining its long-term success. Needs assessments support this initial phase of project development with proven approaches to gathering information and making justifiable decisions. In a two-day course, you will learn how needs assessment tools and techniques help you identify, analyze, prioritize, and accomplish the results you really want to achieve. Filled with practical strategies, tools, and guides, the workshop covers both large-scale, formal needs assessments and less formal assessments that guide daily decisions. The workshop blends rigorous methods and realistic tools that can help you make informed and reasoned decisions. Together, these methods and tools offer a comprehensive, yet realistic, approach to identifying needs and selecting among alternative paths forward.

In this course, we will focus on the pragmatic application of many needs assessment tools, giving participants the opportunity to practice their skills while learning how needs assessment techniques can improve the achievement of desired results. With participants from a variety of sectors and organizational roles, the workshop will illustrate how needs assessments can be of value in a variety of operational, capacity development, and staff learning functions.


February 26-27

Internal Evaluation: Building Organizations from Within

Instructor: Arnold Love, PhD

Description: Internal evaluations are conducted by an organization’s own staff members rather than by outside evaluators. Internal evaluators have the enormous advantage of an insider’s knowledge so they can rapidly focus evaluations on areas managers and staff know are important, develop systems that spot problems before they occur, constantly evaluate ways to improve service delivery processes, strengthen accountability for results, and build organizational learning that empowers staff and program participants alike.

This course begins with the fundamentals of designing and managing effective internal evaluation, including an examination of the advantages and disadvantages of internal evaluation, understanding internal evaluation within the organizational context, recognizing both positive and potentially negative roles for internal evaluators, defining the tasks of managers and evaluators, identifying the major steps in the internal evaluation process, selecting the right internal evaluation tools, and making information essential for decision making available to management, staff, board members, and program participants.

The second day will focus on practical ways of designing and managing internal evaluations that make a difference, including: methods for reducing the potential for bias and threats to validity, practical steps for organizing the internal evaluation function, the specific skills the internal evaluator needs, strategies to build internal evaluation capacity in your organization, and ways to build links between internal evaluation and organizational development. Teaching will be interactive, combining presentations with opportunities for participation and discussion. Time will be set aside on the second day for an in-depth discussion of key issues and concerns raised by participants. The instructor’s book, Internal Evaluation: Building Organizations from Within (Sage), is provided with other resource materials.


February 28

Outcome and Impact Assessment

Instructor: Mark W. Lipsey, PhD

Description: Valid assessment of the outcomes or impact of a social program is among the most challenging evaluation tasks, but also one of the most important. This course will review monitoring and tracking approaches to assessing outcomes as well as the experimental and quasi-experimental methods that are the foundation for contemporary impact evaluation. Attention will also be given to issues related to the measurement of outcomes, ensuring detection of meaningful program effects, and interpreting the magnitude of effects. Emphasis will mainly be on the logic of outcome evaluation and the conceptual and methodological nature of the approaches, including research design and associated analysis issues. Nonetheless, some familiarity with social science methods and statistical analysis is necessary to effectively engage the topics covered in this course.

Prerequisites: At least some background in social science methods and statistical analysis or direct experience with outcome measurement and impact assessment designs.


February 28-29

Introduction to Cost-Benefit and Cost-Effectiveness Analysis

Instructor: Clive Belfield, PhD

Description: The tools and techniques of benefit-cost and cost-effectiveness analysis will be presented. The goal of the course is to provide analysts with the skills to interpret benefit-cost and cost-effectiveness analyses. Content includes: identification and measurement of costs using the ingredients method; how to specify effectiveness; shadow pricing for benefits using revealed preference and contingent valuation methods; discounting; and calculation of cost-effectiveness ratios, net present value, benefit-cost ratios, and internal rates of return. Sensitivity testing and uncertainty will also be addressed. Participants will work in groups to assess the costs, effects, and benefits applicable to case studies selected from across policy fields (e.g., health, education, environmental sciences).
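
To make the calculation topics concrete, here is a minimal sketch (in Python, using a hypothetical program, invented cash flows, and an assumed 3% discount rate; it is not course material) of how discounting feeds into a net present value and a benefit-cost ratio:

  # Illustrative only: hypothetical costs and benefits, assumed 3% social discount rate.
  costs    = [100_000, 20_000, 20_000]   # outlays in years 0, 1, 2 (program ingredients)
  benefits = [0, 70_000, 90_000]         # benefits realized in years 1 and 2
  rate = 0.03

  def present_value(flows, rate):
      # Discount each year's flow back to year 0 and sum.
      return sum(f / (1 + rate) ** t for t, f in enumerate(flows))

  pv_costs = present_value(costs, rate)
  pv_benefits = present_value(benefits, rate)
  print("NPV =", round(pv_benefits - pv_costs))       # net present value
  print("BCR =", round(pv_benefits / pv_costs, 2))    # benefit-cost ratio

An internal rate of return is simply the discount rate at which this net present value crosses zero; the course treats these measures and their interpretation in depth.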


February 28-March 1

Implementation Analysis for Feedback on Program Progress and Results

Instructor: Arnold Love, PhD

Description: Many programs do not achieve intended outcomes because of how they are implemented. Thus, implementation analysis (IA) is very important for policy and funding decisions. IA fills the methodological gap between outcome evaluations that treat a program as a “black box” and process evaluations that present a flood of descriptive data. IA provides essential feedback on the “critical ingredients” of a program, and helps drive change through an understanding of factors affecting implementation and short-term results. Topics include: importance of IA; conceptual and theoretical foundations of IA; how IA drives change and complements other program evaluation approaches; major models of IA and their strengths/weaknesses; how to build an IA framework and select appropriate IA methods; concrete examples of how IA can keep programs on-track, spot problems early, enhance outcomes, and strengthen collaborative ventures; and suggestions for employing IA in your organization. Detailed course materials and in-class exercises are provided.


March 1-2

Evaluating Resource Allocation in Complex Environments

Instructor: Doreen Cavanaugh, PhD

Description: Worldwide financial crises challenge evaluators to examine efficiency as well as the effectiveness of the programs and interventions implemented to effect favorable systems change. This course puts systems change under a microscope by examining three essential infrastructure elements of successful program effort: collaboration, leadership and resource allocation, and the methods used to evaluate them.

The need to do more with less has increased the value of and emphasis on maximizing performance and results. Improved collaboration across participating stakeholders is one potential way of achieving both program efficiency and effectiveness. Existing studies indicate that groups form partnerships by engaging in four increasingly complex activities: networking, coordination of services or resources, cooperation, and finally collaboration. This course discusses each of these activities, their similarities and differences, their contributions to project/program outcomes, and methods for evaluating them.

We know that collaborative frameworks yield new styles of leadership and, as a consequence, the need for new evaluation approaches. Commonly found hierarchical, top-down management models give way to an array of new stakeholder positions – change champions and boundary-spanners, individuals who can manage across organizational boundaries – each contributing to the outcome and impact of a project or program. This course will provide participants with an understanding of differing leadership styles, linking style to project/program objectives, with an emphasis on methods of evaluating the effect of leadership on intermediate and long-term project/program outcomes.

Today, efficient and effective systems change often requires a reallocation of human and financial resources, along with flexible evaluation approaches. This course examines the role of resource allocation in project/program outcomes; prerequisites for determining efficiency; resource mapping, a tracking method for redesigning resource deployment; and how to evaluate the resulting effects of resource reallocation on systems change and project/program outcomes.

Resource mapping is a process most often used to identify funds and in-kind contributions that are expended by an entity (governmental, donor, foundation, etc.) within a specific timeframe to address a certain issue/population of interest. The information gathered through this process is then available to inform the design and development of a comprehensive evaluation approach that will examine a proposed system change that will utilize available funds in the most efficient and effective ways.

Resource mapping may be employed to answer any number of evaluation questions. In some projects/programs, funders may wish to design, develop, and support a healthcare, education, or transportation system for a specific population group. In other cases stakeholders may want to evaluate the efficiency of these systems. Others may wish to harness resources specifically allocated to diverse divisions within one agency or organization. For any question of interest regarding resource allocation, this mapping strategy is a tool to help evaluators inform policymakers, program developers, and managers in answering essential questions such as:

  • What financial resources do we have to work with?
  • What is the best way to organize, allocate and administer these resources for maximum efficiency and effectiveness?
  • How will redesigning resource allocation contribute to the outcome and impact of the effort (project/program) at hand?

Participants will learn that completing a resource map is not an end in itself but rather a means of gathering evaluative information that informs the development of a comprehensive plan for resourcing project goals, asking whether resources are indeed sufficient to achieve the stated goals and objectives. Completing the mapping exercise provides an x-ray of the system: it identifies gaps, inefficiencies, overlaps, and opportunities for collaboration with all participating partners. The map can help evaluators inform planners and stakeholders about which resources might be combined in pooled, braided, or blended arrangements to assure optimal outcomes for projects and/or programs.

On Day 1, participants will use examples from their own experience to apply the essential infrastructure elements of collaboration, leadership, and resource allocation to a real-life evaluation situation. Day 2 will focus on ways to evaluate the contributions of collaboration, leadership, and resource allocation strategies to systems change goals, outcomes, and impact.


March 2-3

Evaluating Training Programs: Frameworks and Fundamentals

Instructor: Ann Doucette, PhD

Description: The evaluation of training programs typically emphasizes participants’ initial acceptance and reaction to training content; learning, knowledge, and skill acquisition; participant performance and behavioral application of training; and benefits at the organizational and societal levels that result from training participation. Evaluating training programs, especially the behavioral application of content and the organizational benefits of training, continues to be a challenge. Today’s training approaches are wide-ranging, including classroom-type presentations, self-directed online study courses, online tutorials and coaching components, supportive technical assistance, and so forth. Evaluation approaches must be sufficiently flexible to accommodate these training modalities and the individual and organizational outcomes that result from training efforts.

The Kirkpatrick (1959, 1976) training model has been a longstanding evaluation approach; however, it is not without criticism or suggested modification. The course provides an overview of two training program evaluation frameworks: 1) the Kirkpatrick model and its modifications, which emphasize participant reaction, learning, behavioral application, and organizational benefits, and 2) the Concerns-Based Adoption Model (CBAM), a diagnostic approach that assesses stages of participant concern about how training will affect individual job performance, describes how training will be configured and practiced within the workplace, and gauges the actual level of training use.

The course is designed to be interactive and to provide a practical approach for planning (for those leading or commissioning training evaluations), implementing, conducting, or managing training evaluations. The course covers an overview of training evaluation models; pre-training assessment and training program expectations; training evaluation planning; development of key indicators, metrics, and measures; training evaluation design; data collection – instrumentation and administration, data quality; reporting progress, change, and results; and disseminating findings and recommendations – knowledge management resulting from training initiatives. Case examples will be used throughout the course to illustrate course content.


March 2

Intermediate Cost-Benefit and Cost-Effectiveness Analysis

Instructor: Joseph Cordes, PhD

Description: The Intermediate Cost-Benefit Analysis course provides a more advanced and detailed review of the principles of social cost and social benefit estimation than is provided in TEI’s Introduction to Cost-Benefit and Cost-Effectiveness Analysis. Working with the instructor, students will undertake hands-on estimation of the costs and benefits of actual programs in the computer lab. The objective is to develop the ability both to critically evaluate and use cost-benefit analyses of programs in the public and nonprofit sectors, and to use basic cost-benefit analysis tools to actively undertake such analyses. Topics covered in the course will include:

I. Principles of Social Cost and Social Benefit Estimation

  1. Social Cost Estimation: (a) components (capital, operating, administrative); (b) budgetary and social opportunity cost
  2. Social Benefit Estimation: (a) social vs. private benefits; (b) revealed benefit measures (price/cost changes in the primary market, price/cost changes in analogous markets, benefits inferred from market trade-offs, and costs/damages avoided as benefit measures)
  3. Stated Preference Measures: inferring benefits from survey data
  4. Benefit/Cost Transfer: borrowing estimates of benefits and costs from elsewhere
  5. Timing of Benefits and Costs: (a) discounting and net present value; (b) dealing with inflation; (c) choosing a discount rate
  6. Presenting Results: (a) sensitivity analysis (partial sensitivity analysis, best/worst case scenarios, break-even analysis, and Monte Carlo analysis); (b) present value of net social benefits; (c) benefit-cost ratio; (d) internal rate of return

II. Social Cost and Social Benefit Estimation in Practice

The use of the above principles of cost and benefit estimation will be illustrated using data drawn from several actual benefit-cost analyses of real programs. The cases will be chosen to illustrate the application of the benefit/cost estimation principles to social programs, health programs, and environmental programs. Working with the instructor in the computer lab, students will create a benefit-cost analysis template and then use that template to estimate social benefits and social costs, and to present a benefit-cost bottom line.
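
As a rough illustration of the sensitivity testing listed under Presenting Results above, the sketch below (hypothetical cash flows and an invented range of discount rates, not drawn from the course cases) recomputes net present value at several discount rates, a simple form of partial sensitivity analysis:

  # Illustrative partial sensitivity analysis: how does NPV respond to the discount rate?
  # All figures are hypothetical.
  net_benefits = [-150_000, 60_000, 60_000, 60_000]   # year 0 outlay, then annual net benefits

  def npv(flows, rate):
      return sum(f / (1 + rate) ** t for t, f in enumerate(flows))

  for rate in (0.00, 0.03, 0.05, 0.07, 0.10):
      print(f"discount rate {rate:.0%}: NPV = {npv(net_benefits, rate):,.0f}")

If the sign of the net present value changes over a plausible range of rates, the bottom line is sensitive to the discount-rate assumption, which is exactly what sensitivity analysis is designed to expose.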

Prerequisites: This is an intermediate-level course. Participants are assumed to have some knowledge of, or experience with, cost-benefit and/or cost-effectiveness analysis equivalent to the TEI course Introduction to Cost-Benefit and Cost-Effectiveness Analysis.


March 3-4

Strategy Mapping

Instructor: John Bryson, PhD

Description: The world is often a muddled, complicated, dynamic place in which it seems as if everything connects to everything else–and that is the problem! The connections can be problematic because, while we know things are connected, sometimes we do not know how, or else there are so many connections we cannot comprehend them all. Alternatively, we may not realize how connected things are and our actions lead to unforeseen and unhappy consequences. Either way, we would benefit from an approach that helps us strategize, problem solve, manage conflict, and design evaluations that help us understand how connected the world is, what the effects of those connections are, and what might be done to change some of the connections and their effects.

Visual strategy mapping (ViSM) is a simple and useful technique for addressing situations where thinking–as an individual or as a group–matters. ViSM is a technique for linking strategic thinking, acting, and learning; helping make sense of complex problems; communicating to oneself and others what might be done about them; and also managing the inevitable conflicts that arise.

ViSM makes it possible to articulate a large number of ideas and their interconnections in such a way that people can know what to do in an area of concern, how to do it, and why. The technique is useful for formulating and implementing mission, goals, and strategies and for being clear about how to evaluate strategies. The bottom line is: ViSM is one of the most powerful strategic management tools in existence. ViSM is what to do when thinking matters!

When can mapping help? There are a number of situations that are tailor-made for mapping. Mapping is particularly useful when:

  • Effective strategies need to be developed
  • Persuasive arguments are needed
  • Effective and logical communication is essential
  • Effective understanding and management of conflict are needed
  • A situation needs to be understood better as a prelude to any action
  • Organizational or strategic logic needs to be clarified in order to design useful evaluations

These situations are not meant to be mutually exclusive. Often they overlap in practice. In addition, mapping is very helpful for creating business models and balanced scorecards and dashboards. Visual strategy maps are related to logic models, as both are word-and-arrow diagrams, but are more tied to goals, strategies, and actions and are more careful about articulating causal connections.

Objectives:

At the end of the course, participants will:

  • Understand the theory of mapping
  • Know the difference between action-oriented strategy maps, business model maps, and balanced scorecard maps
  • Be able to create action-oriented strategy maps for individuals – that is, either for oneself or by interviewing another person
  • Be able to create action-oriented maps for groups
  • Be able to create a business model map linking competencies and distinctive competencies to goals and critical success factors
  • Know how to design and manage change processes in which mapping is prominent
  • Have an action plan for an individual project


March 3-4

Social and Organizational Network Analysis – Evaluating the Way Individuals and Organizations Interact

Instructor: Lynne Franco, ScD

Description: This is an introductory course for evaluators who want to explore how social or organizational network analysis can be added to their repertoire of tools and methods. Network analysis is a technique that helps us better understand social structures by visualizing interactions among actors (through network plots) and analyzing them (through associated network statistics).

Social or organizational network analysis can help us build understanding of why a particular network may or may not be successful in achieving its goals, or be sustained over time. The linkages between actors (individuals or organizations) can include various types of connections, such as exchange of information, human and financial resources, power and influence, and social support.

This course will touch briefly on the theory behind social and organizational network analysis, but will focus mostly on how SNA/ONA can add value to evaluation, covering when evaluations can make best use of it (what kinds of evaluation questions can it help answer?), key decisions in designing an SNA/ONA, strategies for and pitfalls in data collection, approaches to analysis, and how to help clients draw meaning from the results.

The course will also cover the steps of implementing SNA/ONA, and highlight issues in collecting and analyzing data and in interpreting findings or “reading” the SNA/ONA. Through discussions, group work, and hands-on analysis of case study data, participants will experience the whole process of using social and organizational network analysis.
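
As a small taste of the hands-on analysis (a sketch only: it assumes the open-source Python library networkx and an invented inter-organizational referral network, not the course’s case study data), the snippet below builds a network from pairwise ties and reports two common network statistics:

  # Hypothetical referral ties among organizations; requires networkx (pip install networkx).
  import networkx as nx

  ties = [("Clinic A", "Shelter"), ("Clinic A", "Food Bank"),
          ("Shelter", "Food Bank"), ("Clinic B", "Shelter")]
  G = nx.Graph(ties)

  print("density:", nx.density(G))                      # share of possible ties that exist
  print("degree centrality:", nx.degree_centrality(G))  # how connected each actor is

The same graph object can also be drawn as a network plot (for example with nx.draw), which is the visualization side of SNA/ONA referred to above.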

Objectives:

By the end of this course, participants will be able to:
  • Understand when social/organizational network analysis can be useful in evaluation
  • Outline the key components of a social/organizational network analysis design
  • Understand trade-offs in scope and sampling decisions
  • Develop network plots and statistics using software
  • Explain to a client how to make sense of network analysis data


March 3

Measuring Performance and Managing Results in Government and Nonprofit Organizations

Instructor: Theodore H. Poister, PhD

Description: A commitment to performance measurement has become pervasive throughout government, the nonprofit sector, foundations, and other nongovernmental organizations in response to demands for increased accountability, pressures for improved quality and customer service, and mandates to “do more with less,” as well as the drive to strengthen the capacity for results oriented management among professional public and nonprofit administrators.

While the idea of setting goals, identifying and monitoring measures of success in achieving them, and using the resulting performance information in a variety of decision venues might appear to be a straightforward process, a myriad of conceptual, political, managerial, cultural, psychological, and organizational constraints – as well as serious methodological issues – make this a very challenging enterprise. This course presents a step-by-step process for designing and implementing effective performance management systems in public and nonprofit agencies, with an emphasis on maximizing their effectiveness in improving organizational and program performance.  The focus is on the interplay between performance measurement and management, as well as the relationships among performance measurement, program evaluation, and evidence based policy, and all topics are illustrated with examples from a wide variety of program areas including those drawn from the instructor’s experience in such areas as local government services, child support enforcement, public health, and nursing regulation as well as transportation.

Day 1 overviews the basics of performance measurement and looks at frameworks for identifying outcomes and other dimensions of performance, data sources and the definition of performance indicators, and criteria for systematically evaluating the usefulness of potential indicators. Day 2 looks at the analysis and reporting of performance information and its incorporation in a number of critical management processes such as strategic planning, results-based budgeting, program management and evaluation, quality improvement, performance contracting and grants management, stakeholder engagement, and the management of employees and organizations. The course concludes with a discussion of the “process side” of designing and implementing performance measures and strategies for building effective performance management systems.

The text, Managing and Measuring Performance in Public and Nonprofit Organizations by Theodore H. Poister, Maria P. Aristigueta, and Jeremy Hall, 2nd Edition (Jossey-Bass, 2015), case studies, and other materials are provided.


March 4

Utilization-Focused Evaluation

Instructor: Michael Quinn Patton, PhD

Description: Utilization-Focused Evaluation begins with the premise that evaluations should be judged by their utility and actual use; therefore, evaluators should facilitate the evaluation process and design any evaluation with careful consideration of how everything that is done, from beginning to end, will affect use. Use concerns how real people in the real world apply evaluation findings and experience the evaluation process.  Therefore, the focus in utilization-focused evaluation is on intended use by intended users.

Utilization-focused evaluation is a process for helping primary intended users select the most appropriate content, model, methods, theory, and uses for their particular situation.  Situational responsiveness guides the interactive process between evaluator and primary intended users.  A psychology of use undergirds and informs utilization-focused evaluation:  intended users are more likely to use evaluations if they understand and feel ownership of the evaluation process and findings; they are more likely to understand and feel ownership if they’ve been actively involved; by actively involving primary intended users, the evaluator is training users in use, preparing the groundwork for use, and reinforcing the intended utility of the evaluation every step along the way.

Participants will learn:

  • Key factors in doing useful evaluations, common barriers to use, and how to overcome those barriers.
  • Implications of focusing an evaluation on intended use by intended users.
  • Options for evaluation design and methods based on situational responsiveness, adaptability and creativity.
  • Ways of building evaluation into the programming process to increase use.

Participants will receive a copy of the instructor’s text: Utilization-Focused Evaluation, 4th Ed. (Sage, 2008).


March 7

Effective Reporting Strategies for Evaluators

Instructor: Kathryn Newcomer, PhD

Description: The use and usefulness of evaluation work is highly affected by the effectiveness of reporting strategies and tools. Care in crafting both the style and substance of findings and recommendations is critical to ensure that stakeholders pay attention to the message. Skill in presenting sufficient information — yet not overwhelming the audience — is essential to raise the likelihood that potential users of the information will be convinced of both the relevance and the validity of the data. This course will provide guidance and practical tips on reporting evaluation findings. Attention will be given to the selection of appropriate reporting strategies/formats for different audiences and to the preparation of: effective executive summaries; clear analytical summaries of quantitative and qualitative data; user-friendly tables and figures; discussion of limitations related to measurement validity, generalizability, causal inference, statistical conclusion validity, and data reliability; and useful recommendations. The text provided as part of the course fee is Torres et al., Evaluation Strategies for Communicating and Reporting (2nd Ed., Sage, 2005).


March 7-8

Working with Evaluation Stakeholders

Instructor: John Bryson, PhD

Description: Working with stakeholders is a fact of life for evaluators. That interaction can be productive and beneficial, leading to evaluation studies that inform decisions and produce positive outcomes for decision makers and program recipients. Or it can be draining and conflictual for both the evaluator and the stakeholders and lead to studies that are misguided, cost too much, take too long, never get used, or never get done at all. This is therefore an incredibly important topic for evaluators to explore. This course focuses on strategies and techniques to identify stakeholders who can and will be most beneficial to the achievement of study goals and how to achieve a productive working relationship with them. Stakeholder characteristics such as knowledge of the program, power and ability to influence, and willingness to participate will be analyzed, and strategies and techniques will be presented for successfully engaging stakeholders in effective collaboration. Detailed course materials, case examples, and readings are provided to illuminate course content and extend its long-term usefulness.


March 7-8

Developmental Evaluation: Systems and Complexity

(Formerly taught as: Alternative Evaluation Designs: Implications from Systems Thinking and Complexity Theory)

Instructor: Michael Quinn Patton, PhD

Description: The field of evaluation already has a rich variety of contrasting models, competing purposes, alternative methods, and divergent techniques that can be applied to projects and organizational innovations that vary in scope, comprehensiveness, and complexity. The challenge, then, is to match the evaluation to the nature of the initiative being evaluated. This means that we need to have options beyond the traditional approaches (e.g., linear logic models, experimental designs, pre-post tests) when faced with systems change dynamics and initiatives that display the characteristics of emergent complexity. Important complexity concepts with implications for evaluation include uncertainty, nonlinearity, emergence, adaptation, dynamical interactions, and co-evolution.

Developmental Evaluation supports innovation development to guide adaptation to emergent and dynamic realities in complex environments. Innovations can take the form of new projects, programs, products, organizational changes, policy reforms, and system interventions. A complex system is characterized by a large number of interacting and interdependent elements with no central control. Patterns of change emerge from rapid, real-time interactions that generate learning, evolution, and development – if one is paying attention and knows how to observe and capture the important and emergent patterns. Complex environments for social interventions and innovations are those in which what to do to solve problems is uncertain and key stakeholders are in conflict about how to proceed.

Developmental Evaluation involves real time feedback about what is emerging in complex dynamic systems as innovators seek to bring about systems change. Participants will learn the unique niche of developmental evaluation and what perspectives such as Systems Thinking and Complex Nonlinear Dynamics can offer for alternative evaluation approaches. Participants will receive a copy of the instructor’s book: Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use (Guilford, 2010).


March 8-10

Outcome and Impact Assessment

Instructor: Mark W. Lipsey, PhD

Description: Valid assessment of the outcomes or impact of a social program is among the most challenging evaluation tasks, but also one of the most important. This course will review monitoring and tracking approaches to assessing outcomes as well as the experimental and quasi-experimental methods that are the foundation for contemporary impact evaluation. Attention will also be given to issues related to the measurement of outcomes, ensuring detection of meaningful program effects, and interpreting the magnitude of effects. Emphasis will mainly be on the logic of outcome evaluation and the conceptual and methodological nature of the approaches, including research design and associated analysis issues. Nonetheless, some familiarity with social science methods and statistical analysis is necessary to effectively engage the topics covered in this course.

Prerequisites: At least some background in social science methods and statistical analysis or direct experience with outcome measurement and impact assessment designs.


March 9-10

Mixed-Methods Evaluations: Integrating Qualitative and Quantitative Approaches

Instructor: Debra J. Rog, PhD

Description: Evaluators frequently find themselves in situations in which they are collecting data through multiple methods, often both qualitative and quantitative. Too often, however, these study components are conducted and reported independently and do not maximize the explanation building that can occur through their integration.

The purpose of this course is to sensitize evaluators to the opportunities in their work for designing and implementing mixed methods, and to be more intentional in the ways that they design their studies to incorporate both qualitative and quantitative approaches.  The course will begin with an overview of the issues involved with mixed-methods research, highlighting the accolades and the criticisms of integrating approaches.  The course will then focus on the research questions and evaluation situations that are conducive for mixed-methods, and the variety of designs that are possible (e.g., parallel mixed methods that occur at the same time and are integrated in their inference; sequential designs in which one method follows another chronologically, either confirming or disconfirming the findings, or providing further explanation).  A key focus of the course will be on strategies for implementing mixed-methods designs, as well as analyzing and reporting data, using examples from the instructor’s work and those offered by course participants.  The course will be highly interactive, with ample time for participants to work on ways of applying the course to their own work.  Participants will work in small groups on an example that will carry through the two days of the course.

Participants will be sent materials prior to the course as a foundation for the method.

Prerequisites: Background in evaluation is useful and desirable.


March 9-10

Policy Analysis, Implementation and Evaluation

Instructor: Doreen Cavanaugh, PhD

Description: Policy drives the decisions and actions that shape our world and affect the wellbeing of individuals around the globe. It forms the foundation of every intervention, and yet its underlying assumptions and values are often not thoroughly examined in evaluations. In this course students will explore the policy development process, study the theoretical basis of policy, and examine the logical sequence by which a policy intervention is intended to bring about change. Participants will explore several models of policy analysis, including the institutional model, the process model, and the rational model.

Participants will experience a range of policy evaluation methods to systematically investigate the effectiveness of policy interventions, implementation, and processes, and to determine their merit, worth, or value in terms of improving the social and economic conditions of different stakeholders. The course will differentiate evaluation from monitoring and address several barriers to effective policy evaluation, including goal specification and goal change, measurement, targets, efficiency and effectiveness, values, politics, and increasing expectations. The course will present models from a range of policy domains. At the beginning of the 2-day course, participants will select a policy from their own work to use as an example throughout the class. Participants will develop the components of a policy analysis and design a policy evaluation.

Contact Us

The Evaluators’ Institute

TEI Maryland Office
1451 Rockville Pike, Suite 600
Rockville, MD 20852
301-287-8745
tei@cgu.edu