July 9-10

Social and Organizational Network Analysis – Evaluating the Way Individuals and Organizations Interact

Instructor: Lynne Franco, ScD

Description: This introductory course is for evaluators who want to explore how social or organizational network analysis can be added to their repertoire of tools and methods. Network analysis is a technique that allows us to better understand social structures by visualizing interactions among actors (through network plots) and analyzing them (through associated network statistics).

Social or organizational network analysis can help us build understanding of why a particular network may or may not be successful in achieving its goals, or be sustained over time. The linkages between actors (individuals or organizations) can include various types of connections, such as exchange of information, human and financial resources, power and influence, and social support.

This course will touch briefly on the theory behind social and organizational network analysis (SNA/ONA), but will focus mostly on how it can add value to evaluation, covering when evaluations can make best use of it (what kinds of evaluation questions can it help answer?), key decisions in designing an SNA/ONA, strategies for and pitfalls in data collection, approaches to analysis, and how to help clients draw meaning from the results.

The course will also cover the steps of implementing SNA/ONA, highlighting issues in collecting and analyzing data and in interpreting, or “reading,” the results. Through discussions, group work, and hands-on analysis of case study data, participants will experience the whole process of using social and organizational network analysis.
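As a small illustration of the plot-plus-statistics pairing described above, the sketch below uses the open-source Python library networkx with an invented five-organization network (the course does not assume this tool; it is shown only to make the idea concrete):

```python
import networkx as nx

# Hypothetical information-exchange ties among five partner organizations
edges = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"), ("D", "E")]
G = nx.Graph(edges)

# Two of the network statistics that typically accompany a network plot:
# density (share of possible ties that actually exist) and degree
# centrality (how connected each actor is relative to the others).
print(nx.density(G))
print(nx.degree_centrality(G))
```

In this invented network, organization C stands out as the most central actor, the kind of finding a client might read off a network plot.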

Objectives:

By the end of this course, participants will be able to:

  • Understand when social/organizational network analysis can be useful in evaluation
  • Outline the key components of a social/organizational network analysis design
  • Understand trade-offs in scope and sampling decisions
  • Develop network plots and statistics using software
  • Explain to a client how to make sense of network analysis data


July 9-11

Basics of Program Evaluation

(Previously taught as Foundations of Evaluation: Theory, Method, and Practice)

Instructor: Arnold Love, PhD

Description: With an emphasis on constructing a sound foundational knowledge base, this course is designed to provide an overview of both past and contemporary perspectives on evaluation theory, method, and practice. Course topics include, but are not limited to, basic evaluation concepts and definitions; evaluation as a cognitive activity; the view of evaluation as a transdiscipline; the general and working logic of evaluation; an overview of the history of the field; distinctions between evaluation and basic and applied social science research; evaluation-specific methods (e.g., needs assessment, stakeholder analysis, identifying evaluative criteria, standard setting); reasons and motives for conducting evaluation; central types and purposes of evaluation; objectivity, bias, and validity; the function of program theory in evaluation; evaluator roles; core competencies required for conducting high quality, professional evaluation; audiences and users of evaluation; alternative evaluation models and approaches; the political nature of evaluation and its implications for practice; professional standards and codes of conduct; and emerging and enduring issues in evaluation theory, method, and practice. Although the major focus of the course is program evaluation in multiple settings (e.g., education, criminal justice, health and medicine, human and social services, international development, science and technology), examples from personnel evaluation, policy analysis, and product evaluation also will be used to illustrate foundational concepts. The course will conclude with how to plan, design, and conduct high quality evaluations using a contingency-based and situational approach, including evaluation purposes, resources (e.g., time, budget, expertise), uses and users, competing demands, and other relevant contingencies. 
Throughout the course, active learning is emphasized and, therefore, the instructional format consists of instructor-led presentations, discussions, and application exercises. Audiences for this course include those who have familiarity with social science research, but are unfamiliar with evaluation, and evaluators who wish to review current theories, methods, and practices.

Prerequisites: Basic knowledge of social science research methods.


July 9

Using Program Evaluation in Nonprofit Environments

Instructor: Kathryn Newcomer, PhD

Description: Funders and oversight boards typically need data on the results obtained by the programs they fund. Within foundations, program officers want information about grantees, and about the lines of effort they fund, to guide planning and the future allocation of resources. Executive officers and members of the boards that oversee nonprofit service providers also want to know what works and what does not. This class provides the background that program officers and overseers need to understand how evaluation can serve their information needs, and how to assess the quality of the evidence they receive.

Drawing upon cases from foundations and nonprofits, the session will help attendees:

  • Clarify where to start in using evaluation to improve nonprofit social service programs
  • Learn what and who drives program evaluation and performance measurement in public and nonprofit service providers
  • Explore uses of evaluation and outcomes assessment in the nonprofit sector
  • Understand how to frame useful scopes of work (SOWs) and requests for proposals (RFPs) for evaluations and performance measurement systems
  • Identify and apply relevant criteria in choosing contractors and consultants to provide evaluation assistance
  • Discuss challenges to measurement of social service outcomes
  • Understand what questions to ask of internal evaluation staff and outside consultants about the quality of their work


July 9-10

Presenting Data Effectively: Practical Methods for Improving Evaluation Communication

Instructor: Stephanie Evergreen, PhD

Description: Crystal clear charts and graphs are valuable – they save an audience’s mental energies, keep a reader engaged, and make you look smart. In this workshop, attendees will learn the science behind presenting data effectively. We will go behind the scenes in Excel and discuss how each part of a visualization can be modified to best tell the story in a particular dataset. We will discuss how to choose the best chart type, given audience needs, cognitive capacity, and the story that needs to be told about the data – and this will include both quantitative and qualitative visualizations. We will walk step-by-step through how to create newer types of data visualizations and how to manipulate the default settings to customize graphs so that they have a more powerful impact. Working in a computer lab, attendees will build with a prepared spreadsheet to learn the secrets to becoming an Excel dataviz ninja. Attendees will get hands-on practice with direct, practical steps that can be implemented immediately after the workshop to clarify data presentation and support clearer decision-making. Full of guidelines and examples, this workshop will leave you better able to package your data so it reflects your smart, professional work.

Note: To get the most out of the workshop, attendees are strongly encouraged to bring a slideshow that contains graphs currently under construction.

On the second day of the workshop, Dr. Stephanie Evergreen will lead attendees step-by-step through how to manipulate Excel into making impactful charts and graphs, using data sets distributed to the audience. Audience members will leave the session with more in-depth knowledge of how to craft effective data displays. The demonstration will take place in the computer lab on PCs running Office 2010. Completing the session moves one to Excel Ninja Level 10.

Attendees will learn:

  1. Visual processing theory and why it is relevant for evaluators
  2. How to apply graphic design best practices and visual processing theory to enhance data visualizations with simple, immediately implementable steps
  3. Which chart type to use, when
  4. How to construct data visualizations and other evaluation communication to best tell the story in the data
  5. Alternative methods for reporting

Workshop attendees will leave with helpful handouts and a copy of Effective Data Visualization (Sage, 2016).

Registrants should regularly develop graphs, slideshows, technical reports and other written communication for evaluation work and be familiar with the navigational and layout tools available in simple software programs, like Microsoft Office.


July 9-10

Needs Assessment

Instructor: Ryan Watkins, PhD

Description: The initial phase of a project or program is among the most critical in determining its long-term success. Needs assessments support this initial phase of project development with proven approaches to gathering information and making justifiable decisions. In this two-day course, you will learn how needs assessment tools and techniques help you identify, analyze, prioritize, and accomplish the results you really want to achieve. Filled with practical strategies, tools, and guides, the workshop covers both large-scale, formal needs assessments and less formal assessments that guide daily decisions. The workshop blends rigorous methods and realistic tools that can help you make informed and reasoned decisions. Together, these methods and tools offer a comprehensive, yet realistic, approach to identifying needs and selecting among alternative paths forward.

In this course, we will focus on the pragmatic application of many needs assessment tools, giving participants the opportunity to practice their skills while learning how needs assessment techniques can improve the achievement of desired results. With participants from a variety of sectors and organizational roles, the workshop will illustrate how needs assessments can be of value in a variety of operational, capacity development, and staff learning functions.


July 9-13

Applied Statistics for Evaluators

Instructor: Theodore H. Poister, PhD

Description:

In this class students will become familiar with a set of statistical tools that are often used in program evaluation, with a strong emphasis on the appropriate application of techniques and interpretation of results. It is designed to “demystify” statistics and clarify how and when to use particular techniques. While the principal concern is practical application in program evaluations rather than the mathematics underlying the procedures, a number of formulas and computations are covered to help students understand the logic of how the statistics work. Topics include an introduction to data analysis; simple descriptive statistics; examination of statistical relationships; the basics of statistical inference from sample data; two-sample t tests; chi-square and associated measures; analysis of variance; and an introduction to simple and multiple regression analysis.

Students will learn how to generate a wide variety of tables and graphs for presenting results, and a premium will be placed on clear presentation and interpretation of results. This “hands-on” class is conducted in a computer lab in which each participant has a computer for running statistical procedures on a wide range of real-world data sets, using SPSS software. However, no prior knowledge of statistics or SPSS is required. While this is an introductory course, it can also serve as a refresher for those with some training in statistics, and an “eye opener” for evaluators who are working with statistics now but are not comfortable with when and how they should be used.
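For readers curious what one of these techniques looks like in practice, here is a minimal two-sample t test sketched in open-source Python (the course itself uses SPSS; the group names and scores below are invented for illustration):

```python
from scipy import stats

# Hypothetical outcome scores for a treatment group and a comparison group
treatment = [12, 15, 14, 16, 13, 17]
comparison = [10, 11, 13, 9, 12, 11]

# Two-sample t test: is the difference in group means larger than
# chance variation alone would suggest?
t_stat, p_value = stats.ttest_ind(treatment, comparison)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

Interpreting the statistic and p-value – not just computing them – is the skill the course emphasizes.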


July 10-11

Conducting Successful Evaluation Surveys

Instructor: Jolene D. Smyth, PhD

Description: The success of many evaluation projects depends on the quality of survey data collected. In the last decade, sample members have become increasingly reluctant to respond, especially in evaluation contexts. In response to these challenges and to technological innovation, methods for doing surveys are changing rapidly. This course will provide new and cutting-edge information about best practices for designing and conducting internet, mail, and mixed-mode surveys.

Students will gain an understanding of the multiple sources of survey error and how to identify and fix commonly occurring survey issues. The course will cover writing questions; visual design of questions (drawing on concepts from the vision sciences); putting individual questions together into a formatted questionnaire; designing web surveys; designing for multiple modes; and fielding surveys and encouraging response by mail, web, or in a mixed-mode design.

The course is made up of a mixture of PowerPoint presentations, discussion, and activities built around real-world survey examples and case studies. Participants will apply what they are learning in activities and will have ample opportunity to ask questions during the course (or during breaks) and to discuss the survey challenges they face with the instructor and other participants. Participants will receive a copy of course slides and of the text Internet, Mail, and Mixed-Mode Surveys: The Tailored Design Method by Don A. Dillman, Jolene D. Smyth, and Leah Melani Christian (4th Edition, 2014).


July 11-12

Project Management and Oversight for Evaluators

Instructor: Tessie Catsambas, MPP

Description: The purpose of this course is to provide new and experienced evaluation professionals and funders with strategies, tools and skills to: (1) develop realistic evaluation plans; (2) negotiate needed adjustments when issues arise; (3) organize and manage evaluation teams; (4) monitor evaluation activities and budgets; (5) protect evaluation independence and rigor while responding to client needs; and (6) ensure the quality of evaluation products and briefings.

Evaluation managers have a complex job: they oversee the evaluation process and are responsible for safeguarding methodological integrity, evaluation activities, and budgets. In many cases they must also manage people, including clients, various stakeholders, and other evaluation team members. Evaluation managers shoulder the responsibility for the success of the evaluation, frequently dealing with unexpected challenges and making decisions that influence the quality and usefulness of the evaluation.

Against a backdrop of demanding technical requirements and a dynamic political environment, the goal of evaluation management is to develop, with available resources and time, valid and useful measurement information and findings, and ensure the quality of the process, products and services included in the contract. Management decisions influence methodological decisions and vice versa, as method choice has cost implications.

The course methodology will be experiential and didactic, drawing on participants’ experience and engaging them with diverse material. It will include paper and online tools for managing teams, work products and clients; an in-class simulation game with expert judges; case examples; reading; and a master checklist of processes and sample forms to organize and manage an evaluation effectively. At the end of this training, participants will be prepared to follow a systematic process with support tools for commissioning and managing evaluations, and will feel more confident to lead evaluation teams and negotiate with clients and evaluators for better evaluations.


July 11-12

Introduction to Cost-Benefit and Cost-Effectiveness Analysis

Instructor: Clive Belfield, PhD

Description: This course presents the tools and techniques of benefit-cost and cost-effectiveness analysis. The goal is to provide analysts with the skills to interpret benefit-cost and cost-effectiveness analyses. Content includes: identification and measurement of costs using the ingredients method; how to specify effectiveness; shadow pricing for benefits using revealed preference and contingent valuation methods; discounting; and calculation of cost-effectiveness ratios, net present value, benefit-cost ratios, and internal rates of return. Sensitivity testing and uncertainty will also be addressed. Participants will work in groups to assess the costs, effects, and benefits in case studies drawn from a range of policy fields (e.g., health, education, environmental science).
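To make the discounting concepts concrete, here is a minimal sketch of net present value and a benefit-cost ratio; the 3% discount rate and the four-year cost and benefit streams are invented figures, used only for illustration:

```python
# Hypothetical project: an upfront cost in year 0 followed by annual
# benefits in years 1-3, all discounted at an assumed 3% rate.
discount_rate = 0.03
costs = [1000, 100, 100, 100]
benefits = [0, 500, 500, 500]

def present_value(stream, r):
    """Discount a stream of yearly amounts back to year 0."""
    return sum(x / (1 + r) ** t for t, x in enumerate(stream))

pv_costs = present_value(costs, discount_rate)
pv_benefits = present_value(benefits, discount_rate)
print(pv_benefits - pv_costs)   # net present value
print(pv_benefits / pv_costs)   # benefit-cost ratio
```

A positive net present value (equivalently, a benefit-cost ratio above 1) indicates that discounted benefits exceed discounted costs.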


July 11-12

Policy Analysis, Implementation and Evaluation

Instructor: Doreen Cavanaugh, PhD

Description: Policy drives the decisions and actions that shape our world and affect the wellbeing of individuals around the globe. It forms the foundation of every intervention, and yet its underlying assumptions and values are often not thoroughly examined in evaluations. In this course students will explore the policy development process, study the theoretical basis of policy, and examine the logical sequence by which a policy intervention is intended to bring about change. Participants will explore several models of policy analysis, including the institutional, process, and rational models.

Participants will experience a range of policy evaluation methods to systematically investigate the effectiveness of policy interventions, implementation, and processes, and to determine their merit, worth, or value in terms of improving the social and economic conditions of different stakeholders. The course will differentiate evaluation from monitoring and address several barriers to effective policy evaluation, including goal specification and goal change, measurement, targets, efficiency and effectiveness, values, politics, and increasing expectations. The course will present models from a range of policy domains. At the beginning of the two-day course, participants will select a policy from their own work to use as a running example throughout the class. Participants will develop the components of a policy analysis and design a policy evaluation.


July 12

Comparative Effectiveness: Balancing Design with Quality Evidence

Instructor: Ann Doucette, PhD

Description: Evidence is the foundation on which we make judgments, decisions, and policy. Gathering evidence can be a challenging and time-intensive process. Although there are many approaches to gathering evidence, randomized clinical trials (RCTs) have remained the “gold standard” for establishing effectiveness, impact, and causality, even though strong proponents of RCTs themselves acknowledge that they are neither the only valid method nor necessarily the optimal approach to gathering evidence. RCTs can be costly in terms of time and resources; can raise ethical concerns about excluding individuals from treatments or interventions from which they might benefit; and can be inappropriate if the intervention is not sufficiently and stably implemented, or if the program/service is so complex that such a design would be challenging at best and unlikely to yield ecologically valid results.

Comparative effectiveness (CE) has emerged as an accepted approach to gathering evidence for healthcare decision-making and policymaking. CE arose out of worldwide concern about rising healthcare costs and the variability of healthcare quality, and a more immediate need for evidence of effective healthcare. RCTs, while yielding strong evidence, are time-intensive and pose significant delays in providing data on which to base timely policy and care decisions. CE offers a new approach to gathering objective evidence, emphasizing how rigorous evaluation of the data yielded across existing studies (qualitative and quantitative) can answer the questions of what works, for whom, and under what conditions. Essentially, CE is a rigorous evaluation of the impact of various intervention options, based on existing studies that are available for specific populations. The CE evaluation of existing studies focuses not only on the benefits and risks of various interventions, but can also incorporate the costs associated with them. CE takes advantage of both quantitative and qualitative methods, using a standardized protocol to judge the strength of, and synthesize, the evidence provided by existing studies.

The basic CE questions are: Is the available evidence good enough to support high-stakes decisions? If we rely solely on RCTs for evidence, do we risk that available non-RCT evidence will not be considered a sufficient basis for policy decisions? Will sufficient evidence be available for decision-makers at the time when they need it? What alternatives can be used to ensure that rigorous findings are available to decision-makers when they need to act? CE has become an accepted alternative to RCTs in medicine and health. While the CE approach has focused on medical interventions, it has potential for human and social interventions implemented in other areas (education, justice, environment, etc.).

This course will provide an overview of CE from an international perspective (U.S., U.K., Canada, France, Germany, Turkey), illustrating how different countries have defined and established CE frameworks; how data are gathered, analyzed and used in health care decision-making; and how information is disseminated and whether it leads to change in healthcare delivery. Though CE has been targeted toward enhancing the impact of health care intervention, this course will consistently focus on whether and how CE (definition, methods, analytical models, dissemination strategies, etc.) can be applied to other human and social program areas (education, justice, poverty, environment, etc.).

No prerequisites are required for this one-day course.


July 12

Effective Reporting Strategies for Evaluators

Instructor: Kathryn Newcomer, PhD

Description: The use and usefulness of evaluation work is highly affected by the effectiveness of reporting strategies and tools. Care in crafting both the style and substance of findings and recommendations is critical to ensure that stakeholders pay attention to the message. Skill in presenting sufficient information — yet not overwhelming the audience — is essential to raise the likelihood that potential users of the information will be convinced of both the relevance and the validity of the data. This course will provide guidance and practical tips on reporting evaluation findings. Attention will be given to the selection of appropriate reporting strategies and formats for different audiences and to the preparation of: effective executive summaries; clear analytical summaries of quantitative and qualitative data; user-friendly tables and figures; discussion of limitations related to measurement validity, generalizability, causal inference, statistical conclusion validity, and data reliability; and useful recommendations. The text provided as part of the course fee is Torres et al., Evaluation Strategies for Communicating and Reporting (2nd ed., Sage, 2005).


July 13-14

Implementation Analysis for Feedback on Program Progress and Results

Instructor: Arnold Love, PhD

Description: Many programs do not achieve intended outcomes because of how they are implemented. Thus, implementation analysis (IA) is very important for policy and funding decisions. IA fills the methodological gap between outcome evaluations that treat a program as a “black box” and process evaluations that present a flood of descriptive data. IA provides essential feedback on the “critical ingredients” of a program, and helps drive change through an understanding of factors affecting implementation and short-term results. Topics include: importance of IA; conceptual and theoretical foundations of IA; how IA drives change and complements other program evaluation approaches; major models of IA and their strengths/weaknesses; how to build an IA framework and select appropriate IA methods; concrete examples of how IA can keep programs on-track, spot problems early, enhance outcomes, and strengthen collaborative ventures; and suggestions for employing IA in your organization. Detailed course materials and in-class exercises are provided.


July 13-14

Monitoring and Evaluation: Frameworks and Fundamentals

Instructor: Ann Doucette, PhD

Description: The overall goal of monitoring and evaluation (M&E) is the assessment of program progress to optimize outcomes and impact – program results. While M&E components overlap, each has distinct characteristics. Monitoring activities systematically observe (formally and informally) assumed indicators of favorable results, while evaluation activities build on monitoring indicator data to assess intervention/program effectiveness, the adequacy of program impact pathways, the likelihood of program sustainability, the presence of program strengths and weaknesses, the value, merit, and worth of the initiative, and the like. The increased emphasis on effectively managing toward favorable results demands a more comprehensive M&E approach in order to identify whether programs are on track or whether improved program strategies and mid-course corrections are needed.

The two-day, interactive course will cover the following:

  • M&E introduction and overview
  • Defining the purpose and scope of M&E
  • Engaging stakeholders and establishing an evaluative climate
    • The role and effect of partnership and boundary spanners, policy, and advocacy
  • Identifying and supporting needed capabilities
  • M&E frameworks – agreement on M&E targets
    • Performance and Results-Based M&E approaches
  • Connecting program design and M&E frameworks
    • Comparisons – Is a counterfactual necessary?
    • Contribution versus attribution
  • Identification of key performance indicators (KPIs)
    • Addressing uncertainties and complexity
  • Data: collection and methods
    • Establishing indicator baselines (addressing the challenges of baseline estimates)
    • What data exists? What data/information needs to be collected?
  • Measuring progress and success – contextualizing outcomes and setting targets
    • Time to expectancy – what can be achieved by the program?
  • Using and reporting M&E findings
  • Sustaining M&E culture

The course focuses on practical application. Participants will gain a comprehensive understanding of M&E frameworks and fundamentals, M&E tools, and practical approaches. Case examples will be used to illustrate the M&E process. Participants are encouraged to submit their own case examples prior to the course for inclusion in the discussion. The course is purposefully geared toward evaluators working in developing and developed countries; national and international agencies, organizations, and NGOs; and national, state, provincial, and county governments.

Familiarity with evaluation is helpful, but not required, for this course.


July 13

Intermediate Cost-Benefit and Cost-Effectiveness Analysis

Instructor: Joseph Cordes, PhD

Description: The Intermediate Cost-Benefit Analysis course provides a more advanced and detailed review of the principles of social cost and social benefit estimation than is provided in TEI’s Introduction to Cost-Benefit and Cost-Effectiveness Analysis. Working with the instructor, students will undertake hands-on estimation of the costs and benefits of actual programs in the computer lab. The objective is to develop the ability both to critically evaluate and use cost-benefit analyses of programs in the public and nonprofit sectors, and to use basic cost-benefit analysis tools to actively undertake such analyses. Topics covered in the course will include:

I. Principles of Social Cost and Social Benefit Estimation

  1. Social Cost Estimation: (a) components (capital, operating, administrative); (b) budgetary and social opportunity cost
  2. Social Benefit Estimation: (a) social vs. private benefits; (b) revealed benefit measures (price/cost changes in primary markets, price/cost changes in analogous markets, benefits inferred from market trade-offs, and costs/damages avoided as benefit measures)
  3. Stated Preference Measures: inferring benefits from survey data
  4. Benefit/Cost Transfer: borrowing estimates of benefits and costs from elsewhere
  5. Timing of Benefits and Costs: (a) discounting and net present value; (b) dealing with inflation; (c) choosing a discount rate
  6. Presenting Results: (a) sensitivity analysis (partial sensitivity analysis, best/worst case scenarios, break-even analysis, and Monte Carlo analysis); (b) present value of net social benefits; (c) benefit-cost ratio; (d) internal rate of return
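The Monte Carlo variant of sensitivity analysis mentioned under Presenting Results can be sketched as follows; the per-participant cost, the benefit distribution, and the number of draws are all invented for illustration:

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

# Hypothetical program: a known per-participant cost, but a benefit
# estimate that is uncertain, so we draw it from a distribution
# rather than relying on a single point estimate.
cost = 1000.0
n_draws = 10_000

net_benefits = []
for _ in range(n_draws):
    benefit = random.gauss(1200, 300)  # assumed mean and spread
    net_benefits.append(benefit - cost)

mean_net = sum(net_benefits) / n_draws
share_positive = sum(nb > 0 for nb in net_benefits) / n_draws
print(mean_net, share_positive)
```

Rather than a single bottom line, the analyst reports a distribution: here, the share of draws with positive net benefits shows how robust the conclusion is to uncertainty in the benefit estimate.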

II. Social Cost and Social Benefit Estimation in Practice

The use of the above principles of cost and benefit estimation will be illustrated using data drawn from several actual benefit-cost analyses of real programs. The cases will be chosen to illustrate the application of benefit/cost estimation principles to social, health, and environmental programs. Working with the instructor in the computer lab, students will create a benefit-cost analysis template and then use it to estimate social benefits and social costs and to present a benefit-cost bottom line.

Prerequisites: This is an intermediate-level course. Participants are assumed to have knowledge of or experience with cost-benefit and/or cost-effectiveness analysis equivalent to the TEI course Introduction to Cost-Benefit and Cost-Effectiveness Analysis.


July 13-14

Mixed-Methods Evaluations: Integrating Qualitative and Quantitative Approaches

Instructor: Debra J. Rog, PhD

Description: Evaluators frequently find themselves in evaluation situations in which they are collecting data through multiple methods, often both qualitative and quantitative. Too often, however, these study components are conducted and reported independently, and do not maximize the explanation building that can occur through their integration.

The purpose of this course is to sensitize evaluators to the opportunities in their work for designing and implementing mixed methods, and to be more intentional in the ways that they design their studies to incorporate both qualitative and quantitative approaches.  The course will begin with an overview of the issues involved with mixed-methods research, highlighting the accolades and the criticisms of integrating approaches.  The course will then focus on the research questions and evaluation situations that are conducive for mixed-methods, and the variety of designs that are possible (e.g., parallel mixed methods that occur at the same time and are integrated in their inference; sequential designs in which one method follows another chronologically, either confirming or disconfirming the findings, or providing further explanation).  A key focus of the course will be on strategies for implementing mixed-methods designs, as well as analyzing and reporting data, using examples from the instructor’s work and those offered by course participants.  The course will be highly interactive, with ample time for participants to work on ways of applying the course to their own work.  Participants will work in small groups on an example that will carry through the two days of the course.

Participants will be sent materials prior to the course as a foundation for the method.

Prerequisites: Background in evaluation is useful and desirable.


July 16-18

Evaluation Research Methods: A Survey of Quantitative and Qualitative Approaches

Instructor: David B. Wilson, PhD

Description: This course will introduce a range of basic quantitative and qualitative social science research methods that are applicable to the evaluation of various programs. It is a foundational course that introduces methods developed more fully in other TEI courses and is designed to ensure a basic familiarity with a range of social science research methods and concepts.

Topics will include observational and qualitative methods, survey and interview (structured and unstructured) techniques, experimental and quasi-experimental designs, and sampling methods. This course is for those who want to update their existing knowledge and skills, and will serve as an introduction for those new to the topic.

Text provided: Creswell, J. Research Design (Sage, 2014).


July 16-17

Strategic Planning with Evaluation in Mind

Instructor: John Bryson, PhD

Description: Strategic planning is becoming a common practice for governments, nonprofit organizations, businesses, and collaborations. The severe stresses – along with the many opportunities – facing these entities make strategic planning more important and necessary than ever. For strategic planning to be truly effective, it should include systematic learning informed by evaluation. If that happens, the chances of mission fulfillment and long-term organizational survival are also enhanced. In other words, thinking, acting, and learning strategically and evaluatively are necessary complements.

This course presents a pragmatic approach to strategic planning based on John Bryson’s best-selling and award-winning book, Strategic Planning for Public and Nonprofit Organizations, Fifth Edition (Jossey-Bass, 2018). The course examines the theory and practice of strategic planning and management with an emphasis on practical approaches to identifying and effectively addressing organizational challenges – and doing so in a way that makes systematic learning and evaluation possible.  The approach engages evaluators much earlier in the process of organizational and programmatic design and change than is usual.

The following topics are covered through a mixture of mini-lectures, individual and small group exercises, and plenary discussion:

  • Understanding why strategic planning has become so important
  • Gaining knowledge of the range of different strategic planning approaches
  • Understanding the Strategy Change Cycle (Prof. Bryson’s preferred approach)
  • Knowing how to appropriately design formative, summative, and developmental evaluations into the strategy process
  • Knowing what it takes to initiate strategic planning successfully
  • Understanding what can be institutionalized
  • Making sure ongoing strategic planning, acting, learning, and evaluation are linked


July 16-18

Informing Practice Using Evaluation Models and Theories

Instructor: Melvin M. Mark, PhD

Description: Evaluators who are not aware of the contemporary and historical aspects of the profession “are doomed to repeat past mistakes and, equally debilitating, will fail to sustain and build on past successes” (Madaus, Scriven, and Stufflebeam, 1983).

“Evaluation theories are like military strategy and tactics; methods are like military weapons and logistics. The good commander needs to know strategy and tactics to deploy weapons properly or to organize logistics in different situations. The good evaluator needs theories for the same reasons in choosing and deploying methods” (Shadish, Cook, and Leviton, 1991).

These quotes from Madaus et al. (1983) and Shadish et al. (1991) provide the perfect rationale for why the serious evaluator should be concerned with models and theories of evaluation. The primary purpose of this class is to provide an overview of major streams of evaluation theories (or models), and to consider their implications for practice. Topics include: (1) why evaluation theories matter, (2) frameworks for classifying different theories, (3) in-depth examination of 4-6 major theories, (4) identification of key issues on which evaluation theories and models differ, (5) benefits and risks of relying heavily on any one theory, and (6) tools and skills that can help you pick and choose from different theoretical perspectives in planning an evaluation in a specific context. The overarching theme will be practice implications, that is, what difference it would make for practice to follow one theory or another.

Theories to be discussed will be ones that have had a significant impact on the evaluation field, that offer perspectives with major implications for practice, and that represent important and different streams of theory and practice. Case examples from the past will be used to illustrate key aspects of each theory’s approach to practice.

Participants will be asked to use the theories to question their own and others’ practices, and to consider what characteristics of evaluations will help increase their potential for use. Each participant will receive Marvin Alkin’s text, Evaluation Roots (Sage, 2013) and other materials.

The instructor’s assumption will be that most people attending the session have some general familiarity with the work of a few evaluation theorists, but that most will not themselves be scholars of evaluation theory. At the same time, the course should be useful, whatever one’s level of familiarity with evaluation theory.


July 16-18

Outcome and Impact Assessment

Instructor: Mark W. Lipsey, PhD

Description: Valid assessment of the outcomes or impact of a social program is among the most challenging evaluation tasks, but also one of the most important. This course will review monitoring and tracking approaches to assessing outcomes as well as the experimental and quasi-experimental methods that are the foundation for contemporary impact evaluation. Attention will also be given to issues related to the measurement of outcomes, ensuring detection of meaningful program effects, and interpreting the magnitude of effects. Emphasis will mainly be on the logic of outcome evaluation and the conceptual and methodological nature of the approaches, including research design and associated analysis issues. Nonetheless, some familiarity with social science methods and statistical analysis is necessary to effectively engage the topics covered in this course.
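Interpreting the magnitude of effects typically means reporting a standardized effect size rather than a bare significance test. As a minimal sketch of the idea — using simulated treatment and comparison groups invented purely for illustration, not course data — Cohen's d can be computed as the mean difference divided by the pooled standard deviation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated outcome scores for a treatment and a comparison group
# (means and spread are illustrative assumptions only).
treatment = rng.normal(105, 15, 120)
comparison = rng.normal(100, 15, 120)

def cohens_d(a, b):
    """Standardized mean difference using the pooled standard deviation."""
    na, nb = a.size, b.size
    pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

d = cohens_d(treatment, comparison)
print(f"Cohen's d = {d:.2f}")
```

A d of this size (roughly a third of a standard deviation in this simulation) would be described as small-to-moderate by conventional benchmarks, which is exactly the kind of magnitude judgment the course addresses.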

Prerequisites: At least some background in social science methods and statistical analysis or direct experience with outcome measurement and impact assessment designs.


July 16-17

Using Program Theory, Theories of Change, & Logic Models in Evaluation

Instructor: Stewart I. Donaldson, PhD

Description: It is now commonplace to use program theory, theories of change, or logic models in evaluation as a means to explain how a program is understood to contribute to its intended or observed outcomes. However, this does not mean that they are always used appropriately or to the best effect. At their best, program theories, theories of change, and logic models can provide conceptual clarity, motivate staff, and focus evaluations. At their worst, they can divert time and attention from other critical evaluation activities, provide an invalid or misleading picture of the program, and discourage critical investigation of causal pathways and unintended outcomes. This course focuses on developing useful program theories and theories of change, and on using them effectively to guide evaluation and avoid some of the most common traps. Application exercises are used throughout the course to demonstrate concepts and techniques, including: (a) ways to use program theories and theories of change to positive advantage; (b) how to formulate and prioritize key evaluation questions; (c) how to gather credible and actionable evidence; (d) ways program theories are used with negative results; and (e) strategies for avoiding these traps.

Included Book: Program Theory-Driven Evaluation Science: Strategies and Applications (Psychology Press)

Students may also be interested in: Credible and Actionable Evidence: The Foundation for Rigorous and Influential Evaluations (Sage).

Prerequisites: None


July 16-18

Applied Regression Analysis for Evaluators

Instructor: Gary T. Henry, PhD

Description: Evaluators often face situations in which program outcomes vary across participants and they want to explain those differences. To understand the contribution of the program to the outcomes, it is often necessary to control for the influence of other factors. In these situations, regression analysis is the most widely used statistical tool for evaluators to apply. The objective of this course is to describe and provide hands-on experience in conducting regression analysis, and to aid participants in interpreting regression results in an evaluation context. The course begins with a review of hypothesis testing (t-tests) and a non-mathematical explanation of how the regression line is computed for bivariate regression. A major focus is on accurately interpreting regression coefficients and tests of significance, including the slope of the line, the t-statistic, and the statistics that measure how well the regression line fits the data. Participants will also learn how to find outliers that may be unduly influencing the results. Participants will have the opportunity to estimate multivariate regression models on cross-sectional data; diagnose the results to determine whether they may be misleading; and test the effects of program participation with pretest-posttest and posttest-only data. Regression-based procedures for testing mediated and moderated effects will be covered.

On the third day, students will be given the opportunity to conduct an independent analysis and write up the findings. Both peer feedback and instructor feedback will be provided to build skills in interpreting findings and explaining them to interested audiences. Participants will use SPSS software to compute regression analyses and will have the opportunity to apply it to data from an actual evaluation. Students and the instructor will work on interpreting the results and determining how to present them to evaluation audiences.
The class will be held in a lab where each person has a computer for hands-on application of the content.
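The course uses SPSS, but the quantities it emphasizes — the slope of the regression line, the standard error and t-statistic of a coefficient, and model fit — can be illustrated in any statistical environment. A minimal sketch in Python with NumPy, on simulated data (the variables and numbers are illustrative assumptions, not course materials):

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated evaluation data: hours of program participation (x)
# and an outcome score (y) with a true slope of 0.5 plus noise.
n = 200
hours = rng.uniform(0, 40, n)
score = 50 + 0.5 * hours + rng.normal(0, 5, n)

# Bivariate OLS: intercept and slope of the regression line.
X = np.column_stack([np.ones(n), hours])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)
intercept, slope = beta

# Residual variance, standard error of the slope, and t-statistic.
resid = score - X @ beta
sigma2 = resid @ resid / (n - 2)
se_slope = np.sqrt(sigma2 / np.sum((hours - hours.mean()) ** 2))
t_stat = slope / se_slope

# R-squared: how well the regression line fits the data.
r2 = 1 - (resid @ resid) / np.sum((score - score.mean()) ** 2)

print(f"slope={slope:.3f}, t={t_stat:.2f}, R^2={r2:.3f}")
```

In an evaluation context the interpretation would be that each additional hour of participation is associated with about half a point on the outcome, with the t-statistic indicating whether that slope is statistically distinguishable from zero.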


July 18-19

Intermediate Qualitative Data Analysis

Instructor: Delwyn Goodrick, PhD

Description: Data analysis involves creativity, sensitivity, and rigor. In its most basic form, qualitative data analysis involves some sort of labeling, coding, and clustering in order to make sense of data collected from evaluation fieldwork, interviews, and/or document analysis. This intermediate-level workshop builds on the basic coding and categorizing familiar to most evaluators, and extends the array of strategies available to support rigorous interpretations. The workshop presents an array of approaches to support the analysis of qualitative data, with an emphasis on procedures for the analysis of interview data. Strategies such as thematic analysis, pattern matching, template analysis, process tracing, schema analysis, and qualitative comparative analysis are outlined and illustrated with reference to examples from evaluation and from a range of disciplines, including sociology, education, political science, and psychology. The core emphasis of the workshop is creating awareness of heuristics that support the selection and application of analytic techniques that match the purpose of the evaluation, the type of data, and practical considerations such as resource constraints. While a brief overview of qualitative analysis software is provided, the workshop focuses on analysis using manual methods. A range of activities to support critical thinking and application of principles is integrated within the workshop program.

Qualitative data analysis and writing go hand in hand. In the second part of the workshop, strategies for transforming analysis through processes of description, interpretation, and judgment will be presented. These issues are particularly important in the assessment of the credibility of qualitative evidence by evaluation audiences. Issues of quality, including the validity, trustworthiness, and authenticity of qualitative data, are integrated throughout the workshop.

Participants will receive a text, Qualitative Data Analysis: Practical Strategies by Patricia Bazeley (Sage, 2013) to support learning within and beyond the workshop. Specific issues to be addressed:

  • What are the implications of an evaluator’s worldview for selection of qualitative data analysis (QDA) strategies?
  • Are there analytic options that are best suited to particular kinds of qualitative data?
  • How can participant experiences be portrayed through QDA without fracturing the data through formal coding?
  • What types of analysis may be appropriate for particular types of evaluation (program theory, realist, transformative)?
  • What strategies can be used to address interpretive dissent when working in evaluation teams?
  • What are some ways that qualitative and quantitative findings can be integrated in an evaluation report?
  • How can I sell the value of qualitative evidence to evaluation audiences?

Prerequisites: This is an intermediate-level course. Participants are assumed to have some knowledge of and/or experience with qualitative data.


July 18-19

Developmental Evaluation: Systems and Complexity

(Formerly taught as: Alternative Evaluation Designs: Implications from Systems Thinking and Complexity Theory)

Instructor: Michael Quinn Patton, PhD

Description: The field of evaluation already has a rich variety of contrasting models, competing purposes, alternative methods, and divergent techniques that can be applied to projects and organizational innovations that vary in scope, comprehensiveness, and complexity.  The challenge, then, is to match evaluation to the nature of the initiative being evaluated. This means that we need to have options beyond the traditional approaches (e.g., linear logic models, experimental designs, pre-post tests) when faced with systems change dynamics and initiatives that display the characteristics of emergent complexities. Important complexity concepts with implications for evaluation include uncertainty, nonlinearity, emergence, adaptation, dynamical interactions, and co-evolution.

Developmental Evaluation supports innovation development to guide adaptation to emergent and dynamic realities in complex environments. Innovations can take the form of new projects, programs, products, organizational changes, policy reforms, and system interventions. A complex system is characterized by a large number of interacting and interdependent elements in which there is no central control. Patterns of change emerge from rapid, real-time interactions that generate learning, evolution, and development – if one is paying attention and knows how to observe and capture the important and emergent patterns. Complex environments for social interventions and innovations are those in which what to do to solve problems is uncertain and key stakeholders are in conflict about how to proceed.

Developmental Evaluation involves real time feedback about what is emerging in complex dynamic systems as innovators seek to bring about systems change. Participants will learn the unique niche of developmental evaluation and what perspectives such as Systems Thinking and Complex Nonlinear Dynamics can offer for alternative evaluation approaches. Participants will receive a copy of the instructor’s book: Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use (Guilford, 2010).


July 18-19

Working with Evaluation Stakeholders

Instructor: John Bryson, PhD

Description: Working with stakeholders is a fact of life for evaluators. That interaction can be productive and beneficial to evaluation studies that inform decisions and produce positive outcomes for decision makers and program recipients. Or that interaction can be draining and conflictual for both the evaluator and the stakeholders and lead to studies that are misguided, cost too much, take too long, never get used, or never get done at all. So this is an incredibly important topic for evaluators to explore.  This course focuses on strategies and techniques to identify stakeholders who can and will be most beneficial for the achievement of study goals, and on how to achieve a productive working relationship with them.  Stakeholder characteristics such as knowledge of the program, power and ability to influence, and willingness to participate will be analyzed, and strategies and techniques will be presented for successfully engaging stakeholders in effective collaboration. Detailed course materials, case examples, and readings are provided to illuminate course content and extend its long-term usefulness.


July 19

Professional Standards and Principles for Ethical Evaluation Practice

Instructor: Michael Morris, PhD

Description: Participants will explore the ethical issues that can arise at various stages of the evaluation process, from entry/contracting all the way to the utilization of findings by stakeholders. Strategies for preventing ethical problems, as well as for dealing with them once they have arisen, will be addressed. Case vignettes will be used throughout the course to provide participants with an opportunity to brainstorm such strategies, and participants will have a chance to share their own ethical challenges in evaluation with others. This course will also focus on the application of the American Evaluation Association’s Guiding Principles for Evaluators and the Joint Committee’s Program Evaluation Standards to the ethical responsibilities and challenges that evaluators encounter in their work.

The course is based on the TEI premise that ethical practice is a core competency in evaluation: Competent evaluators are ethical evaluators. Participants should emerge from the course with an enhanced understanding of how the standards and principles that inform the professional practice of evaluation can increase their chances of “doing the (ethically) right thing” when conducting evaluations in the field. Participants should also be better prepared to interact with stakeholders in a fashion that lessens the likelihood that the latter will engage in behaviors that lead to ethical difficulties.


July 19-21

Applied Measurement for Evaluation

Instructor: Ann Doucette, PhD

Description: Successful evaluation depends on our ability to generate evidence attesting to the feasibility, relevance, and/or effectiveness of the interventions, services, or products we study. While theory guides our designs and how we organize our work, it is measurement that provides the evidence we use in making judgments about the quality of what we evaluate. Measurement, whether it results from self-report surveys, interviews/focus groups, observation, document review, or administrative data, must be systematic, replicable, interpretable, reliable, and valid. While hard sciences such as physics and engineering have advanced precise and accurate measurement (e.g., weight, length, mass, volume), the measurement used in evaluation studies is often imprecise and characterized by considerable error. The quality of the inferences made in evaluation studies is directly related to the quality of the measurement on which we base our judgments. Judgments that interventions are ineffective may be flawed – the reflection of measures that are imprecise and insensitive to the characteristics we choose to evaluate. Evaluation attempts to compensate for imprecise measurement with increasingly sophisticated statistical procedures to manipulate data. This emphasis on statistical analysis all too often obscures the important characteristics of the measures we choose. The class will cover:

  • Assessing measurement precision: Examining the precision of measures in relationship to the degree of accuracy that is needed for what is being evaluated. Issues to be addressed include: measurement/item bias, the sensitivity of measures in terms of developmental and cultural issues, scientific soundness (reliability, validity, error, etc.), and the ability of the measure to detect change over time.
  • Quantification: Measurement is essentially assigning numbers to what is observed (directly or inferentially). Decisions about how we quantify observations, and the implications these decisions have for using the resulting data and for the objectivity and certainty we bring to the judgments made in our evaluations, will be examined. This section of the course will focus on the quality of response options and coding categories – do response options/coding categories segment the respondent sample in meaningful and useful ways?
  • Issues and Considerations – using existing measures versus developing your own measures: What to look for and how to assess whether existing measures are suitable for your evaluation project will be examined. Issues associated with the development and use of new measures will be addressed in terms of how to establish sound psychometric properties, and what cautionary statements should accompany the interpretation of evaluation findings based on these new measures.
  • Criteria for choosing measures: Assessing the adequacy of measures in terms of the characteristics of measurement – choosing measures that fit your evaluation theory and evaluation focus (exploration, needs assessment, level of implementation, process, impact and outcome). Measurement feasibility, practicability and relevance will be examined. Various measurement techniques will be examined in terms of precision and adequacy, as well as the implications of using screening, broad-range, and peaked tests.
  • Error-influences on measurement precision: The characteristics of various measurement techniques, assessment conditions (setting, respondent interest, etc.), and evaluator characteristics will be addressed.

Participants will be provided with a copy of the text: Scale Development: Theory and Applications by Robert F. DeVellis (Sage, 2012).
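One of the scientific-soundness properties listed above — reliability — has a standard internal-consistency index, Cronbach's alpha, which is central to the DeVellis text. A minimal sketch of the computation on simulated scale data (the respondents, items, and numbers are illustrative assumptions, not course materials):

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated responses: 300 respondents answering 5 items on a 1-5 scale,
# where each item reflects a shared latent trait plus item-specific noise.
n, k = 300, 5
trait = rng.normal(3, 0.8, n)
items = np.clip(np.round(trait[:, None] + rng.normal(0, 0.7, (n, k))), 1, 5)

def cronbach_alpha(x):
    """Internal-consistency reliability of a set of scale items."""
    n_items = x.shape[1]
    item_vars = x.var(axis=0, ddof=1).sum()
    total_var = x.sum(axis=1).var(ddof=1)
    return n_items / (n_items - 1) * (1 - item_vars / total_var)

alpha = cronbach_alpha(items)
print(f"Cronbach's alpha = {alpha:.2f}")
```

Alpha rises as the items share more variance relative to their individual noise, which is why adding redundant items inflates it — one of the cautions that careful scale development addresses.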


July 19-20

Culture and Evaluation

Instructor: Leona Ba, EdD

Description: This course will provide participants with the opportunity to learn and apply a step-by-step approach on how to conduct culturally responsive evaluations. It will use theory-driven evaluation as a framework, because it ensures that evaluation is integrated into the design of programs. More specifically, it will follow the three-step Culturally Responsive Theory-Driven Evaluation model proposed by Bledsoe and Donaldson (2015):

  1. Develop program impact theory
  2. Formulate and prioritize evaluation questions
  3. Answer evaluation questions

Upon registration, participants will receive a copy of the book chapter discussing this model.

During the workshop, participants will reflect on their own cultural self-awareness, a prerequisite for conducting culturally responsive evaluations. In addition, they will explore strategies for applying cultural responsiveness to evaluation practice using examples from the instructor’s first-hand experience and other program evaluations. They will receive a package of useful handouts, as well as a list of selected resources.

Prerequisites: Understanding of evaluation and research design.

This course uses some material from Bledsoe, K., & Donaldson, S. I. (2015). Culturally responsive theory-driven evaluation. In Hood, S., Hopson, R., & Frierson, H. (Eds.), Continuing the journey to reposition culture and cultural context in evaluation theory and practice (pp. 3-27). Charlotte, NC: Information Age Publishing, Inc.


July 19

Hierarchical Linear Modeling

Instructor: Gary T. Henry, PhD

Description: In many evaluations, program participants are nested within sites, schools, or groups. In addition, the nesting is sometimes multi-leveled, such as students within classes within schools within school districts. To make matters more complicated, we frequently have multiple observations taken over time on the program participants, such as years of student achievement scores or measures of mental health status. Hierarchical linear models (HLM) have been developed to analyze these types of data accurately. These models make two important improvements over regular (ordinary least squares) regression. First, the standard errors that are used for testing statistical significance are corrected for the “nesting” or “clustering” of participants into groups. Usually, the participants in a “cluster” are more similar to each other than they are to participants in other “clusters,” and this, when uncorrected, deflates the standard errors, leading to “false positives” – concluding that a coefficient is statistically significant when it is not. HLM corrects the standard errors and tests of statistical significance for nested data. Second, HLM appropriately apportions the variance that occurs at each level to that level, and provides realistic estimates of the effects across levels.

In this course, we lay a foundation for understanding, using, and interpreting HLM. We begin with multiple regression, including the assumptions that must be fulfilled for the coefficients and tests of statistical significance to be unbiased. Using a step-by-step approach, we will introduce the basic concepts of HLM and the notation that has been developed for presenting HLM models. We will focus on practical aspects of the use of HLM and on correctly putting the findings into language suitable for a report.
The main objective of the course is to provide participants with a better understanding of HLM, how it can improve the analysis of data in many evaluations, and how to read and interpret reports and articles that utilize it. The course will not offer hands-on experience writing and implementing HLM statistical programs.
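The core problem the course describes — ignoring clustering understates standard errors — can be demonstrated with a short simulation rather than HLM software. The sketch below (NumPy; schools, students, and all numbers are illustrative assumptions) compares the naive OLS standard error of a school-level treatment effect against the standard error obtained by analyzing school means, which respects the level at which treatment actually varies:

```python
import numpy as np

rng = np.random.default_rng(7)

# 40 schools, 25 students each; treatment is assigned at the school level.
n_schools, n_students = 40, 25
treated = np.repeat(rng.integers(0, 2, n_schools), n_students)
school_effect = np.repeat(rng.normal(0, 3, n_schools), n_students)  # shared within a school
y = 50 + 2.0 * treated + school_effect + rng.normal(0, 5, treated.size)

def ols_se(x, y):
    """Standard error of the OLS slope, treating all observations as independent."""
    n = x.size
    X = np.column_stack([np.ones(n), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (n - 2)
    return np.sqrt(sigma2 / np.sum((x - x.mean()) ** 2))

# Naive SE: pretends each of the 1,000 students is an independent observation.
naive_se = ols_se(treated.astype(float), y)

# Cluster-aware SE: aggregate to school means, so the effective n is 40 schools.
school_y = y.reshape(n_schools, n_students).mean(axis=1)
school_t = treated.reshape(n_schools, n_students)[:, 0].astype(float)
cluster_se = ols_se(school_t, school_y)

print(f"naive SE={naive_se:.3f}, cluster-aware SE={cluster_se:.3f}")
```

The naive standard error comes out several times smaller than the cluster-aware one, which is exactly the mechanism behind the "false positives" the course warns about; HLM generalizes this correction while retaining student-level information.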


July 20-21

Case Studies in Evaluation

Instructor: Delwyn Goodrick, PhD

Description: Case study approaches are widely used in program evaluation. They facilitate an understanding of the way in which context mediates the influence of program and project interventions. While case study designs are often adopted to describe or depict program processes, their capacity to illuminate depth and detail can also contribute to an understanding of the mechanisms responsible for program outcomes.

The literature on case study is impressive, but there remains tension in perspectives about what constitutes good case study practice in evaluation. This leads to substantive differences in the way case study is conceived and practiced within the evaluation profession.  This workshop aims to disentangle the discussions and debates, and highlight the central principles critical to effective case study practice and reporting.

This two day workshop will explore case study design, analysis and representation.  The workshop will address case study topics through brief lecture presentation, small group discussion and workshop activities with realistic case study scenarios.  Participants will be encouraged to examine the conceptual underpinnings, defining features and practices involved in doing case studies in evaluation contexts.  Discussion of the ethical principles underpinning case study will be integrated throughout the workshop.

Specific topics to be addressed over the two days include:

  • The utility of case studies in evaluation, and circumstances in which case studies may not be appropriate
  • Evaluation questions that are suitable for a case study approach
  • Selecting the unit of analysis in case study
  • Design frameworks in case studies – single and multiple case study; the intrinsic and instrumental case
  • The use of mixed methods in case study approaches – sequential and concurrent designs
  • Developing case study protocols and case study guides
  • Analyzing case study materials – within case and cross case analysis, matrix and template displays that facilitate analysis
  • Principles and protocols for effective team work in multiple case study approaches
  • Transferability/generalizability of case studies
  • Validity and trustworthiness of case studies
  • Synthesizing case materials
  • Issues of representation of the case and cases in reporting

Detailed course notes will be provided to all participants, and practice examples will be referenced over the two days.  Text provided and used in the course: Yin, R.K. Applications of Case Study Research (Sage, 2012).


July 20-21

Evaluability Assessment

Instructor: Debra J. Rog, PhD

Description: Increasingly, both public and private funders are looking to evaluation not only as a tool for determining the accountability of interventions, but also to add to our evidence base on what works in particular fields. With scarce evaluation resources, however, funders are interested in targeting those resources in the most judicious fashion and with the highest yield. Evaluability assessment is a tool that can inform decisions on whether a program or initiative is suitable for an evaluation and the type of evaluation that would be most feasible, credible, and useful.

This course will provide students with the background, knowledge, and skills needed to conduct an evaluability assessment. Using materials and data from actual EA studies and programs, students will participate in the various stages of the method, including the assessment of the logic of a program’s design and the consistency of its implementation; the examination of the availability, quality, and appropriateness of existing measurement and data capacities; the analysis of the plausibility that the program/initiative can achieve its goals; and the assessment of appropriate options for either evaluating the program, improving the program design/implementation, or strengthening the measurement. The development and analysis of logic models will be stressed, and an emphasis will be placed on the variety of products that can emerge from the process.

Students will be sent several articles prior to the course as a foundation for the method.

Prerequisites: Background in evaluation is useful and desirable, as is familiarity with conducting program level site visits.


July 20

Utilization-Focused Evaluation

Instructor: Michael Quinn Patton, PhD

Description: Utilization-Focused Evaluation begins with the premise that evaluations should be judged by their utility and actual use; therefore, evaluators should facilitate the evaluation process and design any evaluation with careful consideration of how everything that is done, from beginning to end, will affect use. Use concerns how real people in the real world apply evaluation findings and experience the evaluation process.  Therefore, the focus in utilization-focused evaluation is on intended use by intended users.

Utilization-focused evaluation is a process for helping primary intended users select the most appropriate content, model, methods, theory, and uses for their particular situation.  Situational responsiveness guides the interactive process between evaluator and primary intended users.  A psychology of use undergirds and informs utilization-focused evaluation:  intended users are more likely to use evaluations if they understand and feel ownership of the evaluation process and findings; they are more likely to understand and feel ownership if they’ve been actively involved; by actively involving primary intended users, the evaluator is training users in use, preparing the groundwork for use, and reinforcing the intended utility of the evaluation every step along the way.

Participants will learn:

  • Key factors in doing useful evaluations, common barriers to use, and how to overcome those barriers.
  • Implications of focusing an evaluation on intended use by intended users.
  • Options for evaluation design and methods based on situational responsiveness, adaptability and creativity.
  • Ways of building evaluation into the programming process to increase use.

Participants will receive a copy of the instructor’s text: Utilization-Focused Evaluation, 4th ed. (Sage, 2008).


July 20

Leveraging Technology in Evaluation

Instructor: Tarek Azzam, PhD

Description: This course will focus on how a range of new technological tools can be used to improve program evaluations. Specifically, we will explore the application of tools to engage clients and a range of stakeholders, collect research and evaluation data, formulate and prioritize research and evaluation questions, express and assess logic models and theories of change, track program implementation, provide continuous improvement feedback, determine program outcomes/impact, and present data and findings.

After completing the course, participants are expected to have an understanding of how technology can be used in evaluation practice, and familiarity with specific technological tools that can be used to collect data, interpret findings, conceptually map programs in an interactive way, produce interactive reports, and utilize crowdsourcing for quantitative and qualitative analysis.

Participants will be given information on how to access tools such as Mechanical Turk (MTurk) for crowdsourcing, Geographical Information Systems (GIS), interactive reporting software, and interactive conceptual mapping tools to improve the quality of their evaluation projects.

Participants will also be provided with a list of tools and resources that can be used after the completion of the course.

Contact Us

The Evaluators’ Institute

TEI Maryland Office
1451 Rockville Pike, Suite 600
Rockville, MD 20852
301-287-8745
tei@cgu.edu