March 12-13

Monitoring and Evaluation: Frameworks and Fundamentals

Instructor: Ann Doucette, PhD

Description: The overall goal of Monitoring and Evaluation (M&E) is to assess program progress in order to optimize outcomes and impact (program results). While M&E components overlap, each has distinct characteristics. Monitoring activities systematically observe (formally and informally) assumed indicators of favorable results, while evaluation activities build on monitoring indicator data to assess intervention/program effectiveness, the adequacy of program impact pathways, the likelihood of program sustainability, program strengths and weaknesses, the value, merit, and worth of the initiative, and the like. The increased emphasis on effectively managing toward favorable results demands a more comprehensive M&E approach to identify whether programs are on track or whether improved program strategies and mid-course corrections are needed.

The two-day, interactive course will cover the following:

  • M&E introduction and overview
  • Defining the purpose and scope of M&E
  • Engaging stakeholders and establishing an evaluative climate
    • The role and effect of partnerships and boundary spanners, policy, and advocacy
  • Identifying and supporting needed capabilities
  • M&E frameworks – agreement on M&E targets
    • Performance and Results-Based M&E approaches
  • Connecting program design and M&E frameworks
    • Comparisons – Is a counterfactual necessary?
    • Contribution versus attribution
  • Identification of key performance indicators (KPIs)
    • Addressing uncertainties and complexity
  • Data: collection and methods
    • Establishing indicator baselines (addressing the challenges of baseline estimates)
    • What data exists? What data/information needs to be collected?
  • Measuring progress and success – contextualizing outcomes and setting targets
    • Time to expectancy – what can be achieved by the program?
  • Using and reporting M&E findings
  • Sustaining M&E culture

The course focuses on practical application. Course participants will gain a comprehensive understanding of M&E frameworks and fundamentals, M&E tools, and practice approaches. Case examples will be used to illustrate the M&E process. Course participants are encouraged to submit their own case examples prior to the course for inclusion in the course discussion. The course is purposefully geared toward evaluators working in developing and developed countries; in national and international agencies, organizations, and NGOs; and in national, state, provincial, and county governments.

Familiarity with evaluation is helpful, but not required, for this course.


March 12-13

Presenting Data Effectively: Practical Methods for Improving Evaluation Communication

Instructor: Stephanie Evergreen, PhD

Description: Crystal-clear charts and graphs are valuable: they save an audience’s mental energy, keep a reader engaged, and make you look smart. In this workshop, attendees will learn the science behind presenting data effectively. We will go behind the scenes in Excel and discuss how each part of a visualization can be modified to best tell the story in a particular dataset. We will discuss how to choose the best chart type given audience needs, cognitive capacity, and the story that needs to be told about the data, covering both quantitative and qualitative visualizations. We will walk step by step through how to create newer types of data visualizations and how to manipulate the default settings to customize graphs so that they have a more powerful impact. Attendees will work with a prepared spreadsheet to learn the secrets to becoming an Excel dataviz ninja. Attendees will get hands-on practice with direct, practical steps that can be applied immediately after the workshop to clarify data presentation and support clearer decision-making. Packed with guidelines and examples, this workshop will leave you better able to package your data so it reflects your smart, professional work.

Note: Attendees are strongly encouraged to maximize the workshop experience by bringing a slideshow that contains graphs currently under construction. Attendees should bring their own laptops loaded with Microsoft Excel. No tablets or smartphones. PCs preferred; Macs okay.

On the second day of the workshop, Dr. Stephanie Evergreen will lead attendees step by step through how to manipulate Excel into making impactful charts and graphs, using data sets distributed to the audience. Audience members will leave the session with more in-depth knowledge about how to craft effective data displays. Completing the session moves one to Excel Ninja Level 10.

Attendees will learn:

  1. Visual processing theory and why it is relevant for evaluators
  2. How to apply graphic design best practices and visual processing theory to enhance data visualizations with simple, immediately implementable steps
  3. Which chart type to use, when
  4. How to construct data visualizations and other evaluation communication to best tell the story in the data
  5. Alternative methods for reporting

Workshop attendees will leave with helpful handouts and a copy of the instructor’s book, Effective Data Visualization.

Registrants should regularly develop graphs, slideshows, technical reports, and other written communication for evaluation work and be familiar with the navigational and layout tools available in common software programs such as Microsoft Office.


March 14-15

Using Research, Program Theory, & Logic Models to Design and Evaluate Programs

Instructor: Stewart I. Donaldson, PhD

Description: It is now commonplace to use research, program theory, and logic models in evaluation practice. They are often used to help design effective programs, and at other times to explain how a program is understood to contribute to its intended or observed outcomes. However, this does not mean that they are always used appropriately or to the best effect. At their best, prior research, program theories, and logic models can provide an evidence base to guide action, bring conceptual clarity, motivate staff, and focus design and evaluations. At their worst, they can divert time and attention from other critical evaluation activities, provide an invalid or misleading picture of the program, and discourage critical investigation of causal pathways and unintended outcomes. This course focuses on developing useful evidence-based program theories and logic models, and on using them effectively to guide evaluation and avoid some of the most common traps. Application exercises are used throughout the course to demonstrate concepts and techniques: (a) ways to use social science theory and research, program theories, and logic models to positive advantage; (b) how to formulate and prioritize key evaluation questions; (c) how to gather credible and actionable evidence; (d) how to recognize and communicate the ways they can be used with negative results; and (e) strategies for avoiding common traps.

Recommended Book: Donaldson, S. I. (2021). Introduction to Theory-Driven Program Evaluation: Culturally Responsive and Strengths-Focused Applications.  New York, NY: Routledge.

Students may also be interested in: Credible and Actionable Evidence: The Foundation for Rigorous and Influential Evaluations (Sage).

Prerequisites: None


March 15-17

Applied Measurement for Evaluation

Instructor: Ann Doucette, PhD

Description: Successful evaluation depends on our ability to generate evidence attesting to the feasibility, relevance, and/or effectiveness of the interventions, services, or products we study. While theory guides our designs and how we organize our work, it is measurement that provides the evidence we use in making judgments about the quality of what we evaluate. Measurement, whether it results from self-report surveys, interviews/focus groups, observation, document review, or administrative data, must be systematic, replicable, interpretable, reliable, and valid. While hard sciences such as physics and engineering have advanced precise and accurate measurement (i.e., weight, length, mass, volume), the measurement used in evaluation studies is often imprecise and characterized by considerable error. The quality of the inferences made in evaluation studies is directly related to the quality of the measurement on which we base our judgments. Judgments that an intervention is ineffective may be flawed, reflecting measures that are imprecise and insensitive to the characteristics we chose to evaluate. Evaluation attempts to compensate for imprecise measurement with increasingly sophisticated statistical procedures to manipulate data. The emphasis on statistical analysis all too often obscures the important characteristics of the measures we choose. The course will cover:

  • Assessing measurement precision: Examining the precision of measures in relation to the degree of accuracy needed for what is being evaluated. Issues to be addressed include measurement/item bias, the sensitivity of measures in terms of developmental and cultural issues, scientific soundness (reliability, validity, error, etc.), and the ability of the measure to detect change over time.
  • Quantification: Measurement is essentially the assignment of numbers to what is observed (directly and inferentially). The course will examine decisions about how we quantify observations and the implications these decisions have for using the resulting data, as well as for the objectivity and certainty we bring to the judgments made in our evaluations. This section of the course will focus on the quality of response options and coding categories: do response options/coding categories segment the respondent sample in meaningful and useful ways?
  • Issues and Considerations – using existing measures versus developing your own measures: What to look for and how to assess whether existing measures are suitable for your evaluation project will be examined. Issues associated with the development and use of new measures will be addressed in terms of how to establish sound psychometric properties and what cautionary statements should accompany the interpretation of evaluation findings based on these new measures.
  • Criteria for choosing measures: Assessing the adequacy of measures in terms of the characteristics of measurement – choosing measures that fit your evaluation theory and evaluation focus (exploration, needs assessment, level of implementation, process, impact and outcome). Measurement feasibility, practicability and relevance will be examined. Various measurement techniques will be examined in terms of precision and adequacy, as well as the implications of using screening, broad-range, and peaked tests.
  • Error – influences on measurement precision: The characteristics of various measurement techniques, assessment conditions (setting, respondent interest, etc.), and evaluator characteristics will be addressed.

Recommended text: Scale Development: Theory and Applications by Robert F. DeVellis (Sage, 2012).

Contact Us

The Evaluators’ Institute

tei@cgu.edu