Evaluation Foundations Courses

Course Descriptions


Applied Measurement for Evaluation

Instructor: Ann Doucette, PhD

Description: Successful evaluation depends on our ability to generate evidence attesting to the feasibility, relevance, and/or effectiveness of the interventions, services, or products we study. While theory guides our designs and how we organize our work, it is measurement that provides the evidence we use in making judgments about the quality of what we evaluate. Measurement, whether it results from self-report surveys, interviews/focus groups, observation, document review, or administrative data, must be systematic, replicable, interpretable, reliable, and valid. While hard sciences such as physics and engineering have advanced precise and accurate measurement (e.g., weight, length, mass, volume), the measurement used in evaluation studies is often imprecise and characterized by considerable error. The quality of the inferences made in evaluation studies is directly related to the quality of the measurement on which we base our judgments. Judgments that an intervention is ineffective may be flawed – the reflection of measures that are imprecise and insensitive to the characteristics we choose to evaluate. Evaluation often attempts to compensate for imprecise measurement with increasingly sophisticated statistical procedures for manipulating data, and this emphasis on statistical analysis all too often obscures the important characteristics of the measures we choose. This class will cover:

  • Assessing measurement precision: Examining the precision of measures in relation to the degree of accuracy needed for what is being evaluated. Issues to be addressed include measurement/item bias, the sensitivity of measures to developmental and cultural issues, scientific soundness (reliability, validity, error, etc.), and the ability of a measure to detect change over time (a brief reliability sketch follows this list).
  • Quantification: Measurement is essentially the assignment of numbers to what is observed (directly or inferentially). The course will examine decisions about how we quantify observations and the implications those decisions have for using the resulting data, as well as for the objectivity and certainty we bring to the judgments made in our evaluations. This section of the course will focus on the quality of response options and coding categories – do they segment the respondent sample in meaningful and useful ways?
  • Issues and considerations – using existing measures versus developing your own: What to look for and how to assess whether existing measures are suitable for your evaluation project will be examined. Issues associated with developing and using new measures will be addressed in terms of how to establish sound psychometric properties and what cautionary statements should accompany the interpretation of evaluation findings based on these new measures.
  • Criteria for choosing measures: Assessing the adequacy of measures in terms of the characteristics of measurement – choosing measures that fit your evaluation theory and evaluation focus (exploration, needs assessment, level of implementation, process, impact and outcome). Measurement feasibility, practicability and relevance will be examined. Various measurement techniques will be examined in terms of precision and adequacy, as well as the implications of using screening, broad-range, and peaked tests.
  • Sources of error that influence measurement precision: The characteristics of various measurement techniques, assessment conditions (setting, respondent interest, etc.), and evaluator characteristics will be addressed.
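As a small illustration of the kind of reliability evidence discussed above, the sketch below estimates Cronbach's alpha, one common index of internal-consistency reliability, for a short self-report scale. It is a generic example, not a procedure drawn from the course materials, and the response data are invented for illustration.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Estimate internal-consistency reliability (Cronbach's alpha).

    items: 2-D array with one row per respondent and one column per scale item.
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Invented responses: 5 respondents rating a 4-item scale on a 1-5 scale.
responses = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 3, 4],
    [1, 2, 1, 2],
])
print(f"Cronbach's alpha ≈ {cronbach_alpha(responses):.2f}")
```

A low alpha on data like these would be one signal that the scale's items do not hang together well enough to support confident judgments about the construct being measured.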

Participants will be provided with a copy of the text: Scale Development: Theory and Applications by Robert F. DeVellis (Sage, 2012).



Basics of Program Evaluation

(Previously taught as Foundations of Evaluation: Theory, Method, and Practice)

Instructor: Arnold Love, PhD

Description: With an emphasis on constructing a sound foundational knowledge base, this course is designed to provide an overview of both past and contemporary perspectives on evaluation theory, method, and practice. Course topics include, but are not limited to: basic evaluation concepts and definitions; evaluation as a cognitive activity; the view of evaluation as a transdiscipline; the general and working logic of evaluation; an overview of the history of the field; distinctions between evaluation and basic and applied social science research; evaluation-specific methods (e.g., needs assessment, stakeholder analysis, identifying evaluative criteria, standard setting); reasons and motives for conducting evaluation; central types and purposes of evaluation; objectivity, bias, and validity; the function of program theory in evaluation; evaluator roles; core competencies required for conducting high-quality, professional evaluation; audiences and users of evaluation; alternative evaluation models and approaches; the political nature of evaluation and its implications for practice; professional standards and codes of conduct; and emerging and enduring issues in evaluation theory, method, and practice.

Although the major focus of the course is program evaluation in multiple settings (e.g., education, criminal justice, health and medicine, human and social services, international development, science and technology), examples from personnel evaluation, policy analysis, and product evaluation will also be used to illustrate foundational concepts. The course will conclude with how to plan, design, and conduct high-quality evaluations using a contingency-based, situational approach that weighs evaluation purposes, resources (e.g., time, budget, expertise), uses and users, competing demands, and other relevant contingencies.

Throughout the course, active learning is emphasized; the instructional format therefore consists of instructor-led presentations, discussions, and application exercises. Audiences for this course include those who are familiar with social science research but unfamiliar with evaluation, as well as evaluators who wish to review current theories, methods, and practices.

Prerequisites: Basic knowledge of social science research methods.



Ethics in Practice: A Global Perspective

Instructor: Michael Quinn Patton, PhD

Description: The course will compare and contrast various ethical guidance statements for evaluators from around the world including the OECD/DAC Quality Standards for Development Evaluation, the Joint Committee Standards, and ethical guidance adopted by national evaluation associations. The course will examine overarching ethical frameworks for evaluation: Universal Declaration of Human Rights; Sustainability; the Paris Declaration Principles on Development Aid; and principles for conducting research with indigenous people.

Professional evaluation associations and networks around the world have adopted ethical guidelines, standards, and principles. These recognize that evaluators can and do face a variety of daunting ethical challenges. Because of the political, cultural, and contextual variations evaluators face, judgment must be exercised about what is appropriate in a particular situation. Few rules can be applied; rather, ethical guidelines, standards, and principles have to be interpreted, and tough judgment calls must be made about what to do. This course is about those interpretation and judgment processes. Ethical judgments apply at every stage of evaluation: in initial interactions with stakeholders, in design decisions, throughout data collection, and in analyzing, reporting, and facilitating use of findings. Much of the course will examine specific ethical challenges commonly reported among evaluators working internationally. Participants will also have an opportunity to share and discuss their own experiences in dealing with ethical challenges.

The course is based on the TEI premise that ethical practice is one of the emergent competencies in evaluation: Competent evaluators are ethical evaluators. By the end of the course, participants will know the ethical standards of evaluation as an international profession; have increased confidence that they can wisely, astutely, and effectively apply ethical standards in their own practice; and have a deeper sense of professionalism as a result of being more deeply grounded in the ethical foundations of evaluation.



Evaluation Foundations

Instructor: Tarek Azzam, PhD

Description: Whenever we try something new, we ask ourselves, “Is it better than similar items? What makes it good? What is its value?” This process of valuing may be applied to anything from purchasing a computer to judging the quality of a school curriculum or an organization’s training program. The art and science of valuing is called evaluation. All human beings evaluate, albeit informally, but the ability to evaluate systematically is important to our society and has the power to help improve individual lives and society as a whole. This course aims to introduce you to some of the prevalent ideas that underpin the evaluation field and its practice.

This course will provide students with a basic understanding of evaluation. Students are introduced to some fundamental evaluation topics and concepts in four areas: practice, theory, methods, and research. There are several course objectives:

  • Understand the history and influences of evaluation in society
  • Distinguish evaluation’s purposes and evaluators’ roles and activities
  • Become familiar with major evaluation concepts
  • Become familiar with some main evaluation methods
  • Become familiar with recent trends influencing the evaluation field



Professional Standards and Principles for Ethical Evaluation Practice

Instructor: Michael Morris, PhD

Description: Participants will explore the ethical issues that can arise at various stages of the evaluation process, from entry/contracting all the way to the utilization of findings by stakeholders. Strategies for preventing ethical problems, as well as for dealing with them once they have arisen, will be addressed. Case vignettes will be used throughout the course to provide participants with an opportunity to brainstorm such strategies, and participants will have a chance to share their own ethical challenges in evaluation with others. This course will also focus on the application of the American Evaluation Association’s Guiding Principles for Evaluators and the Joint Committee’s Program Evaluation Standards to the ethical responsibilities and challenges that evaluators encounter in their work.

The course is based on the TEI premise that ethical practice is a core competency in evaluation: Competent evaluators are ethical evaluators. Participants should emerge from the course with an enhanced understanding of how the standards and principles that inform the professional practice of evaluation can increase their chances of “doing the (ethically) right thing” when conducting evaluations in the field. Participants should also be better prepared to interact with stakeholders in a fashion that lessens the likelihood that the latter will engage in behaviors that lead to ethical difficulties.



Evaluation Research Methods: A Survey of Quantitative and Qualitative Approaches

Instructor: David B. Wilson, PhD

Description: This course will introduce a range of basic quantitative and qualitative social science research methods that are applicable to the evaluation of various programs. It is a foundational course that introduces methods developed more fully in other TEI courses and is designed to ensure basic familiarity with a range of social science research methods and concepts.

Topics will include observational and qualitative methods, survey and interview (structured and unstructured) techniques, experimental and quasi-experimental designs, and sampling methods. This course is for those who want to update their existing knowledge and skills, and it will serve as an introduction for those new to the topic.

Text provided: Creswell, J. Research Design (Sage, 2014).



Informing Practice Using Evaluation Models and Theories

Instructor: Melvin M. Mark, PhD

Description: Evaluators who are not aware of the contemporary and historical aspects of the profession “are doomed to repeat past mistakes and, equally debilitating, will fail to sustain and build on past successes” (Madaus, Scriven, and Stufflebeam, 1983).

“Evaluation theories are like military strategy and tactics; methods are like military weapons and logistics. The good commander needs to know strategy and tactics to deploy weapons properly or to organize logistics in different situations. The good evaluator needs theories for the same reasons in choosing and deploying methods” (Shadish, Cook, and Leviton, 1991).

These quotes from Madaus et al. (1983) and Shadish et al. (1991) provide the perfect rationale for why the serious evaluator should be concerned with models and theories of evaluation. The primary purpose of this class is to provide an overview of major streams of evaluation theory (or models) and to consider their implications for practice. Topics include: (1) why evaluation theories matter, (2) frameworks for classifying different theories, (3) in-depth examination of 4-6 major theories, (4) identification of key issues on which evaluation theories and models differ, (5) benefits and risks of relying heavily on any one theory, and (6) tools and skills that can help you pick and choose from different theoretical perspectives in planning an evaluation in a specific context. The overarching theme will be practice implications – that is, what difference it would make for practice to follow one theory rather than another.

Theories to be discussed will be ones that have had a significant impact on the evaluation field, that offer perspectives with major implications for practice, and that represent important and different streams of theory and practice. Case examples from the past will be used to illustrate key aspects of each theory’s approach to practice.

Participants will be asked to use the theories to question their own and others’ practices, and to consider what characteristics of evaluations will help increase their potential for use. Each participant will receive Marvin Alkin’s text, Evaluation Roots (Sage, 2013) and other materials.

The instructor’s assumption will be that most people attending the session have some general familiarity with the work of a few evaluation theorists, but that most will not themselves be scholars of evaluation theory. At the same time, the course should be useful, whatever one’s level of familiarity with evaluation theory.



Monitoring and Evaluation: Frameworks and Fundamentals

Instructor: Ann Doucette, PhD

Description: The overall goal of Monitoring and Evaluation (M&E) is the assessment of program progress to optimize outcome and impact – program results. While M&E components overlap, each has distinct characteristics. Monitoring activities systematically observe (formally and informally) indicators assumed to signal favorable results, while evaluation activities build on monitoring indicator data to assess intervention/program effectiveness, the adequacy of program impact pathways, the likelihood of program sustainability, program strengths and weaknesses, the value, merit, and worth of the initiative, and the like. The increased emphasis on effectively managing toward favorable results demands a more comprehensive M&E approach in order to identify whether programs are favorably on track or whether improved program strategies and mid-course corrections are needed.

The two-day, interactive course will cover the following:

  • M&E introduction and overview
  • Defining the purpose and scope of M&E
  • Engaging stakeholders and establishing an evaluative climate
    • The role and effect of partnerships and boundary spanners, policy, and advocacy
  • Identifying and supporting needed capabilities
  • M&E frameworks – agreement on M&E targets
    • Performance and Results-Based M&E approaches
  • Connecting program design and M&E frameworks
    • Comparisons – Is a counterfactual necessary?
    • Contribution versus attribution
  • Identification of key performance indicators (KPIs)
    • Addressing uncertainties and complexity
  • Data: collection and methods
    • Establishing indicator baselines (addressing the challenges of baseline estimates)
    • What data exists? What data/information needs to be collected?
  • Measuring progress and success – contextualizing outcomes and setting targets (see the brief indicator sketch after this list)
    • Time to expectancy – what can be achieved by the program?
  • Using and reporting M&E findings
  • Sustaining M&E culture
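As a minimal illustration of how an indicator's baseline, target, and most recent monitored value might be recorded and compared, consider the sketch below. The indicator name and figures are hypothetical and are not drawn from the course materials; they simply show one common way of expressing progress as the share of the baseline-to-target gap closed so far.

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    """A key performance indicator with a baseline, a target, and the latest value."""
    name: str
    baseline: float   # value at the start of the program
    target: float     # value the program aims to reach
    latest: float     # most recent monitored value

    def progress(self) -> float:
        """Share of the baseline-to-target distance achieved so far."""
        return (self.latest - self.baseline) / (self.target - self.baseline)

# Hypothetical indicator, for illustration only.
immunization = Indicator(
    name="Children fully immunized (%)",
    baseline=62.0,
    target=85.0,
    latest=71.5,
)
print(f"{immunization.name}: {immunization.progress():.0%} of the target gap closed")
```

In practice, of course, the interpretation of such a figure depends on the baseline's credibility, the realism of the target, and the time the program has had to produce change, all of which are addressed in the course.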

The course focuses on practical application. Course participants will gain a comprehensive understanding of M&E frameworks and fundamentals, M&E tools, and practice approaches. Case examples will be used to illustrate the M&E process, and participants are encouraged to submit their own case examples prior to the course for inclusion in the course discussion. The course is purposefully geared toward evaluators working in developing and developed countries; in national and international agencies, organizations, and NGOs; and in national, state, provincial, and county governments.

Familiarity with evaluation is helpful, but not required, for this course.



Working with Evaluation Stakeholders

Instructor: John Bryson, PhD

Description: Working with stakeholders is a fact of life for evaluators. That interaction can be productive and beneficial, leading to evaluation studies that inform decisions and produce positive outcomes for decision makers and program recipients. Or it can be draining and conflictual for both the evaluator and the stakeholders, leading to studies that are misguided, cost too much, take too long, never get used, or never get done at all. This makes stakeholder engagement an incredibly important topic for evaluators to explore. This course focuses on strategies and techniques for identifying the stakeholders who can and will be most beneficial to the achievement of study goals and for building a productive working relationship with them. Stakeholder characteristics such as knowledge of the program, power and ability to influence, and willingness to participate will be analyzed, and strategies and techniques will be presented for engaging stakeholders successfully in effective collaboration. Detailed course materials, case examples, and readings are provided to illuminate course content and extend its long-term usefulness.
