Using Evaluation – Strategies and Capacity Courses


Course Descriptions


Culture and Evaluation

Instructor: Leona Ba, EdD

Description: This course will provide participants with the opportunity to learn and apply a step-by-step approach to conducting culturally responsive evaluations. It will use theory-driven evaluation as a framework because it ensures that evaluation is integrated into the design of programs. More specifically, it will follow the three-step Culturally Responsive Theory-Driven Evaluation model proposed by Bledsoe and Donaldson (2015):

  1. Develop program impact theory
  2. Formulate and prioritize evaluation questions
  3. Answer evaluation questions

Upon registration, participants will receive a copy of the book chapter discussing this model.

During the workshop, participants will reflect on their own cultural self-awareness, a prerequisite for conducting culturally responsive evaluations. In addition, they will explore strategies for applying cultural responsiveness to evaluation practice using examples from the instructor’s first-hand experience and other program evaluations. They will receive a package of useful handouts, as well as a list of selected resources.

Prerequisites: Understanding of evaluation and research design.

This course uses some material from Bledsoe, K., & Donaldson, S. I. (2015). Culturally responsive theory-driven evaluation. In S. Hood, R. Hopson, & H. Frierson (Eds.), Continuing the journey to reposition culture and cultural context in evaluation theory and practice (pp. 3-27). Charlotte, NC: Information Age Publishing.



Effective Reporting Strategies for Evaluators

Instructor: Kathryn Newcomer, PhD

Description: The use and usefulness of evaluation work is highly affected by the effectiveness of reporting strategies and tools. Care in crafting both the style and substance of findings and recommendations is critical to ensure that stakeholders pay attention to the message. Skill in presenting sufficient information — yet not overwhelming the audience — is essential to raise the likelihood that potential users of the information will be convinced of both the relevance and the validity of the data. This course will provide guidance and practical tips on reporting evaluation findings. Attention will be given to the selection of appropriate reporting strategies and formats for different audiences and to the preparation of effective executive summaries; clear analytical summaries of quantitative and qualitative data; user-friendly tables and figures; discussions of limitations to measurement validity, generalizability, causal inference, statistical conclusion validity, and data reliability; and useful recommendations. The text provided as part of the course fee is Torres et al., Evaluation Strategies for Communicating and Reporting (2nd ed., Sage, 2005).



Evaluation Capacity Building in Organizations

Instructor: Leslie Fierro, PhD

Description: Evaluation capacity building (ECB) involves implementing one or more interventions to build individual and organizational capacities to engage in and sustain the act of evaluation – including, but not limited to, commissioning, planning, implementing, and using findings from evaluations. ECB has received increased attention in the evaluation community over the past decade as funders of government and nonprofit institutions frequently request evaluations as part of the funding award. As a result, grantees are faced with questions about how they will perform this work – hire an external contractor, create a full- or half-time internal position for an evaluator, and so on.

Given this context, it is perhaps no surprise that when we think of ECB, we tend to think about training people to plan and implement evaluations. This is certainly very important. However, if we want to sustain high-quality evaluation practice in organizations and learn from the findings, it is critically important to also consider: (1) what capacities, beyond individual knowledge and skill in doing evaluation, are needed to support a program’s ability to plan, conduct, learn from, and sustain evaluation, and (2) who needs to be the “target audience” for ECB activities in order to facilitate this support. Not everybody needs to be an evaluator to support high-quality evaluation practice and learning, but for those who are not going to be doing evaluation, what do they need to know and why do they need to know it?

Participants will be introduced to the fundamentals of ECB; intended long-, mid-, and short-term outcomes of ECB interventions; a range of ECB interventions; and important considerations in measuring evaluation capacity in organizations. This course provides practitioners with an opportunity to consider what types of evaluation capacity they want to build within their organization and why. Participants will engage throughout the course in building a logic model of a potential ECB intervention for their organization with specific emphasis on what needs to change among whom in the organization.

Learning Objectives

After this workshop, participants will be able to:

  1. Define evaluation capacity building
  2. Articulate several intended outcomes of evaluation capacity building
  3. Identify at least two audiences that are important to engage in future ECB efforts to support high-quality evaluation practice and learning in their organization, and explain why
  4. Describe several approaches to building evaluation capacity
  5. Explain several nuances involved in measuring evaluation capacity within organizations



How to Build a Successful Evaluation Consulting Practice

Instructor: Michael Quinn Patton, PhD

Description: This class offers participants the opportunity to learn from someone who has been a successful evaluation consultant for 30 years. Issues addressed include: What does it take to establish an independent consulting practice? How do you find your consulting niche? How do you attract clients, determine how much to charge, create collaborations, and generate return business? Discussion will cover topics such as marketing, pricing, bidding on contracts, managing projects, resolving conflicts, professional ethics, and client satisfaction. Participants will be invited to share their own experiences and seek advice on situations they’ve encountered. The course is highly interactive and participant-focused.

Participants will receive a copy of the instructor’s text: Essentials of Utilization-Focused Evaluation (Sage, 2012).



Implementation Analysis for Feedback on Program Progress and Results

Instructor: Arnold Love, PhD

Description: Many programs do not achieve intended outcomes because of how they are implemented. Thus, implementation analysis (IA) is very important for policy and funding decisions. IA fills the methodological gap between outcome evaluations that treat a program as a “black box” and process evaluations that present a flood of descriptive data. IA provides essential feedback on the “critical ingredients” of a program, and helps drive change through an understanding of factors affecting implementation and short-term results. Topics include: importance of IA; conceptual and theoretical foundations of IA; how IA drives change and complements other program evaluation approaches; major models of IA and their strengths/weaknesses; how to build an IA framework and select appropriate IA methods; concrete examples of how IA can keep programs on-track, spot problems early, enhance outcomes, and strengthen collaborative ventures; and suggestions for employing IA in your organization. Detailed course materials and in-class exercises are provided.



Leveraging Technology in Evaluation

Instructor: Tarek Azzam, PhD

Description: This course will focus on how a range of new technological tools can be used to improve program evaluations. Specifically, we will explore the application of tools to engage clients and a range of stakeholders, collect research and evaluation data, formulate and prioritize research and evaluation questions, express and assess logic models and theories of change, track program implementation, provide continuous improvement feedback, determine program outcomes and impact, and present data and findings.

After completing the course participants are expected to have an understanding of how technology can be used in evaluation practice, and some familiarity with some specific technological tools that can be used to collect data, interpret findings, conceptually map programs in an interactive way, produce interactive reports, and utilize crowdsourcing for quantitative and qualitative analysis.

Participants will be given information on how to access tools such as Mechanical Turk (MTurk) for crowdsourcing, Geographical Information Systems (GIS), interactive reporting software, and interactive conceptual mapping tools to improve the quality of their evaluation projects.

Participants will also receive a list of tools and resources that they can continue to use after completing the course.



Making Evaluation Data Actionable

Instructor: Ann Doucette, PhD

Description: Interventions and programs are implemented within complex environments that present challenges for collecting program performance information. A general problem for performance measurement initiatives — and what often causes them to fall short of their intended objectives — is the failure to choose performance measures that are actionable, meaning that they are linked to practices that an organization or agency can actually do something about, and that changes in those practices can be linked directly to improved outcomes and sustained impact.

This class introduces complex adaptive systems (CAS) thinking and addresses the implications of CAS for evaluating the outcomes and impact of interventions and programs. Examples used in the class are drawn from healthcare, education, transportation and safety, developing countries, and research and development environments. The class examines performance measurement strategies that support actionable data. The focus will be on data-based decision making, value-based issues, and practice-based evidence that can assist in moving performance measurement and quality monitoring activities from a process, outcome, and impact evaluation approach to continuous quality improvement. Business models such as the Toyota Production System, Six Sigma, and balanced scorecards, as well as knowledge management and benchmarking strategies, will be discussed in terms of how they can inform improvement strategies.

Note: Persons with some experience in program evaluation and an interest in a systems perspective will likely derive the most benefit from this course.



Presenting Data Effectively: Practical Methods for Improving Evaluation Communication

Instructor: Stephanie Evergreen, PhD

Description: Crystal clear charts and graphs are valuable: they save an audience’s mental energy, keep a reader engaged, and make you look smart. In this workshop, attendees will learn the science behind presenting data effectively. We will go behind the scenes in Excel and discuss how each part of a visualization can be modified to best tell the story in a particular dataset. We will discuss how to choose the best chart type given audience needs, cognitive capacity, and the story that needs to be told about the data, including both quantitative and qualitative visualizations. We will walk step by step through how to create newer types of data visualizations and how to manipulate the default settings to customize graphs so that they have a more powerful impact. Working in a computer lab, attendees will build with a prepared spreadsheet to learn the secrets of becoming an Excel dataviz ninja. Attendees will get hands-on practice with direct, practical steps that can be implemented immediately after the workshop to clarify data presentation and support clearer decision-making. Full of guidelines and examples, this workshop will leave you better able to package your data so that it reflects your smart, professional work.

Note: Attendees are strongly encouraged to maximize the workshop experience by bringing a slideshow that contains graphs currently under construction.

On the second day of the workshop, Dr. Stephanie Evergreen will lead attendees step by step through how to manipulate Excel into making impactful charts and graphs, using data sets distributed to the audience. Audience members will leave the session with more in-depth knowledge about how to craft effective data displays. The demonstration will occur in the computer lab on PCs running Office 2010. Completing the session moves one to Excel Ninja Level 10.

Attendees will learn:

  1. Visual processing theory and why it is relevant for evaluators
  2. How to apply graphic design best practices and visual processing theory to enhance data visualizations with simple, immediately implementable steps
  3. Which chart type to use, when
  4. How to construct data visualizations and other evaluation communication to best tell the story in the data
  5. Alternative methods for reporting

Workshop attendees will leave with helpful handouts and a copy of Effective Data Visualization (Sage, 2016).

Registrants should regularly develop graphs, slideshows, technical reports, and other written communication for evaluation work, and should be familiar with the navigational and layout tools available in common software programs such as Microsoft Office.



Project Management and Oversight for Evaluators

Instructor: Tessie Catsambas, MPP

Description: The purpose of this course is to provide new and experienced evaluation professionals and funders with strategies, tools and skills to: (1) develop realistic evaluation plans; (2) negotiate needed adjustments when issues arise; (3) organize and manage evaluation teams; (4) monitor evaluation activities and budgets; (5) protect evaluation independence and rigor while responding to client needs; and (6) ensure the quality of evaluation products and briefings.

Evaluation managers have a complex job: they oversee the evaluation process and are responsible for safeguarding methodological integrity and for managing evaluation activities and budgets. In many cases they must also manage people, including clients, various stakeholders, and other evaluation team members. Evaluation managers shoulder the responsibility for the success of the evaluation, frequently dealing with unexpected challenges and making decisions that influence the quality and usefulness of the evaluation.

Against a backdrop of demanding technical requirements and a dynamic political environment, the goal of evaluation management is to develop, with the available resources and time, valid and useful measurement information and findings, and to ensure the quality of the process, products, and services included in the contract. Management decisions influence methodological decisions and vice versa, as method choice has cost implications.

The course methodology will be experiential and didactic, drawing on participants’ experience and engaging them with diverse material. It will include paper and online tools for managing teams, work products and clients; an in-class simulation game with expert judges; case examples; reading; and a master checklist of processes and sample forms to organize and manage an evaluation effectively. At the end of this training, participants will be prepared to follow a systematic process with support tools for commissioning and managing evaluations, and will feel more confident to lead evaluation teams and negotiate with clients and evaluators for better evaluations.



Strategic Planning with Evaluation in Mind

Instructor: John Bryson, PhD

Description: Strategic planning is becoming a common practice for governments, nonprofit organizations, businesses, and collaborations. The severe stresses – along with the many opportunities – facing these entities make strategic planning more important and necessary than ever. For strategic planning to be truly effective, it should include systematic learning informed by evaluation. When that happens, the chances of mission fulfillment and long-term organizational survival are also enhanced. In other words, thinking, acting, and learning strategically and evaluatively are necessary complements.

This course presents a pragmatic approach to strategic planning based on John Bryson’s best-selling and award-winning book, Strategic Planning for Public and Nonprofit Organizations, Fifth Edition (Jossey-Bass, 2018). The course examines the theory and practice of strategic planning and management with an emphasis on practical approaches to identifying and effectively addressing organizational challenges – and doing so in a way that makes systematic learning and evaluation possible.  The approach engages evaluators much earlier in the process of organizational and programmatic design and change than is usual.

The following topics are covered through a mixture of mini-lectures, individual and small group exercises, and plenary discussion:

  • Understanding why strategic planning has become so important
  • Gaining knowledge of the range of different strategic planning approaches
  • Understanding the Strategy Change Cycle (Prof. Bryson’s preferred approach)
  • Knowing how to appropriately design formative, summative, and developmental evaluations into the strategy process
  • Knowing what it takes to initiate strategic planning successfully
  • Understanding what can be institutionalized
  • Making sure ongoing strategic planning, acting, learning, and evaluation are linked



Strategy Mapping

Instructor: John Bryson, PhD

Description: The world is often a muddled, complicated, dynamic place in which it seems as if everything connects to everything else – and that is the problem! The connections can be problematic because, while we know things are connected, sometimes we do not know how, or else there are so many connections we cannot comprehend them all. Alternatively, we may not realize how connected things are, and our actions lead to unforeseen and unhappy consequences. Either way, we would benefit from an approach that helps us strategize, problem solve, manage conflict, and design evaluations that help us understand how connected the world is, what the effects of those connections are, and what might be done to change some of the connections and their effects.

Visual strategy mapping (ViSM) is a simple and useful technique for addressing situations where thinking – as an individual or as a group – matters. ViSM is a technique for linking strategic thinking, acting, and learning; helping make sense of complex problems; communicating to oneself and others what might be done about them; and also managing the inevitable conflicts that arise.

ViSM makes it possible to articulate a large number of ideas and their interconnections in such a way that people can know what to do in an area of concern, how to do it, and why. The technique is useful for formulating and implementing mission, goals, and strategies and for being clear about how to evaluate strategies. The bottom line is: ViSM is one of the most powerful strategic management tools in existence. ViSM is what to do when thinking matters!

When can mapping help? There are a number of situations that are tailor-made for mapping. Mapping is particularly useful when:

  • Effective strategies need to be developed
  • Persuasive arguments are needed
  • Effective and logical communication is essential
  • Effective understanding and management of conflict are needed
  • It is vital that a situation be understood better as a prelude to any action
  • Organizational or strategic logic needs to be clarified in order to design useful evaluations

These situations are not meant to be mutually exclusive. Often they overlap in practice. In addition, mapping is very helpful for creating business models and balanced scorecards and dashboards. Visual strategy maps are related to logic models, as both are word-and-arrow diagrams, but are more tied to goals, strategies, and actions and are more careful about articulating causal connections.

Objectives (Strategy Mapping)

At the end of the course, participants will:

  • Understand the theory of mapping
  • Know the difference between action-oriented strategy maps, business model maps, and balanced scorecard maps
  • Be able to create action-oriented strategy maps for individuals – that is, either for oneself or by interviewing another person
  • Be able to create action-oriented maps for groups
  • Be able to create a business model map linking competencies and distinctive competencies to goals and critical success factors
  • Know how to design and manage change processes in which mapping is prominent
  • Have an action plan for an individual project



Using Program Evaluation in Nonprofit Environments

Instructor: Kathryn Newcomer, PhD

Description: Funders and oversight boards typically need data on the results obtained by the programs they fund. Within foundations, program officers want information about grantees and about the lines of effort they fund to guide planning and the future allocation of resources. Executive officers and members of the boards that oversee nonprofit service providers also want to know what works and what does not. This class provides the background that program officers and overseers need to understand how evaluation can serve their information needs and how to assess the quality of the evidence they receive.

Drawing upon cases from foundations and nonprofits, the session will help attendees:

  • Clarify where to start in using evaluation to improve nonprofit social service programs
  • Learn what/who drives program evaluation and performance measurement in public and nonprofit service providers
  • Explore uses of evaluation and outcomes assessment in the nonprofit sector
  • Understand how to frame useful scopes of work (SOWs) and requests for proposals (RFPs) for evaluations and performance measurement systems
  • Identify and apply relevant criteria in choosing contractors and consultants to provide evaluation assistance
  • Discuss challenges to measurement of social service outcomes
  • Understand what questions to ask of internal evaluation staff and outside consultants about the quality of their work

Programs and Events

Project Management & Oversight for Evaluators: September 19-28, 2017 and December 5-14, 2017
Upcoming 2018 programs: February 26 - March 10, March 12-17, and July 9-21
Contact Us

The Evaluators’ Institute

TEI Maryland Office
1451 Rockville Pike, Suite 600
Rockville, MD 20852
301-287-8745
tei@cgu.edu