Using Evaluation – Strategies and Capacity Courses


Course Descriptions

Culture and Evaluation

Instructor: Leona Ba, EdD

Description: This course will provide participants with the opportunity to learn and apply a step-by-step approach to conducting culturally responsive evaluations. It will use theory-driven evaluation as a framework because this approach ensures that evaluation is integrated into the design of programs. More specifically, it will follow the three-step Culturally Responsive Theory-Driven Evaluation model proposed by Bledsoe and Donaldson (2015):

  1. Develop program impact theory
  2. Formulate and prioritize evaluation questions
  3. Answer evaluation questions

Upon registration, participants will receive a copy of the book chapter discussing this model.

During the workshop, participants will reflect on their own cultural self-awareness, a prerequisite for conducting culturally responsive evaluations. In addition, they will explore strategies for applying cultural responsiveness to evaluation practice using examples from the instructor’s first-hand experience and other program evaluations. They will receive a package of useful handouts, as well as a list of selected resources.

Prerequisites: Understanding of evaluation and research design.

This course uses some material from Bledsoe, K., & Donaldson, S. I. (2015). Culturally responsive theory-driven evaluation. In Hood, S., Hopson, R., & Frierson, H. (Eds.), Continuing the journey to reposition culture and cultural context in evaluation theory and practice (pp. 3-27). Charlotte, NC: Information Age Publishing.


Dashboard Design: Creating Static Dashboards Using Excel

Instructor: Ann K. Emery, M.S.

Description: Why wait until the end of the year to write a lengthy report when you can share data early and often with dashboards? Your organization’s leaders have more important things to do than read lengthy reports. Dashboards get to the point so that leaders can understand the numbers and take action. During this session, you’ll see sample dashboards from a dozen organizations like yours. I’ll share the story behind each dashboard so that you can learn about its audience and goals. For example, some dashboards were designed to track progress towards goals, while others were designed to help organizations compare their different program areas. You can decide which elements of each dashboard would be most applicable to your own work. Then, you’ll vote on which dashboards you’d like to create from scratch.

We’ll spend most of our time designing static dashboards in Excel. This course does not use Tableau or cover interactive dashboard development. Instead, we will focus on static dashboards that deliver the right information to the right person at the right time, with content that lives in Excel and is shared with stakeholders as PDFs via email or as printed handouts during meetings.

Note: Attendees should bring their own laptops loaded with Microsoft Excel. No tablets or smartphones. PCs preferred; Macs okay.


Effective Reporting Strategies for Evaluators

Instructor: Kathryn Newcomer, PhD

Description: The use and usefulness of evaluation work are highly affected by the effectiveness of reporting strategies and tools. Care in crafting both the style and substance of findings and recommendations is critical to ensure that stakeholders pay attention to the message. Skill in presenting sufficient information, without overwhelming the audience, is essential to raise the likelihood that potential users of the information will be convinced of both the relevance and the validity of the data. This course will provide guidance and practical tips on reporting evaluation findings. Attention will be given to the selection of appropriate reporting strategies and formats for different audiences and to the preparation of effective executive summaries; clear analytical summaries of quantitative and qualitative data; user-friendly tables and figures; discussions of limitations to measurement validity, generalizability, causal inference, statistical conclusion validity, and data reliability; and useful recommendations.


Evaluation Capacity Building in Organizations

Instructor: Leslie Fierro, PhD

Description: Evaluation capacity building (ECB) involves implementing one or more interventions to build individual and organizational capacities to engage in and sustain the act of evaluation – including, but not limited to, commissioning, planning, implementing, and using findings from evaluations. ECB has received increasing attention in the evaluation community over the past decade as funders of government and nonprofit institutions frequently request evaluations as part of the funding award. As a result, grantees face questions about how they will perform this work: hire an external contractor, create a full- or half-time internal evaluator position, and so on.

Given this context, it is perhaps no surprise that when we think of ECB, we tend to think about training people to plan and implement evaluations. This is certainly important. However, if we want to sustain high-quality evaluation practice in organizations and learn from the findings, it is critically important to also consider: (1) what capacities beyond individual knowledge and skill are needed to support a program’s ability to plan, conduct, learn from, and sustain evaluation, and (2) who needs to be the “target audience” for ECB activities in order to facilitate this support. Not everybody needs to be an evaluator to support high-quality evaluation practice and learning, but for those who will not be doing evaluation themselves, what do they need to know and why do they need to know it?

Participants will be introduced to the fundamentals of ECB; intended long-, mid-, and short-term outcomes of ECB interventions; a range of ECB interventions; and important considerations in measuring evaluation capacity in organizations. This course provides practitioners with an opportunity to consider what types of evaluation capacity they want to build within their organization and why. Throughout the course, participants will build a logic model of a potential ECB intervention for their organization, with specific emphasis on what needs to change and among whom.

Learning Objectives

After this workshop, participants will be able to:

  1. Define evaluation capacity building
  2. Articulate several intended outcomes of evaluation capacity building
  3. Identify at least two audiences that are important to engage in future ECB efforts to support high-quality evaluation practice and learning in their organization, and explain why
  4. Describe several approaches to building evaluation capacity
  5. Explain several nuances involved in measuring evaluation capacity within organizations

Evaluation Management

Instructor: Tessie Catsambas, MPP

Description: The purpose of this course is to provide new and experienced evaluation professionals and funders with strategies, tools and skills to: (1) develop realistic evaluation plans; (2) negotiate needed adjustments when issues arise; (3) organize and manage evaluation teams; (4) monitor evaluation activities and budgets; (5) protect evaluation independence and rigor while responding to client needs; and (6) ensure the quality of evaluation products and briefings.

Evaluation managers have a complex job. They oversee the evaluation process and are responsible for safeguarding methodological integrity and for managing evaluation activities and budgets. In many cases they must also manage people, including clients, various stakeholders, and other evaluation team members. Evaluation managers shoulder the responsibility for the success of the evaluation, frequently dealing with unexpected challenges and making decisions that influence the quality and usefulness of the evaluation.

Against a backdrop of demanding technical requirements and a dynamic political environment, the goal of evaluation management is to develop, with available resources and time, valid and useful measurement information and findings, and ensure the quality of the process, products and services included in the contract. Management decisions influence methodological decisions and vice versa, as method choice has cost implications.

The course methodology will be experiential and didactic, drawing on participants’ experience and engaging them with diverse material. It will include paper and online tools for managing teams, work products, and clients; an in-class simulation game with expert judges; case examples; readings; and a master checklist of processes and sample forms to organize and manage an evaluation effectively. At the end of this training, participants will be prepared to follow a systematic process, with supporting tools, for commissioning and managing evaluations, and will feel more confident leading evaluation teams and negotiating with clients and evaluators for better evaluations.


Foundations in Data Visualization

Instructor: Ann K. Emery, M.S.

Description: Today’s evaluators need to use a variety of strategies to present their data. Data visualization can help make complex data easier to understand and use. The training will walk participants through a step-by-step design process that they can apply to their own projects. Participants will learn how to customize visualizations for their audience; choose the right chart for their message; declutter their visuals so that viewers’ attention is focused on the data; reinforce their branding with custom color palettes and typography; and increase accessibility by ensuring that their visuals are legible for people with color vision deficiencies. Finally, participants will learn to tell a story through dark colors (saturation), explicit titles, and call-out boxes (annotation). This workshop is highly interactive, and participants will pause several times throughout the day to sketch makeovers of their own visualizations.

Participants will design advanced visualizations in Excel. They will have an opportunity to apply what they have learned to their own project. This course is designed for attendees who develop graphs, slideshows, technical reports and other written communication for evaluation work.

Note: Please bring (1) a laptop loaded with Microsoft Excel and (2) a project to work on (a report, slideshow, one-pager, dashboard, infographic, etc.). The second half of this course will include hands-on practice building charts and graphs in Excel. No tablets or smartphones. PCs preferred; Macs okay.


How to Build a Successful Evaluation Consulting Practice

Instructor: Michael Quinn Patton, PhD

Description: This class offers participants the opportunity to learn from someone who has been a successful evaluation consultant for 30 years. Issues addressed include: What does it take to establish an independent consulting practice? How do you find your consulting niche? How do you attract clients, determine how much to charge, create collaborations, and generate return business? Discussion will cover such topics as marketing, pricing, bidding on contracts, managing projects, resolving conflicts, professional ethics, and client satisfaction. Participants will be invited to share their own experiences and seek advice on situations they’ve encountered. The course is highly interactive and participant-focused.


Implementation Analysis for Feedback on Program Progress and Results

Instructor: Arnold Love, PhD

Description: Many programs do not achieve intended outcomes because of how they are implemented. Thus, implementation analysis (IA) is very important for policy and funding decisions. IA fills the methodological gap between outcome evaluations that treat a program as a “black box” and process evaluations that present a flood of descriptive data. IA provides essential feedback on the “critical ingredients” of a program, and helps drive change through an understanding of factors affecting implementation and short-term results. Topics include: importance of IA; conceptual and theoretical foundations of IA; how IA drives change and complements other program evaluation approaches; major models of IA and their strengths/weaknesses; how to build an IA framework and select appropriate IA methods; concrete examples of how IA can keep programs on-track, spot problems early, enhance outcomes, and strengthen collaborative ventures; and suggestions for employing IA in your organization. Detailed course materials and in-class exercises are provided.


Learning through Data Visualization

Instructor: Tarek Azzam, PhD

Description: With new technologies, the availability of more and more data, and advanced analytic tools and techniques, the challenge becomes how best to communicate what the data and findings are really telling us. A related challenge is how to make sure the information we disseminate is accessible to different audiences. This can be done successfully by using graphic design strategies that emphasize the story in the data.

In this course we will:

  • Explore the underlying principles behind effective information displays
  • Provide tips to improve most data displays
  • Examine the core factors that make these principles effective
  • Discuss the use of common graphical tools
  • Explore other graphical displays that allow the user to visually interact with data
  • Review interactive visual displays, GIS and crowdsourcing visualizations

This course is designed as an introduction to these topics. No prior training is required, and researchers at all career stages are welcome.


Leveraging Technology in Evaluation

Instructor: Tarek Azzam, PhD

Description: This course will focus on how a range of new technological tools can be used to improve program evaluations. Specifically, we will explore the application of tools to engage clients and a range of stakeholders, collect research and evaluation data, formulate and prioritize research and evaluation questions, express and assess logic models and theories of change, track program implementation, provide continuous improvement feedback, determine program outcomes and impact, and present data and findings.

After completing the course, participants are expected to have an understanding of how technology can be used in evaluation practice and familiarity with specific technological tools that can be used to collect data, interpret findings, conceptually map programs in an interactive way, produce interactive reports, and utilize crowdsourcing for quantitative and qualitative analysis.

Participants will be given information on how to access tools such as Mechanical Turk (MTurk) for crowdsourcing, Geographical Information Systems (GIS), interactive reporting software, and interactive conceptual mapping tools to improve the quality of their evaluation projects.

Participants will also receive a list of tool resources that they can continue to use after the completion of the course.


Making Evaluation Data Actionable

Instructor: Ann Doucette, PhD

Description: Interventions and programs are implemented within complex environments that present challenges for collecting program performance information. A general problem for performance measurement initiatives, and one that often causes them to fall short of their intended objectives, is the failure to choose performance measures that are actionable: measures linked to practices that an organization or agency can actually do something about, where changes in those practices can be linked directly to improved outcomes and sustained impact.

This class introduces complex adaptive systems (CAS) thinking and addresses the implications of CAS for evaluating the outcomes and impact of interventions and programs. Examples used in this class are drawn from healthcare, education, transportation and safety, developing countries, and research and development environments. The class examines performance measurement strategies that support actionable data. The focus will be on data-based decision making, value-based issues, and practice-based evidence that can assist in moving performance measurement and quality monitoring activities from a process, outcome, and impact evaluation approach to continuous quality improvement. Business models such as the Toyota Production System, Six Sigma, and balanced scorecards, as well as knowledge management and benchmarking strategies, will be discussed in terms of how they can inform improvement strategies.

Note: Persons with some experience in program evaluation and an interest in a systems perspective will likely derive the most benefit from this course.


Strategic Planning with Evaluation in Mind

Instructor: John Bryson, PhD

Description: Strategic planning is becoming a common practice for governments, nonprofit organizations, businesses, and collaborations. The severe stresses – along with the many opportunities – facing these entities make strategic planning more important and necessary than ever. For strategic planning to be really effective, it should include systematic learning informed by evaluation. If that happens, the chances of mission fulfillment and long-term organizational survival are also enhanced. In other words, thinking, acting, and learning strategically and evaluatively are necessary complements.

This course presents a pragmatic approach to strategic planning based on John Bryson’s best-selling and award-winning book, Strategic Planning for Public and Nonprofit Organizations, Fifth Edition (Jossey-Bass, 2018). The course examines the theory and practice of strategic planning and management with an emphasis on practical approaches to identifying and effectively addressing organizational challenges – and doing so in a way that makes systematic learning and evaluation possible.  The approach engages evaluators much earlier in the process of organizational and programmatic design and change than is usual.

The following topics are covered through a mixture of mini-lectures, individual and small group exercises, and plenary discussion:

  • Understanding why strategic planning has become so important
  • Gaining knowledge of the range of different strategic planning approaches
  • Understanding the Strategy Change Cycle (Prof. Bryson’s preferred approach)
  • Knowing how to appropriately design formative, summative, and developmental evaluations into the strategy process
  • Knowing what it takes to initiate strategic planning successfully
  • Understanding what can be institutionalized
  • Making sure ongoing strategic planning, acting, learning, and evaluation are linked

Strategy Mapping

Instructor: John Bryson, PhD

Description: The world is often a muddled, complicated, dynamic place in which it seems as if everything connects to everything else – and that is the problem! The connections can be problematic because, while we know things are connected, sometimes we do not know how, or else there are so many connections we cannot comprehend them all. Alternatively, we may not realize how connected things are and our actions lead to unforeseen and unhappy consequences. Either way, we would benefit from an approach that helps us strategize, problem solve, manage conflict, and design evaluations that help us understand how connected the world is, what the effects of those connections are, and what might be done to change some of the connections and their effects.

Visual strategy mapping (ViSM) is a simple and useful technique for addressing situations where thinking – as an individual or as a group – matters. It is a way of linking strategic thinking, acting, and learning; making sense of complex problems; communicating to oneself and others what might be done about them; and managing the inevitable conflicts that arise.

ViSM makes it possible to articulate a large number of ideas and their interconnections in such a way that people can know what to do in an area of concern, how to do it, and why. The technique is useful for formulating and implementing mission, goals, and strategies and for being clear about how to evaluate strategies. The bottom line is: ViSM is one of the most powerful strategic management tools in existence. ViSM is what to do when thinking matters!

When can mapping help? There are a number of situations that are tailor-made for mapping. Mapping is particularly useful when:

  • Effective strategies need to be developed
  • Persuasive arguments are needed
  • Effective and logical communication is essential
  • Effective understanding and management of conflict are needed
  • A situation needs to be better understood as a prelude to any action
  • Organizational or strategic logic needs to be clarified in order to design useful evaluations

These situations are not meant to be mutually exclusive. Often they overlap in practice. In addition, mapping is very helpful for creating business models and balanced scorecards and dashboards. Visual strategy maps are related to logic models, as both are word-and-arrow diagrams, but are more tied to goals, strategies, and actions and are more careful about articulating causal connections.

Learning Objectives

At the end of the course, participants will:

  • Understand the theory of mapping
  • Know the difference between action-oriented strategy maps, business model maps, and balanced scorecard maps
  • Be able to create action-oriented strategy maps for individuals – that is, either for oneself or by interviewing another person
  • Be able to create action-oriented maps for groups
  • Be able to create a business model map linking competencies and distinctive competencies to goals and critical success factors
  • Know how to design and manage change processes in which mapping is prominent
  • Have an action plan for an individual project

Systems Evaluation

Instructor: Jennifer Brown Urban, PhD

Description: High-quality evaluation necessarily begins with good evaluation planning. All too often, there is a rush to measurement without putting in the careful thought and attention to a program’s underlying theory of change, the evaluation questions that are important to answer, and the larger system within which a program is embedded. This can lead to a waste of resources when the data collected fail to address the question(s) of interest and/or the evaluation design is not appropriate for the stage of development of the program.

There are a number of evaluation methods that specifically incorporate a systems framework to address these issues. This course will provide an introduction to and overview of several prominent systems methods, including Agent-Based Modeling, System Dynamics Modeling, Social Network Analysis, Group Concept Mapping, and Relational Systems Evaluation. By the end of the course, students will have a basic understanding of the range of Systems Evaluation methods, know the pros and cons of each method, and recognize which evaluation situations are most conducive to applying each of them. The course will help orient evaluators and evaluation capacity builders towards Systems Evaluation methods and will prepare students to apply these tools in their specific evaluation contexts.

The practical, hands-on portion of the class will focus on incorporating Relational Systems Evaluation into one’s evaluation practice. Relational Systems Evaluation (RSE) is an empirically tested framework for program evaluation and planning that integrates principles associated with systems theories in order to develop evaluation capacity, enhance evaluation quality, integrate research and practice, and ultimately improve programs. RSE is operationalized using the Systems Evaluation Protocol (SEP), a step-by-step freely available protocol designed to guide evaluators and program managers in the planning, implementation, and utilization of an evaluation of virtually any type of program or intervention. This includes using the SEP to develop detailed theories of change that can then be linked directly to empirical evidence (evidence mapping), program practice (practice mapping), and evaluation strategy (measurement mapping). Through hands-on practice, students will also learn how to use the Netway, an online cyberinfrastructure that facilitates the development of SEP products (e.g., logic models, pathway models, stakeholder analysis).


Using Program Evaluation in Nonprofit Environments

Instructor: Kathryn Newcomer, PhD

Description: Funders and oversight boards typically need data on the results obtained by the programs they fund. Within foundations, program officers want information about grantees and about the lines of effort they fund in order to guide planning and the future allocation of resources. Executive officers and members of the boards that oversee nonprofit service providers also want to know what works and what does not. This class provides the background that program officers and overseers need to understand how evaluation can serve their information needs and how to assess the quality of the evidence they receive.

Drawing upon cases from foundations and nonprofits, the session will help attendees:

  • Clarify where to start in using evaluation to improve nonprofit social service programs
  • Learn what/who drives program evaluation and performance measurement in public and nonprofit service providers
  • Explore uses of evaluation and outcomes assessment in the non-profit sector
  • Understand how to frame useful scopes of work (SOWs) and requests for proposals (RFPs) for evaluations and performance measurement systems
  • Identify and apply relevant criteria in choosing contractors and consultants to provide evaluation assistance
  • Discuss challenges to measurement of social service outcomes
  • Understand what questions to ask of internal evaluation staff and outside consultants about the quality of their work
Contact Us

The Evaluators’ Institute

tei@cgu.edu