Evaluation and Improvement

Podcasts and Videos

The Eval Café Podcast

Episode 7: “Damn It, Jim, I’m an Evaluator! Lessons from the Trekiverse” (December 11, 2017)

In this episode, the hosts are joined by Kylie Hutchinson, principal consultant for Community Solutions Planning & Evaluation. They talk about the lessons that evaluators can take from the Star Trek universe. The discussants identify their “prime directives” and use themes, episodes, and characters from the series to describe their approach to evaluation.

Inside Social Innovation Podcast with Stanford Social Innovation Review (SSIR)

“Strengthening Data Capacity in the Social Sector” (June 19, 2019)

This recording is from SSIR’s Data on Purpose Conference and features the following: Kevin Miller, civic technology manager from the Microsoft Cities Team; Aman Ahuja, data consultant; Kathryn Pettit, principal research associate at The Urban Institute; and Kauser Razvi, principal of Strategic Urban Solution. These speakers provide advice and seek to inspire public and nonprofit professionals to embrace data analysis as a tool to further social sector initiatives.


“How Client Feedback Helped Transform a Houston Health Agency” (January 29, 2019)

This episode challenges organizations to use feedback loops from the communities they serve. These feedback loops can be used to evaluate and improve programs and to better meet the needs of the individuals receiving services. The episode focuses on the experience of one woman at a Texas hospital, but it also offers broader perspectives on the value of listening to the voices of those most affected by programs to inform program operation and improvement.

Storytelling with Data Podcast

Episode 30: “Influencing Change for Data Storytelling” (May 22, 2020)

In this episode, Cole Nussbaumer Knaflic provides strategies for effectively letting your data tell the story you want to surface. Highlights include knowing how to listen to the audience to hear if a specific data visualization is truly telling the story you want to relay.

Ways to Apply an Equity Lens

GARE Communications Guide (Government Alliance on Race and Equity [GARE], Center for Social Inclusion, Living Cities, the Haas Institute for a Fair and Inclusive Society, and Provoc, Updated May 2018).
This guide helps organizations enhance their communication on racial equity work.

Racial Equity Action Plans: A How-To Manual (Ryan Curren, Julie Nelson, Dwayne S. Marsh, Simran Noor, and Nora Liu, 2016).
This manual offers guidance to organizations as they conduct research and develop their own racial equity plans.

Racial Equity: Getting to Results (Erika Bernabei, Updated July 2017).
This guide helps organizations as they use a racial equity lens and carry out a community process to support equity work.

Racial Equity Toolkit: An Opportunity to Operationalize Equity (Julie Nelson and Lisa Brooks, Updated December 2016).
This toolkit offers guidance for organizations as they develop strategies that promote racial equity.

Compendium of Measures of Quality

Quality in Early Childhood Care and Education Settings: A Compendium of Measures (2nd ed.) (Mirjam Neunning, Debra Weinstein, Tamara Halle, Laurie Martin, Kathryn Tout, Laura Wandner, and Mary Burkhauser, 2010).
This compendium was prepared by Child Trends for the Office of Planning, Research and Evaluation of the Administration for Children and Families to provide uniform information about quality measures and a consistent framework with which to review existing measures of the quality of early care and education settings.

Data and Data System Resources 

"About CLASP" (Center for Law and Social Policy, n.d.).
The CLASP DataFinder is a custom, easy-to-use tool developed to provide select demographic information as well as administrative data on programs that affect low-income people and families.

"Frequently Asked Questions on the Statewide Longitudinal Data Systems (SLDS) Grant Program" (National Center for Education Statistics, Institute of Education Sciences, U.S. Department of Education, n.d.).
This website has information and resources on the Statewide Longitudinal Data Systems (SLDS) Grant Program, which helps states make better decisions through improved data and information. Through grants and a growing range of services and resources, the program helps propel the successful design, development, implementation, and expansion of K–12 and P–20W (prekindergarten through workforce) longitudinal data systems.

KIDS COUNT Data Center (Annie E. Casey Foundation, n.d.).
A project of the Annie E. Casey Foundation, KIDS COUNT is the premier source for data on child and family well-being in the United States. Users can access hundreds of indicators, download data, and create reports and graphics that support smart decisions about children and families.

"Using Qualitative Data in Program Evaluation: Telling the Story of a Prevention Program" (FRIENDS National Resource Center for Community-Based Child Abuse Prevention, 2009).
This guide was developed for program administrators, managers, direct-service practitioners, and others who are expanding and enhancing current and future evaluation efforts using qualitative methods.

Outcome-Based Evaluation Tools 

"Evaluation Toolkit" (FRIENDS National Center for Community-Based Child Abuse Prevention, n.d.).
The FRIENDS Evaluation Toolkit is a resource for developing an individualized outcome evaluation plan from the ground up. It is an online compendium of information and resources. The toolkit is not intended to take the place of hands-on training or technical assistance; rather, it is intended to serve as an entry-level guide for programs to help build evaluation capacity.

ORS Impact (ORS Impact, n.d.).
Since 1989, ORS Impact has helped public and private organizations apply outcome-based knowledge and practices to pursue the change they seek and to improve their communities’ health, well-being, and prospects to flourish. Through this website, ORS Impact shares these resources to build capacity for evaluation and for outcome-based thinking and action in organizations doing good work around the world.

W.K. Kellogg Foundation Logic Model Development Guide (W.K. Kellogg Foundation, updated 2004).
This guide focuses on the development and use of the program logic model. Logic models and their processes facilitate thinking, planning, and communication about program objectives and actual accomplishments. Through this guide, the W.K. Kellogg Foundation provides an orientation to the underlying principles and language of the program logic model so that it can be effectively used in program planning, implementation, and dissemination of results. The premise behind this guide is simple: good evaluation reflects clear thinking and responsible program management.

Resources for Evaluating Systems Initiatives and Complexity

A Framework for Evaluating Systems Initiatives (Julia Coffman, 2007).
This paper introduces a framework to help advance the discussion about evaluating systems initiatives. The framework helps clarify what complex systems initiatives are doing and aiming to accomplish and thereby supports both initiative theory-of-change development and evaluation planning. Because this paper grew out of a symposium focused on early childhood, concepts presented throughout are illustrated with examples from that field. The framework and ideas presented also apply, however, to systems initiatives in other fields.

“An Introduction to Context and Its Role in Evaluation Practice” (Jody L. Fitzpatrick, 2012).
This publication reviews the evaluation literature on context and discusses the two areas in which context has been more carefully considered by evaluators: 1) the culture of program participants when their culture differs from the predominant one, and 2) the cultural norms of program participants in countries outside the West. Evaluators have learned much, and should continue learning, about how the culture of participants or communities can affect evaluation. They also need to expand their consideration of context to include the program itself and its setting, as well as the political norms of audiences, decision-makers, and other stakeholders of the program.

“Putting the System Back into Systems Change: A Framework for Understanding and Changing Organizational and Community Systems” (Pennie G. Foster-Fishman, Branda Nowell, and Huilan Yang, 2007).
This paper provides one framework—grounded in systems thinking and change literatures—for understanding and identifying fundamental system parts and interdependencies that can help explain system functioning and leverage systems change. The proposed framework highlights the importance of attending to the deep and apparent structures within a system as well as interactions and interdependencies among system parts. This includes attending to the value of engaging critical stakeholders in problem definition, boundary construction, and systems analysis.

The “Most Significant Change” (MSC) Technique: A Guide to Its Use (Rick Davies and Jess Dart, 2005).
This publication is aimed at organizations, community groups, students, and academics who wish to use the MSC technique to help monitor and evaluate their social-change programs and projects or to learn more about how it can be used. The technique is applicable in many different sectors, including education and health, and in many different cultural contexts. MSC has been used by a range of organizations across diverse communities and countries.

Strengthening Stabilization Grant Integrity with Internal Controls (Author, 2021).
The brief and the corresponding Child Care Stabilization Grantee Internal Controls Self Assessment Instrument were developed to guide Child Care and Development Fund Lead Agencies in reviewing and evaluating their internal controls for the child care stabilization grant program. The brief offers strategy considerations for Lead Agencies on how to leverage existing program integrity policies and processes and how to create new ones. The self-assessment instrument allows agencies to self-evaluate, prioritize, and monitor risk areas in their policies and processes to ensure compliance with the American Rescue Plan Act (ARPA) of 2021.

"Unique Methods in Advocacy Evaluation" (Julia Coffman and Ehren Reed, 2009).
There are systematic approaches for gathering qualitative and quantitative data that can be used to determine whether a program or strategy is making progress or achieving its intended results. Evaluations draw on a familiar list of traditional data collection methods, such as surveys, interviews, focus groups, or polling. However, some early childhood programs, policies, and initiative processes can be complex, fast-paced, and dynamic, which can make data collection a challenge. This brief describes four new methods that were developed to respond to unique measurement challenges in the early childhood field.