Glossary of Terms

Dec 19, 2019 · 17 min read

Adaptive learning

Organizational learning that focuses on past successes and challenges as a basis for developing future strategies in response to a changing business environment.

Administrative data

Information collected, used, and stored primarily for administrative (i.e., operational), rather than research, purposes. Examples include attendance and grades in school or voter registration data.

Source: Poverty Action Lab


Attribution

Ascribing a causal link between observed changes and a specific intervention(s) or program, taking into account the effects of other interventions and possible confounding factors.

Source: USAID. (2009). Glossary of Evaluation and Related Terms.


Baseline

Information collected before or at the start of a project or program that provides a basis for planning and/or assessing subsequent progress and impact.

Source: USAID. (2009). Glossary of Evaluation and Related Terms.


Benchmark

A reference point or standard against which performance or achievements can be assessed.

Source: UNAIDS. (2008). Glossary of Monitoring and Evaluation Terms.

Big data

Evolving term that describes a large volume of data that has the potential to be mined for information and used in advanced analytics.

Case study

Focuses on a particular unit—a person, a site, a project. It often uses a combination of quantitative and qualitative data. Case studies can be particularly useful for understanding how different elements fit together and how different elements (implementation, context, and other factors) have produced the observed effects and impacts.

Source: Better Evaluation. (n.d.). Case Study.


Core Team

Team of nonprofit partners that works with Project Evident during a SEP engagement. Strategically designed to include individuals and segments of the organization that can not only inform the plan as it's developed but also implement it after adoption.

Continuous Quality Improvement (CQI)

Continuous quality improvement (CQI) is the systematic process of identifying, describing, and analyzing strengths and problems and then testing, implementing, learning from, and revising solutions. More simply, one can describe CQI as an ongoing cycle of collecting data and using it to make decisions to gradually improve program processes.



Contribution

The extent to which, how, and why an intervention(s) or specific program has influenced and/or contributed to observed changes.

Source: USAID. (2009). Glossary of Evaluation and Related Terms.

Cost-Benefit Analysis (CBA)

Estimates and totals the equivalent monetary value of a project's benefits and costs to the community to establish whether it is worthwhile. Such projects may be dams and highways, or training programs and health care systems.

Source: San Jose State University Department of Economics
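The arithmetic behind a cost-benefit analysis can be sketched in a few lines. This is an illustrative example, not from any cited source; the cash flows and the 5% discount rate are invented for demonstration.

```python
# Hypothetical sketch of a cost-benefit calculation: discount yearly
# benefit and cost streams to present value, then compare them.
# All figures and the discount rate are invented for illustration.

def npv(flows, rate):
    """Net present value of a list of yearly amounts (year 0 first)."""
    return sum(amount / (1 + rate) ** year for year, amount in enumerate(flows))

benefits = [0, 50_000, 60_000, 60_000]     # hypothetical yearly benefits
costs = [100_000, 10_000, 10_000, 10_000]  # hypothetical yearly costs
rate = 0.05                                # assumed 5% discount rate

net_benefit = npv(benefits, rate) - npv(costs, rate)
bc_ratio = npv(benefits, rate) / npv(costs, rate)

# Conventionally, a project is considered worthwhile when net_benefit > 0
# (equivalently, when bc_ratio > 1).
```

Because the result can swing with the assumed discount rate, published analyses typically report how sensitive the conclusion is to that choice.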

Culture of Learning

Organizational conventions, values, practices, and processes that encourage continuous learning.


Data

Facts and statistics (quantitative or qualitative) collected systematically for reference or analysis.

Data analysis

The process of extracting meaning from data through summarization, mathematical or otherwise. Typically includes data cleaning, summarizing, modeling, and visualizing.
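As a minimal illustration of the steps named above, cleaning and summarizing a small data set might look like the following; the attendance figures are invented for demonstration.

```python
# Minimal illustration of data cleaning and summarizing.
# The attendance rates are hypothetical.
from statistics import mean, median

raw = [92, None, 88, 97, None, 85, 90]  # hypothetical attendance rates (%)

# Cleaning: drop missing values
clean = [x for x in raw if x is not None]

# Summarizing
summary = {"n": len(clean), "mean": mean(clean), "median": median(clean)}
```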

Data architecture

The models, policies, rules or standards that govern which data is collected, and how it is stored, arranged, integrated, and put to use in data systems and in organizations.

Data collection

The process of gathering and measuring information on variables of interest, in an established systematic fashion that enables one to answer the stated research questions, test hypotheses, and evaluate outcomes.

Source: Office of Research Integrity, US DHHS

Data map

A visual representation of a data architecture.

Data science

An emerging field at the intersection of statistics and programming. Some take an elite view of data science, insisting that it must involve gigantic data sets that require advanced techniques to handle, or bleeding-edge methodologies like reinforcement learning and deep neural nets. Others take an all-inclusive view and treat data science as a modern synonym for data analytics. We prefer a middle ground: data science as a programmatic approach to data analytics, in which each step is designed to be reproducible, automatable, and scalable.

Data systems

The tools and processes an organization uses to collect, store, process, and report data.

Data visualization

Creation and study of the visual representation of data to communicate information.


DEI

Diversity, Equity, and Inclusion.

Designed experiment

An experiment for which the analyst or evaluator controls the specifications of the treatments and the method of assigning participants to each treatment.

See: Randomized Control Trial, Quasi-experimental design

Source: Adapted from McClave, J. T., & Sincich, T. (2006). Statistics. 10th Edition. Upper Saddle River, NJ: Pearson Prentice Hall.

Developmental evaluation (DE)

Informs and supports innovative and adaptive development in complex dynamic environments. DE brings to innovation and adaptation the processes of asking evaluative questions, applying evaluation logic, and gathering and reporting evaluative data to support project, program, product, and/or organizational development with timely feedback.

Source: Patton, M. Q. (2011). Developmental Evaluation. New York: The Guilford Press. p. 1.


Diversity

All the ways in which people differ, encompassing the different characteristics that make one individual or group different from another. While diversity is often used in reference to race, ethnicity, and gender, a broader definition of diversity also includes age, national origin, religion, disability, sexual orientation, socioeconomic status, education, marital status, language, and physical appearance. The definition can also include diversity of thought: ideas, perspectives, and values.

Source: Based on language from the D5 Coalition, Racial Equity Tools Glossary, and UC

Ecosystems solutions

Activities to strengthen the environment for evidence, e.g., digital learning communities and field-building (includes case studies, online resource repository, field presence).


Equity

The fair treatment, access, opportunity, and advancement for all people, while at the same time striving to identify and eliminate barriers that have prevented the full participation of some groups. Improving equity involves increasing justice and fairness within the procedures and processes of institutions or systems, as well as in their distribution of resources. Tackling equity issues requires an understanding of the root causes of outcome disparities within our society.

Source: Based on language from the D5 Coalition, Racial Equity Tools Glossary, and UC


Evaluation

An ongoing and integrated learning process for investigating and understanding social, community, organization, and program issues. Ultimately, evaluation is about sense-making, reality checking, assumption testing, and answering questions — increasing our ability to take risks and learn from both failures and successes. Evaluation involves the systematic collection of relevant, credible, and useful information for making decisions and taking programmatic and strategic actions.

Source: Preskill, H. & Torres, R. T. (1999). Evaluative Inquiry for Learning in Organizations. Thousand Oaks, CA: Sage.

Evaluative thinking

Systematic results-oriented thinking about what results are expected, how results can be achieved, what evidence is needed to inform future actions and judgments, and how the results can be improved in the future.

Source: Patton, M. Q. (2014). Evaluation Flash Cards: Embedding Evaluative Thinking in Organizational Culture. St. Paul, MN: Otto Bremer Foundation.


Evidence

Data collected by an organization, plus the analysis that allows it to answer a question or make the data actionable.

Every Student Succeeds Act (ESSA)

Most recent version of US federal education law, the primary goal of which is to improve educational equity for students from lower-income families by providing federal funds to school districts serving poor students.


Evidence-based

Any concept or strategy that is derived from or informed by objective evidence. That evidence may be from research literature in the field or from evidence generated by the organization itself.

Source: Glossary of Education Reform

Evidence building

The process of generating and using evidence over time in a strategic, systematic way, using a range of methods

Also: Evidence


Evidence use

Using evidence externally (to inform the field and policy) as well as internally (for program improvement, management, and sales).

Fee-for-service (FFS)

Paid work

Follow-on funding

Subsequent investment made by an investor who has made a previous investment in the enterprise

Formative evaluation

Formative evaluation typically connotes collecting data for a specific period of time, usually during the start-up or pilot phase of a project, to improve implementation, solve unanticipated problems, and make sure that the participants are progressing toward desired outcomes.

Source: Patton, M. Q. (2008). Utilization-Focused Evaluation. 4th Edition. Thousand Oaks, CA: Sage Publications.


Funders

Providers of capital, including public and private, institutional and individual.


Impact

Positive and negative, primary and secondary long-term effects produced by an intervention, directly or indirectly, intended or unintended.

Source: OECD/DAC. (2002). Glossary of Key Terms in Evaluation and Results-Based Management.

Impact evaluation

Study of changes that can be attributed to a particular intervention, such as a project, program or policy; impact evaluations typically involve the collection of baseline data for both an intervention group and a comparison or control group, as well as a second round of data collection after the intervention, sometimes even years later.

Source: USAID. (2009). Glossary of Evaluation and Related Terms.


Inclusion

The act of creating environments in which any individual or group can be and feel welcomed, respected, supported, and valued to fully participate. An inclusive and welcoming climate embraces differences and offers respect in words and actions for all people. It’s important to note that while an inclusive group is by definition diverse, a diverse group isn’t always inclusive. Increasingly, recognition of unconscious or ‘implicit bias’ helps organizations to be deliberate about addressing issues of inclusivity.

Source: Based on language from the D5 Coalition, Racial Equity Tools Glossary, and UC


Indicator

A quantitative or qualitative variable that provides a reliable means to measure a particular phenomenon or attribute.

Source: USAID. (2009). Glossary of Evaluation and Related Terms.


Inputs

Human, financial, organizational, and community resources a program has available to direct toward doing the work; sometimes these are referred to as “resources.”

Source: The Community Health Worker Tool Kit. (2000). Logic Model Development Guide.


Learning

A rigorous, science-based practice, not a default when something doesn’t work out.

Learning Agenda

A set of questions, assembled by an organization or team, that identifies what needs to be learned before a project can be planned and implemented.

Logic model

A systematic and visual way to present the perceived relationships among program resources, activities, and the changes or results you hope to achieve; logic models typically contain the following: resources or inputs, outputs, short-term outcomes, long-term outcomes, and the program goal.

Source: Centers for Disease Control. (September 2017). Program Development and Evaluation.


Monitoring

Monitoring involves the systematic collection of data on specified indicators to provide management and key stakeholders of a program or initiative with indications of the extent of progress against stated goals and objectives. Monitoring generally tracks progress against a set of metrics at regular intervals (e.g., quarterly or annually), and is often done by program staff who gather data on agreed-upon metrics. Data are often quantitative.

Source: Adapted from University of Wisconsin – Extension. (2002). Glossary of Common Evaluation Terms. UNAIDS. (2008). Glossary of Monitoring and Evaluation Terms.

Model for Improvement

An improvement framework made up of three fundamental questions that drive all improvement, combined with the Plan-Do-Study-Act (PDSA) cycle used to develop tests and implement changes.

Source: Langley, G. J., Moen, R. D., Nolan, K. M., Nolan, T. W., Norman, C. L., & Provost, L. P. (2009). The improvement guide: a practical approach to enhancing organizational performance. John Wiley & Sons.

Organizational path

An organization's progressive development along the two continua of impact and capacity, from insight-driven to evidence-generating.


Outcomes

Changes that result from a program or initiative’s activities; these may be changes in individuals, organizations, or systems that occur during or following the intervention. (Compare to outputs.)

Source: USAID. (2009). Glossary of Evaluation and Related Terms.

Outcomes-based financing

Performance-based investment.

Also: Pay for Success, social impact bonds, results-based funding


Outputs

The direct products of program activities; immediate measures of what the program did or produced. (Compare to outcomes.)

Source: Centers for Disease Control. (Updated September 2017). Program Development and Evaluation.

Pay for Success (PFS)

Innovative contracting model that drives government resources toward high-performing social programs. PFS contracts track the effectiveness of programs over time to ensure that funding is directed toward programs that succeed in measurably improving the lives of people most in need.

Plan-Do-Study-Act (PDSA) Cycles

Plan-Do-Study-Act (PDSA) cycles, combined with the three fundamental questions, make up the improvement science framework called the Model for Improvement. These cycles are an efficient trial-and-learning methodology that helps people develop tests and implement changes.

Source: Langley, G. J., Moen, R. D., Nolan, K. M., Nolan, T. W., Norman, C. L., & Provost, L. P. (2009). The improvement guide: a practical approach to enhancing organizational performance. John Wiley & Sons.

Performance indicators

Measurable markers that suggest a certain condition or circumstance exists, or certain outcomes have been achieved; they provide information on progress made toward a particular goal, output, or outcome.

Source: Point K Learning Center. (2005). Glossary: Nonprofit Planning & Evaluation.

Policy mapping

Exercise/activity to help organizations integrate and reinforce policy-related knowledge, strategies, and capacity into the design, implementation, and improvement of their evidence-building internal culture and processes (including public, private, and philanthropic funding partnerships, legislation, communications, coalition/field-building, and results-based funding).


Procurement

The obtaining or buying of goods and services on behalf of a public authority, such as a government agency.


Program

Products or services an organization provides to change a situation.

Program data

Data collected by a service provider in the course of delivering services.

Program impact

The net effect of a program relative to what would have happened without it.

Source: Evidence Based Policymaking Collaborative

Program improvement

The process of making changes to the design or delivery of an intervention in order to increase impact or efficiency


Reliability

Consistency or dependability of data with reference to the quality of the instruments, procedures, and data collection methods; data are reliable when the repeated use of the same instrument generates the same results.

Source: Adapted from USAID. (2009). Glossary of Evaluation and Related Terms.
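One common way to quantify this is test-retest reliability: the correlation between two administrations of the same instrument. A minimal sketch, with invented scores:

```python
# Test-retest reliability as the Pearson correlation between two
# administrations of the same instrument. Scores are hypothetical.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

time1 = [10, 12, 15, 11, 14]  # hypothetical first administration
time2 = [11, 12, 14, 10, 15]  # hypothetical repeat administration

r = pearson(time1, time2)
# r close to 1.0 suggests the instrument yields consistent results
# across repeated use.
```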

Qualitative data

Data that is text based rather than numeric. Qualitative data result from interviews, observations, documents, video, photographs, and drawings. Qualitative data are not anecdotes — rather, when planned for and systematically collected, they constitute credible evidence. Qualitative data are typically analyzed using thematic and content analysis procedures.

Source: Adapted from Mathison, S. (Ed.). (2004). Encyclopedia of Evaluation. Thousand Oaks, CA: Sage, p. 345.

Quantitative data

Generally refers to information that is represented in numerical form—that can be expressed as numbers, amounts, or degrees. Quantitative data can be analyzed using both descriptive and inferential statistics.

Source: Adapted from Mathison, S. (Ed.). (2004). Encyclopedia of Evaluation. Thousand Oaks, CA: Sage, p. 345.

Quasi-experimental design

A designed experiment in which participants are sorted into a treatment and comparison group, but the sorting is not random.

Source: Adapted from McClave, J. T., & Sincich, T. (2006). Statistics. 10th Edition. Upper Saddle River, NJ: Pearson Prentice Hall.

Randomized controlled trial (RCT)

Study design that randomly assigns participants into an experimental group or a control group. As the study is conducted, the only expected difference between the control and experimental groups is the outcome variable being studied.
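The core mechanic, random assignment, is simple to sketch. The participant list and seed below are hypothetical.

```python
# Minimal sketch of random assignment for an RCT.
# Participant names and the seed are hypothetical.
import random

def randomize(participants, seed=None):
    """Randomly split participants into treatment and control groups."""
    rng = random.Random(seed)
    shuffled = participants[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]  # treatment, control

participants = [f"p{i}" for i in range(100)]
treatment, control = randomize(participants, seed=42)

# With enough participants, randomization balances both observed and
# unobserved characteristics across groups, so a later difference in
# the outcome variable can be attributed to the intervention.
```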

Rapid-cycle evaluation

Rapid-cycle evaluation is a process for testing changes to program operations and services in order to quickly know whether and for whom the change caused its intended improvement. The “cycle” in rapid-cycle evaluation refers to a continuous improvement approach in which a program builds evidence over time about what works best for whom and under what circumstances.

Source: Mathematica

Results-based funding

Tool by which a funder (a foundation, international donor, or government) conditions its payment to a service provider (an NGO or private company) on desired outcomes.


SEP

Strategic Evidence Plan: a 3-5 year plan designed to accelerate a nonprofit’s ability to generate evidence and improve outcomes.

Social sector

Overarching term that includes charitable nonprofit organizations and foundations (which are nonprofits but don’t typically deliver services or fundraise).

Shared services

Business models that enable people, technologies, or processes to be leveraged across multiple organizations, resulting in lower costs and higher quality. The objective is usually to gain efficiencies through a methodology of continuous improvement that results in more efficient and standardized processes, with much of the activity automated through enabling technology.

Source: Arizona State University

Social Impact Partnerships to Pay for Results Act (SIPPRA)

Federal legislation that creates a standing pool of capital to support outcomes-based financing. Builds on the work of the Social Innovation Fund, state-level Pay for Success projects, and the global movement to create social impact bonds.

Social Innovation Fund (SIF)

Program of the Corporation for National and Community Service (CNCS) that combines public and private resources to grow promising community-based solutions that have evidence of results in any of three priority areas: economic opportunity, healthy futures, and youth development.

Social Return on Investment (SROI)

Systematic way of incorporating social, environmental, economic and other values into decision-making processes

Strategic learning

The use of data and insights from a variety of information-gathering approaches — including evaluation — to inform decision making about strategy.

Source: Coffman, J., & Beer, T. (2018). Strategic Learning. Center for Evaluation Innovation

Strategic learning & evaluation system

Overarching strategy and framework, a set of processes, and a supportive infrastructure for determining what to monitor, evaluate, and research, to what extent, when, with what resources, and by whom.

Source: Preskill, H., & Mack, K. (2013). Building a Strategic Learning and Evaluation System for Your Organization. FSG.

Summative evaluation

Judges the overall merit, worth, and significance of a project. The term summative connotes a summit (important) or summing-up judgment. The focus is on judging whether a model is effective. Summative evaluations are used to inform decisions about whether to expand a model, replicate it elsewhere, and/or “take it to scale” (make it a statewide, region-wide, or national model).

Source: Patton, M. Q. (2014). Evaluation Flash Cards: Embedding Evaluative Thinking in Organizational Culture. St. Paul, MN: Otto Bremer Foundation.


System

An interconnected set of elements that is coherently organized in a way that achieves something (a function or purpose).

Source: Meadows, D. H. (2009). Thinking in Systems: A Primer. London; Sterling, VA: Earthscan.

Systems change

An intentional process designed to alter the status quo by shifting the function or structure of an identified system with purposeful interventions. Systems change aims to bring about lasting change by altering underlying structures and supporting mechanisms that make the system operate in a particular way. These can include policies, routines, relationships, resources, power structures, and values.

Source: Harries, E., Wharton, R., & Abercrombie, R. (2015). Systems change: A guide to what it is and how to do it. London, UK: NPC.

Systems thinking

The ability to see how organizational systems, sub-systems, and their parts interact with and influence each other, and how these systems create and contribute to specific problems.

Source: Adapted from Bullock, Robert. (2013). Systems Thinking: How to Lead in Complex Environments. Scontrino Powell, Inc.

Theory of Action

Explains how programs or other interventions are constructed to activate their theory of change. The theory of action explains the activities that will be undertaken and what level of success will be needed for each outcome to produce the final intended results.

Source: Funnell, S. & Rogers, P. (2011). Purposeful Program Theory. San Francisco, CA: Jossey-Bass. pp. 31-32.

Theory of Change

Describes a strategy or blueprint for achieving a given long-term goal. It identifies the preconditions, pathways, and interventions necessary for an initiative's success.

Source: Poverty Action Lab  

Three Fundamental Questions (Model for Improvement)

The three fundamental questions and their answers form the basis of improvement and are used to guide improvement efforts in combination with Plan-Do-Study-Act (PDSA) cycles. They are: 1. What are we trying to accomplish? 2. How will we know that a change is an improvement? 3. What changes can we make that will result in improvement?

Source: Langley, G. J., Moen, R. D., Nolan, K. M., Nolan, T. W., Norman, C. L., & Provost, L. P. (2009). The improvement guide: a practical approach to enhancing organizational performance. John Wiley & Sons.


Triangulation

The analysis of data from three or more sources obtained by different methods. Findings can be corroborated, and the weakness or bias of any of the methods or data sources can be compensated for by the strengths of another, thereby increasing the validity and reliability of the results.

Source: UNAIDS. (2008). Glossary of Monitoring and Evaluation Terms.

Utilization-focused evaluation

Evaluation done for and with specific primary intended users for specific, intended uses. Utilization-focused evaluation begins with the premise that evaluations should be judged by their utility and actual use; therefore, evaluators should facilitate the evaluation process and design any evaluation with careful consideration for how everything that is done, from beginning to end, will affect use.

Source: Patton, M. Q. (2008). Utilization-Focused Evaluation. 4th Edition. Thousand Oaks, CA: Sage Publications, p. 37.


Validity

The extent to which data measures what it purports to measure and the degree to which that data provides sufficient evidence for the conclusions made by an evaluation or research study.

Source: USAID. (2009). Glossary of Evaluation and Related Terms.

Need more help? Contact us today to find out how Project Evident can help your organization better use evidence to improve outcomes for the communities you serve.