- Define monitoring and measures needs.
Define and prioritize information needs for mitigating legal risk (if applicable). What legal guarantees, if any, are you responsible for? What evidence would be necessary to defend your organization in court?
Define and prioritize information needs for mitigating risk of negative impacts and other reputational risks. Are there potential, credible negative impacts to nature and people? Are there public promises for which proof of delivering on these promises is required?
Define and prioritize information needs for reporting to funders or other philanthropic uses. What, if any, are the funders’ requirements? Is your project designed to be used as a ‘proof of concept’ to solicit additional funding? If so, what proof will likely be required? What are the requirements for reporting progress on activities and results such as tool development, analytical reports, etc.?
Define and prioritize information needs for influencing key actors, including building the evidence base for conservation. Is there a group whose behavior you are hoping to change? What type of evidence do they require? The best way to assess this is generally to directly ask members of this group. Additionally, assessing how information has led to changes in behavior in the past can provide an understanding of the required quality and quantity of evidence. Note that one group of key actors is the conservation community. Evidence that a strategy works is important to influence broader adoption of the strategy by the conservation community.
Define and prioritize information needs for reporting impact of your project or program to your organization. How is your organization tracking and summarizing impact across projects and programs? How do partners and others report impact? Is there a need for reporting in a common “currency” for area impacted, types of impacts, degrees of impacts?
Define and prioritize information needs for adaptive management. Define milestones, specific key points in the results chain, and other information needs that are critical to making management decisions. Be explicit with what information is needed for these points. For each information need, consider whether indicators are needed. Note that activities and outputs are generally more commonly measured and reported to donors and in management reviews, but intermediate results and impacts are the results that are necessary to measure to indicate conservation results.
- Processes or activities describe project actions, such as engaging in meetings, working with partners, conducting lobbying activities, etc. Indicators for these track activities and participation. They are generally qualitative in describing status, such as whether an activity is completed, ongoing and going well, has some issues, has major issues, or has not been started.
- Outputs describe the major products that are completed by the conservation activity. These may be reports or tools that were developed. Indicators for these are qualitative and generally related to the completion and delivery of a product. Progress in completing outputs can be tracked similarly to process or activity indicators.
- Intermediate results describe what we intend to accomplish that is a prerequisite for achieving conservation goals or outcomes. Intermediate results can be defined for several major steps in a sequence within a strategy. Intermediate results may relate to changes to or establishment of policy, governance, sustainable finance, partnership development, a social behavioral change, or implementation of management activities. Intermediate results are often referred to as “leading indicators” since their completion suggests that impacts will occur in the future.
- Impacts describe what changes to people and nature are ultimately being achieved as a result of the conservation strategy. Impacts are related to goals; specifically, our goals are to achieve a certain level of impact. Impacts can be described in terms of the scope of an impact (how many hectares and/or kilometers are protected, restored, or improved, how many people benefited, etc.) and/or the degree of impact (increase in population size, changes in species diversity, changes in water quality, changes in income, life expectancy, etc.). Impacts are often referred to as “lagging indicators” since they can take time to be realized and/or monitored.
- Identify indicators and measures. For each of the information needs identified above, identify an indicator. Indicators specify what needs to be measured. To learn more about selecting indicators, see Mayoux, L. (2002). What do we want to know? Selecting Indicators.
Consider the full range of information needs (outlined above). If an indicator or measure doesn’t fill an information need, don’t measure it. This would be a waste of limited resources. Instead, make sure that the indicators and measures are focused on priorities for fulfilling the needs identified in Step 1.
Considering novel indicators and monitoring approaches can save money. For instance, household surveys may be common when collecting socioeconomic data, but they can be expensive and require specialized knowledge. Less expensive options include mobile phone-based survey methods, focus groups and key informant interviews, and participatory rural appraisal methods.
Adopting or adapting existing tools and indicators can save time and money. Consult the literature, specialists, or colleagues for survey templates or previous tools that have been employed in past projects. Be sure to check if survey templates or tools are context appropriate and will collect relevant information.
For all indicators, define the following:
- The information need it is addressing
- The audience for the information
- The activity, output, outcome, or impact it is intended to measure progress toward (or risk it is intended to avoid or mitigate)
- The level or range of values that are triggers for action, and what that action would be
- Necessary strength of inference, and where relevant the appropriate experimental design, resolution, precision, and accuracy (see below for discussion of certainty statements, as well as Appendix G)
- Analytical or methodological approach for analyzing indicators
- The source of the data, including any relevant methods for how the information will be collected
- How often it should be measured
- How often it needs to be communicated to each audience, and in what format
- Costs and resources required for information collection, management, analysis, and reporting
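The checklist above can be captured as a simple record so that no field is overlooked when defining an indicator. A minimal sketch in Python; the class and field names are illustrative, not prescribed by this guide:

```python
from dataclasses import dataclass, fields

@dataclass
class Indicator:
    """One indicator definition mirroring the checklist above.

    All field names are illustrative; adapt them to your own
    monitoring and evaluation templates.
    """
    name: str
    information_need: str       # the information need it addresses
    audience: str               # who the information is for
    result_measured: str        # activity, output, outcome, or impact
    action_trigger: str         # values that trigger action, and the action
    strength_of_inference: str  # required rigor / research design notes
    analysis_approach: str      # analytical or methodological approach
    data_source: str            # source and collection methods
    measurement_frequency: str  # how often it is measured
    reporting_frequency: str    # how often, and in what format, per audience
    estimated_cost: float       # collection, management, analysis, reporting

    def is_complete(self) -> bool:
        """True when every field has been filled in (non-empty)."""
        return all(str(getattr(self, f.name)).strip() for f in fields(self))
```

A table of such records makes it easy to spot indicators that lack an audience or a defined action trigger before monitoring begins.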
Be explicit about how the indicator will be used. As stated above, the purpose of measuring each indicator should be specified, including the intended audience and use. In particular, it is useful to specify the status or quantitative values of the indicator that would signify adequate progress in activities, completion of outputs, achievement of intermediate results, leverage through generating influence, and progress toward goals, or that would signify unacceptable impacts or a lack of progress that should trigger adaptive management or other decisions. If this cannot be done, consider whether you have selected the wrong indicator, or whether it would actually be useful to measure that indicator at all.
To learn more about different types of indicators, see this useful paper from UNDP.
Example indicators
Outcome: By 2040, property values increase by 5% more than comparable communities due to restoration activities.
Indicator: % increase in property values (US$)
Outcome: By 2020, tribal conflicts caused by resource constraints decrease within the conservancy.
Indicator: # of tribal conflicts caused by resource constraints annually
Outcome: By 2030 and thereafter, fewer than 10 cases of water-borne diseases are recorded annually within the region.
Indicator: Perception of the prevalence of water-borne illnesses within the local community
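The first outcome above compares change in the project community against comparable communities. The arithmetic behind that indicator can be sketched as follows; all property-value figures are invented for illustration:

```python
def pct_increase(start: float, end: float) -> float:
    """Percentage increase from start to end."""
    return (end - start) / start * 100.0

# Hypothetical median property values (US$); all figures are invented.
project_start, project_end = 200_000, 230_000        # project community
comparable_start, comparable_end = 200_000, 218_000  # comparable communities

project_gain = pct_increase(project_start, project_end)
comparable_gain = pct_increase(comparable_start, comparable_end)

# The outcome is met if the project community outperforms comparable
# communities by at least 5 percentage points.
outcome_met = (project_gain - comparable_gain) >= 5.0
```

Attributing the difference to restoration activities, rather than merely measuring it, requires the research design discussed later in this section.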
Select indicators that inform multiple audience needs first. Doing so can save resources. Table 4 demonstrates that in some cases one indicator can provide relevant information to many audiences (e.g., for Reduced conflict and Increased food security), and in other cases different audiences may need different information (e.g., multiple indicators are required to measure Increased employment).
INSERT TABLE 4 (p94)
Indicators, especially those related to human well-being, should be drafted in consultation with, and validated by, local stakeholders to ensure they accurately reflect stakeholder priorities and experiences. For more on social indicators and examples of case studies from the field, see Wongbusarakum, S., Myers Madeira, E. and Hartanto, H. 2014. Strengthening the Social Impacts of Sustainable Landscapes Programs: A practitioner’s guidebook to strengthen and monitor human well-being outcomes. The Nature Conservancy, Arlington, VA.
Evaluate the indicators. If you have several indicators measuring the same concept, rate them against a set of criteria and use only the top-rated indicators. The most well-known indicator criteria are SMART (for quantitative indicators) and SPICED (for qualitative indicators). For more on SMART and SPICED criteria, see this guide from UNICEF.
SMART indicator criteria are:
- Specific: Explicit enough and sensitive enough to measure changes in, or results due to, the action, intermediate outcome, or impact.
- Measurable: The proposed indicator should be quantitatively measurable (e.g., can be counted or observed) and able to be analyzed.
- Actionable: The indicator should provide information required for known decision points.
- Realistic: The indicator should be feasible to monitor, given available resources and publicly available or easily acquired data.
- Timebound: The indicator should be sensitive enough to indicate change within required reporting periods and within the project timeframe.
SPICED indicator criteria are:
- Subjective: People providing the data (informants) have experience or are in a position to give unique insights.
- Participatory: Outcomes, intermediate results, and indicators are developed with primary stakeholders when appropriate.
- Interpreted (and communicable): Indicators are adapted to local context and reporting needs.
- Cross-checked: Where possible, information about an indicator reflects different sources to ensure a deeper understanding of a phenomenon.
- Empowering: Indicator selection should provide ownership to local stakeholders and give each one a voice.
- Diverse and disaggregated: Indicators should be developed by engaging a diverse group of stakeholders, including women and men separately.
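The earlier advice to rate competing indicators and keep only the top-rated one can be applied by scoring each candidate against the five SMART criteria. A minimal sketch; the candidate names and all scores are illustrative judgments, not values from this guide:

```python
# Score each candidate indicator 0-2 against each SMART criterion
# (0 = fails, 1 = partial, 2 = fully meets). All scores are illustrative.
SMART = ("specific", "measurable", "actionable", "realistic", "timebound")

candidates = {
    "% increase in property values (US$)": {
        "specific": 2, "measurable": 2, "actionable": 2,
        "realistic": 1, "timebound": 2,
    },
    "anecdotes of resident satisfaction": {
        "specific": 1, "measurable": 0, "actionable": 1,
        "realistic": 2, "timebound": 1,
    },
}

def total_score(scores: dict) -> int:
    """Sum a candidate's scores across all five SMART criteria."""
    return sum(scores[criterion] for criterion in SMART)

# Keep only the top-rated indicator for this concept.
best = max(candidates, key=lambda name: total_score(candidates[name]))
```

For qualitative indicators, the same scoring approach works with the SPICED criteria in place of the SMART tuple.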
- Evaluate needs and available human and data resources.
The following are required for effective monitoring and evaluation. Staff must assess whether they have the expertise and capacity for each step.
- A research design (for impacts) should be developed or evaluated by a qualified professional.
- Data collection and sample analysis activities need to have assigned responsible parties that are qualified, and designated timeframes and schedules.
- Data management needs to have an assigned responsible party and a map of information flow and access.
- Data evaluation needs to have analytical approaches defined and experts designated to conduct the analysis prior to data collection.
- Communications staff need to work with evaluation staff to determine how best to present results for each audience.
Monitoring and evaluation plans should make use of data already being collected by governments, academics, industries, indigenous organizations, community organizations, and NGOs. This may include survey data, government statistics, model results, experimental results, and remotely sensed data. Consider partnerships with entities that have already invested in infrastructure to support monitoring and evaluation activities. However, before deciding to depend on others to supply data, assess whether data from existing efforts actually match your monitoring and evaluation design and information needs, and the probability that those efforts will continue to provide the needed information.
- Ask others involved in the strategy about existing sources of information. Partners and key actors often have familiarity with data sources useful to address information needs.
- Ask academics, agencies, partners and research entities about data they are acquiring and about availability. Some data are restricted, but in many cases data sharing agreements can be put in place.
- Conduct web searches for data sources that will continue to provide updated information. Many government agencies regularly collect high quality data. Explore local, state, and federal government agencies for data needs.
Consider hiring contractors to fill capacity gaps. Hiring contractors to design and implement data collection, evaluation, and/or measures reporting may be necessary. It can be difficult to evaluate the qualifications of contractors in areas where you lack expertise, so first, take special care in selecting the right contractor by reviewing the contractor’s previous work, talking directly with their prior clients, and soliciting external expertise to vet contractors if necessary. Second, be explicit about the information needs of critical audiences: what information is truly needed, when it is necessary, and how it should be reported. The contract should specify the content, timing, and format of products delivered by the contractor. Finally, consider the importance of consistency in data collection and analysis. Is it possible to use the same contractor for the length of the project? And if not, will others be able to reliably duplicate their methods?
Consider community-based monitoring. Engaging local communities in monitoring can engender interest and support for the project and empower communities through participation, as well as provide necessary information.
Separately assess social science and ecological monitoring and evaluation capabilities. While some social science and ecological data collection and analyses can be combined, developing monitoring and evaluation plans for each requires specialized knowledge. If critical audiences need statements with high certainty (i.e., the most rigorous monitoring and evaluation plans), significant research experience (e.g., PhD-level training) or certifications (e.g., analytical labs for certain monitoring needs) may be required.
- Develop monitoring and evaluation plans with appropriate research design. Monitoring and evaluation plans should clearly articulate who, what, when, how, and why information should be collected, analyzed, and used. The monitoring and evaluation plan should specify:
- Audiences
- Indicators
- Research design
- Sources for data collected by others
- Hypotheses that will be explored, and milestones or key points along a results chain where information is necessary (why)
- Data collection activities and timeframes (who, when, how)
- Data management plan and a map of information flow and access
- Data analysis and evaluation plan (who, when, how)
- Format and timing of communications with key audiences
- Estimated monitoring, analysis and evaluation, and communications costs and funding sources
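The required plan elements listed above can be checked mechanically before a draft plan goes out for review. A sketch; the section names paraphrase the list and are illustrative, not an official template:

```python
# Section names paraphrase the plan elements listed above; they are
# illustrative, not an official template.
REQUIRED_SECTIONS = {
    "audiences", "indicators", "research_design", "external_data_sources",
    "hypotheses_and_milestones", "data_collection", "data_management",
    "analysis_and_evaluation", "communications", "costs_and_funding",
}

def missing_sections(plan: dict) -> set:
    """Return the required sections a draft plan has not yet filled in."""
    return {s for s in REQUIRED_SECTIONS if not plan.get(s)}

draft = {
    "audiences": ["donors", "local stakeholders"],
    "indicators": ["% increase in property values (US$)"],
}
gaps = missing_sections(draft)  # the eight sections still to be written
```

A check like this catches structural gaps only; whether each section is adequate is a matter for the peer review described below.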
The monitoring and evaluation plan should explicitly articulate information needs. Be clear when data are needed to inform key decisions. The monitoring and evaluation plan should clearly articulate the questions that should be answered through the monitoring and evaluation activities. More specifically:
- Identify the decisions/actions you want to influence. This defines the realm of issues that you want to affect and need to develop measures for.
- Identify information needs to inform decisions/actions and assess impact. This determines what you want to monitor and why.
Implement the social safeguards. Monitoring and evaluation plans should pass the 11 social safeguard questions. Conservancy staff can learn more about social safeguards and other elements of integrating human well-being into conservation via a series of half-hour webinars recorded in 2015, found here on CONNECT.
The communication component of monitoring and evaluation plans should consider the needs of indigenous and local communities. Specifically, all communication should consider the language, literacy levels, format, and other needs of these audiences.
Conduct peer review. Monitoring and evaluation plans are strengthened when they are reviewed by peers. Peer review can identify issues with a plan, flag efforts duplicated by others (e.g., government agencies collecting similar data), and assess whether monitoring and evaluation activities are sufficiently rigorous or realistic given the available resources.
Baseline data are important. It is difficult to detect change when you don’t know where you started, and you can’t go back in time to collect baseline data, so make sure that all relevant baseline data are collected before implementation begins.
Consider the timeframe of expected changes. If, even under the best circumstances, conservation outcomes and impacts will take years to become apparent, don’t spend a lot of time and money monitoring to confirm the lack of change. Treat monitoring as a way to test for hypothesized changes. Design monitoring and evaluation plans that have realistic timeframes and geographic scales. Note that this mainly applies to ongoing monitoring activities that are required to detect subsequent change. Also note that the relevant timeframe for negative impacts may differ from the timeframe relevant for desired change, and may require earlier or more frequent monitoring.
Use a qualified expert to help with the research design. Developing an appropriate research design (taking into account requirements for temporal and spatial sampling, replication, controls, and counterfactuals) before initiating data collection is critical for several reasons, including avoiding wasted resources on unnecessary monitoring and accurately budgeting for required monitoring. Further, the research design should inform monitoring decisions (e.g., frequency, sample size, and other methodological decisions). Have a qualified scientist and/or statistician develop or review the research design to ensure that it provides the rigor needed to match the desired level of certainty.
Select a research design commensurate with the level of certainty required for your audience. It is often helpful to think about the certainty statements a program would like to make about its impact (Table 5). If a program is interested in attributing outcomes to program activities (Certain statements), then a rigorous monitoring and evaluation plan should be implemented. This would likely require a greater level of investment than a plan that would attribute program impacts to anecdotal evidence (Cautious statements). Consider the types of statements the primary audience will need to be informed, or convinced, by a monitoring and evaluation plan. For instance, local stakeholders may be satisfied with anecdotes about a program’s impact, while donors may require causal statements about a program’s impact. Please see Appendix G for examples of research designs required to meet the three levels of certainty described below.
INSERT TABLE 5 (p99)
INSERT Minimum Standard Questions (p99)
INSERT FAQ (p100)