Everyone loves resilience. NOAA’s strategic plan focuses on “resilient communities.” The PopTech! Conference just held a gathering on the theme of resilience that brought in thought leaders on banking, psychology, art, the environment, business and human development. We all want our children to be resilient. Is there anyone who does not aspire to it?
But what is resilience? And more specifically — what does it mean in the conservation context? To answer this question well, we need to embrace four principles:
1. There is no such thing as “resilience” in general. Resilience must always be defined with respect to some specific attribute. For example, we might ask whether carbon fixation is resilient in a community, or biodiversity (number of species), or the supply of clean water. We could ask whether coral cover is resilient. The answers differ depending on the variable of interest. And that variable should always be measurable.
2. We need to distinguish between “resistance” and “resilience.” Resistance is the extent to which an ecosystem (or any system) does NOT change when stressed. It can be measured as the % change per unit of stress (degrees of warming, meters of sea level rise, % loss of habitat, % reduction in flow rates, etc.).
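To make the definition concrete, here is a minimal sketch of resistance as % change per unit of stress. The function name and the coral-cover numbers are hypothetical, chosen only for illustration; they are not from any real data set.

```python
def resistance_index(baseline, stressed, stress_units):
    """Percent change in the chosen metric per unit of applied stress.

    Values near zero indicate high resistance (the system barely changed);
    large negative values indicate low resistance (large loss per unit stress).
    """
    pct_change = 100.0 * (stressed - baseline) / baseline
    return pct_change / stress_units

# Hypothetical example: coral cover falls from 40% to 30% over 2 degrees C
# of warming, i.e. a 25% loss spread over 2 degrees.
print(resistance_index(40.0, 30.0, 2.0))  # -12.5 (% change per degree)
```

The sign is kept so that losses and gains are distinguishable; taking the absolute value would work equally well if only the magnitude of change matters.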
3. Resilience is the instantaneous rate at which a perturbation caused by a shock or stress decreases. It is quite simply:
Let X = the deviation from a baseline in the chosen metric of interest (see #1 above).
Then X(0) is the deviation immediately following the shock or perturbation, and X(t) is the deviation at time t. The resilience index is: log[X(t)/X(0)] / t. When the perturbation is shrinking, X(t) is smaller than X(0), so the index is negative; the more negative the index, the faster the recovery.
This is analogous to the dominant eigenvalue of the linearized matrix of component interactions, and has a rich history in dynamical systems theory. It is also practical — it describes the rate at which perturbations shrink.
In some cases the perturbation will NOT decrease; that means the system has zero resilience.
And if one is lucky enough to have a time series of observations, one can simply fit a decay function to the data.
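The index in #3 can be computed from two observations, and with a full time series one can fit an exponential decay by log-linear least squares. The sketch below assumes the deviation decays as X(t) = X(0)·exp(r·t); the function names and the synthetic series are illustrative, not from any real study.

```python
import math

def resilience_index(x0, xt, t):
    """Resilience index: log(X(t)/X(0)) / t.

    Negative values mean the deviation from baseline is shrinking;
    more negative means faster recovery. Zero or positive means the
    perturbation is not decreasing (no resilience).
    """
    return math.log(xt / x0) / t

def fit_decay_rate(times, deviations):
    """Fit X(t) = X(0) * exp(r * t) to a time series.

    Works by ordinary least squares on log(deviation) vs. time:
    the fitted slope is the decay rate r.
    """
    logs = [math.log(x) for x in deviations]
    n = len(times)
    t_mean = sum(times) / n
    y_mean = sum(logs) / n
    num = sum((t - t_mean) * (y - y_mean) for t, y in zip(times, logs))
    den = sum((t - t_mean) ** 2 for t in times)
    return num / den

# A deviation that halves in one year gives an index of ln(0.5) = -0.693.
print(resilience_index(10.0, 5.0, 1.0))

# Synthetic series decaying at r = -0.5 per year; the fit recovers -0.5.
times = [0, 1, 2, 3, 4]
devs = [8.0 * math.exp(-0.5 * t) for t in times]
print(fit_decay_rate(times, devs))
```

Fitting the whole series rather than using two endpoints averages over observation noise, which matters for the kind of field data discussed below.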
4. Unfortunately, the NGO community has gravitated towards expert opinion and non-quantitative assessments. We should reject consensus and expert opinion and instead turn towards data and predictions that can be falsified.
We frequently ask field practitioners to assess recovery or resilience without gathering actual data. Big mistake. This is a recipe for no progress. If you do not believe me, consider the following:
I examined a large data set of published accounts of recovery or resilience following some major perturbations (oil spills, deforestation, mining, over-fishing, etc). I compared the judgments of the “expert” as recorded in the published papers with the actual recovery as measured by observed deviation from reference state. I did not use the above resilience index because, although it is scientifically more appropriate, I figured ecologists lean towards simpler metrics like “percentages” to inspire their expert opinions. In any event, as you can see below, the data for recovered versus not recovered are not different! That is crazy. It reveals the extent to which expert opinion is weakly disguised bias.
Figure 1. Box-and-whisker plot of deviations from baseline after time has elapsed for recovery. These depict the median and the 50% “box” of observations around that median — in other words, the box within which the central half of the observations fall. The whiskers represent the maximum and minimum, excluding the outliers. The outliers are defined as any data that are more than 1.5 times the interquartile range above or below the central box. The interquartile range is the distance between the bottom and top of the box.
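The quantities in the caption are all mechanical to compute. A minimal sketch, using linear interpolation between ranks for the quartiles (one common convention among several) and the 1.5×IQR rule for outliers described above; the function name and the sample numbers are hypothetical.

```python
def boxplot_stats(data):
    """Median, quartiles, and 1.5*IQR outlier bounds, as in a box plot."""
    xs = sorted(data)
    n = len(xs)

    def percentile(p):
        # Linear interpolation between the two closest ranks.
        k = p * (n - 1)
        lo, hi = int(k), min(int(k) + 1, n - 1)
        return xs[lo] + (k - lo) * (xs[hi] - xs[lo])

    q1, med, q3 = percentile(0.25), percentile(0.5), percentile(0.75)
    iqr = q3 - q1
    lo_fence, hi_fence = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    outliers = [x for x in xs if x < lo_fence or x > hi_fence]
    return {"median": med, "q1": q1, "q3": q3, "outliers": outliers}

# Hypothetical deviations from baseline; 100 falls outside the 1.5*IQR fence.
print(boxplot_stats([1, 2, 3, 4, 5, 6, 7, 100]))
```

Different software draws the quartiles slightly differently, but the 1.5×IQR outlier rule is the standard default.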
Again, even with huge sample sizes, one could not detect significant differences in the final deviations from reference state between what the experts called “recovered” and what the experts called “not recovered.” It is my hypothesis that experts in conservation are strongly influenced by perception and attention biases, and that their conclusions about “resilient” or “not resilient” should be ignored. What we need are quantitative data in the form of #3 above.
Bottom line: “resilience” is quite easy to define precisely, and it is straightforward to measure. Unfortunately, the conservation community addresses resilience by relying too much on storytelling and consensus. That is sad. Resilience is something we are all in favor of — and is what we should be managing towards — but we can only do so if we measure it.
Peter Kareiva is the chief scientist of The Nature Conservancy.