Does a weak metric really matter?
Good environmental decision making is information-intensive. Environmental managers invest a lot in monitoring and research to collect information, but often take a rough-and-ready approach to combining that information into a form that is useful for decision making. Does this matter? Does it make a difference to environmental outcomes to use a theoretically sound decision metric, compared with a weak decision metric? That was the question we set out to answer by comparing environmental outcomes generated by these two approaches.
What we found, in short, was that it does matter which decision metric you use. Indeed, it can make an enormous difference. As a consequence, many decision metrics used by environmental managers result in us missing out on very large environmental benefits.
What’s in a metric?
What is a decision metric and why is it so important? Around the world, billions of dollars' worth of public funds are allocated to environmental projects each year. These funds are scarce relative to the amount needed to support all possible environmental projects, so prioritisation is essential. This means some projects are judged to be more valuable than others and will receive funding, whereas the less valuable projects miss out.
A common approach used by environmental managers to score the projects they have to choose between is to define a set of variables believed to correlate with projects’ benefits and costs, and combine them into a formula or metric so that projects can be compared. Numerical values or scores are assigned to each potential project and these scores are used to rank the projects.
Of course, there are many different ways the various benefits and costs of a project could be combined and there are thousands of different decision metrics in practice around the world. Unfortunately, many (if not most) of these decision metrics have problems in the way they determine the value of the project. Indeed, the performance of many of these metrics is not much better than choosing projects at random.
Commonly used decision metrics have a range of weaknesses, including adding variables that should be multiplied, omitting important variables related to environmental benefits, omitting project costs, or subtracting costs rather than dividing by them (see the box ‘Nine questions to a robust ranking’).
But what do these weaknesses add up to in terms of lost value? We estimated the environmental losses resulting from each of these weaknesses.
The attributes of a robust metric
Pannell (2013) described the requirements for a theoretically sound and practical decision metric for ranking environmental projects. He recommends:
BCR = [V × W × A × (1 − R)] / [(1 + r)^L × C]

where BCR stands for Benefit: Cost Ratio. The benefits depend on the value (V) of the environmental assets; the effectiveness of the new practices at increasing environmental values (W); the likely adoption of new practices or behaviours (A); the risk of project failure (R); the time lag until benefits occur (L); and the discount rate (r). The combined benefits are divided by costs (C). All of the benefit-related variables are multiplied, not weighted and added, for reasons explained by Pannell (2013).
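To make the structure of the metric concrete, here is a minimal sketch in Python. The function follows the variable definitions above; the two example projects and their values are hypothetical, not drawn from the INFFER database.

```python
def bcr(V, W, A, R, L, r, C):
    """Benefit:Cost Ratio: benefit terms are multiplied (not weighted and
    added), discounted for the time lag, and divided (not subtracted) by cost."""
    benefit = V * W * A * (1 - R) / (1 + r) ** L
    return benefit / C

# Two hypothetical projects: a high-value asset with risky, slow benefits
# versus a modest asset with cheap, reliable, quick gains.
project_a = bcr(V=100, W=0.6, A=0.5, R=0.3, L=10, r=0.05, C=4.0)
project_b = bcr(V=40, W=0.8, A=0.9, R=0.1, L=2, r=0.05, C=2.0)
print(project_a, project_b)  # project_b ranks higher despite the smaller asset
```

Note how the multiplicative structure lets any single weak link (low effectiveness, low adoption, high risk) pull a project's ranking down, which a weighted-additive score cannot do.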
Distributions for each of these variables were obtained from a database of 129 projects that have been evaluated using INFFER (the Investment Framework for Environmental Resources – see the box on INFFER).
Essentially, our analysis involved evaluating and ranking projects using Pannell’s metric and an alternative metric with one or more weaknesses included. By comparing the two rankings, we estimated the overall loss of environmental values from selecting relatively weak projects under the alternative metric.
We tested the metrics for different program budget levels: from 2.5% to 40% of the budget required to fund all the projects. Altogether, the analysis simulated 27 million projects being considered in 270,000 project-prioritisation decisions.
Using weak metrics makes an enormous difference. The wrong projects get funded, resulting in big losses of environmental values. Where funding is tight (as it almost always is) we found that poor metrics resulted in environmental losses of up to 80% – not much better than completely random uninformed project selection.
The most costly errors were omitting information about environmental values, project costs or the effectiveness of management actions, and using a weighted-additive decision metric for variables that should be multiplied. Each of these errors, all of them common in real-world decision metrics, can reduce potential environmental benefits by 30 to 50 per cent. Think about how hard it would be to double your budget (achieve a bigger slice of the funding pie); yet in many cases an equivalent environmental benefit could be achieved simply by strengthening the decision metric being used.
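The effect of the weighted-additive error can be illustrated with a toy simulation. Everything below is invented for illustration (the distributions, the weights in the weak score, and the 10% budget share); the study itself drew its distributions from the INFFER database.

```python
# Rank random projects with a sound multiplicative benefit:cost score and with
# a weak weighted-additive score, fund the top of each ranking until the budget
# runs out, and compare the total true benefit delivered.
import random

random.seed(1)
projects = [
    {"V": random.uniform(1, 100), "W": random.random(),
     "A": random.random(), "C": random.uniform(1, 10)}
    for _ in range(1000)
]

def true_benefit(p):
    return p["V"] * p["W"] * p["A"]

def sound_score(p):   # multiply benefit terms, divide by cost
    return true_benefit(p) / p["C"]

def weak_score(p):    # weight-and-add terms that should be multiplied,
    return 0.4 * p["V"] + 0.3 * p["W"] + 0.3 * p["A"] - p["C"]  # subtract cost

def benefit_of_funded(score, budget_share=0.1):
    """Fund projects down the ranking until the next one no longer fits."""
    ranked = sorted(projects, key=score, reverse=True)
    budget = budget_share * sum(p["C"] for p in projects)
    total = spent = 0.0
    for p in ranked:
        if spent + p["C"] > budget:
            break
        spent += p["C"]
        total += true_benefit(p)
    return total

sound = benefit_of_funded(sound_score)
weak = benefit_of_funded(weak_score)
print(f"weak metric captures {weak / sound:.0%} of the sound metric's benefit")
```

Even in a toy setup like this, the additive score rewards projects with one big number (here, asset value) while hiding low effectiveness or adoption, so it funds the wrong projects.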
What about the quality of the info?
Of course, it’s not just the structure of the metric calculation that could be a weakness in the prioritisation. The quality of the information going into the calculation is also a factor. We looked at the environmental losses resulting from use of poor-quality information in the decision metric. We compared results from prioritising projects based on perfect information and uncertain information.
Naturally, poorer quality information about projects results in some relatively weak projects being selected for funding. Surprisingly, however, we found that the quality of the decision metric makes a much bigger difference to environmental outcomes than the quality of the information used within it.
If a very poor metric is used, then the benefits of going from high uncertainty to perfect information are remarkably low: 3 to 6%. Improving information quality only produces benefits greater than 10% if a reasonably good decision metric is used, and even then only if the available budget is tight.
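The information-quality comparison can be sketched the same way. Again this is a hypothetical setup (invented distributions, a ±50% noise band, a 10% budget share), not the study's method: score projects on noisy estimates of their benefits, fund the top of the ranking, and see how much true benefit survives relative to perfect information.

```python
# Compare project selection under perfect versus noisy information, using a
# sound benefit/cost ranking in both cases.
import random

random.seed(2)
# Each project is a (true benefit, cost) pair; distributions are illustrative.
projects = [(random.uniform(1, 100) * random.random() * random.random(),
             random.uniform(1, 10)) for _ in range(1000)]

def captured(noise):
    """True benefit delivered when ranking by a noisy benefit/cost estimate
    and funding down the ranking until the next project no longer fits."""
    def noisy_score(p):
        benefit, cost = p
        return benefit * random.uniform(1 - noise, 1 + noise) / cost
    ranked = sorted(projects, key=noisy_score, reverse=True)
    budget = 0.1 * sum(cost for _, cost in projects)
    total = spent = 0.0
    for benefit, cost in ranked:
        if spent + cost > budget:
            break
        spent += cost
        total += benefit
    return total

perfect, uncertain = captured(0.0), captured(0.5)
print(uncertain / perfect)  # share of the perfect-information benefit retained
```

Because even quite noisy estimates still put most strong projects near the top of a sound ranking, the loss from imperfect information tends to be far smaller than the loss from a structurally weak metric.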
That’s an amazing finding and suggests environmental managers (and policy makers) should be more concerned in the first instance about how they calculate a decision metric rather than funding the acquisition of higher quality (and inevitably much more expensive) information to feed into that metric.
Does it really matter?
Our results show that relatively easy improvements to metrics used for environmental decision making can make a big difference to the environmental benefits generated by funded projects. Environmental budgets are usually small relative to the problems faced, so good decision metrics are crucial.
So, your choice of metric matters. Simply choosing a logical metric can do more to improve environmental outcomes than even substantial increases in environmental budgets. Of course, getting a bigger slice of the budget will help, but it is critical to ensure that any money is spent wisely by using a good metric.
More info: Fiona Gibson firstname.lastname@example.org
Pannell DJ (2013). Ranking environmental projects. Working paper 1312, School of Agricultural and Resource Economics, UWA. Crawley, WA. http://ageconsearch.umn.edu//handle/156482
Pannell D, AM Roberts, G Park & J Alexander (2013). Designing a practical and rigorous framework for comprehensive evaluation and prioritisation of environmental projects. Wildlife Research 40: 126-133. http://dx.doi.org/10.1071/WR12072
Pannell DJ & FL Gibson (2014). Testing metrics to prioritise environmental projects. Working Paper 1401, School of Agricultural and Resource Economics, UWA. Crawley, WA. http://ageconsearch.umn.edu/handle/163211
Nine questions to a robust ranking
There are many ways that you can go wrong when putting together a formula to rank projects, and unfortunately the quality of the results is quite sensitive to some of the common errors. Common important mistakes include: weighting and adding variables that should be multiplied; messing up the comparison of outcomes with versus without the project; omitting key benefits variables; ignoring costs; and measuring activity (instead of environmental outcomes).
It’s relatively easy to avoid these problems. Apply a bit of theory, some simple logic and a dose of common sense, and it’s not hard to do a pretty good job of project ranking. Indeed, it’s simply a matter of being able to answer the following set of essential questions. For more details on how to answer these questions, see Decision Point #75 or download a compendium of David Pannell’s 20 blog posts on ranking environmental projects at http://purl.umn.edu/156482
1. What is the core criterion?
2. What is it that you’re ranking?
3. What is the benefit?
4. What factors should be taken into account in working out the benefits?
5. How should these benefit values be combined?
6. Should private costs and benefits be included?
7. What other costs should be included?
8. How do you deal with uncertainty?
9. Should every project go through a rigorous analysis?
INFFER puts this research into practice
David Pannell and collaborators have implemented the theoretically preferred project ranking metric in INFFER, the Investment Framework for Environmental Resources (Pannell et al., 2013). INFFER is being used by many environmental organisations around Australia (see the story Evaluating bang for buck), and there is growing international interest, with users in Canada, New Zealand and Italy.
As well as facilitating the use of a sound metric, INFFER assists users with logical project development, collation of required information, and selection of appropriate delivery mechanisms for each project.
More info: www.inffer.org