Why bother? How is it done?
Prioritisation is unavoidable; most environmental programs have too few resources to meet their goals.
Given the great variation between potential environmental investments, good prioritisation is critically important.
In most cases, good prioritisation of environmental investments is reasonably easy to apply.
In some cases, prioritisation is more complex and difficult, requiring special techniques.
Despite the compelling case for good prioritisation, it is often not practiced.
In most environmental programs around the world, the funding provided by governments falls well short of that needed to deal comprehensively with the environmental issues in question. Success rates of 5-20% are common in competitive funding rounds for environmental projects.
As a result, program managers cannot avoid the need to decide which potential investments should receive funds and which should not. Even where no explicit prioritisation process is used, managers are implicitly prioritising, although probably not systematically or transparently.
The benefit that can be generated by systematic prioritisation depends in part on how heterogeneous the various investment options are. The greater the diversity in costs and benefits amongst different projects, the more important it is to accurately identify the best projects. The variance in these factors is often extremely high. It’s not uncommon for Benefit: Cost Ratios of different proposed projects to vary enormously. For example, the data set used by Fuller and colleagues (2010) for 7000 potential environmental investments reveals Benefit: Cost Ratios that vary by more than eight orders of magnitude.
Reinforcing that finding, we estimated the environmental gains that are possible through high quality prioritisation of investments relative to poor-quality prioritisation (and compared this with random project selection). High quality prioritisation can give you a gain of 50 to 100% relative to poor-quality prioritisation and a gain of up to 800% relative to random project selection (Pannell & Gibson, 2016).
Of course, there are additional costs involved in undertaking good-quality prioritisation processes, relative to simpler approaches. However, the estimated benefits are easily large enough to justify the additional effort.
So if you are going to engage in a prioritisation process, what do you do? In the following pages I outline the basic approach. If you would like more detail on any aspect, have a look at Pannell (2015) as it sets out a relatively simple and plain-speaking discussion on the process (though in a lot more detail).
Generating the greatest environmental benefit
The starting assumption is that the objective of the prioritisation process is to generate the greatest environmental benefits for the community as a whole by allocating a limited budget across a range of potential projects. In other words, the organisation wishes to maximise the value for money from its environmental investments.
Sometimes environmental organisations seek to rank locations, or issues, or desired outcomes, with no explicit project activities defined. This is problematic because value for money depends on the answers to questions like, “what is the technical feasibility of generating the hoped-for benefits?”, “to what extent would the community cooperate?” and “what would it cost?” Those questions can only be answered for a particular set of actions or interventions – a project. Prior to ranking projects, each potential project needs to be clearly defined in terms of what would be done, where, and by whom.
Let’s begin with the simplest case, where each project is independent of other projects – the benefits and costs of a project do not depend on which other projects are implemented. We will deal with more complex cases later.
One crucial insight is that to estimate the benefits of a project you need to know the benefit values ‘with the project’ and the values ‘without the project’ (both of which usually have to be predicted).
Comparing values ‘with versus without’ is not the same as comparing values ‘before versus after’ the project. The reason is that conditions may not be static in the absence of the project. For example, it may be that an environmental asset would degrade in the absence of the project (as illustrated in Figure 1). Remarkably, a study looking at 17 existing systems from around the world for prioritising conservation projects found that only one correctly used the with-versus-without approach to estimate benefits (Maron et al, 2013).
A simple benefit-cost metric
Here is a Benefit: Cost Ratio metric for ranking environmental projects. It is the simplest theoretically defensible formula:

BCR = [V(P1) – V(P0)] × A × (1 – R) / (C + M)    (Equation 1)
where:
- BCR is the Benefit: Cost Ratio (the higher the BCR, the better the project);
- V( ) represents the values (or benefits or services) generated;
- P1 represents the outcomes with the project in place;
- P0 represents the outcomes without the project in place;
- A is the level of adoption/compliance (by individuals or businesses whose cooperation is needed to achieve the project’s goal) as a proportion of the level needed to achieve the project’s goal;
- R is the probability of project failure – in other words, the riskiness of the project;
- C is the total project cash costs; and
- M is total discounted maintenance costs.
[V(P1) – V(P0)] in the above formula represents the difference in overall values with versus without the project (assuming full compliance, A=1, and zero project risk, R=0). It is the potential benefit of the project if everything goes right.
V can be measured in monetary terms, or in some other unit that makes sense for the types of projects being ranked.
The structure of this formula is very important. Benefits (in the numerator) are divided by costs. The three main parts of the top row are multiplied together, not added, because the overall benefit is proportional to each of these parts. There are no weights applied to any of these variables. And costs (in the denominator) get added up, rather than multiplied.
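As a sketch, Equation 1 translates directly into code. The function below follows the formula as given; the project names and numbers are entirely hypothetical, chosen only to illustrate the ranking.

```python
# Sketch of the simple BCR metric (Equation 1). All project names and
# numbers are hypothetical, chosen only to illustrate the ranking.

def bcr(V1, V0, A, R, C, M):
    """Benefit: Cost Ratio: expected benefit divided by total cost.

    V1, V0: values with / without the project
    A: adoption/compliance proportion; R: probability of failure
    C: cash costs; M: discounted maintenance costs
    """
    return (V1 - V0) * A * (1 - R) / (C + M)

# (name, V1, V0, A, R, C, M)
projects = [
    ("Wetland fencing",  100, 40, 0.8, 0.2, 10, 5),
    ("Riparian reveg",    90, 70, 0.9, 0.1,  8, 2),
    ("Nutrient buyback", 150, 60, 0.5, 0.4, 30, 10),
]

# Rank by BCR, best first
ranked = sorted(projects, key=lambda p: bcr(*p[1:]), reverse=True)
for name, *args in ranked:
    print(f"{name}: BCR = {bcr(*args):.2f}")
```

Note that the benefit terms are multiplied, as the formula requires: a project with a large potential benefit but near-zero adoption correctly scores near zero.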
This simple formula can be modified in a number of ways to better deal with some of the complexities decision makers will face in the ‘real’ world. Here are two such modifications. The first incorporates factors dealing with time lags and discount rates. The second provides a more nuanced engagement with risk.
Incorporating time

BCR = {[V(P1) – V(P0)] × A × (1 – R) / (1 + r)^L} / (C + K + M)    (Equation 2)
where:
- L is the lag time in years until most benefits of the project are generated;
- r is the annual discount rate, to account for the fact that money spent on the project incurs the equivalent of an interest cost; and
- K is the total project in-kind costs of the organisation that is running the project (not costs to people whose behaviour the project is intended to influence).
The last part of the numerator, /(1 + r)^L, is included to discount future benefits back to their present value. It is important to include this part of the formula if different projects vary substantially in the time lags until they generate benefits. The choice of discount rate, r, can make a large difference to the estimated benefits for projects with very long-term effects.
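To see how much the discount term can matter, here is a quick calculation of the factor 1/(1 + r)^L for a few illustrative rates and lags:

```python
# How the discount factor 1/(1 + r)^L shrinks benefits that arrive
# after a lag of L years, for a few illustrative discount rates.

def discount_factor(r, L):
    return 1.0 / (1.0 + r) ** L

for r in (0.03, 0.07):
    for L in (5, 20, 50):
        print(f"r = {r:.0%}, L = {L:2d} years: factor = {discount_factor(r, L):.3f}")
```

At a 7% rate, benefits arriving after 50 years are discounted to a few percent of their face value, so the choice of r can reorder projects with very different lags.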
Incorporating risk

BCR = {[V(P1) – V(P0)] × A × (1 – Rt) × (1 – Rs) × (1 – Rf) × (1 – Rm) / (1 + r)^L} / (C + K + M + E)    (Equation 3)
where Rt, Rs, Rf and Rm are the probabilities of the project failing due to technical risk, socio-political risks, financial risks and management risks, respectively, and E is total discounted compliance costs. Compliance costs are involuntarily borne private costs, where people are forced to comply by regulation or similar. We recommend that private costs that are borne voluntarily should not be included, because the fact that there is voluntary cooperation indicates that the costs are offset by unmeasured private benefits.
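Pulling the pieces together, the full Equation 3 metric can be sketched as a single function. The example call uses made-up values, purely for illustration.

```python
# Sketch of the Equation 3 metric: expected, discounted benefits over
# total costs. All argument values in the example are invented.

def bcr_full(V1, V0, A, Rt, Rs, Rf, Rm, r, L, C, K, M, E):
    # The project delivers only if it avoids all four sources of failure
    survival = (1 - Rt) * (1 - Rs) * (1 - Rf) * (1 - Rm)
    expected_benefit = (V1 - V0) * A * survival / (1 + r) ** L
    return expected_benefit / (C + K + M + E)

# A project with four independent 10% failure risks and a 10-year lag:
print(round(bcr_full(100, 40, 0.8, 0.1, 0.1, 0.1, 0.1,
                     0.05, 10, 10, 2, 5, 3), 2))
```

Even modest individual risks compound: four 10% risks leave only about a 66% chance of delivery, which materially lowers the expected benefit.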
There are a number of simplifications in the above formulae, even for the most complex of them:
- Assuming that benefits are linearly related to the proportion of people who adopt the desired new practices or behaviours;
- Representing project risks as binary variables: success or complete failure;
- Having only one time lag for all benefits from the project;
- Approximating the private benefits and voluntary private costs as zero; and
- Treating the project costs, maintenance costs and compliance costs as if there was only one combined constraint on their availability.
Simplifications are essential to make the system workable, but care is needed when selecting which simplifications to use. Each of these simplifications can be relaxed if desired.
The choice between the three versions of the ranking formula depends on the importance of the issues being addressed, the scale and costs of the projects being considered, the time and resources available for the ranking process, and the availability of the information needed for each formula.
In the BCR formulas presented in equations (1), (2) and (3), the numerators represent the benefits of a project. The equations show that the benefits are determined by several factors: the value or importance of the environmental benefits generated (measured as a difference, with versus without the project); the level of adoption or compliance with the project (ie, the extent to which the necessary actions are actually taken); various risks that may cause the project to fail; and the time lag until benefits arise. If we represent the risks as probabilities of failure, the numerator represents the ‘expected’ benefits, using ‘expected’ in the statistical sense of a weighted average.
If the projects being prioritised all produce benefits that are similar in nature, and policy makers are happy to base their measure of benefits on scientific criteria, benefits (ie, V in the above formulae) can be measured using ecological criteria that are specific to the issue. The advantages of using monetary values are that they allow you to:
(a) compare value for money for projects that address completely different types of issues (eg, river water quality versus recreational benefits versus income) and
(b) assess whether a project’s overall expected benefits exceed its total costs.
Economists have developed a range of methods that can be used to monetise environmental values: so-called non-market valuation methods. While these methods are not without their challenges and problems, they do have some strengths. One is that they allow the preferences of the broader community to be transparently considered during environmental prioritisation. They don’t rule out using an approach that combines community preferences with those of experts. The methods have been subjected to deep scrutiny and testing, and clear guidance on preferred procedures for implementing them and analysing the results is available. They result in a more logically consistent and defensible set of weightings than are often used in weighting processes that avoid monetisation.
Notwithstanding the advantages of monetising the benefits, it can be challenging to obtain appropriate values. Help from an expert is often advisable.
Measuring the costs of environmental projects is conceptually simpler than the measurement of benefits, but it is not without its challenges. Many environmental prioritisation processes fail to include the full range of relevant costs, particularly maintenance costs (Armsworth 2014). If benefits are to be counted for a long time frame (eg, decades), then any relevant maintenance costs should be counted over the same time frame. As with the benefits, maintenance costs should be discounted to present values, to avoid over-stating their significance. (In principle, even the initial 3-to-5-year costs of establishing a project should be discounted as well, but failing to discount over such short time frames is a less serious error than failing to discount maintenance costs.)
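As a sketch of the discounting point, the present value of a constant annual maintenance cost can be computed as follows; the figures are illustrative only.

```python
# Present value of an ongoing maintenance cost. Discounting stops
# long-running costs being overstated relative to up-front costs.

def pv_maintenance(annual_cost, r, years):
    """Discounted sum of a constant annual cost paid at the end of each year."""
    return sum(annual_cost / (1 + r) ** t for t in range(1, years + 1))

# $10,000/year for 30 years at a 5% discount rate comes to roughly half
# of the undiscounted $300,000 total:
print(round(pv_maintenance(10_000, 0.05, 30)))
```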
Let us now consider a few more complex prioritisation problems.
Multiple versions of the same project
There are always many different ways of designing a project, and they can vary greatly in value for money. Therefore, it can be worth evaluating more than one project per asset or issue, especially in important cases. For example, we may have identified that it is a high priority to invest in protection of a particular environmental asset (a wetland, or a particular species, or a river), but there remains the issue of how ambitious the project should be. If there is currently a 20% probability that a species will go extinct over the next 20 years, should the project aim to reduce that probability to 10%, 5%, 1%, 0.01%, or what? Should a project that addresses water pollution from agricultural nutrients aim to reduce nutrient inflows to the water body by 10%, 20%, 40% or 80%? Project options such as these can be compared by defining a separate project for each target level, and comparing the BCR for each option.
Doing this comparison can be important because the BCRs can vary widely depending on the target chosen. A key factor behind this is the empirical observation that project costs are often related to the target in a highly non-linear way, with costs escalating greatly at higher targets. For example, Figure 2 shows the estimated costs of reducing phosphorus pollution in the Gippsland Lakes in eastern Victoria, depending on the percentage reduction. Clearly, the cost increases at an increasing rate as the target reduction is increased.
When comparing distinct projects, the way to generate the most valuable environmental benefits for the available resources is to select those projects with the highest BCRs, up to the point where the budget is exhausted. However, when comparing multiple versions of the same project, the criterion is a little different. It is to select the most ambitious project that has a BCR above the threshold level for acceptance. The threshold depends on how tight funding is, and on the performance of other competing projects.
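For independent projects, the selection rule described above reduces to a greedy loop over BCR-ranked projects. The benefits, costs and budget below are hypothetical.

```python
# Fund independent projects in descending BCR order until the budget
# runs out. Benefit and cost figures are illustrative only.

def select_projects(projects, budget):
    """projects: list of (name, benefit, cost) tuples. Returns funded names."""
    ranked = sorted(projects, key=lambda p: p[1] / p[2], reverse=True)
    funded, remaining = [], budget
    for name, benefit, cost in ranked:
        if cost <= remaining:  # skip projects the remaining budget can't cover
            funded.append(name)
            remaining -= cost
    return funded

projects = [("A", 50, 10), ("B", 30, 15), ("C", 40, 5), ("D", 20, 20)]
print(select_projects(projects, 25))  # ['C', 'A']
```

With lumpy project costs this greedy rule can leave some budget unspent; an exact knapsack optimisation can occasionally do slightly better, but ranking by BCR is usually very close.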
Multiple benefits from one project
The prioritisation formulas (1), (2) and (3) discussed earlier are designed to work where there is a single type of benefit from a project, or where the values for multiple benefits have already been converted into a common currency, such as dollars, and added up. If a project has multiple benefits and they are not monetised, the other option is to combine them by weighting them (to reflect their relative importance) and adding them up (see Pannell 2015). This is essentially what monetising them does, but sometimes people have a prejudice against using monetised values in this process.
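Weighting and adding is just a weighted sum. The benefit categories, weights and scores below are invented purely for illustration.

```python
# Combining multiple non-monetised benefits into a single value V via
# weights reflecting relative importance. All numbers are invented.

weights = {"biodiversity": 0.5, "water_quality": 0.3, "amenity": 0.2}
scores = {"biodiversity": 8.0, "water_quality": 5.0, "amenity": 3.0}

V = sum(weights[k] * scores[k] for k in weights)
print(round(V, 2))  # 0.5*8 + 0.3*5 + 0.2*3 = 6.1
```

The resulting V then slots into the numerator of the ranking formulas in place of a monetised value.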
Prioritisation when projects depend on each other
The formulas are also founded on an assumption that the projects being compared do not depend on each other. This means that they cannot be used, for example, to compare which parts of a region should be restored through revegetation, because the benefits of such revegetation depend on what vegetation there already is, and on whether other parts of the region are revegetated. Sound optimisation in this situation requires a more complex approach, such as a mathematical model that optimises across the whole region.
A concluding comment: People often respond to the manifest inadequacy of budget allocations to the environment by demanding that we spend more. Yet, consider this. If you could double your budget for projects by putting a bit more effort into your project ranking process, would you do so? Of course you would. Doubling the environmental benefits generated from your environmental investments is rather like doubling your budget (but much, much easier to achieve). If your current ranking system is of the usual questionable quality, doubling the benefits (or more) is readily achievable using the approaches advocated here.
The wrong metric for ranking projects
Around the world there are thousands of different quantitative systems to rank projects. At the heart of most of these systems is a formula or metric that combines various pieces of information about a project to produce a number that provides an overall assessment. There are various errors that can be made when putting together a ranking metric, and the quality of the results is quite sensitive to some of the common errors. These errors include: weighting variables inappropriately; adding variables that should be multiplied; comparing outcomes without considering counterfactuals (ie, ignoring that the outcome might have occurred anyway, even without the project); omitting key variables related to benefits; ignoring costs; and measuring activity (outputs) instead of outcomes. In this article we outline the basic approach that will ensure you avoid these errors.
One framework for robust prioritisation: INFFER
There are thousands of different systems in use to rank environmental projects for funding. Unfortunately, judging from the many examples I have examined, most of the systems in use are very poor. Indeed, the performance of many of them is not much better than choosing projects at random. If only people would be more logical and thorough in their approach to ranking environmental projects! The potential to reduce wastage and improve environmental outcomes is enormous. Attempting to get managers, researchers, policy people and decision makers to appreciate this has been a major driving force behind much of my work in environmental economics. It led to the creation of the INFFER framework (Pannell et al, 2013) and it also has sparked many editorials on my blog, Pannell Discussions.
Dealing with uncertainty
Uncertainty and knowledge gaps are unavoidable realities when evaluating and ranking projects. The available information is almost always inadequate for confident decision making. Key information gaps often include: the cause-and-effect relationship between management actions and environmental outcomes; the likely behavioural responses of people to the project; and the values resulting from the project.
Although uncertainty is often high, the ranking procedure used remains important. Even given uncertain data, the overall benefits of a program can be improved substantially by a better decision process. Indeed, benefits appear to be more sensitive to the decision process than to the uncertainty. For example, we found that there is almost no benefit in reducing data uncertainty if the improved data are used in a poor decision process (Pannell & Gibson, 2016). On the other hand, even if data are uncertain, there are worthwhile benefits to be had from improving the decision process.
This is certainly not to say that uncertainty should be ignored. Once the decision process is fixed up, uncertainty can make an important difference to the delivery of benefits.
There are economic techniques to give negative weight to uncertainty when ranking projects. However, we suggest a simpler and more intuitive approach: rating the level of uncertainty for each project; and considering those ratings subjectively when ranking projects (along with information about the Benefit: Cost Ratio, and other relevant considerations).
Apart from its effect on project rankings, another aspect of uncertainty is the question of what, if anything, the organisation should do to reduce it. It is good for project managers to be explicit about the uncertainty they face, and what they plan to do about it (even if the plan is to do nothing). Simple and practical steps could be to: record significant knowledge gaps; identify the knowledge gaps that matter most through sensitivity analysis (Pannell, 1997); and have an explicit strategy for responding to key knowledge gaps as part of the project, potentially including new research or analysis.
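The sensitivity-analysis step can be as simple as a one-at-a-time check of how the BCR moves over plausible ranges for each uncertain input. The base values and ranges below are made up for illustration.

```python
# One-at-a-time sensitivity check on the simple BCR metric. The base
# case and the plausible parameter ranges are illustrative only.

def bcr(V1, V0, A, R, C, M):
    return (V1 - V0) * A * (1 - R) / (C + M)

base = dict(V1=100, V0=40, A=0.8, R=0.2, C=10, M=5)
ranges = {"A": (0.5, 1.0), "R": (0.05, 0.5), "V1": (70, 130)}

for param, (lo, hi) in ranges.items():
    # Recompute the BCR with one parameter pushed to each end of its range
    results = sorted(bcr(**{**base, param: val}) for val in (lo, hi))
    print(f"{param}: BCR from {results[0]:.2f} to {results[1]:.2f}")
```

Parameters whose plausible range flips a project from fundable to unfundable are the knowledge gaps worth addressing first.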
In practice, there is a tendency for decision makers to ignore uncertainty when ranking projects, and to proceed on the basis of ‘best-guess’ information, even if the best is poor. In support of that approach, it is often argued that we should not allow lack of knowledge to hold up action, because delays may result in damage that is costly or impossible to reverse. That is reasonable up to a point, but sometimes organisations are too cavalier about proceeding with projects when they have little knowledge of whether they are worthwhile. It may be at the expense of other projects in which they have much more confidence, even though they currently appear to have lower BCRs. It is not just a question of proceeding with a project or not proceeding – it’s a question of which project to proceed with, considering the uncertainty, benefits and costs for each project. When you realise this, the argument based on not letting uncertainty stand in the way of action is rather diminished.
In some cases, a sensible strategy is to start with a detailed feasibility study or a pilot study, with the intention of learning information that will help with subsequent decision making about whether a full-scale project is worthwhile, and how a full-scale project can best be designed and implemented. A related idea is active adaptive management, which involves learning from experience in a directed and systematic way (see Decision Point #102).
Particularly for larger projects, we believe that one of these approaches should be used as they have great potential to increase the benefits that are generated. They imply that the initial ranking process should not produce decisions that are set in stone. Decisions may need to be altered once more information is collected. We should be prepared to abandon projects if it turns out that they are not as good as we initially thought, rather than throwing good money after bad.
In the environment sector, managers are almost never explicit about the uncertainties they face, there usually is no plan for addressing uncertainty, projects are funded despite profound ignorance about crucial aspects of them, proper feasibility assessments are never done, active adaptive management is almost never used, and ineffective projects that have been started are almost never curtailed so that resources can be redirected to better ones. In these respects, the environment sector is dramatically different from the business world, where people seem to be much more concerned about whether their investments will actually achieve the desired outcomes. Perhaps the difference is partly because businesses are spending their own money and stand to be the direct beneficiaries if the investment is successful. Perhaps it is partly about the nature of public policy and politics. Whatever the reason is, there is an enormous missed opportunity here to improve environmental outcomes, even without any increase in funding.
Institutional challenges to good prioritisation
Our experiences in encouraging environmental agencies and other relevant bodies to use sound and rigorous approaches to prioritisation have shown that a number of institutional challenges can arise.
One issue is that it isn’t necessarily apparent to people that their organisation is using a prioritisation process with serious weaknesses, nor that there is an opportunity to deliver substantially greater environmental outcomes by improving the process. It is common for organisations to develop a prioritisation process without an understanding of the essential principles outlined here. Sometimes considerable effort is put into developing the process, and this results in a strong commitment to the process and a belief that it is sound, despite serious flaws. Convincing organisations that there would be substantial benefits in modifying the process can be difficult in this situation, despite evidence of the magnitude of potential gains.
Contributing to a reluctance to change in some cases is concern about the greater cost of a more rigorous prioritisation process. Where the existing process is superficial and highly subjective, advice to include more and stronger evidence may not be welcomed. And yet, the estimated gains from using a strong prioritisation process rather than a weak one (Pannell & Gibson, 2016) seem easily large enough to justify the slightly higher costs of a more comprehensive approach, but some organisations remain difficult to convince.
Apart from the financial cost of doing a more thorough prioritisation process, it can also be more demanding in terms of time and data. In some organisations there is a culture of rushing decision making, such that there is insufficient time allowed for data collection and analysis to support the decisions. This often goes hand in hand with a view (or at least a rationalisation) that subjective judgements are sufficient. However, subjective judgements in ignorance of the principles outlined here are highly unlikely to deliver the hoped-for environmental outcomes.
In some cases, decision makers prefer a process that is less comprehensive and transparent than we have recommended, because a more rigorous process would reduce their flexibility and their scope to select ‘pet projects’ that are actually not good investments.
Some people in the environment sector shy away from using approaches that they judge to be too tainted with economics, because they believe that economic drivers have caused environmental problems, so economic thinking should be avoided when trying to solve them. This unfortunate prejudice, when applied to prioritisation, simply results in poorer environmental outcomes.
Perhaps related to this is a reluctance in some cases to prioritise at all, apparently based on a feeling that to do so is defeatist. This often surfaces in discussions of conservation triage, which is really just a form of prioritisation (Joseph et al, 2009). Opponents of the process say we should never ‘give up on a species’; we should save all species. While this might be a laudable goal, it ignores what is actually happening in the real world, in which constrained government (and NGO) budgets simply can’t save all species. If we were to engage in robust prioritisation we could save more species than we currently do.
More info: Dave Pannell email@example.com
Armsworth PR (2014). Inclusion of costs in conservation planning depends on limited datasets and hopeful assumptions. Annals of the New York Academy of Sciences. doi: 10.1111/nyas.12455
Read a discussion on this paper in Decision Point #87
Fuller RA, E McDonald-Madden, KA Wilson, J Carwardine, HS Grantham, JEM Watson, CJ Klein, DC Green & HP Possingham (2010). Replacing underperforming protected areas achieves better conservation outcomes. Nature 466: 365-367.
Read a discussion on this paper in Decision Point #41, p2-5
Joseph LN, RF Maloney & HP Possingham (2009). Optimal allocation of resources among threatened species: a project prioritisation protocol. Conservation Biology 23: 328-338.
Read a discussion on this paper in Decision Point #29, p8-10
Maron M, JR Rhodes & P Gibbons (2013). Calculating the benefit of conservation actions. Conservation Letters 6: 359-367.
Read a discussion on this paper in Decision Point #69
Pannell DJ (1997). Sensitivity analysis of normative economic models: Theoretical framework and practical strategies. Agricultural Economics 16: 139-152.
Pannell D, AM Roberts, G Park & J Alexander (2013). Designing a practical and rigorous framework for comprehensive evaluation and prioritisation of environmental projects. Wildlife Research 40: 126-133. http://dx.doi.org/10.1071/WR12072
Pannell DJ (2015). Ranking environmental projects, Working Paper 1506, School of Agricultural and Resource Economics, University of Western Australia, Crawley, Australia. http://ageconsearch.umn.edu/handle/204305
Read a discussion on these papers in Decision Point #75, p4,5
Pannell DJ & FL Gibson (2016). The environmental cost of using poor decision metrics to prioritise environmental projects. Conservation Biology 30: 382-391.
Read a discussion on this paper in Decision Point #82
Roberts AM, DJ Pannell, G Doole & O Vigiak (2012). Agricultural land management strategies to reduce phosphorus loads in the Gippsland Lakes, Australia. Agricultural Systems 106: 11-22.