Policies are usually seen as the highest level of conservation action, kind of like a goal to aim towards (for a species/habitat/area of conservation interest to be ‘covered’ by a policy). In developed countries like Singapore and maybe the UK as well, it seems that conservation action usually starts from the grassroots level up, with a small group of locals fighting to keep a place from development. There’s likely to be lots of public outreach and education, perhaps some scientific field experiments and data collected to help bolster the case for conservation. The ultimate goal is some kind of a legal status, a protected area status for the place, or legal protection for an endangered species. Of course, there are many other conservation strategies out there, but I’ll focus on just Protected Areas in this post.
Protected Areas (PAs) are perhaps a relic from previous paradigms of conservation (‘fortress conservation’ or ‘nature for itself’, Mace 2014), but are still, I think, seen as a cornerstone of conservation – the ultimate protection, the “best available means to ensure the recovery and survival of our threatened native animals and plants”, according to WWF Australia, as quoted on the Australian government’s Department of the Environment website. According to the IUCN, there are approximately 200,000 protected areas in the world currently, covering around 14.6% of the world’s land and 2.8% of the oceans. In line with the general trend in conservation of promoting nature for people’s sake rather than wildlife’s sake, though, the IUCN World Parks Congress website makes the value of PAs to human life and health rather clear.
PAs have been viewed as the most important and effective policy tool for conservation, even though we know there’s more to effective conservation than giving a defined area legal protection. ‘Paper parks’ abound, from poor regulation or enforcement, or just bad planning (not fully covering the range of the species of conservation interest, for example). It seems, though, that the metrics of ‘success’ for PAs lie in biodiversity indicators – a 100% increase in breeding pairs of species X, population numbers doubled for species Y, a diversity index increased, etc. There have been calls for evaluation of conservation programmes (Kleiman et al. 2000), and for more evidence-based conservation (Pullin & Knight 2001, Sutherland et al. 2004), but the few weeks here at UQ have made me think more about what constitutes evidence for effective conservation. Having more evidence ≠ having better evidence if the evidence does not accurately measure what it’s meant to.
Ferraro and Pattanayak (2006) make it clear how programme evaluation in the conservation field could be improved using causal inference techniques. The key is to think about what would have happened otherwise (the counterfactual outcome) in the absence of the programme, and not to assume implicitly that all would have been lost. Andam et al. (2008) looked at the effectiveness of PAs in reducing deforestation in Costa Rica, and found that “protection reduced deforestation: approximately 10% of the protected forests would have been deforested had they not been protected”, as opposed to earlier estimates of avoided deforestation of more than 75%. A review of common policy instruments for conservation in tropical developing countries – namely PAs, Payments for Ecosystem Services (PES) and decentralisation – found few studies that demonstrate causally the effectiveness of those instruments (Miteva et al. 2012).
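To make the counterfactual arithmetic concrete, here’s a toy sketch (all numbers invented for illustration – these are not Andam et al.’s actual data): avoided deforestation is the estimated counterfactual loss minus the observed loss, and implicitly assuming the whole area would otherwise have been cleared wildly inflates the estimate.

```python
# Toy illustration of counterfactual thinking (all numbers invented).
# Figures are percentage points of the protected forest area.

observed_loss = 2         # % actually deforested despite protection
counterfactual_loss = 12  # % estimated lost had there been no PA

# Avoided deforestation = counterfactual loss - observed loss.
avoided = counterfactual_loss - observed_loss
print(avoided)  # 10

# The implicit assumption "all would have been lost" uses 100% as the
# counterfactual, which inflates the estimated impact of protection.
naive_avoided = 100 - observed_loss
print(naive_avoided)  # 98
```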
Causal inference techniques (which I attended a crash course on in my first week here, given by Paul Ferraro, and which I found really confusing/mind-blowing initially) are explained rather well in the Ferraro and Pattanayak (2006) paper and in this Ferraro (2009) paper. To give a brief overview (from the little that I know now), it involves considering the counterfactual outcome – what the result would have been if the area wasn’t protected – and selection biases. Why was that area chosen for protection? Was it because it was on a steeper slope and unsuitable for agriculture anyway, so that even in the absence of protection it probably wouldn’t have been deforested? There are methods that can be used to estimate the counterfactual outcome (since it is unobservable), known as quasi-experimental methods (cos you often can’t conduct actual experiments like randomised controlled trials); they try to mimic experiments and get objective results. Some of the popular methods include matching (trying to compare apples with apples) and difference-in-differences. I am not familiar with the actual mechanics, but this Ferraro and Hanauer (2014) paper will help explain it all.
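As a rough sketch of what matching does (made-up parcel data, not drawn from any of the papers above): each protected parcel is paired with the unprotected parcel most similar in slope, so outcomes are compared “apples with apples” instead of pooling parcels with very different underlying deforestation risk.

```python
# Minimal nearest-neighbour matching sketch (all data invented).
# Each parcel is (slope in degrees, deforested: 1 = cleared, 0 = forest).
# Protected parcels sit on steeper slopes - the selection bias above.
protected = [(30, 0), (25, 0), (28, 1)]
unprotected = [(5, 1), (27, 0), (10, 1), (24, 1)]

def mean(xs):
    return sum(xs) / len(xs)

# Naive estimate: difference in mean deforestation rates between groups.
naive = mean([d for _, d in protected]) - mean([d for _, d in unprotected])

# Matched estimate: pair each protected parcel with the unprotected
# parcel closest in slope, then average the pairwise outcome differences.
effects = []
for slope, outcome in protected:
    nearest = min(unprotected, key=lambda u: abs(u[0] - slope))
    effects.append(outcome - nearest[1])
matched = mean(effects)

print(round(naive, 2), matched)  # -0.42 0.0
```

In this toy example the naive comparison suggests protection cut deforestation by about 42 percentage points, but once you compare parcels with similar slopes the estimated effect is zero – the steep parcels were unlikely to be cleared anyway.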
I’m not saying that oh wth conservation is ineffective and pointless, why do we even bother (though sometimes I do wonder and feel like that :/). But in conservation, we usually want to “save the world”/”leave the world a better place”, and we want to know that what we’re doing is effective and achieving our desired outcome. These techniques are being used increasingly, and I find it all rather exciting (:
They tell us if the policies that have been implemented are working the way we intended them to. And of course, all this is good for places where policies are actually made based on science. But it seems all too often that decisions are made based on everything apart from science. Politics, randomness and money all factor in, and in all likelihood, even if a PA only reduces deforestation by 10% instead of 75%, that’s still better than having a plantation in its stead. After all, a PA may serve many purposes – carbon sequestration if it’s a terrestrial forest, perhaps, but also biodiversity, human recreation, water regulation, coastal protection or various other ecosystem services.
In their paper, Miteva et al. (2012) also “emphasize[d] the need for a more advanced Conservation Evaluation 2.0 that seeks to measure how programme impacts vary by socio-political and bio-physical context, to track economic and environmental impacts jointly, to identify spatial spillover effects to untargeted areas, and to use theories of change to characterize causal mechanisms that can guide the collection of data and the interpretation of results.” Conservation is a vast field (even if people on the outside think we’re all just tree huggers), and I knew even 5 years ago that saying I wanted to be a “conservationist” meant little. Conservation evaluation appeals to me, particularly working at the science–policy interface to make sure science gets translated into policy and that the policy is effective. Possingham (2012) made the point “that conservation needs more analysts, not more field data”, but the former is a lot less sexy and appealing to young aspiring conservation researchers (like me). And I think that’s quite true. I see the need for conservation evaluation and all that, but I really enjoy fieldwork, and would love to be able to combine the two. Not having fieldwork would also mean a lot fewer photos in my posts and a lot more text, like this post!
[Update 29 July 2015 9.20am]
For more information on this topic in non-scientific writing, check out this post on SNAP magazine (which is brilliant – don’t know how I didn’t find out about it earlier) by Paul Ferraro on ‘Nature & Prosperity: The Evidence We Still Need & The Right Questions to Ask’, as well as a YouTube video of him giving a lecture on the same topic, if you’re more of an auditory learner…