Systematic reviews seem to be all the rage in conservation these days, but often it is all talk. Why? Because systematic reviews are genuinely difficult and time-consuming, particularly given the ever-growing volume of published literature. So what is the solution? A new paper by Haddaway and colleagues in Conservation Biology argues we can still use some of the techniques from systematic reviews even when we don’t have the time or resources to go the whole hog.
Why is this important?
“Research is being published at an ever-increasing rate” argue Haddaway and colleagues, who suggest reviews are valuable to synthesise the primary literature, clarify controversies (both in terms of definitions, and supporting evidence), and to identify knowledge gaps. Reviews are increasingly used to inform conservation decisions, which have important socioeconomic and environmental implications.
However, reviews can be subject to bias and misinterpretation, which can arise from several sources.
The first is when the literature reviewed is from a non-representative sample of the literature (selection bias). This can be due to insufficient searching (e.g. of only one database, and few search terms), ad hoc inclusion of literature based on reviewer familiarity and awareness, or unclear or inappropriate criteria for inclusion.
The second is when the published literature itself is unrepresentative (publication bias), due to both authors, institutions, and journals tending to favour “positive, hypothesis-affirming, or significant results rather than negative, contentious, or non-significant findings” (Haddaway et al. 2015).
Third, published literature can also be biased if it used inappropriate study design, such as poor controls, selection bias, or pseudoreplication (i.e. low internal validity), or when it cannot be generalised to the population of interest (low external validity) (see Bilotta et al. 2014).
Misinterpretation can be a result of inappropriately weighting evidence, whether this be in meta-analysis or quantitative review (for example by not considering the quality or relevance of evidence, or the magnitude of effect sizes, sample sizes, or variability; all forms of “vote counting”), or by qualitatively discussing some literature more than others.
Also, reviews can be made more trustworthy by increasing the transparency of the methods used to derive them.
How do systematic reviews avoid bias?
Systematic reviews aim to minimise bias, be as comprehensive as possible, and be repeatable. Techniques for rigorous systematic review are: 1) publication of the protocol prior to the commencement of the review, to avoid bias induced by subsequent modifications to it; 2) searching multiple databases to gain exhaustive and comprehensive coverage of published literature, and trawling unpublished literature to determine the potential extent of publication bias; 3) iterative screening of search results using inclusion criteria, by multiple reviewers (who need to be coordinated to ensure consistent application of inclusion criteria); 4) transparent and objective appraisal of the methodological rigour of each included primary research article; 5) description and synthesis of the evidence, with reflection on implications for management, policy, and research; and finally 6) publication of results along with supplementary information that transparently documents the review process, enabling it to be repeated, evaluated, and updated.
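The multi-reviewer screening step (3 above) is commonly checked with an inter-rater agreement statistic such as Cohen’s kappa, which measures how much two screeners agree beyond what chance alone would produce. Here is a minimal sketch in Python; the eight articles and the include/exclude decisions are entirely made up for illustration:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two screeners beyond chance.

    kappa = (observed agreement - expected agreement) / (1 - expected agreement)
    """
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Proportion of items on which the two screeners agree.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each screener's marginal frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical screening decisions for eight candidate articles:
a = ["include", "include", "exclude", "exclude", "include", "exclude", "exclude", "include"]
b = ["include", "exclude", "exclude", "exclude", "include", "exclude", "include", "include"]
print(round(cohens_kappa(a, b), 2))
```

A low kappa on a pilot subset of search results is a signal that the inclusion criteria need to be clarified before screening the full set.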
As you can imagine, all this takes time (around 9-24 months), involves multiple people including both review technicians and an expert review panel, and costs a lot (~$40k – $400k) according to Haddaway et al. (2015).
What do Haddaway and colleagues propose?
While there are efforts to reduce the resource intensity of systematic reviews by developing new protocols and “rapid reviews”, Haddaway and colleagues propose there is still a place for traditional literature reviews: where the time or resource cost of a systematic review is prohibitive, or where the additional benefits of a systematic review do not outweigh its additional costs, a full systematic review is unwarranted.
Haddaway and colleagues suggest traditional reviews can be made more robust and transparent in the following ways:
- Plan questions, desired outputs, and inclusion criteria before starting, and consult with subject experts
- Include all viewpoints by searching multiple databases
- Include all details of searches conducted in supplementary information. Consider checking agreement over inclusion of articles with a colleague or expert for a small subset of search results
- Spend time developing an appropriate, comprehensive search string, and test comprehensiveness of searches based on a list of known articles
- Include searches for unpublished (gray) literature to account for publication bias
- Screen all search results with the same predetermined criteria
- Create basic critical appraisal tools to categorize articles as of low, high, or unclear quality
- Apply weighted, quantitative syntheses (meta-analysis) where possible. For qualitative analyses describe direction and magnitude of significance, including detectability (sample size or background noise vs. signal)
- Describe evidence base as a whole where possible (i.e., in figures, tables, and meta-analyses). Clearly state why some studies are not discussed at length (e.g., not relevant or low quality)
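The recommendation above to weight quantitative syntheses (rather than “vote counting”) can be illustrated with a fixed-effect, inverse-variance meta-analysis, in which precise studies (small variance) pull the pooled estimate more strongly than noisy ones. This is a minimal sketch; the effect sizes and variances are invented for illustration:

```python
import math

def inverse_variance_pool(effects, variances):
    """Fixed-effect meta-analysis: weight each study's effect by 1/variance."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    # Standard error of the pooled estimate.
    se = math.sqrt(1.0 / sum(weights))
    return pooled, se

# Invented effect sizes (e.g. log response ratios) and their variances:
effects = [0.40, 0.10, 0.25]
variances = [0.04, 0.01, 0.09]

pooled, se = inverse_variance_pool(effects, variances)
print(round(pooled, 3), round(se, 3))
```

Note how the pooled estimate sits closest to the second study’s effect (0.10), because that study has by far the smallest variance; simple vote counting would treat all three studies equally.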
Why is this relevant for new PhDs?
There is increasing pressure on new PhDs to publish their literature review at the start of their PhD. But unless they’re already familiar with their topic, this can be a hard task, particularly in interdisciplinary topics such as conservation science, as literature is often dispersed among journals from several disciplines, each with their own complement of terminology.
My first tip to new PhDs is therefore to read widely before drafting methods for a literature review, and in particular to search for existing reviews on the topic in different disciplines (especially existing systematic reviews), noting down the terminology and keywords each discipline uses; these will be invaluable when creating your search terms.
Another interesting, emerging idea is, rather than doing a traditional review, to do a text analysis instead.
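To give a flavour of what such a text analysis might look like at its very simplest, here is a term-frequency sketch over a corpus of abstracts using only Python’s standard library. The three abstracts are invented stand-ins for a real downloaded corpus, and real analyses would go well beyond raw counts (e.g. topic models):

```python
import re
from collections import Counter

# Hypothetical abstracts standing in for a downloaded corpus:
abstracts = [
    "Habitat fragmentation reduces species richness in tropical forests.",
    "Species richness declines with habitat loss and fragmentation.",
    "Protected areas mitigate habitat loss for forest species.",
]

# A tiny illustrative stopword list; real analyses use a fuller one.
STOPWORDS = {"in", "with", "and", "for", "the", "a", "of"}

def term_frequencies(texts):
    """Count word occurrences across documents, ignoring stopwords."""
    counts = Counter()
    for text in texts:
        words = re.findall(r"[a-z]+", text.lower())
        counts.update(w for w in words if w not in STOPWORDS)
    return counts

for term, n in term_frequencies(abstracts).most_common(3):
    print(term, n)
```

Even this crude count surfaces the recurring concepts (“habitat”, “species”) that a reviewer would want in their search string.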
Finally, while Haddaway and colleagues point towards some references by Koricheva and colleagues (see below) for methods on meta-analysis, I’d like to also point out some emerging theory from the causal inference literature by Bareinboim and Pearl on transportability and meta-analysis of causal effects (also see below).
References and further reading:
Haddaway NR, Woodcock P, Macura B, Collins A. 2015. Making literature reviews more reliable through application of lessons from systematic reviews. Conservation Biology. doi: 10.1111/cobi.12541
Haddaway NR, Collins A, Coughlin D, Kirk S, Cooke S, Richards R, Stewart R, Johansson S, Knight TM. 2015. Proposal for establishment of CEE rapid evidence review methods. The Collaboration for Environmental Evidence: in press.
Koricheva J, Gurevitch J. 2014. Uses and misuses of meta-analysis in plant ecology. Journal of Ecology 102:828–844.
Koricheva J, Gurevitch J, Mengersen K. 2013. Handbook of meta-analysis in ecology and evolution. Princeton University Press, New Jersey.
Bareinboim E, Pearl J. 2013. A general algorithm for deciding transportability of experimental results. Journal of Causal Inference 1:107–134.
Bilotta GS, Milner AM, Boyd IL. 2014. Quality assessment tools for evidence from environmental science. Environmental Evidence 3: http://www.environmentalevidencejournal.org/content/3/1/14.