Authors: Robert Kerrison, Christian von Wagner, Lesley McGregor
Introduction
Systematic reviews enable researchers to synthesise information from multiple studies in order to reach a consensus. One of the major limitations of systematic reviews, however, is that they generally take a long time to perform (~1–2 years; Higgins and Green, 2011). Often, an answer to a question is required quickly, or the resources for a full systematic review are not available. In such instances, researchers can perform what is known as a ‘rapid review’: a specific kind of review in which steps of the systematic review process are simplified or omitted.
At present, there are no formal guidelines describing how to perform a rapid review. A number of methods have been suggested (Tricco et al., 2015), but none is recognised as ‘best practice’. In this blog, we describe our experience of conducting a rapid review, the obstacles encountered, and what we would do differently next time.
For context, our review was performed as part of a wider project funded by Yorkshire Cancer Research. The aim of the project was to develop and test interventions to promote flexible sigmoidoscopy (‘bowel scope’) screening use in Hull and East Riding. The review was intended to inform the development of the interventions by identifying possible reasons for low uptake.
Obstacles
Our first task was to select an approach from the plethora of options described in the extant literature. Because many rapid reviews are criticised for not providing a rationale for terminating their search at a specific point (Featherstone et al., 2015), we opted for a staged approach, previously described by Duffy and colleagues (Duffy et al., 2017), in which researchers continue to expand their search until fewer than 1% of the articles retrieved are eligible upon title and abstract review. The major assumption is that, if successive expansions yield diminishing numbers of potentially eligible publications, and the most recent expansion adds relatively little to the pool, stopping the expansion at that point is unlikely to lead to a major loss of information.
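To make the stopping rule concrete, here is a minimal sketch in Python. It is our own illustration rather than code from Duffy and colleagues, it assumes the eligibility rate is calculated per expansion stage, and the stage counts are invented for the example:

```python
# Staged stopping rule: expand the search until fewer than 1% of the
# articles retrieved by an expansion are eligible on title/abstract review.
# The stage data below are hypothetical, purely for illustration.
stages = [
    # (articles retrieved by this expansion, judged potentially eligible)
    (250, 40),   # 16% eligible  -> keep expanding
    (400, 12),   # 3% eligible   -> keep expanding
    (600, 5),    # ~0.8% eligible -> below the 1% threshold, stop here
]

def stop_stage(stages, threshold=0.01):
    """Return the index of the first expansion whose eligibility rate
    falls below the threshold, or None if the search should continue."""
    for i, (retrieved, eligible) in enumerate(stages):
        if retrieved and eligible / retrieved < threshold:
            return i
    return None

print(stop_stage(stages))  # -> 2: terminate the search at the third expansion
```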
After deciding on an approach, our next task was to ‘iron out’ any kinks with the method selected. Several aspects of the review method were not fully detailed by Duffy and colleagues in their paper, and therefore needed to be addressed. These included: 1) how the authors selected search terms for the initial search, 2) how they selected the combination and order in which search terms were added to successive searches, 3) whether they restricted search terms to titles and abstracts, 4) how many authors screened titles and abstracts and 5) if two or more authors reviewed titles and abstracts, how disagreements between reviewers were resolved.
Through discussion, we agreed that: 1) the initial search should include key terms from the research question, 2) successive searches should include one additional term analogous to each of those included in the initial search (to ensure a large number of new papers was obtained), 3) the combination and order in which search terms were added to successive searches should be those giving the greatest number of papers (to ensure that the search was not terminated prematurely), 4) search terms should be restricted to titles and abstracts, 5) titles and abstracts should be reviewed by at least two reviewers and 6) disagreements between reviewers should be resolved through discussion (see Kerrison et al., 2019, for full details of the method used).
Experience
Having agreed an approach, and ironed out any issues with it, we were then faced with the task of performing the review itself. While this took less time than a traditional systematic review, it was still a lengthy process (approx. 4 months). As per the systematic method, we were required to screen hundreds of titles and abstracts and extract data from many full-text articles. Perhaps the most time-consuming aspect of the entire review was the process of manually entering the many different combinations of search terms to see which gave the largest number of papers for review at each stage. It is possible that, in the future, a computer program could be developed to automate this process; however, this is only likely to happen if the method is widely accepted by the research community.
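As a rough illustration of what such automation might look like, the sketch below enumerates candidate term combinations and keeps the one returning the most records. The count_results function is a stand-in we invented for the example; in practice it would wrap a bibliographic database API (e.g. PubMed’s E-utilities), which we do not implement here:

```python
from itertools import combinations

# Stand-in for a real database query (e.g. via PubMed's E-utilities);
# it returns made-up hit counts so that the sketch runs end to end.
FAKE_COUNTS = {
    frozenset({"sigmoidoscopy", "uptake", "screening"}): 310,
    frozenset({"sigmoidoscopy", "uptake", "participation"}): 520,
}

def count_results(terms):
    return FAKE_COUNTS.get(frozenset(terms), 0)

def best_expansion(base_terms, candidate_terms, n_new=1):
    """Try adding n_new candidate term(s) to the base search and return
    the expanded query yielding the greatest number of papers, mirroring
    the manual process described above."""
    best_query, best_count = None, -1
    for combo in combinations(candidate_terms, n_new):
        query = set(base_terms) | set(combo)
        count = count_results(query)
        if count > best_count:
            best_query, best_count = query, count
    return best_query, best_count

query, hits = best_expansion(
    base_terms=["sigmoidoscopy", "uptake"],
    candidate_terms=["screening", "participation"],
)
print(sorted(query), hits)  # the expansion returning the most records wins
```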
After performing the review, we submitted the results for publication in peer-reviewed journals. Having never previously performed a rapid review, we were uncertain how it would be received. Disappointingly, our initial submission was rejected, though we did receive some helpful comments from the reviewers. While we were slightly discouraged, we decided to resubmit our article to Preventive Medicine, where it received positive reviews and, after major revisions, was accepted for publication.
Next time
So, what would we do differently next time? For a start, we’d consider using broader search terms. Our searches detected only 52% of the papers ultimately included before we searched the reference lists of selected papers. We think the main reason for this is that search terms were restricted to titles and abstracts, which often did not specifically mention ‘flexible sigmoidoscopy’ (or variants thereof). Instead, most papers simply referred to the predictors of colorectal cancer screening in general in the abstract (key words we had not included in our search, in order to reduce the number of irrelevant papers reviewed), and then to the predictors of each test in the main text. This problem is likely to repeat itself in other contexts (e.g. diagnostics and surveillance).
Another key change we would make would be to include qualitative studies, together with appropriate search terms to identify them. Employing a mixed-methods approach would help explain some of the associations observed, and thereby inform how best to develop interventions to address inequalities in uptake.
Final thoughts
Conducting a ‘rapid’ (4 months!) review has been an enjoyable experience. Like any research, it has, at times, been difficult. The lack of formal guidance, of the kind available for many other forms of research today, made the process perhaps harder than it needed to be. With rapid reviews becoming increasingly common, it is our hope that this blog and our paper will help make the process easier for others considering rapid reviews in the future.
Acknowledgements
This study was funded by Yorkshire Cancer Research (registered charity 516898; grant number UCL407).
References
Duffy SW, et al. (2017) Rapid review of evaluation of interventions to improve participation in cancer screening services. Journal of Medical Screening 24(3): 127-145.
Featherstone RM, Dryden DM, Foisy M, et al. (2015) Advancing knowledge of rapid reviews: an analysis of results, conclusions and recommendations from published review articles examining rapid reviews. Systematic Reviews 4(1): 50.
Higgins JPT, Green S (eds.) (2011) Cochrane Handbook for Systematic Reviews of Interventions, version 5.1.0. The Cochrane Collaboration.
Kerrison RS, von Wagner C, Green T, Winfield M, Macleod U, Hughes M, Rees C, Duffy S, McGregor L (2019) Rapid review of factors associated with flexible sigmoidoscopy screening use. Preventive Medicine.
Tricco AC, Antony J, Zarin W, Strifler L, Ghassemi M, Ivory J, Perrier L, Hutton B, Moher D, Straus SE (2015) A scoping review of rapid review methods. BMC Medicine 13(1): 224.