As the Director of Consulting Operations for Maple Health Group, I conduct both targeted and systematic literature reviews (TLRs and SLRs) and work with consultants within my organization to improve the efficiency with which we conduct these reviews. Our literature reviews often require screening thousands, even tens of thousands, of citations to support activities such as evidence gap analyses, global value dossiers, and health technology assessments. Reviews must often be completed under accelerated timelines without compromising the methodological rigour these reviews require, particularly SLRs. For our organization, a platform that simplifies the screening process and allows for creative, flexible solutions to manage massive volumes of evidence is vital to our process.
Our company has chosen DistillerSR as our literature screening platform and we appreciate Evidence Partners’ commitment to advancing literature screening technology to meet the evolving needs of research organizations, such as ours.
Here are the top five ways we utilize the filter feature to optimize our SLR processes:
1. Utilizing labels during reference upload
When performing multiple SLRs to capture a variety of evidence across one disease area (e.g., clinical efficacy and safety, real-world effectiveness, humanistic, economic) it is extremely helpful to know which citations are associated with each SLR. Applying labels during the import process allows the user to upload multiple sets of reference files to one project, eliminating the need to create separate projects. These labels will enable you to track citations through each screening level and create filters to direct citations towards their corresponding SLR screening forms.
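The idea of labelling at import and then filtering by label can be sketched in a few lines of Python. This is purely an illustration of the concept, not DistillerSR's internal data model; the record fields and label names are hypothetical.

```python
# Hypothetical sketch: tag each imported batch with its SLR label so one
# project can hold references for several reviews, then filter the shared
# pool down to a single SLR's screening queue.

def label_references(references, label):
    """Attach an SLR label to every reference in an import batch."""
    return [{**ref, "labels": ref.get("labels", []) + [label]} for ref in references]

def route_by_label(references, label):
    """Filter the combined project pool to one SLR's screening queue."""
    return [ref for ref in references if label in ref.get("labels", [])]

# Two search exports imported into one shared project:
clinical_batch = label_references([{"refid": 1}, {"refid": 2}], "Clinical")
hrqol_batch = label_references([{"refid": 3}], "HRQoL")
project_pool = clinical_batch + hrqol_batch

clinical_queue = route_by_label(project_pool, "Clinical")  # refids 1 and 2
```

Because every citation carries its label through each screening level, the same filter works at title/abstract screening, full-text review, and extraction.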
2. Utilizing filters to assign screening batches to multiple reviewers
When managing a large volume of citations, assigning batches for screening across multiple reviewers is required. Filters can be used to direct a specific batch of citations, defined by a reference identification (REFID) range, to a set of reviewers for duplicate screening. As each reviewer's availability changes from day to day, or week to week, new batches can be assigned, allowing the project manager to closely monitor progress and screening rates and to allocate support and resources where needed.
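The batching logic behind this can be pictured as splitting an ordered REFID range into fixed-size chunks and cycling them through the available reviewer pairs. The sketch below is a hypothetical illustration; the reviewer names and batch size are invented, and in practice the assignment is done through DistillerSR's filter settings rather than code.

```python
# Hypothetical sketch: split a REFID range into fixed-size batches and assign
# each batch to a pair of reviewers for duplicate (two-person) screening.

def assign_batches(refids, reviewer_pairs, batch_size):
    """Map (first_refid, last_refid) batch ranges to reviewer pairs,
    cycling through the available pairs in order."""
    assignments = {}
    for i in range(0, len(refids), batch_size):
        batch = refids[i:i + batch_size]
        pair = reviewer_pairs[(i // batch_size) % len(reviewer_pairs)]
        assignments[(batch[0], batch[-1])] = pair
    return assignments

pairs = [("Ana", "Ben"), ("Cam", "Dee")]
schedule = assign_batches(list(range(1, 101)), pairs, batch_size=25)
# {(1, 25): ("Ana", "Ben"), (26, 50): ("Cam", "Dee"),
#  (51, 75): ("Ana", "Ben"), (76, 100): ("Cam", "Dee")}
```

When a reviewer's availability changes, the next batch simply gets a different pair; the REFID ranges make it easy to see exactly which citations each person has been responsible for.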
3. Assigning labels during the screening process
It is not uncommon for the selection criteria of our SLRs to start quite broad and become more refined during the screening process as nuances of the evidence are realized. A common area of refinement within PICOS (Population, Intervention, Comparator, Outcomes, Study Design) criteria is study design, as illustrated below:
[Illustration: study-design screening workflow, without labels vs. with labels]
4. Using labels and filters as risk mitigation for the dreaded ‘missed citation’
Before the use of labels and filters, we were not able to effectively move citations from one SLR to another and often spent valuable time and resources cross-checking these citations manually. When applying broad search strategies across multiple SLRs for the same disease area, it is expected that there will be overlapping (duplicate or triplicate) citations between SLRs. However, it is not uncommon for clinical trials reporting efficacy and safety to also include health-related quality of life (HRQoL) outcomes that are not referenced in the abstract, keywords, or indexed in an electronic database. Therefore, in this example, the Clinical SLR search strategy may pick up citations with HRQoL outcomes not captured in the HRQoL search. Applying a label during the screening process allows citations in the Clinical SLR that may have outcomes of interest for the HRQoL SLR to be filtered to the HRQoL form for screening. Paired with DistillerSR's 'Duplicate Citation' feature, screening forms can be set up for reviewers to recognize whether a publication filtered from another SLR is a duplicate or a unique citation, and proceed accordingly.
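The cross-check that this replaces can be thought of as flagging which filtered-in citations already exist in the target SLR's pool. The sketch below is a hypothetical illustration only; matching on a normalized title is a simplification (real deduplication would also compare DOIs, authors, and years), and none of this reflects DistillerSR's actual matching logic.

```python
# Hypothetical sketch: when citations labelled in one SLR are filtered to
# another, flag which arrive as duplicates of records already in the target
# pool. Matching here is by normalized title only, purely for illustration.

def normalize(title):
    """Lowercase and collapse whitespace so near-identical titles match."""
    return " ".join(title.lower().split())

def flag_duplicates(incoming, existing):
    """Mark each incoming citation as 'duplicate' or 'unique' against the
    target SLR's existing pool."""
    seen = {normalize(ref["title"]) for ref in existing}
    return [
        {**ref, "status": "duplicate" if normalize(ref["title"]) in seen else "unique"}
        for ref in incoming
    ]

hrqol_pool = [{"title": "Trial A: quality of life outcomes"}]
from_clinical = [
    {"title": "Trial A: Quality of Life Outcomes"},   # already in the HRQoL pool
    {"title": "Trial B efficacy and HRQoL"},          # new to the HRQoL pool
]
flagged = flag_duplicates(from_clinical, hrqol_pool)  # duplicate, unique
```

Running this kind of check early in screening, rather than at the reporting stage, is what mitigates the risk of the 'missed citation'.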
5. Using labels and filters to prioritize review and improve resource allocation
The filtering feature in DistillerSR offers great flexibility, providing the opportunity to filter by REFID range, by label, by a percentage of references, or by the answer provided to a specific form question. These options ultimately allow you to customize your filters to meet the needs of your project. When filters were first introduced, our team was 'label naïve' and regularly used the question/answer approach to filtering. Returning to the study design example discussed previously, our 'label naïve' approach included open-ended questions in our screening forms with multiple answers for categorizing study design. This approach relied on reviewers having extensive knowledge and being able to correctly identify the study design of each citation under review. We learned that this led to high conflict rates between reviewers, ultimately increasing the time and resources spent reconciling inclusion/exclusion decisions.
After a year of utilizing filters, we realized the benefit of labels and now consider our team to be 'label enlightened'. This approach has changed the way we design our forms. We now provide direct questions that target the selection criteria required for inclusion, or, easier still, the selection criteria for exclusion (often a much shorter list). Once a reviewer confirms the existence (or non-existence) of particular criteria, instructions appear in the form telling the reviewer which label to apply (based on their answer) and to confirm with a simple 'yes' that the appropriate label was applied, before moving on to the next question or form.
Utilizing labels rather than questions for our filtering needs has resulted in lower conflict rates and allows us to proceed to the next step faster than we have in the past. Additionally, returning to the study design example, by identifying high- and low-priority criteria, we can filter the study designs of most interest (e.g., RCTs) to a specific form and prioritize the review of those publications for data extraction, while the study designs of least interest (e.g., observational studies) are filtered to a separate form for review at a later date. I am personally very excited to pair this knowledge and experience with the new DistillerSR 'Re-Rank' feature to further optimize the efficiency of our SLRs.
From a methodological perspective, the filtering feature in DistillerSR provides a robust and flexible approach that allows broader research questions to be answered in a systematic, reproducible way. Additionally, when dealing with large volumes and multiple SLRs, filtering improves the ability to cross-check citations across SLRs early in the screening process, thereby mitigating the risk of missing citations and ultimately providing a better night's sleep!
From a consultancy perspective, the filtering feature in DistillerSR, paired with labelling, has increased our efficiency and resource utilization. Overall, the labelling and filtering features have revolutionized our approach to SLRs, providing us with the ability to easily assess the progress and scope of our projects in real time, anticipate our clients' needs, and develop strategic solutions for unique and shifting demands throughout any SLR project.