Journal of the American Medical Informatics Association (JAMIA). 2021 Jan 23;28(4):766–771. doi: 10.1093/jamia/ocaa232

Optimizing a literature surveillance strategy to retrieve sound overall prognosis and risk assessment model papers

Patricia L Kavanagh 1,2, Francine Frater 1, Tamara Navarro 3, Peter LaVita 1, Rick Parrish 3, Alfonso Iorio 3,4
PMCID: PMC7973466  PMID: 33484123

Abstract

Objective

Our aim was to develop an efficient search strategy for prognostic studies and clinical prediction guides (CPGs), optimally balancing sensitivity and precision while remaining independent of Medical Subject Headings (MeSH) terms, as relying on such terms may miss the most current literature.

Materials and Methods

We combined 2 Hedges-based search strategies, modified to remove MeSH terms for overall prognostic studies and CPGs, and ran the search on 269 journals. We read abstracts from a random subset of retrieved references until ≥ 20 per journal were reviewed and classified them as positive when fulfilling standardized quality criteria, thereby assembling a standard dataset used to calibrate the search strategy. We determined performance characteristics of our new search strategy against the Hedges standard and performance characteristics of published search strategies against the standard dataset.

Results

Our search strategy retrieved 16 089 references from 269 journals during our study period. One hundred fifty-four journals yielded ≥ 20 references and ≥ 1 prognostic study or CPG. Against the Hedges standard, the new search strategy had sensitivity/specificity/precision/accuracy of 84%/80%/2%/80%, respectively. Existing published strategies tested against our standard dataset had sensitivities of 36%–94% and precision of 5%–10%.

Discussion

We developed a new search strategy to identify overall prognosis studies and CPGs independent of MeSH terms. These studies are important for medical decision-making, as they identify specific populations and individuals who may benefit from interventions.

Conclusion

Our results may benefit literature surveillance and clinical guideline efforts, as our search strategy performs as well as published search strategies while capturing literature at the time of publication.

Keywords: prognosis, literature search, search strategy, sensitivity, specificity, updating

INTRODUCTION

In the medical community, there is growing demand for evidence-based resources to support clinical decision-making in order to deliver optimal and high-value care,1–3 such as guidelines developed using the GRADE methodology.4 Optimal, high-value care originates from 3 primary categories of information needed for evidence-based clinical practice—diagnosis, treatment, and prognosis—in order to define who will benefit from which intervention.5,6 Diagnostic research aims to properly identify patients with a specific clinical condition, while treatment research seeks to establish which interventions are associated with a relative increase in the likelihood of positive outcomes (or a reduction in the likelihood of negative outcomes). Prognostic studies include those on overall prognosis, which measure the risk of future events in a broadly defined population with a specific medical condition.7 Another prognosis study type is the development of risk assessment models, commonly called clinical prediction guides (CPGs), which combine patient and disease characteristics to identify the risk of future events for an individual with a specific medical condition.8–10 CPGs constitute the basis for practicing personalized medicine, optimizing the use of effective interventions and potentially reducing waste of health resources.11

The volume of published medical literature has steadily grown over time, making it difficult for guideline developers and researchers to stay abreast of newly generated knowledge.12 Retrieving studies on diagnosis, treatment, and prognosis is a critical step toward producing clinically relevant and trustworthy recommendations. Database search strategies for studies on diagnosis13–15 and treatments16–18 have been developed and validated in the past. Previous efforts to derive and validate search strategies for prognostic studies19–21 have been hampered by the relative paucity of publications and the lack of standardization, with reporting22,23 and appraisal guidance7,24–27 becoming available only recently. Also, most of the proposed search strategies use Medical Subject Heading (MeSH) terms, a controlled vocabulary used by the National Library of Medicine to index articles for PubMed and MEDLINE. However, MeSH terms may only become available 3 weeks to 7 months after the initial publication date, depending on the discipline and journal impact factor,28,29 thereby decreasing the sensitivity of commonly used search strategies like the one derived as part of the Hedges process.19,20 Whether planning a guideline development process, maintaining an ongoing literature surveillance service, or searching for specific information to manage an individual case, minimizing the number of references needed to read (NNR) while capturing recent, relevant evidence is critical.

OBJECTIVE

This article describes the derivation and validation of a strategy to retrieve overall prognosis studies and CPGs that does not rely on MeSH terms. We focused on improving the specificity and precision of the new search strategy, trading off some sensitivity, in order to limit the NNR and increase efficiency. This would be an advance for the field, as most published search strategies for prognostic studies rely on MeSH terms and may miss recently published literature.

MATERIALS AND METHODS

Search strategy development

We focused on “method” terms, without using any content term, and limited the search to the journals currently included in the literature surveillance process for DynaMed (EBSCO Health, Ipswich, MA). We chose PubMed as our target database. To develop our search strategy, we used the Hedges database30,31 which was designed for the purpose of developing and testing optimal search strategies to gather clinically relevant and methodologically robust references. For this study, we used the set of articles classified as prognosis or CPGs from 78 journals in the Hedges database.30 We combined terms from the Hedges “sensitive” search strategies for CPGs and overall prognosis studies.32,33 MeSH terms were removed and replaced with similar text terms yielding the highest sensitivity in the Hedges database. Using a set of 100 references fulfilling our criteria (see Reading Criteria and Training section below), we explored the impact of introducing search terms using the semantic text mining approach proposed by the Canadian Agency for Drugs and Technologies in Health34 and the methodological search strategy development and evaluation strategies proposed by Jenkins.35 First, the value of including single terms with the highest predictive values34,35 was assessed. Second, PubMed PubReMiner36 was used to identify frequently occurring terms in the title and abstract of references captured in the first phase of the study. Terms were combined in a stepwise fashion using the Boolean operator “OR.” Third, Voyant37 was used to perform a frequency analysis of phrases in the title and abstract and to identify words in proximity that might be used to build the search strategy. Various candidate strategy combinations were tested in the Hedges database to maximize specificity and precision, and the best combination became our candidate search strategy used in all subsequent analyses detailed below.
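Because the final method block (shown in Table 1) contains only text-word terms, it can be assembled and combined with a journal filter and date range as a plain PubMed query string. A minimal sketch in Python, where the journal names and dates are illustrative placeholders rather than the study's actual 269-journal DynaMed list:

```python
# Sketch: assembling the MeSH-free method block as a PubMed query string.
# METHOD_TERMS are the new strategy's final terms (Table 1); the journal
# names and date range below are illustrative placeholders.

METHOD_TERMS = [
    "prognos*[TIAB]", "cohort[TIAB]", "validat*[TIAB]",
    "predict*[TIAB]", "mortality[TIAB]", "follow up[TIAB]",
]

def build_query(journals, start="2019/03/01", end="2019/10/31"):
    """OR the method terms together, then AND with journal and date filters."""
    method_block = " OR ".join(METHOD_TERMS)
    journal_block = " OR ".join(f'"{j}"[Journal]' for j in journals)
    return f"({method_block}) AND ({journal_block}) AND {start}:{end}[dp]"

query = build_query(["BMJ", "Ann Intern Med"])
```

The resulting string can be pasted into the PubMed search box or passed to the NCBI E-utilities ESearch endpoint.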

Reading criteria and training

Criteria for assessing overall prognosis studies and CPGs as sound were adapted from the McMaster Plus critical reading criteria.38 Overall prognosis studies were categorized as sound when reporting data from an inception cohort of patients at a common, early, well-defined stage of disease who are at risk of developing a clinically important outcome. CPGs were assessed as sound if studies reported the validation (with or without derivation) or impact assessment of a risk assessment model including ≥ 2 prognostic factors intended for clinical use. Studies assessing individual prognostic factors were excluded. An expert reader (FF) was trained to identify sound references until an agreement of 0.9 was reached when compared to the assessment of the same articles independently performed as part of the McMaster PLUS process. When the expert reader (FF) was in doubt, she would flag references as “uncertain” and classification was made by panel assessment (PK, AI). The set of 100 true positive references used to refine our search strategy were identified during this training period.

Reference classification assessment

Our new search strategy was run weekly, from March to October 2019, on 269 journals included in the DynaMed (EBSCO Health, Ipswich, MA) literature surveillance process. Retrieved references were uploaded into a reference management system, DistillerSR (Evidence Partners, Ottawa, Canada). Using the random sort function, we read until ≥ 20 random abstracts per journal were reviewed. During this search period, the search retrieved fewer than the required 20 references from 15 journals. Therefore, for this subset only, we expanded the search period back to September 2018–March 2019. If 20 references were still not found in this expanded timeframe, the journal and its references were removed from the dataset.

Based solely on data presented in the abstract, the expert reader used a standardized data collection form and classified the reference status (included, excluded, or uncertain) and type (overall prognosis or CPG). When the abstract did not provide sufficient information to confirm eligibility, the reference was marked as uncertain and adjudicated by panel discussion (FF, AI, PK). Annotation was performed for all abstracts that were included or deemed uncertain, specifically highlighting the sentences that led to this decision. For the subset of references retrieved from journals surveyed both for this study and McMaster PLUS,39 the classification in the PLUS database was retrieved and compared to the study data; discrepancies were adjudicated by panel discussion (FF, AI, PK).

Search strategy assessment

We measured the performance characteristics of our search strategy against the Hedges database30 by comparing the categorization of study type (overall prognosis or CPG) and the quality (sound or not sound) listed in Hedges. Details about calculation of performance characteristics are provided in Supplementary Appendix SA. Of note, we had to address a critical difference in the process used to categorize references in the Hedges database compared to our process, specifically, that diagnostic CPGs were classified as true positives for the CPG category in Hedges. To address this issue, we asked our expert reviewer to classify a pool of references using our assessment criteria containing, among others, the CPGs missed by our search strategy when tested against the Hedges database. The expert reader was blinded to the original assignment in the Hedges database.
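The performance characteristics used throughout follow the conventional 2 × 2 contingency-table definitions; the study's exact calculations are in Supplementary Appendix SA, so the following is a sketch of the standard formulation rather than a reproduction of the appendix:

```python
# Sketch: conventional 2x2 contingency-table definitions behind the
# reported sensitivity, specificity, precision, and accuracy figures.

def performance(tp, fp, fn, tn):
    """tp: sound and retrieved; fp: not sound but retrieved;
    fn: sound but missed; tn: not sound and not retrieved."""
    return {
        "sensitivity": tp / (tp + fn),  # share of sound studies retrieved
        "specificity": tn / (tn + fp),  # share of other studies excluded
        "precision": tp / (tp + fp),    # share of retrieved studies that are sound
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }
```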

In addition, we compared the search strategy derived in this study to 7 strategies previously published in the literature.21,40–43 We ran each of the published strategies in PubMed limited to the same search dates as used during our reference classification assessment process (see above). This allowed us to compare how many citations (true and false positives) would have been retrieved with each of the other published search strategies over the same period of time. From this, we calculated the performance of each filter when compared to our standard dataset. We also compared the total number of references that would be retrieved over a 1-year period by each of the published search strategies as compared to our new one using 2018 as the test year.

Expected journal yield

We applied our new search strategy to journals yielding at least 1 overall prognosis study or CPG meeting our inclusion criteria over a 1-year period (January–December 2018) in order to estimate the expected contribution of prognostic evidence from each. We used the estimated NNR for each journal (expected number of true positive references as a result of reading that journal) and calculated the pooled NNR for the set of journals used in this study.
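The NNR is the reciprocal of precision, and a pooled NNR follows from summing references read and positives found across journals. A sketch with made-up journal names and counts (not study data):

```python
# Sketch: NNR (number needed to read) per journal and pooled across a
# journal set. "Journal A"/"Journal B" and their counts are illustrative.

def nnr(read, positives):
    """Expected number of references read per sound study found."""
    return read / positives

counts = {"Journal A": (40, 4), "Journal B": (60, 1)}  # (references read, positives)
per_journal = {name: nnr(r, p) for name, (r, p) in counts.items()}
pooled = nnr(sum(r for r, _ in counts.values()),
             sum(p for _, p in counts.values()))
```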

Statistical consideration

Based on the relative proportion of published prognostic references observed in the McMaster PLUS database, we expected to retrieve approximately 200–300 sound references from reviewing approximately 5000 references, which we deemed sufficient to estimate the overall performance given the experience in the Hedges database.40 Assuming independence of the samples, reading 20 references per journal would predict with 95% certainty that the NNR is ≤ 7.
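One plausible reading of the 95% figure, offered as an assumption since the article does not show the derivation: if a journal's true precision were 1/7 (ie, NNR = 7), the probability that 20 independently sampled references contain no sound study is (1 − 1/7)^20, which falls just under 5%, so finding at least 1 positive among 20 reads is consistent with NNR ≤ 7 at roughly 95% certainty.

```python
# Sketch (assumed derivation, not shown in the article): probability of
# reading 20 references and finding no sound study when true precision
# is 1/7, ie, NNR = 7.

p_miss = (1 - 1 / 7) ** 20  # just under the 5% threshold
```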

RESULTS

Using our new search strategy, 16 089 references were retrieved from 269 journals. Of these, 6907 randomly selected references were appraised, yielding 285 (4%) sound studies, of which 147 (52%) were overall prognosis studies and 138 (48%) were CPGs. At least 1 sound overall prognosis study or CPG was identified at abstract review from the 154 journals that provided ≥ 20 references. To compare the reference classification performed in this study to a published gold standard, we assessed the 2969 (42% of total) references that were also included in the McMaster PLUS database and found that only 121 (4%) had a discrepant classification as a sound overall prognosis study or CPG.

Performance characteristics for our new search strategy are reported in Table 1. The sensitivity of our search strategy against the Hedges database (containing references from 2011) was 84% for overall prognosis studies and CPGs combined. Compared to the Hedges sensitive strategies, the new search strategy was slightly less sensitive for overall prognosis (82% vs 90%) and for CPGs (90% vs 96%).

Table 1.

Overall performance characteristics of the new search strategy against Hedges

| Search strategy | Sensitivity (%) | Specificity (%) | Accuracy (%) | Precision (%)a |
| --- | --- | --- | --- | --- |
| Performance in retrieving prognosis or CPGs | | | | |
| New strategy: prognos*[TIAB] OR cohort[TIAB] OR validat*[TIAB] OR predict*[TIAB] OR mortality[TIAB] OR follow up[TIAB] | 84 | 80 | 80 | 2 |
| Performance in retrieving prognosis | | | | |
| New strategy | 82 | 80 | 79 | 1.5 |
| Hedges sensitive prognosis strategy: incidence[MeSH: noexp] OR mortality[MeSH Terms] OR follow up studies[MeSH: noexp] OR prognos*[Text Word] OR predict*[Text Word] OR course*[Text Word] | 90 | 80 | 80 | 2 |
| Performance in retrieving CPGs | | | | |
| New strategy | 90 | 79 | 79 | 0.8 |
| Hedges CPG strategy: predict*[Title/Abstract] OR predictive value of tests[MeSH Term] OR scor*[Title/Abstract] OR observ*[Title/Abstract] OR observer variation[MeSH Term] | 96 | 79 | 79 | 1 |

Abbreviation: CPG, clinical prediction guide.

a Performance characteristics were assessed against the Hedges 2011 database.30 The performance characteristics of the sensitive Hedges search strategies (including MeSH terms) are presented for comparison.

Table 2 shows the performance of previously published search strategies when assessed against our standard set of 285 positive and 6622 negative references. Only 1 of the existing strategies outperformed our new search strategy with respect to precision and NNR (ie, 1/precision).21

Table 2.

Performance characteristics of the new search strategy vs published search strategies

| Search strategy | Sensitivity (%) | Specificity (%) | Accuracy (%) | Precision (%) |
| --- | --- | --- | --- | --- |
| New strategy | NAa | 19 | 22 | 5 |
| Ingui42 | 82 | 43 | 45 | 6 |
| Hayden43 | 48 | 68 | 68 | 6 |
| Geersing21 | 36 | 86 | 84 | 10 |
| YALE–141 | 94 | 29 | 31 | 5 |
| YALE–241 | 38 | 73 | 72 | 6 |
| Teljeur/Murphy–2640 | 88 | 35 | 38 | 6 |
| Teljeur/Murphy–2240 | 66 | 58 | 58 | 6 |

Each strategy was assessed for its performance in identifying the 285 positive references among the 6907 in our standard set.

a Because the standard set was assembled from references retrieved by the new strategy itself, its sensitivity could not be calculated.

Finally, we examined the impact of journal selection on search strategy performance. When all 269 journals were included, the new strategy had an estimated NNR of 20 (Table 3). See Supplementary Appendix SB for the performance of the new search strategy for the 154 journals, ordered by NNR, that produced at least 1 sound overall prognosis article or CPG during our original search period. In addition, Supplementary Appendix SB details the number of references retrieved when our search strategy was run for January–December 2018, alongside the expected number of positive references for that same period.

Table 3.

Estimated yield of the new search strategy vs published search strategies

| Search strategy | NNR | 2018 yield, no journal restrictiona | 2018 yield, selected journalsb | Expected yearly yield, journal subset |
| --- | --- | --- | --- | --- |
| New strategy | 20 | 303 263 | 21 676 | 1084 |
| Ingui42 | 17 | 376 848 | 21 041 | 1262 |
| Hayden43 | 17 | 179 050 | 20 953 | 1257 |
| Geersing21 | 10 | 72 679 | 5213 | 521 |
| YALE–141 | 20 | 410 110 | 34 680 | 1734 |
| YALE–241 | 17 | 204 225 | 24 444 | 1467 |
| Teljeur/Murphy–2640 | 17 | 535 412 | 39 337 | 2360 |
| Teljeur/Murphy–2240 | 17 | 187 062 | 12 910 | 775 |

Abbreviations: CPG, clinical prediction guide; NNR, number needed to read.

a Yield = strategy AND 2018/01/01:2018/12/31[dp] AND 269 journals.

b Yield = strategy AND 2018/01/01:2018/12/31[dp] AND the 154 selected journals that returned ≥ 20 references for prognosis and CPGs.

DISCUSSION

We have developed and validated a precise and efficient (ie, low NNR) search strategy, independent of MeSH terms, for retrieving sound overall prognosis studies and CPGs. We tested this strategy on a core set of clinical journals and found that it compares favorably with the previously published Hedges strategies and other published strategies.21,31,40–43 We believe that this strategy can be used by others who are looking to incorporate the most current prognostic evidence into surveillance processes.

Our proposed search strategy might be adapted to various scopes. When resources are not a limitation and inclusiveness is the highest value, such as when performing a systematic review for a question related to prognosis, our method block coupled with content terms, with or without limitations as to publication type or date, can provide sufficient sensitivity. When searching for prognostic evidence at the point of care or in the framework of a guideline development process, combining our method block with the appropriate content block and restricting the search to the relevant specialty journals or to a core set of clinical journals can significantly reduce the workload of screening. Indeed, using search strategies such as the one developed in this study could help guideline developers in their struggle to keep guidelines current.39,44–51

Our article has both strengths and limitations. With respect to strengths, our new search strategy has been assessed for its capacity to support retrieval of sound clinical evidence in the field of overall prognosis and CPGs, both of which are directly applicable to medical decision-making. To the best of our knowledge, previously proposed search filters focused on the 2 categories separately, or included prognostic factor research, which is less applicable to clinical practice.21,40–43 Therefore, when the scope of searching is to support production of clinical recommendations, our proposed strategy may be more efficient. Second, our new search strategy is independent of MeSH terms, which ensures retrieval of recently published and not yet indexed citations that would be missed by using standard strategies52,53 or relying on systematic reviews.47,54–56 Third, we have explicitly explored the impact on precision of focusing on a subset of journals more likely to publish prognostic studies, which has not been done previously. With the proliferation of medical journals, it becomes critical to evaluate the benefit and cost of searching the universe of PubMed and other databases. Indeed, the assumption that all reports of trials about a specific intervention need to be retrieved and analyzed to minimize bias is considered valid for experimental evidence (ie, randomized controlled trials); how relevant such completeness is for an observational field like prognosis is unknown. Therefore, focusing on a subset of journals might allow us to capture and use the best prognostic evidence without inflating the budget and resource requirements to unsustainable levels. To this end, our article provides an empirical, objective measure of the performance of a large set as well as a smaller subset of medical journals.

Our article also has limitations. First, this was a pragmatic study in which we made pragmatic decisions in sizing the study (eg, reviewing 20 references per journal). However, our set is almost as large as the high-quality studies included in the prognostic segment of the Hedges database, which contains 1781 references classified as either prognosis (1547, of which 190 were assessed as high quality) or CPGs (234, of which 91 were assessed as high quality). In addition, the robustness of the search strings derived on the original Hedges database was confirmed twice,30,57 suggesting that a sample size of approximately 300 high-quality studies would allow for robust inference. Second, our objective was to improve the specificity and precision of the search strategy for overall prognosis studies and CPGs rather than focus on its sensitivity. Therefore, we focused on the return set of our empirically generated search strategy and trimmed it to reduce the "noise" of irrelevant studies without losing significant sensitivity. Although we did not expect to improve sensitivity over that of the Hedges search strategy, we did find that our final search strategy had not lost sensitivity. Third, we did not perform duplicate classification for all the references in our dataset. However, both the initial calibration of the appraiser against the panel assessment and the measure of agreement for references overlapping with the McMaster PLUS project showed an interrater agreement of 0.9. Finally, our analysis was limited to a small random sample of references, which may have resulted in unstable estimates, particularly with respect to journal yield of prognostic studies, as this may vary depending on the mix of references published by each journal over a specific period of time. Additional studies on larger independent samples are needed to confirm these results.

Further assessments and possible refinements to our search strategy are likely needed. First, this search strategy needs to be validated on a larger database. As our new search strategy will be adopted as routine for the literature surveillance of the DynaMed process, newly accumulated data will allow for prospective validation and possible refinement of our estimates. In addition, we plan to review the annotation of the abstracts of all sound and uncertain references (ie, the sentences driving the choice to include or not include) from this study. This will be used to perform patient-intervention-comparator-outcome (PICO)-based coding of a subset of references to support development of an artificial intelligence/deep learning algorithm to assess if further increases in precision are possible without decreasing sensitivity of our search strategy.

CONCLUSION

We recommend using this search strategy as a valid base to build disease-specific searches to retrieve the prognostic evidence to assist in clinical decision-making. Our search strategy appears to be as sensitive as the gold standard Hedges-based search strategies for overall prognosis studies and CPGs, potentially more specific for CPGs, and highly efficient when applied to journals with a good yield for prognostic references. In addition, while the sensitivity, specificity, and accuracy of a search string do not change much with the underlying “prevalence” of good references, the precision of the search does increase significantly when there are more positives to retrieve. Therefore, when efficiency of searching is more important than comprehensiveness, applying this new search strategy to select journals known to publish the desired type of references may be a useful strategy.
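The dependence of precision on the share of sound references follows from the positive predictive value identity. A sketch using the combined sensitivity (84%) and specificity (80%) reported against Hedges in Table 1, at 2 illustrative prevalences chosen for this example:

```python
# Sketch: at fixed sensitivity and specificity, precision (positive
# predictive value) rises with the prevalence of sound references.
# Sensitivity 0.84 and specificity 0.80 are the combined figures reported
# against Hedges (Table 1); the two prevalences are illustrative.

def precision(sens, spec, prev):
    tp = sens * prev              # expected sound references retrieved
    fp = (1 - spec) * (1 - prev)  # expected other references retrieved
    return tp / (tp + fp)

low = precision(0.84, 0.80, 0.02)   # sparse journals: precision near 8%
high = precision(0.84, 0.80, 0.10)  # yield-rich subset: precision near 32%
```

The same search string thus reads very differently depending on the journal set it is pointed at, which is the rationale for restricting surveillance to high-yield journals.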

FUNDING

This work was not supported by governmental or private foundation grant funding.

AUTHOR CONTRIBUTIONS

PLK and AI designed the study, participated in data collection and analysis, and cowrote the manuscript. FF, TN, PL, and RP participated in data collection and analysis, edited drafts of the manuscript, and approved the final version.

SUPPLEMENTARY MATERIAL

Supplementary material is available at Journal of the American Medical Informatics Association online.

Supplementary Material

ocaa232_Supplementary_Data

ACKNOWLEDGMENTS

We wish to acknowledge Peter Oettgen, MD and Alan Ehrlich, MD for their support of this project.

CONFLICT OF INTEREST STATEMENT

None declared.

REFERENCES

  • 1. Kritz M, Gschwandtner M, Stefanov V, et al.  Utilization and perceived problems of online medical resources and search tools among different groups of European physicians. J Med Internet Res  2013; 15 (6): e122. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 2. Moja L, Kwag KH.  Point of care information services: a platform for self-directed continuing medical education for front line decision makers. Postgrad Med J  2015; 91 (1072): 83–91. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 3. Neumann I, Alonso-Coello P, Vandvik PO, et al.  Do clinicians want recommendations? A multicenter study comparing evidence summaries with and without GRADE recommendations. J Clin Epidemiol  2018; 99: 33–40. [DOI] [PubMed] [Google Scholar]
  • 4. Guyatt GH, Oxman AD, Vist GE, et al.  GRADE: an emerging consensus on rating quality of evidence and strength of recommendations. BMJ  2008; 336 (7650): 924–6. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 5. Guyatt G, Rennie D, Meade MO, et al.  Users’ Guides to the Medical Literature: A Manual for Evidence-Based Clinical Practice. 2nd ed. New York, NY: JAMA | McGraw-Hill; 2008. [Google Scholar]
  • 6. Glasziou P, Burls A, Gilbert R.  Evidence based medicine and the medical curriculum. BMJ  2008; 337: a1253. [DOI] [PubMed] [Google Scholar]
  • 7. Iorio A, Spencer FA, Falavigna M, et al.  Use of GRADE for assessment of evidence about prognosis: rating confidence in estimates of event rates in broad categories of patients. BMJ  2015; 350: h870. [DOI] [PubMed] [Google Scholar]
  • 8. Hingorani A, van der Windt D, Riley R, et al.  Prognosis research strategy (PROGRESS) 4: stratified medicine research. BMJ  2013; 345: e5793. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 9. Dorresteijn JAN, Visseren FLJ, Ridker PM, et al.  Estimating treatment effects for individual patients based on the results of randomised clinical trials. BMJ  2011; 343: d5888. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 10. Vickers AJ, Kattan MW, Sargent DJ.  Method for evaluating prediction models that apply the results of randomized trials to individual patients. Trials  2007; 8 (1): 14. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 11. Parikh RB, Kakad M, Bates DW.  Integrating predictive analytics into high-value care: the dawn of precision delivery. J Am Med Assoc  2016; 315 (7): 651–2. [DOI] [PubMed] [Google Scholar]
  • 12. Bastian H, Clarke M, Doust J, et al.  From Barcelona to Madrid: history and quality of update reporting of Cochrane Reviews flagged as updates in 2003 and analysed for the Barcelona Colloquium. Cochrane Colloquium Abstracts; 2011. https://abstracts.cochrane.org/2011-madrid/barcelona-madrid-history-and-quality-update-reporting-cochrane-reviews-flagged-updates Accessed March 2, 2020. [Google Scholar]
  • 13. Haynes RB, McKibbon KA, Wilczynski NL, et al.  Optimal search strategies for retrieving scientifically strong studies of diagnosis from MEDLINE: analytical survey. BMJ  2005; 330 (7501): 1179. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 14. Wilczynski NL, Haynes RB.  Indexing of diagnosis accuracy studies in MEDLINE and EMBASE. AMIA Annu Symp Proc  2007; 2007: 801–5. [PMC free article] [PubMed] [Google Scholar]
  • 15. Kastner M, Wilczynski NL, McKibbon AK, et al.  Diagnostic test systematic reviews: Bibliographic search filters (“Clinical Queries”) for diagnostic accuracy studies perform well. J Clin Epidemiol  2009; 62 (9): 974–81. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 16. Wong SS-L, Wilczynski NL, Haynes RB.  Comparison of top-performing search strategies for detecting clinically sound treatment studies and systematic reviews in MEDLINE and EMBASE. J Med Libr Assoc  2006; 94 (4): 451–5. [PMC free article] [PubMed] [Google Scholar]
  • 17. Wilczynski NL, McKibbon KA, Haynes RB.  Sensitive Clinical Queries retrieved relevant systematic reviews as well as primary studies: an analytic survey. J Clin Epidemiol  2011; 64 (12): 1341–9. [DOI] [PubMed] [Google Scholar]
  • 18. Wong SS-L, Wilczynski NL, Haynes RB.  Developing optimal search strategies for detecting clinically sound treatment studies in EMBASE. J Med Libr Assoc  2006; 94 (1): 41–7. [PMC free article] [PubMed] [Google Scholar]
  • 19. Holland JL, Wilczynski NL, Haynes RB.  Optimal search strategies for identifying sound clinical prediction studies in EMBASE. BMC Med Inform Decis Mak  2005; 5 (1): 11. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 20. Walker-Dilks C, Wilczynski NL, Haynes RB.  Cumulative Index to Nursing and Allied Health Literature search strategies for identifying methodologically sound causation and prognosis studies. Appl Nurs Res  2008; 21 (2): 98–103. [DOI] [PubMed] [Google Scholar]
  • 21. Geersing G-J, Bouwmeester W, Zuithoff P, et al.  Search filters for finding prognostic and diagnostic prediction studies in MEDLINE to enhance systematic reviews. PLoS One  2012; 7 (2): e32844. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 22. Collins GS, Reitsma JB, Altman DG, et al.  Transparent reporting of a multivariable prediction model for individual prognosis or diagnosis (TRIPOD): the TRIPOD Statement. BMC Med  2015; 13 (1): 1. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 23. Moons KGM, Altman DG, Reitsma JB, et al.  Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD): explanation and elaboration. Ann Intern Med  2015; 162 (1): W1. [DOI] [PubMed] [Google Scholar]
  • 24. Hayden J, van der Windt D, Cartwright JL, et al.  Assessing bias in studies of prognostic factors. Ann Intern Med  2013; 158 (4): 280–6. [DOI] [PubMed] [Google Scholar]
  • 25. Huguet A, Hayden J. A, Stinson J, et al.  Judging the quality of evidence in reviews of prognostic factor research: adapting the GRADE framework. Syst Rev  2013; 2 (1): 71. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 26. Wolff RF, Moons KGM, Riley RD, et al.  PROBAST: a tool to assess the risk of bias and applicability of prediction model studies. Ann Intern Med  2019; 170 (1): 51. [DOI] [PubMed] [Google Scholar]
  • 27. Foroutan F, Guyatt G, Zuk V, et al.  GRADE Guidelines 28: use of GRADE for the assessment of evidence about prognostic factors: rating certainty in identification of groups of patients with different absolute risks. J Clin Epidemiol  2020; 121: 62–70. [DOI] [PubMed] [Google Scholar]
  • 28. Irwin AN, Rackham D.  Comparison of the time-to-indexing in PubMed between biomedical journals according to impact factor, discipline, and focus. Res Social Adm Pharm  2017; 13 (2): 389–93. [DOI] [PubMed] [Google Scholar]
  • 29. Rodriguez RW.  Delay in indexing articles published in major pharmacy practice journals. Am J Health Syst Pharm  2014; 71 (4): 321–4. [DOI] [PubMed] [Google Scholar]
  • 30. Wilczynski NL, McKibbon KA, Walter SD, et al.  MEDLINE clinical queries are robust when searching in recent publishing years. J Am Med Inform Assoc  2013; 20 (2): 363–8. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 31. Wilczynski NL, Morgan D, Haynes RB.  An overview of the design and methods for retrieving high-quality studies for clinical care. BMC Med Inform Decis Mak  2005; 5 (1): 20. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 32. Wong SS-L, Wilczynski NL, Haynes RB, et al.  Developing optimal search strategies for detecting sound clinical prediction studies in MEDLINE. AMIA Annu Symp Proc  2003; 2003: 728–32. [PMC free article] [PubMed] [Google Scholar]
  • 33. Wilczynski NL, Haynes RB.  Developing optimal search strategies for detecting clinically sound prognostic studies in MEDLINE: an analytic survey. BMC Med  2004; 2 (1): 23. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 34. CADTH. Text Mining Opportunities: White Paper. Ottawa; 2018. www.cadth.ca/text-mining-opportunities-white-paper
  • 35. Jenkins M.  Evaluation of methodological search filters—a review. Health Info Libr J  2004; 21 (3): 148–63. [DOI] [PubMed] [Google Scholar]
  • 36. Anonymous. PubMed PubReMiner. 2014. https://hgserver2.amc.nl/cgi-bin/miner/miner2.cgi Accessed June 4, 2020.
  • 37. Sinclair S, Rockwell G. Voyant Tool. 2020. https://voyant-tools.org/ Accessed June 4, 2020.
  • 38. McMaster PLUS. PLUS/MORE Reading Criteria. 2012. https://hiru.mcmaster.ca/hiru/InclusionCriteria.html Accessed June 5, 2020.
  • 39. Haynes RB, Cotoi C, Holland J, et al.  Second-order peer review of the medical literature for clinical practitioners. JAMA  2006; 295 (15): 1801–8. [DOI] [PubMed] [Google Scholar]
  • 40. Keogh C, Wallace E, O'Brien KK, et al.  Optimized retrieval of primary care clinical prediction rules from MEDLINE to establish a web-based register. J Clin Epidemiol  2011; 64 (8): 848–60. [DOI] [PubMed] [Google Scholar]
  • 41. Chatterley T, Dennett L.  Utilisation of search filters in systematic reviews of prognosis questions. Health Info Libr J  2012; 29 (4): 309–22. [DOI] [PubMed] [Google Scholar]
  • 42. Ingui BJ, Rogers MAM.  Searching for clinical prediction rules in MEDLINE. J Am Med Inform Assoc  2001; 8 (4): 391–7. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 43. Hayden JA, Côté P, Bombardier C.  Evaluation of the quality of prognosis studies in systematic reviews. Ann Intern Med  2006; 144 (6): 427. [DOI] [PubMed] [Google Scholar]
  • 44. Garcia LM, Sanabria AJ, Alvarez EG, et al.  The validity of recommendations from clinical guidelines: a survival analysis. Can Med Assoc J  2014; 186 (16): 1211–9. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 45. Martínez García L, Arévalo-Rodríguez I, Solà I, et al.  Strategies for monitoring and updating clinical practice guidelines: a systematic review. Implement Sci  2012; 7: 109. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 46. Akl EA, Meerpohl JJ, Elliott J, et al.  Living systematic reviews: 4. Living guideline recommendations. J Clin Epidemiol  2017; 91: 47–53. [DOI] [PubMed] [Google Scholar]
  • 47. Martínez García L, McFarlane E, Barnes S, et al.  Updated recommendations: an assessment of NICE clinical guidelines. Implement Sci  2014; 9: 72. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 48. Martínez García L, Pardo-Hernández H, Sanabria AJ, et al.  Guideline on terminology and definitions of updating clinical guidelines: the updating glossary. J Clin Epidemiol  2018; 95: 28–33. [DOI] [PubMed] [Google Scholar]
  • 49. Martínez García L, Pardo-Hernandez H, Alonso-Coello P.  More detail is needed for updating clinical guidelines. Kidney Int  2016; 90 (3): 707–8. [DOI] [PubMed] [Google Scholar]
  • 50. Shekelle PG.  Updating practice guidelines. JAMA  2014; 311 (20): 2072–3. [DOI] [PubMed] [Google Scholar]
  • 51. Martínez García L, Pardo-Hernandez H, Superchi C, et al.  Methodological systematic review identifies major limitations in prioritization processes for updating. J Clin Epidemiol  2017; 86: 11–24. [DOI] [PubMed] [Google Scholar]
  • 52. Moher D, Tsertsvadze A, Tricco AC, et al.  A systematic review identified few methods and strategies describing when and how to update systematic reviews. J Clin Epidemiol  2007; 60 (11): 1095–104. [DOI] [PubMed] [Google Scholar]
  • 53. Becker M, Neugebauer EAM, Eikermann M.  Partial updating of clinical practice guidelines often makes more sense than full updating: a systematic review on methods and the development of an updating procedure. J Clin Epidemiol  2014; 67 (1): 33–45. [DOI] [PubMed] [Google Scholar]
  • 54. Montori VM, Wilczynski NL, Morgan D, et al.  Optimal search strategies for retrieving systematic reviews from MEDLINE: analytical survey. BMJ  2005; 330 (7482): 68. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 55. Alonso-Coello P, Martínez García L, Carrasco JM, et al.  The updating of clinical practice guidelines: insights from an international survey. Implement Sci  2011; 6: 107. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 56. Vernooij RWM, Martínez García L, Florez ID, et al.  Updated clinical guidelines experience major reporting limitations. Implement Sci  2017; 12 (1): 120. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 57. Wilczynski NL, Haynes RB.  Robustness of empirical search strategies for clinical content in MEDLINE. Proc AMIA Symp  2002: 904–8. [PMC free article] [PubMed] [Google Scholar]

Supplementary Materials

ocaa232_Supplementary_Data
