J Med Libr Assoc. v.106(4); 2018 Oct

A systematic approach to searching: an efficient and complete method to develop literature searches


Creating search strategies for systematic reviews, finding the best balance between sensitivity and specificity, and translating search strategies between databases is challenging. Several methods describe standards for systematic search strategies, but a consistent approach for creating an exhaustive search strategy has not yet been described in enough detail to be fully replicable. The authors have established a method that describes, step by step, the process of developing a systematic search strategy as needed in a systematic review. This method describes how single-line search strategies can be prepared in a text document by typing the search syntax (such as field codes, parentheses, and Boolean operators) before copying and pasting in the search terms (thesaurus terms and free-text synonyms). To help ensure term completeness, we developed a novel optimization technique based mainly on comparing the results retrieved by thesaurus terms with those retrieved by free-text search words to identify potentially relevant candidate search terms. Macros in Microsoft Word have been developed to convert syntaxes between databases and interfaces almost automatically. The described method can be used to create complex and comprehensive search strategies for different databases and interfaces, such as those needed when searching for relevant references for systematic reviews, and will assist both information specialists developing librarian-mediated searches and medical and health care practitioners searching for evidence to answer clinical questions.

INTRODUCTION

Librarians and information specialists are often involved in the process of preparing and completing systematic reviews (SRs), where one of their main tasks is to identify relevant references to include in the review [ 1 ]. Although several recommendations for the process of searching have been published [ 2 – 6 ], none describe the development of a systematic search strategy from start to finish.

Traditional methods of SR search strategy development and execution are highly time consuming, reportedly requiring up to 100 hours or more [ 7 , 8 ]. The authors wanted to develop systematic and exhaustive search strategies more efficiently, while preserving the high sensitivity that SR search strategies necessitate. In this article, we describe the method developed at Erasmus University Medical Center (MC) and demonstrate its use through an example search. The efficiency of the search method and outcome of 73 searches that have resulted in published reviews are described in a separate article [ 9 ].

As we aimed to describe the creation of systematic searches in full detail, the method starts at a basic level with the analysis of the research question and the creation of search terms. Readers who are new to SR searching are advised to follow all steps described. More experienced searchers can consider the basic steps to be existing knowledge that will already be part of their normal workflow, although step 4 probably differs from general practice. Experienced searchers will gain the most from reading about the novelties in the method as described in steps 10–13 and comparing the examples given in the supplementary appendix to their own practice.

CREATING A SYSTEMATIC SEARCH STRATEGY

Our methodology for planning and creating a multi-database search strategy consists of the following steps:

  1. Determine a clear and focused question
  2. Describe the articles that can answer the question
  3. Decide which key concepts address the different elements of the question
  4. Decide which elements should be used for the best results
  5. Choose an appropriate database and interface to start with
  6. Document the search process in a text document
  7. Identify appropriate index terms in the thesaurus of the first database
  8. Identify synonyms in the thesaurus
  9. Add variations in search terms
  10. Use database-appropriate syntax, with parentheses, Boolean operators, and field codes
  11. Optimize the search
  12. Evaluate the initial results
  13. Check for errors
  14. Translate to other databases
  15. Test and reiterate

Each step in the process is reflected by an example search described in the supplementary appendix .

1. Determine a clear and focused question

A systematic search can best be applied to a well-defined and precise research or clinical question. Questions that are too broad or too vague cannot be answered easily in a systematic way and will generally result in an overwhelming number of search results. On the other hand, a question that is too specific will result in too few or even zero search results. Various papers describe this process in more detail [ 10 – 12 ].

2. Describe the articles that can answer the question

Although not all clinical or research questions can be answered in the literature, the next step is to presume that the answer can indeed be found in published studies. A good starting point for a search is hypothesizing what the research that can answer the question would look like. These hypothetical (when possible, combined with known) articles can be used as guidance for constructing the search strategy.

3. Decide which key concepts address the different elements of the question

Key concepts are the topics or components that the desired articles should address, such as diseases or conditions, actions, substances, settings, domains (e.g., therapy, diagnosis, etiology), or study types. Key concepts from the research question can be grouped to create elements in the search strategy.

Elements in a search strategy do not necessarily follow the patient, intervention, comparison, outcome (PICO) structure or any other related structure. Using the PICO or another similar framework as guidance can be helpful to consider, especially in the inclusion and exclusion review stage of the SR, but this is not necessary for good search strategy development [ 13 – 15 ]. Sometimes concepts from different parts of the PICO structure can be grouped together into one search element, such as when the desired outcome is frequently described in a certain study type.

4. Decide which elements should be used for the best results

Not all elements of a research question should necessarily be used in the search strategy. Some elements are less important than others or may unnecessarily complicate or restrict a search strategy. Adding an element to a search strategy increases the chance of missing relevant references. Therefore, the number of elements in a search strategy should remain as low as possible to optimize recall.

Using the schema in Figure 1 , elements can be ordered by their specificity and importance to determine the best search approach. Whether an element is more specific or more general can be measured objectively by the number of hits retrieved in a database when searching for a key term representing that element. Depending on the research question, certain elements are more important than others. If articles (hypothetically or known) exist that can answer the question but lack a certain element in their titles, abstracts, or keywords, that element is unimportant to the question. An element can also be unimportant because of expected bias or an overlap with another element.

Figure 1. Schema for determining the optimal order of elements

Bias in elements

The choice of elements in a search strategy can introduce bias through use of overly specific terminology or terms often associated with positive outcomes. For the question “does prolonged breastfeeding improve intelligence outcomes in children?,” searching specifically for the element of duration will introduce bias, as articles that find a positive effect of prolonged breastfeeding will be much more likely to mention time factors in their titles or abstracts.

Overlapping elements

Elements in a question sometimes overlap in their meaning. Sometimes certain therapies are interventions for one specific disease. The Lichtenstein technique, for example, is a repair method for inguinal hernias. There is no need to add an element of “inguinal hernias” to a search for the effectiveness of the Lichtenstein technique. Likewise, sometimes certain diseases are only found in certain populations. Adding such an overlapping element could lead to missing relevant references.

The elements to use in a search strategy can be found in the plot of elements in Figure 1 , by following the top row from left to right. For this method, we recommend starting with the most important and specific elements. Then, continue with more general and important elements until the number of results is acceptable for screening. Determining how many results are acceptable for screening is often a matter of negotiation with the SR team.
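As a rough illustration, the ordering logic behind Figure 1 can be sketched in a few lines of Python. The element names and hit counts below are invented for demonstration; in practice, hit counts come from test searches for a key term representing each element.

```python
# Hypothetical example: ordering search elements by importance and
# specificity. Hit counts are illustrative, not real database totals.
elements = [
    {"name": "breastfeeding", "important": True,  "hits": 52_000},
    {"name": "intelligence",  "important": True,  "hits": 190_000},
    {"name": "child",         "important": False, "hits": 2_400_000},
]

def search_order(elements):
    """Important elements first; within each group, more specific
    elements (fewer hits) before more general ones (more hits)."""
    return sorted(elements, key=lambda e: (not e["important"], e["hits"]))

ordered = [e["name"] for e in search_order(elements)]
print(ordered)  # the most important and specific element comes first
```

Elements are then added to the strategy in this order until the number of results is acceptable for screening.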

5. Choose an appropriate database and interface to start with

Important factors for choosing databases to use are the coverage and the presence of a thesaurus. For medically oriented searches, the coverage and recall of Embase, which includes the MEDLINE database, are superior to those of MEDLINE [ 16 ]. Each of these two databases has its own thesaurus with its own unique definitions and structure. Because Emtree, the Embase thesaurus, contains much more specific thesaurus terms than the MEDLINE Medical Subject Headings (MeSH) thesaurus, translation from Emtree to MeSH is easier than the other way around. Therefore, we recommend starting in Embase.

MEDLINE and Embase are available through many different vendors and interfaces. The choice of an interface and primary database is often determined by the searcher’s accessibility. For our method, an interface that allows searching with proximity operators is desirable, and full functionality of the thesaurus, including explosion of narrower terms, is crucial. We recommend developing a personal workflow that always starts with one specific database and interface.

6. Document the search process in a text document

We advise designing and creating the complete search strategies in a log document, instead of directly in the database itself, to register the steps taken and to make searches accountable and reproducible. The developed search strategies can be copied and pasted into the desired databases from the log document. This way, the searcher is in control of the whole process. Any change to the search strategy should be done in the log document, assuring that the search strategy in the log is always the most recent.

7. Identify appropriate index terms in the thesaurus of the first database

Searches should start by identifying appropriate thesaurus terms for the desired elements. The thesaurus of the database is searched for matching index terms for each key concept. We advise restricting the initial terms to the most important and most relevant terms. Later in the process, more general terms can be added in the optimization process, in which the effect on the number of hits, and thus the desirability of adding these terms, can be evaluated more easily.

Several factors can complicate the identification of thesaurus terms. Sometimes, one thesaurus term is found that exactly describes a specific element. In contrast, especially in more general elements, multiple thesaurus terms can be found to describe one element. If no relevant thesaurus terms have been found for an element, free-text terms can be used, and possible thesaurus terms found in the resulting references can be added later (step 11).

Sometimes, no distinct thesaurus term is available for a specific key concept that describes the concept in enough detail. In Emtree, one thesaurus term often combines two or more elements. The easiest solution for combining these terms for a sensitive search is to use such a thesaurus term in all elements where it is relevant. Examples are given in the supplementary appendix .

8. Identify synonyms in the thesaurus

Most thesauri offer a list of synonyms on their term details page (named Synonyms in Emtree and Entry Terms in MeSH). To create a sensitive search strategy for SRs, these terms need to be searched as free-text keywords in the title and abstract fields, in addition to searching their associated thesaurus terms.

The Emtree thesaurus contains more synonyms (300,000) than MeSH does (220,000) [ 17 ]. The difference in number of terms is even higher considering that many synonyms in MeSH are permuted terms (i.e., inversions of phrases using commas).

Thesaurus terms are ordered in a tree structure. When searching for a more general thesaurus term, the more specific (narrower) terms in the branches below that term will also be searched (this is frequently referred to as “exploding” a thesaurus term). However, to perform a sensitive search, all relevant variations of the narrower terms must be searched as free-text keywords in the title or abstract, in addition to relying on the exploded thesaurus term. Thus, all articles that describe a certain narrower topic in their titles and abstracts will already be retrieved before MeSH terms are added.

9. Add variations in search terms (e.g., truncation, spelling differences, abbreviations, opposites)

Truncation allows a searcher to search for words beginning with the same word stem. A search for therap* will, thus, retrieve therapy, therapies, therapeutic, and all other words starting with “therap.” Do not truncate a word stem that is too short. Also, limitations of interfaces should be taken into account, especially in PubMed, where the number of search term variations that can be found by truncation is limited to 600.

Databases contain references to articles using both standard British and American English spellings. Both need to be searched as free-text terms in the title and abstract. Alternatively, many interfaces offer a certain code to replace zero or one characters, allowing a search for “pediatric” or “paediatric” as “p?ediatric.” Table 1 provides a detailed description of the syntax for different interfaces.
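The behavior of truncation and the zero-or-one-character wildcard can be emulated with regular expressions. The Python sketch below is purely illustrative: the sample vocabulary is invented, and real interfaces implement these wildcards internally.

```python
import re

# Emulate two interface features over a small invented vocabulary:
# truncation (therap*) and a 0-or-1-character wildcard (p?ediatric).
vocabulary = ["therapy", "therapies", "therapeutic", "therefore",
              "pediatric", "paediatric", "periodic"]

def truncate(stem, words):
    """Match words beginning with the stem, like stem* in most interfaces."""
    pattern = re.compile(r"^" + re.escape(stem))
    return [w for w in words if pattern.match(w)]

def zero_or_one(term, words):
    """Interpret '?' as zero or one character, as in p?ediatric."""
    pattern = re.compile("^" + re.escape(term).replace(r"\?", ".?") + "$")
    return [w for w in words if pattern.match(w)]

print(truncate("therap", vocabulary))         # therapy, therapies, therapeutic
print(zero_or_one("p?ediatric", vocabulary))  # pediatric, paediatric
```

Note that "therefore" is not retrieved by therap*, which is why overly short stems (e.g., ther*) are risky.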

Table 1. Field codes in the five most used interfaces for biomedical literature searching

|                              | PubMed                                   | Ovid                             | EBSCOhost            | Embase.com       | ProQuest        |
|------------------------------|------------------------------------------|----------------------------------|----------------------|------------------|-----------------|
| Title/abstract               | [tiab]                                   | ().ab,ti.                        | TI () OR AB ()       | ():ab,ti         | AB,TI()         |
| All fields                   | [All Fields]                             | .af.                             |                      |                  | ALL             |
| Thesaurus term               | [mesh:noexp]                             | …/                               | MH “…”               | ‘…’/de           | MESH(…)         |
| Including narrower           | [mesh]                                   | exp …/                           | MH “…+”              | ‘…’/exp          | MESH#(…)        |
| Combined subheading          | [mesh]                                   | exp …/                           | MH “…+/ ”            | ‘…’/exp/dm_      | MESH(… LNK ..)  |
| Free subheading              | [sh]                                     | .xs. or .fs.                     | MW                   | :lnk             |                 |
| Publication type             | [pt]                                     | .pt. or exp …/                   | PT                   | :it              | RTYPE           |
| Proximity                    |                                          | ADJn                             | Nn                   | NEAR/n, NEXT/n   | N/n             |
| Exact phrase                 | “double quotes”                          | No quotes needed                 | “double quotes”      | ‘single quotes’  | “double quotes” |
| Truncated phrase             | Use-hyphen*                              | No quote*                        | No quote*            | ‘single quote*’  | “Double quote*” |
| Truncation                   | End                                      | End/mid                          | End/mid              | End/mid          | End/mid/start   |
| Infinite                     | *                                        | * or $                           | *                    | *                | *               |
| 0 or 1 character             |                                          | ?                                | #                    |                  | $1              |
| 1 character                  |                                          | #                                | ?                    | ?                | ?               |
| Added to database since      | yyyy/mm/dd:yyyy/mm/dd [edat] (or [mhda]) | limit #N to rd=yyyymmdd-yyyymmdd | EM yyyymmdd-yyyymmdd | [dd-mm-yyyy]/sd  | LUPD(yyyymmdd)  |
| Publication period (years)   | yyyy:yyyy[dp]                            | limit #N to yr=yyyy-yyyy         | PY yyyy-yyyy         | [yyyy-yyyy]/py   | YR(yyyy-yyyy)   |
| Record sets                  | #1                                       | 1                                | S1                   | #1               | S1              |

Searching for abbreviations can identify extra, relevant references and retrieve more irrelevant ones. The search can be more focused by combining the abbreviation with an important word that is relevant to its meaning or by using the Boolean “NOT” to exclude frequently observed, clearly irrelevant results. We advise that searchers do not exclude all possible irrelevant meanings, as it is very time consuming to identify all the variations, it will result in unnecessarily complicated search strategies, and it may lead to erroneously narrowing the search and, thereby, reduce recall.

Searching partial abbreviations can be useful for retrieving relevant references. For example, it is very likely that an article would mention osteoarthritis (OA) early in the abstract, replacing all further occurrences of osteoarthritis with OA . Therefore, it may not contain the phrase “hip osteoarthritis” but only “hip oa.”

It is also important to search for the opposites of search terms to avoid bias. When searching for “disease recurrence,” articles about “disease free” may be relevant as well. When the desired outcome is survival , articles about mortality may be relevant.

10. Use database-appropriate syntax, with parentheses, Boolean operators, and field codes

Different interfaces require different syntaxes: the special sets of rules and symbols, unique to each interface, that define how a correctly constructed search operates. Common syntax components include the use of parentheses and Boolean operators such as “AND,” “OR,” and “NOT,” which are available in all major interfaces. An overview of the syntaxes of five major interfaces for bibliographic medical databases (PubMed, Ovid, EBSCOhost, Embase.com, and ProQuest) is shown in Table 1 .

Creating the appropriate syntax for each database, in combination with the selected terms as described in steps 7–9, can be challenging. Following the method outlined below simplifies the process:

  • Create single-line queries in a text document (not combining multiple record sets), which allows immediate checking of the relevance of retrieved references and efficient optimization.
  • Type the syntax (Boolean operators, parentheses, and field codes) before adding terms, which reduces the chance that errors are made in the syntax, especially in the number of parentheses.
  • Use predefined proximity structures including parentheses, such as (() ADJ3 ()) in Ovid, that can be reused in the query when necessary.
  • Use thesaurus terms separately from free-text terms of each element. Start an element with all thesaurus terms (using “OR”) and follow with the free-text terms. This allows the unique optimization methods as described in step 11.
  • When adding terms to an existing search strategy, pay close attention to the position of the cursor. Make sure to place it appropriately either in the thesaurus terms section, in the title/abstract section, or as an addition (broadening) to an existing proximity search.
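As a rough illustration of this workflow, the following Python sketch assembles one element of a single-line query in Embase.com-style syntax, with the thesaurus terms placed before the free-text terms. The element and terms are invented for demonstration; in practice, the same structure is typed directly in the log document.

```python
# Illustrative sketch of step 10: the syntax skeleton comes first, and
# terms are dropped into their sections. Embase.com-style field codes
# ('…'/exp, :ab,ti) are used; the terms themselves are examples only.
def build_element(thesaurus_terms, freetext_terms):
    """One search element on a single line: thesaurus terms first,
    then free-text synonyms searched in title/abstract."""
    thesaurus = " OR ".join(f"'{t}'/exp" for t in thesaurus_terms)
    freetext = " OR ".join(freetext_terms)
    return f"({thesaurus} OR ({freetext}):ab,ti)"

element = build_element(
    ["breast feeding"],
    ["breastfeeding", "breastfed", "'breast feeding'"],
)
print(element)
```

Keeping thesaurus and free-text sections separate in this way is what enables the optimization comparisons of step 11.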

The supplementary appendix explains the method of building a query in more detail, step by step for different interfaces: PubMed, Ovid, EBSCOhost, Embase.com, and ProQuest. This method results in a basic search strategy designed to retrieve some relevant references upon which a more thorough search strategy can be built with optimization such as described in step 11.

11. Optimize the search

The most important question when performing a systematic search is whether all (or most) potentially relevant articles have been retrieved by the search strategy. This is also the most difficult question to answer, since it is unknown which and how many articles are relevant. It is, therefore, wise first to broaden the initial search strategy, making the search more sensitive, and then check if new relevant articles are found by comparing the set results (i.e., search for Strategy #2 NOT Strategy #1 to see the unique results).

A search strategy should be tested for completeness. Therefore, it is necessary to identify extra, possibly relevant search terms and add them to the test search in an OR relationship with the already used search terms. A good place to start, and a well-known strategy, is scanning the top retrieved articles when sorted by relevance, looking for additional relevant synonyms that could be added to the search strategy.

We have developed a unique optimization method that has not been described before in the literature. This method often adds valuable extra terms to our search strategy and, therefore, extra, relevant references to our search results. Extra synonyms can be found in articles that have been assigned a certain set of thesaurus terms but that lack synonyms in the title and/or abstract that are already present in the current search strategy. Searching for thesaurus terms NOT free-text terms will help identify missed free-text terms in the title or abstract. Searching for free-text terms NOT thesaurus terms will help identify missed thesaurus terms. If this is done repeatedly for each element, leaving the rest of the query unchanged, this method will help add numerous relevant terms to the query. These steps are explained in detail for five different search platforms in the supplementary appendix .
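The two audit searches of this optimization method can be sketched as simple string manipulations. The Python snippet below uses invented Embase.com-style fragments; in practice, each resulting query is run in the database and its results are screened for candidate terms.

```python
# Sketch of the step 11 optimization checks: from one element's
# thesaurus and free-text sections, derive the two NOT-combinations
# described above. Query fragments are illustrative.
def audit_queries(thesaurus_part, freetext_part):
    """Return the two audit searches used to spot missed terms."""
    return {
        # these hits carry the thesaurus terms but none of our free-text
        # words in title/abstract -> scan them for missed synonyms
        "missed_freetext": f"{thesaurus_part} NOT ({freetext_part}):ab,ti",
        # these hits match our free-text words but lack the thesaurus
        # terms -> scan their indexing for missed thesaurus terms
        "missed_thesaurus": f"({freetext_part}):ab,ti NOT {thesaurus_part}",
    }

q = audit_queries("'breast feeding'/exp", "breastfeeding OR breastfed")
print(q["missed_freetext"])
print(q["missed_thesaurus"])
```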

12. Evaluate the initial results

The results should now contain relevant references. If the interface allows relevance ranking, use that in the evaluation. If you know some relevant references that should be included in the research, search for those references specifically; for example, combine a specific (first) author name with a page number and the publication year. Check whether those references are retrieved by the search. If the known relevant references are not retrieved by the search, adapt the search so that they are. If it is unclear which element should be adapted to retrieve a certain article, combine that article with each element separately.

Different outcomes are desired for different types of research questions. For instance, when answering a clinical question, the researcher will not be satisfied with a large result set containing many irrelevant references. A clinical search should be rather specific and is allowed to miss a relevant reference. In the case of an SR, the researchers do not want to miss any relevant reference and are willing to handle many irrelevant references to do so. The search for references to include in an SR should be very sensitive: no included reference should be missed. A search that is too specific or too sensitive for the intended goal can be adapted to become more sensitive or specific. Steps to increase sensitivity or specificity of a search strategy can be found in the supplementary appendix .

13. Check for errors

Errors might not be easily detected. Sometimes clues can be found in the number of results, either when the number of results is much higher or lower than expected or when many retrieved references are not relevant. However, the number expected is often unknown, and very sensitive search strategies will always retrieve many irrelevant articles. Each query should, therefore, be checked for errors.

One of the most frequently occurring errors is a missing Boolean operator “OR.” When no “OR” is added between two search terms, many interfaces automatically add an “AND,” which unintentionally reduces the number of results and likely misses relevant references. One good strategy to identify missing “OR”s is to go to the web page containing the full search strategy, as translated by the database, and use Ctrl-F to search for “AND.” Check whether each occurrence of the “AND” operator is deliberate.
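The Ctrl-F check can also be automated. This Python sketch, using an invented example strategy, lists the context around every “AND” operator so that each occurrence can be verified as deliberate; it flags intended and unintended ANDs alike, leaving the judgment to the searcher.

```python
import re

# Small sketch of the Ctrl-F check: show every "AND" in the strategy
# as translated by the database, with some surrounding context.
def find_ands(strategy):
    """Return the context around each whole-word AND operator."""
    return [strategy[max(0, m.start() - 15):m.end() + 15]
            for m in re.finditer(r"\bAND\b", strategy)]

# The second AND here could be an interface-inserted artifact of a
# missing OR between "intelligence" and "cognition".
strategy = "(breastfeeding OR breastfed) AND (intelligence AND cognition)"
for context in find_ands(strategy):
    print(context)
```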

Ideally, search strategies should be checked by other information specialists [ 18 ]. The Peer Review of Electronic Search Strategies (PRESS) checklist offers good guidance for this process [ 4 ]. Apart from the syntax (especially Boolean operators and field codes) of the search strategy, it is wise to have the search terms checked by the clinician or researcher familiar with the topic. At Erasmus MC, researchers and clinicians are involved during the complete process of structuring and optimizing the search strategy. Each word is added after the combined decision of the searcher and the researcher, with the possibility of directly comparing results with and without the new term.

14. Translate to other databases

To retrieve as many relevant references as possible, one has to search multiple databases. Translation of complex and exhaustive queries between different databases can be very time consuming and cumbersome. The single-line search strategy approach detailed above allows quick translations using the find and replace method in Microsoft Word (<Ctrl-H>).
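A minimal sketch of such a find-and-replace translation, here from Embase.com syntax toward Ovid syntax, is shown below. The replacement pairs mirror a few rows of Table 1; the actual Erasmus MC macros are far more extensive, and thesaurus terms still need manual remapping from Emtree to MeSH.

```python
# Illustrative find-and-replace translation (step 14), Embase.com -> Ovid.
# Only a handful of Table 1 syntax pairs are shown; real strategies need
# the full mapping plus manual Emtree-to-MeSH term translation.
REPLACEMENTS = [
    (":ab,ti", ".ab,ti."),  # title/abstract field code
    ("NEAR/", "ADJ"),       # proximity operator
    ("'", ""),              # Ovid needs no quotes for exact phrases
]

def translate(query):
    for old, new in REPLACEMENTS:
        query = query.replace(old, new)
    return query

print(translate("('breast feeding' NEAR/3 duration):ab,ti"))
```

The same mechanism, with a different replacement list, handles each database pair in the translation schema.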

At Erasmus MC, macros based on the find-and-replace method in Microsoft Word have been developed for easy and fast translation between the most used databases for biomedical and health sciences questions. The schema that is followed for the translation between databases is shown in Figure 2 . Most databases simply follow the structure set by the Embase.com search strategy. The translation from Emtree terms to MeSH terms for MEDLINE in Ovid often identifies new terms that need to be added to the Embase.com search strategy before the translation to other databases.

Figure 2. Schematic representation of translation between databases used at Erasmus University Medical Center

Dotted lines represent databases that are used in less than 80% of the searches.

Using five different macros, a thoroughly optimized query in Embase.com can be relatively quickly translated into eight major databases. Basic search strategies will be created to use in many, mostly smaller, databases, because such niche databases often do not have extensive thesauri or advanced syntax options. Also, there is not much need to use extensive syntax because the number of hits and, therefore, the amount of noise in these databases is generally low. In MEDLINE (Ovid), PsycINFO (Ovid), and CINAHL (EBSCOhost), the thesaurus terms must be adapted manually, as each database has its own custom thesaurus. These macros and instructions for their installation, use, and adaptation are available at bit.ly/databasemacros.

15. Test and reiterate

Ideally, exhaustive search strategies should retrieve all references that are covered in a specific database. For SR search strategies, checking searches for their recall is advised. This can be done after included references have been determined by the authors of the systematic review. If additional papers have been identified through other non-database methods (e.g., checking the reference lists of included studies), results that were not identified by the database searches should be examined. If these results were available in the databases but not located by the search strategy, the search strategy should be adapted to try to retrieve these results, as they may contain terms that were omitted in the original search strategies. This may enable the identification of additional relevant results.
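Conceptually, this recall check amounts to a set difference between the references included by the review team and those retrieved by the database searches. A small Python sketch with invented reference identifiers:

```python
# Sketch of the step 15 recall check. Reference IDs are illustrative.
included_by_review = {"pmid:101", "pmid:202", "pmid:303", "pmid:404"}
retrieved_by_search = {"pmid:101", "pmid:202", "pmid:404", "pmid:999"}

# Included references the database searches failed to retrieve: examine
# these for terms absent from the strategy, then adapt the search.
missed = included_by_review - retrieved_by_search
recall = 1 - len(missed) / len(included_by_review)

print(sorted(missed))
print(f"recall: {recall:.0%}")
```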

A methodology for creating exhaustive search strategies has been created that describes all steps of the search process, starting with a question and resulting in thorough search strategies in multiple databases. Many of the steps described are not new, but together they form a strong method for creating high-quality, robust searches in a relatively short time frame.

Our methodology is intended to make literature searches thorough. The optimization method described in step 11 identifies missed synonyms and thesaurus terms, unlike other methods that depend largely on predetermined keywords and synonyms. Using this method results in a much quicker search process compared to traditional methods, especially because of the easier translation between databases and interfaces (step 14). The method is not a guarantee of speed, since speed depends on many factors, including experience. However, by following the steps and using the tools described above, searchers can gain confidence first and increase speed through practice.

What is new?

This method encourages searchers to start their search development process using empty syntax first and later adding the thesaurus terms and free-text synonyms. We feel this helps the searcher to focus on the search terms, instead of on the structure of the search query. The optimization method in which new terms are found in the already retrieved articles is used in some other institutes as well but has to our knowledge not been described in the literature. The macros to translate search strategies between interfaces are unique in this method.

What is different compared to common practice?

Traditionally, librarians and information specialists have focused on creating complex, multi-line (also called line-by-line) search strategies, consisting of multiple record sets, and this method is frequently advised in the literature and handbooks [ 2 , 19 – 21 ]. Our method, instead, uses single-line searches, which is critical to its success. Single-line search strategies can be easily adapted by adding or dropping a term without having to renumber record sets, which would be necessary in multi-line searches. They can easily be saved in a text document and repeated by copying and pasting for search updates. Single-line search strategies also allow easy translation to other syntaxes using find-and-replace technology to update field codes and other syntax elements or using macros (step 14).

When constructing a search strategy, the searcher might find that certain parentheses in the syntax are unnecessary: for example, parentheses around the title/abstract search terms when there is only one such term, double parentheses in the proximity statement, or parentheses around a word group that consists of only one word. One might be tempted to omit those parentheses for ease of reading and management. However, during the optimization process, the searcher is likely to find extra synonyms that consist of one word. Adding those terms to the reduced-parentheses query requires adding extra parentheses (meticulously placing and counting them), whereas, in the fully parenthesized query, it only requires proper placement of the terms.

Many search methods depend heavily on the PICO framework. Research shows that PICO or PICOS is not suitable for every question [ 22 , 23 ]. There are other acronyms than PICO—such as sample, phenomenon of interest, design, evaluation, research type (SPIDER) [ 24 ]—but each is just a variant. In our method, the most important and specific elements of a question are analyzed to build the best search strategy.

Though it is generally recommended that searchers search both MEDLINE and Embase, most use MEDLINE as the starting point. It is considered the gold standard for biomedical searching, partially due to historical reasons, since it was the first of its kind, and more so now that it is freely available via the PubMed interface. Our method can be used with any database as a starting point, but we use Embase instead of MEDLINE or another database for a number of reasons. First, Embase provides both unique content and the complete content of MEDLINE. Therefore, searching Embase will be, by definition, more complete than searching MEDLINE only. Second, the number of terms in Emtree (the Embase thesaurus) is three times as high as that of MeSH (the MEDLINE thesaurus). It is easier to find MeSH terms after all relevant Emtree terms have been identified than to start with MeSH and translate to Emtree.

At Erasmus MC, the researchers sit next to the information specialist during most of the search strategy design process. This way, the researchers can deliver immediate feedback on the relevance of proposed search terms and retrieved references. The search team then combines knowledge about databases with knowledge about the research topic, which is an important condition to create the highest quality searches.

Limitations of the method

One disadvantage of single-line searches compared to multi-line search strategies is that errors are harder to recognize. However, with the methods for optimization as described (step 11), errors are recognized easily because missed synonyms and spelling errors will be identified during the process. Also problematic is that more parentheses are needed, making it more difficult for the searcher and others to assess the logic of the search strategy. However, as parentheses and field codes are typed before the search terms are added (step 10), errors in parentheses can be prevented.

Our method works best in an interface that allows proximity searching. Searchers who have access to an interface with proximity searching capabilities should select one of those as the initial database in which to develop and optimize the search strategy. Because the PubMed interface does not allow proximity searches, phrases or Boolean “AND” combinations are required instead. Phrase searching is more specific, with a higher risk of missing relevant articles, while Boolean “AND” combinations increase sensitivity but often at a high loss of specificity. Searchers who lack access to expensive databases or interfaces may have to rely on the freely available PubMed interface, though it should never be the sole database used for an SR [ 2 , 16 , 25 ]. A limitation of our method is therefore that it works best with subscription-based and licensed resources.
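For instance (hypothetical terms), a single Embase.com proximity statement has no direct PubMed equivalent and must be split into one of two imperfect forms:

```
Embase:  (hearing NEAR/3 (loss OR impair*)):ab,ti

PubMed, as phrases (more specific; may miss word-order variants):
         "hearing loss"[tiab] OR "hearing impairment"[tiab]

PubMed, as AND combination (more sensitive; less specific):
         hearing[tiab] AND (loss[tiab] OR impair*[tiab])
```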

Another limitation is the customization of the macros to a specific institution’s resources. The macros for the translation between different database interfaces only work between the interfaces as described. To mitigate this, we recommend using the find-and-replace functionality of text editors like Microsoft Word to ease the translation of syntaxes between other databases. Depending on one’s institutional resources, custom macros can be developed using similar methods.
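The find-and-replace approach can be sketched as a small script (a hypothetical subset covering only two Embase.com-to-PubMed syntax elements; a real translation macro needs many more rules, applied in a carefully chosen order):

```python
# Ordered find-and-replace rules mapping a few Embase.com syntax elements
# to their nearest PubMed equivalents (illustrative subset only).
RULES = [
    (":ab,ti", "[tiab]"),  # title/abstract field code
    ("'", '"'),            # Embase.com quotes phrases with single quotes
]

def translate(query):
    """Apply each rule in order, like a recorded editor macro."""
    for old, new in RULES:
        query = query.replace(old, new)
    return query
```

For example, translate("'heart infarction':ab,ti") yields "heart infarction"[tiab].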

Results of the method

Whether this method results in exhaustive searches in which no important article is missed is difficult to determine, because the total number of relevant articles is unknown for any topic. However, a comparison of several parameters of 73 published reviews that were based on a search developed with this method against 258 reviews that acknowledged information specialists from other Dutch academic hospitals shows that searches following our method perform comparably to those developed in other institutes, while the time needed to develop the search strategies was much shorter than the time reported for the other reviews [ 9 ].

CONCLUSIONS

With the described method, searchers can gain confidence in their search strategies by finding many relevant words and creating exhaustive search strategies quickly. The approach can be used when performing SR searches or for other purposes such as answering clinical questions, with different expectations of the search’s precision and recall. This method, with practice, provides a stepwise approach that facilitates the search strategy development process from question clarification to final iteration and beyond.

SUPPLEMENTAL FILE

Acknowledgments

We highly appreciate the work that was done by our former colleague Louis Volkers, who in his twenty years as an information specialist in Erasmus MC laid the basis for our method. We thank Professor Oscar Franco for reviewing earlier drafts of this article.

  • Open access
  • Published: 14 August 2018

Defining the process to literature searching in systematic reviews: a literature review of guidance and supporting studies

  • Chris Cooper   ORCID: orcid.org/0000-0003-0864-5607 1 ,
  • Andrew Booth 2 ,
  • Jo Varley-Campbell 1 ,
  • Nicky Britten 3 &
  • Ruth Garside 4  

BMC Medical Research Methodology volume 18, Article number: 85 (2018)


Systematic literature searching is recognised as a critical component of the systematic review process. It involves a systematic search for studies and aims for a transparent report of study identification, leaving readers clear about what was done to identify studies, and how the findings of the review are situated in the relevant evidence.

Information specialists and review teams appear to work from a shared and tacit model of the literature search process. How this tacit model has developed and evolved is unclear, and it has not been explicitly examined before.

The purpose of this review is to determine if a shared model of the literature searching process can be detected across systematic review guidance documents and, if so, how this process is reported in the guidance and supported by published studies.

A literature review.

Two types of literature were reviewed: guidance and published studies. Nine guidance documents were identified, including the Cochrane and Campbell handbooks. Published studies were identified through ‘pearl growing’, citation chasing, a search of PubMed using the systematic review methods filter, and the authors’ topic knowledge.

The relevant sections within each guidance document were then read and re-read, with the aim of determining key methodological stages. Methodological stages were identified and defined. These data were reviewed to identify agreements and areas of unique guidance between guidance documents. Consensus across multiple guidance documents was used to inform selection of ‘key stages’ in the process of literature searching.

Eight key stages were determined relating specifically to literature searching in systematic reviews. They were: who should literature search, aims and purpose of literature searching, preparation, the search strategy, searching databases, supplementary searching, managing references and reporting the search process.

Conclusions

Eight key stages to the process of literature searching in systematic reviews were identified. These key stages are consistently reported in the nine guidance documents, suggesting consensus on the key stages of literature searching, and therefore the process of literature searching as a whole, in systematic reviews. Further research to determine the suitability of using the same process of literature searching for all types of systematic review is indicated.


Systematic literature searching is recognised as a critical component of the systematic review process. It involves a systematic search for studies and aims for a transparent report of study identification, leaving review stakeholders clear about what was done to identify studies, and how the findings of the review are situated in the relevant evidence.

Information specialists and review teams appear to work from a shared and tacit model of the literature search process. How this tacit model has developed and evolved is unclear, and it has not been explicitly examined before. This is in contrast to the information science literature, which has developed information processing models as an explicit basis for dialogue and empirical testing. Without an explicit model, research in the process of systematic literature searching will remain immature and potentially uneven, and the development of shared information models will be assumed but never articulated.

One way of developing such a conceptual model is by formally examining the implicit “programme theory” as embodied in key methodological texts. The aim of this review is therefore to determine if a shared model of the literature searching process in systematic reviews can be detected across guidance documents and, if so, how this process is reported and supported.

Identifying guidance

Key texts (henceforth referred to as “guidance”) were identified based upon their accessibility to, and prominence within, United Kingdom systematic reviewing practice. The United Kingdom occupies a prominent position in the science of health information retrieval, as quantified by such objective measures as the authorship of papers, the number of Cochrane groups based in the UK, membership and leadership of groups such as the Cochrane Information Retrieval Methods Group, the HTA-I Information Specialists’ Group and historic association with such centres as the UK Cochrane Centre, the NHS Centre for Reviews and Dissemination, the Centre for Evidence Based Medicine and the National Institute for Clinical Excellence (NICE). Coupled with the linguistic dominance of English within medical and health science and the science of systematic reviews more generally, this offers a justification for a purposive sample that favours UK, European and Australian guidance documents.

Nine guidance documents were identified. These documents provide guidance for different types of reviews, namely: reviews of interventions, reviews of health technologies, reviews of qualitative research studies, reviews of social science topics, and reviews to inform guidance.

Whilst these guidance documents occasionally offer additional guidance on other types of systematic reviews, we have focused on the core and stated aims of these documents as they relate to literature searching. Table  1 sets out: the guidance document, the version audited, their core stated focus, and a bibliographical pointer to the main guidance relating to literature searching.

Once a list of key guidance documents was determined, it was checked by six senior information professionals based in the UK for relevance to current literature searching in systematic reviews.

Identifying supporting studies

In addition to identifying guidance, the authors sought to populate an evidence base of supporting studies (henceforth referred to as “studies”) that contribute to existing search practice. Studies were first identified by the authors from their knowledge of this topic area and, subsequently, through systematic citation chasing of key studies (‘pearls’ [ 1 ]) located within each key stage of the search process. These studies are identified in Additional file  1 : Appendix Table 1. Citation chasing was conducted by analysing the bibliography of references for each study (backwards citation chasing) and through Google Scholar (forward citation chasing). A search of PubMed using the systematic review methods filter was undertaken in August 2017 (see Additional file 1 ). The search terms used were: (literature search*[Title/Abstract]) AND sysrev_methods[sb], which returned 586 results. These results were sifted for relevance to the key stages in Fig.  1 by CC.

Figure 1. The key stages of literature search guidance as identified from nine key texts

Extracting the data

To reveal the implicit process of literature searching within each guidance document, the relevant sections (chapters) on literature searching were read and re-read, with the aim of determining key methodological stages. We defined a key methodological stage as a distinct step in the overall process for which specific guidance is reported, and action is taken, that collectively would result in a completed literature search.

The chapter or section sub-heading for each methodological stage was extracted into a table using the exact language as reported in each guidance document. The lead author (CC) then read and re-read these data, and the paragraphs of the document to which the headings referred, summarising section details. This table was then reviewed, using comparison and contrast to identify agreements and areas of unique guidance. Consensus across multiple guidelines was used to inform selection of ‘key stages’ in the process of literature searching.

Having determined the key stages to literature searching, we then read and re-read the sections relating to literature searching again, extracting specific detail relating to the methodological process of literature searching within each key stage. Again, the guidance was read and re-read, first on a document-by-document basis and, secondly, across all the documents, to identify both commonalities and areas of unique guidance.

Results and discussion

Our findings.

We were able to identify consensus across the guidance on literature searching for systematic reviews suggesting a shared implicit model within the information retrieval community. Whilst the structure of the guidance varies between documents, the same key stages are reported, even where the core focus of each document is different. We were able to identify specific areas of unique guidance, where a document reported guidance not summarised in other documents, together with areas of consensus across guidance.

Unique guidance

Only one document provided guidance on the topic of when to stop searching [ 2 ]. This guidance from 2005 anticipates a topic of increasing importance with the current interest in time-limited (i.e. “rapid”) reviews. Quality assurance (or peer review) of literature searches was only covered in two guidance documents [ 3 , 4 ]. This topic has emerged as increasingly important as indicated by the development of the PRESS instrument [ 5 ]. Text mining was discussed in four guidance documents [ 4 , 6 , 7 , 8 ] where the automation of some manual review work may offer efficiencies in literature searching [ 8 ].

Agreement between guidance: Defining the key stages of literature searching

Where there was agreement on the process, we determined that this constituted a key stage in the process of literature searching to inform systematic reviews.

From the guidance, we determined eight key stages that relate specifically to literature searching in systematic reviews. These are summarised at Fig. 1 . The data extraction table to inform Fig. 1 is reported in Table  2 . Table 2 reports the areas of common agreement and it demonstrates that the language used to describe key stages and processes varies significantly between guidance documents.

For each key stage, we set out the specific guidance, followed by discussion on how this guidance is situated within the wider literature.

Key stage one: Deciding who should undertake the literature search

The guidance.

Eight documents provided guidance on who should undertake literature searching in systematic reviews [ 2 , 4 , 6 , 7 , 8 , 9 , 10 , 11 ]. The guidance affirms that people with relevant expertise of literature searching should ‘ideally’ be included within the review team [ 6 ]. Information specialists (or information scientists), librarians or trial search co-ordinators (TSCs) are indicated as appropriate researchers in six guidance documents [ 2 , 7 , 8 , 9 , 10 , 11 ].

How the guidance corresponds to the published studies

The guidance is consistent with studies that call for the involvement of information specialists and librarians in systematic reviews [ 12 , 13 , 14 , 15 , 16 , 17 , 18 , 19 , 20 , 21 , 22 , 23 , 24 , 25 , 26 ] and which demonstrate how their training as ‘expert searchers’ and ‘analysers and organisers of data’ can be put to good use [ 13 ] in a variety of roles [ 12 , 16 , 20 , 21 , 24 , 25 , 26 ]. These arguments make sense in the context of the aims and purposes of literature searching in systematic reviews, explored below. The need for ‘thorough’ and ‘replicable’ literature searches was fundamental to the guidance and recurs in key stage two. Studies have found poor reporting, and a lack of replicable literature searches, to be a weakness in systematic reviews [ 17 , 18 , 27 , 28 ], and they argue that the involvement of information specialists/librarians would be associated with better reporting and better quality literature searching. Indeed, Meert et al. demonstrated that involving a librarian as a co-author of a systematic review correlated with a higher score in the literature searching component of that review [ 29 ]. As ‘new styles’ of rapid and scoping reviews emerge, where decisions on how to search are more iterative and creative, a clear role emerges here too [ 30 ].

Knowing where to search for studies was noted as important in the guidance, with no agreement as to the appropriate number of databases to be searched [ 2 , 6 ]. Database (and resource selection more broadly) is acknowledged as a relevant key skill of information specialists and librarians [ 12 , 15 , 16 , 31 ].

Whilst arguments for including information specialists and librarians in the process of systematic review might be considered self-evident, Koffel and Rethlefsen [ 31 ] have questioned if the necessary involvement is actually happening [ 31 ].

Key stage two: Determining the aim and purpose of a literature search

The aim: Five of the nine guidance documents use adjectives such as ‘thorough’, ‘comprehensive’, ‘transparent’ and ‘reproducible’ to define the aim of literature searching [ 6 , 7 , 8 , 9 , 10 ]. Analogous phrases were present in a further three guidance documents, namely: ‘to identify the best available evidence’ [ 4 ] or ‘the aim of the literature search is not to retrieve everything. It is to retrieve everything of relevance’ [ 2 ] or ‘A systematic literature search aims to identify all publications relevant to the particular research question’ [ 3 ]. The Joanna Briggs Institute reviewers’ manual was the only guidance document where a clear statement on the aim of literature searching could not be identified. The purpose of literature searching was defined in three guidance documents, namely to minimise bias in the resultant review [ 6 , 8 , 10 ]. Accordingly, eight of nine documents clearly asserted that thorough and comprehensive literature searches are required as a potential mechanism for minimising bias.

The need for thorough and comprehensive literature searches appears uniform across the eight guidance documents that describe approaches to literature searching in systematic reviews of effectiveness. Reviews of effectiveness (of intervention or cost), accuracy, and prognosis require thorough and comprehensive literature searches to transparently produce a reliable estimate of intervention effect. The belief that all relevant studies have been ‘comprehensively’ identified, and that this process has been ‘transparently’ reported, increases confidence in the estimate of effect and the conclusions that can be drawn [ 32 ]. The supporting literature exploring the need for comprehensive literature searches focuses almost exclusively on reviews of intervention effectiveness and meta-analysis. Different ‘styles’ of review may have different standards, however; the alternative, offered by purposive sampling, has been suggested in the specific context of qualitative evidence syntheses [ 33 ].

What is a comprehensive literature search?

Whilst the guidance calls for thorough and comprehensive literature searches, it lacks clarity on what constitutes a thorough and comprehensive literature search, beyond the implication that all of the literature search methods in Table 2 should be used to identify studies. Egger et al. [ 34 ], in an empirical study evaluating the importance of comprehensive literature searches for trials in systematic reviews, defined a comprehensive search for trials as:

a search not restricted to English language;

where Cochrane CENTRAL or at least two other electronic databases had been searched (such as MEDLINE or EMBASE); and

at least one of the following search methods had been used to identify unpublished trials: searches for (i) conference abstracts, (ii) theses, (iii) trials registers; and (iv) contacts with experts in the field [ 34 ].

Tricco et al. (2008) used a similar threshold of bibliographic database searching AND a supplementary search method in a review when examining the risk of bias in systematic reviews. Their criteria were: one database (limited using the Cochrane Highly Sensitive Search Strategy (HSSS)) and handsearching [ 35 ].

Together with the guidance, this would suggest that comprehensive literature searching requires the use of BOTH bibliographic database searching AND supplementary search methods.
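The Egger et al. threshold above can be expressed as a simple decision rule (an illustrative encoding for clarity, not part of the original study; the category labels are invented):

```python
def is_comprehensive(no_language_limit, databases_searched, unpublished_methods):
    """Decision rule paraphrasing Egger et al.'s threshold for a
    comprehensive trial search. `databases_searched` and
    `unpublished_methods` are sets of strings."""
    enough_databases = ("CENTRAL" in databases_searched
                        or len(databases_searched) >= 2)
    sought_unpublished = bool(unpublished_methods & {
        "conference abstracts", "theses", "trial registers",
        "expert contact"})
    return no_language_limit and enough_databases and sought_unpublished
```

A search of MEDLINE alone with no attempt to find unpublished trials fails all three criteria, whereas an unrestricted search of MEDLINE and Embase plus a trial-register search passes.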

Comprehensiveness in literature searching, in the sense of how much searching should be undertaken, remains unclear. Egger et al. recommend that ‘investigators should consider the type of literature search and degree of comprehension that is appropriate for the review in question, taking into account budget and time constraints’ [ 34 ]. This view tallies with the Cochrane Handbook, which stipulates clearly, that study identification should be undertaken ‘within resource limits’ [ 9 ]. This would suggest that the limitations to comprehension are recognised but it raises questions on how this is decided and reported [ 36 ].

What is the point of comprehensive literature searching?

The purpose of thorough and comprehensive literature searches is to avoid missing key studies and to minimize bias [ 6 , 8 , 10 , 34 , 37 , 38 , 39 ], since a systematic review based only on published (or easily accessible) studies may have an exaggerated effect size [ 35 ]. Felson (1992) sets out potential biases that could affect the estimate of effect in a meta-analysis [ 40 ] and Tricco et al. summarize the evidence concerning bias and confounding in systematic reviews [ 35 ]. Egger et al. point to non-publication of studies, publication bias, language bias and MEDLINE bias as key biases [ 34 , 35 , 40 , 41 , 42 , 43 , 44 , 45 , 46 ]. Comprehensive searches are not the sole factor to mitigate these biases but their contribution is thought to be significant [ 2 , 32 , 34 ]. Fehrmann (2011) suggests that describing the search process in detail, and noting where standard comprehensive search techniques have been applied, increases confidence in the search results [ 32 ].

Does comprehensive literature searching work?

Egger et al., and other study authors, have demonstrated a change in the estimate of intervention effectiveness where relevant studies were excluded from meta-analysis [ 34 , 47 ]. This would suggest that missing studies in literature searching alters the reliability of effectiveness estimates. This is an argument for comprehensive literature searching. Conversely, Egger et al. found that ‘comprehensive’ searches still missed studies and that comprehensive searches could, in fact, introduce bias into a review rather than preventing it, through the identification of low quality studies then being included in the meta-analysis [ 34 ]. Studies query if identifying and including low quality or grey literature studies changes the estimate of effect [ 43 , 48 ] and question if time is better invested updating systematic reviews rather than searching for unpublished studies [ 49 ], or mapping studies for review as opposed to aiming for high sensitivity in literature searching [ 50 ].

Aim and purpose beyond reviews of effectiveness

The need for comprehensive literature searches is less certain in reviews of qualitative studies, and for reviews where a comprehensive identification of studies is difficult to achieve (for example, in public health) [ 33 , 51 , 52 , 53 , 54 , 55 ]. Literature searching for qualitative studies, and in public health topics, typically generates a greater number of studies to sift than in reviews of effectiveness [ 39 ], and demonstrating the ‘value’ of studies identified or missed is harder [ 56 ], since the study data do not typically support meta-analysis. Nussbaumer-Streit et al. (2016) have registered a review protocol to assess whether abbreviated literature searches (as opposed to comprehensive literature searches) have an impact on conclusions across multiple bodies of evidence, not only on effect estimates [ 57 ], which may develop this understanding. It may be that decision makers and users of systematic reviews are willing to trade the certainty from a comprehensive literature search and systematic review in exchange for different approaches to evidence synthesis [ 58 ], and that comprehensive literature searches are not necessarily a marker of literature search quality, as previously thought [ 36 ]. Different approaches to literature searching [ 37 , 38 , 59 , 60 , 61 , 62 ] and developing the concept of when to stop searching are important areas for further study [ 36 , 59 ].

The study by Nussbaumer-Streit et al. has been published since the submission of this literature review [ 63 ]. Nussbaumer-Streit et al. (2018) conclude that abbreviated literature searches are viable options for rapid evidence syntheses, if decision-makers are willing to trade the certainty from a comprehensive literature search and systematic review, but that decision-making which demands detailed scrutiny should still be based on comprehensive literature searches [ 63 ].

Key stage three: Preparing for the literature search

Six documents provided guidance on preparing for a literature search [ 2 , 3 , 6 , 7 , 9 , 10 ]. The Cochrane Handbook clearly stated that Cochrane authors (i.e. researchers) should seek advice from a trial search co-ordinator (i.e. a person with specific skills in literature searching) ‘before’ starting a literature search [ 9 ].

Two key tasks were perceptible in preparing for a literature searching [ 2 , 6 , 7 , 10 , 11 ]. First, to determine if there are any existing or on-going reviews, or if a new review is justified [ 6 , 11 ]; and, secondly, to develop an initial literature search strategy to estimate the volume of relevant literature (and quality of a small sample of relevant studies [ 10 ]) and indicate the resources required for literature searching and the review of the studies that follows [ 7 , 10 ].

Three documents summarised guidance on where to search to determine if a new review was justified [ 2 , 6 , 11 ]. These focused on searching databases of systematic reviews (The Cochrane Database of Systematic Reviews (CDSR) and the Database of Abstracts of Reviews of Effects (DARE)), institutional registries (including PROSPERO), and MEDLINE [ 6 , 11 ]. It is worth noting, however, that as of 2015, DARE (and NHS EED) are no longer being updated, so the relevance of these resources will diminish over time [ 64 ]. One guidance document, ‘Systematic reviews in the Social Sciences’, noted, however, that databases are not the only source of information and that unpublished reports, conference proceedings and grey literature may also be required, depending on the nature of the review question [ 2 ].

Two documents reported clearly that this preparation (or ‘scoping’) exercise should be undertaken before the actual search strategy is developed [ 7 , 10 ].

The guidance offers the best available source on preparing the literature search with the published studies not typically reporting how their scoping informed the development of their search strategies nor how their search approaches were developed. Text mining has been proposed as a technique to develop search strategies in the scoping stages of a review although this work is still exploratory [ 65 ]. ‘Clustering documents’ and word frequency analysis have also been tested to identify search terms and studies for review [ 66 , 67 ]. Preparing for literature searches and scoping constitutes an area for future research.

Key stage four: Designing the search strategy

The Population, Intervention, Comparator, Outcome (PICO) structure was the most commonly reported structure promoted to design a literature search strategy. Five documents suggested that the eligibility criteria or review question will determine which concepts of PICO will be populated to develop the search strategy [ 1 , 4 , 7 , 8 , 9 ]. The NICE handbook promoted multiple structures, namely PICO, SPICE (Setting, Perspective, Intervention, Comparison, Evaluation) and multi-stranded approaches [ 4 ].

With the exclusion of The Joanna Briggs Institute reviewers’ manual, the guidance offered detail on selecting key search terms, synonyms, Boolean language, selecting database indexing terms and combining search terms. The CEE handbook suggested that ‘search terms may be compiled with the help of the commissioning organisation and stakeholders’ [ 10 ].
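For illustration, a minimal PICO-structured strategy might look like this (PubMed-style syntax; the concepts and terms are invented for the example), with each concept searched on its own line and the lines combined with AND:

```
#1  P (population):    hypertension[mh] OR hypertension[tiab]
#2  I (intervention):  exercise[mh] OR exercise[tiab] OR "physical activity"[tiab]
#3  O (outcome):       "blood pressure"[mh] OR "blood pressure"[tiab]
#4  combined:          #1 AND #2 AND #3
```

The eligibility criteria determine which concepts are populated; a comparator line, for instance, is often omitted when any comparator is acceptable.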

The use of limits, such as language or date limits, was discussed in all documents [ 2 , 3 , 4 , 6 , 7 , 8 , 9 , 10 , 11 ].

Search strategy structure

The guidance typically relates to reviews of intervention effectiveness so PICO – with its focus on intervention and comparator - is the dominant model used to structure literature search strategies [ 68 ]. PICOs – where the S denotes study design - is also commonly used in effectiveness reviews [ 6 , 68 ]. As the NICE handbook notes, alternative models to structure literature search strategies have been developed and tested. Booth provides an overview on formulating questions for evidence based practice [ 69 ] and has developed a number of alternatives to the PICO structure, namely: BeHEMoTh (Behaviour of interest; Health context; Exclusions; Models or Theories) for use when systematically identifying theory [ 55 ]; SPICE (Setting, Perspective, Intervention, Comparison, Evaluation) for identification of social science and evaluation studies [ 69 ] and, working with Cooke and colleagues, SPIDER (Sample, Phenomenon of Interest, Design, Evaluation, Research type) [ 70 ]. SPIDER has been compared to PICO and PICOs in a study by Methley et al. [ 68 ].

The NICE handbook also suggests the use of multi-stranded approaches to developing literature search strategies [ 4 ]. Glanville developed this idea in a study by Whiting et al. [ 71 ], and a worked example of this approach is included in the development of a search filter by Cooper et al. [ 72 ].

Writing search strategies: Conceptual and objective approaches

Hausner et al. [ 73 ] provide guidance on writing literature search strategies, delineating between conceptually and objectively derived approaches. The conceptual approach, advocated by and explained in the guidance documents, relies on the expertise of the literature searcher to identify key search terms and then develop key terms to include synonyms and controlled syntax. Hausner and colleagues set out the objective approach [ 73 ] and describe what may be done to validate it [ 74 ].

The use of limits

The guidance documents offer direction on the use of limits within a literature search. Limits can be used to focus literature searching on specific study designs or other markers (such as date), which reduces the number of studies returned by a literature search. The use of limits should be described and the implications explored [ 34 ], since limiting literature searching can introduce bias (explored above). Craven et al. have suggested the use of a supporting narrative to explain decisions made in the process of developing literature searches, and this advice would usefully capture decisions on the use of search limits [ 75 ].

Key stage five: Determining the process of literature searching and deciding where to search (bibliographic database searching)

Table 2 summarises the process of literature searching as reported in each guidance document. Searching bibliographic databases was consistently reported as the ‘first step’ to literature searching in all nine guidance documents.

Three documents reported specific guidance on where to search, in each case specific to the type of review their guidance informed, and as a minimum requirement [ 4 , 9 , 11 ]. Seven of the key guidance documents suggest that the selection of bibliographic databases depends on the topic of review [ 2 , 3 , 4 , 6 , 7 , 8 , 10 ], with two documents noting the absence of an agreed standard on what constitutes an acceptable number of databases searched [ 2 , 6 ].

The guidance documents summarise ‘how to’ search bibliographic databases in detail, and this guidance is further contextualised above in terms of developing the search strategy. The documents provide guidance on selecting bibliographic databases, in some cases stating acceptable minima (i.e. The Cochrane Handbook states Cochrane CENTRAL, MEDLINE and EMBASE), and in other cases simply listing bibliographic databases available to search. Studies have explored the value in searching specific bibliographic databases, with Wright et al. (2015) noting the contribution of CINAHL in identifying qualitative studies [ 76 ], Beckles et al. (2013) questioning the contribution of CINAHL to identifying clinical studies for guideline development [ 77 ], and Cooper et al. (2015) exploring the role of UK-focused bibliographic databases to identify UK-relevant studies [ 78 ]. The host of the database (e.g. OVID or ProQuest) has been shown to alter the search returns offered: Younger and Boddy report differing search returns from the same database (AMED) when it was accessed via different hosts [ 79 ].

The average number of bibliographic databases searched in systematic reviews rose from one to four in the period 1994–2014 [ 80 ], but there remains (as attested to by the guidance) no consensus on what constitutes an acceptable number of databases searched [ 48 ]. This is perhaps because the number of databases searched is the wrong question: researchers should focus on which databases were searched and why, and which databases were not searched and why. The discussion should re-orientate to the differential value of sources, but researchers need to think about how to report this in studies to allow findings to be generalised. Bethel (2017) has proposed ‘search summaries’, completed by the literature searcher, to record where included studies were identified, whether from databases (and which databases specifically) or supplementary search methods [ 81 ]. Search summaries document both the yield and the accuracy of searches, which could prospectively inform resource use and decisions to search or not to search specific databases in given topic areas. The prospective use of such data presupposes, however, that past searches are a potential predictor of future search performance (i.e. that each topic is representative rather than unique). In offering a body of practice, this data would be of greater practical use than current studies, which amount to little more than individual case studies [ 82 , 83 , 84 , 85 , 86 , 87 , 88 , 89 , 90 ].
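Bethel's search summary idea can be sketched as a small computation over where the included studies were found. The following Python fragment is an illustration only, with hypothetical record identifiers, and is not Bethel's published format:

```python
# An illustrative sketch of a 'search summary' (hypothetical record ids):
# derive yield, precision and unique contribution per database from where
# the included studies were identified.

def search_summary(retrieved_by_source, included_ids):
    """retrieved_by_source maps source name -> set of record ids it returned."""
    summary = {}
    for source, ids in retrieved_by_source.items():
        hits = ids & included_ids  # included studies this source found
        other_ids = set().union(
            *(v for s, v in retrieved_by_source.items() if s != source))
        summary[source] = {
            "retrieved": len(ids),
            "included_found": len(hits),
            "unique_included": len(hits - other_ids),  # found nowhere else
            "precision": round(len(hits) / len(ids), 3) if ids else 0.0,
        }
    return summary

retrieved = {
    "MEDLINE": {1, 2, 3, 4, 5},
    "EMBASE": {2, 3, 6},
    "CINAHL": {7, 8},
}
included = {2, 3, 7}

summary = search_summary(retrieved, included)
# e.g. in this toy data, CINAHL uniquely identified one included study (record 7)
```

Aggregated across reviews in a topic area, such per-source figures are what could prospectively inform decisions to search, or not to search, a given database.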

When to search databases is another question posed in the literature. Beyer et al. [ 91 ] report that databases can be prioritised for literature searching which, whilst not addressing the question of which databases to search, may at least bring clarity as to which databases to search first [ 91 ]. Relatedly, studies suggest that PubMed should be searched in addition to MEDLINE (OVID interface), since this improves the currency of systematic reviews [ 92 , 93 ]. Cooper et al. (2017) have tested the idea of database searching not as a primary search method (as suggested in the guidance) but as a supplementary search method, in order to manage the volume of studies identified for an environmental effectiveness systematic review. Their case study compared the effectiveness of database searching against a protocol using supplementary search methods and found that the latter identified more relevant studies for review than searching bibliographic databases [ 94 ].

Key stage six: Determining the process of literature searching and deciding where to search (supplementary search methods)

Table 2 also summarises the process of literature searching which follows bibliographic database searching. As Table 2 sets out, the guidance that supplementary literature search methods should be used in systematic reviews recurs across documents, but the order in which these methods are used, and the extent to which they are used, varies. We noted inconsistency in the labelling of supplementary search methods between guidance documents.

Rather than focus on the guidance on how to use the methods (which has been summarised in a recent review [ 95 ]), we focus on the aim or purpose of supplementary search methods.

The Cochrane Handbook reported that ‘efforts’ to identify unpublished studies should be made [ 9 ]. Four guidance documents [ 2 , 3 , 6 , 9 ] acknowledged that searching beyond bibliographic databases was necessary since ‘databases are not the only source of literature’ [ 2 ]. Only one document reported any guidance on determining when to use supplementary methods: the IQWiG handbook reported that the use of handsearching (in their example) could be determined on a ‘case-by-case basis’, which implies that the use of these methods is optional rather than mandatory. This is in contrast to the guidance (above) on bibliographic database searching.

The issue for supplementary search methods is similar in many ways to the issue of searching bibliographic databases: demonstrating value. The purpose and contribution of supplementary search methods in systematic reviews are increasingly acknowledged [ 37 , 61 , 62 , 96 , 97 , 98 , 99 , 100 , 101 ], but the value of these methods in identifying studies and data remains unclear. In a recently published review, Cooper et al. (2017) reviewed the literature on supplementary search methods to determine the advantages, disadvantages and resource implications of using them [ 95 ]. This review also summarises the key guidance and empirical studies and seeks to address the question of when to use these search methods and when not to [ 95 ]. The guidance is limited in this regard and, as Table 2 demonstrates, offers conflicting advice on the order of searching and the extent to which these search methods should be used in systematic reviews.

Key stage seven: Managing the references

Five of the documents provided guidance on managing references, for example downloading, de-duplicating and managing the output of literature searches [ 2 , 4 , 6 , 8 , 10 ]. This guidance typically itemised the available bibliographic management tools rather than offering specific guidance on how to use them [ 2 , 4 , 6 , 8 ]. The CEE handbook provided guidance on importing data where no direct export option is available (e.g. web-searching) [ 10 ].

The literature on using bibliographic management tools is small relative to the number of ‘how to’ videos on platforms such as YouTube (see, for example, [ 102 ]). These videos confirm the overall lack of ‘how to’ guidance identified in this study and offer useful instruction on managing references. Bramer et al. set out methods for de-duplicating data and reviewing references in Endnote [ 103 , 104 ], and Gall tests the direct search function within Endnote to access databases such as PubMed, finding a number of limitations [ 105 ]. Coar et al. and Ahmed et al. consider the role of the free, open-source tool Zotero [ 106 , 107 ]. Managing references is a key administrative function in the review process, particularly for documenting searches as required by PRISMA guidance.
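As a concrete, deliberately simplified illustration of the de-duplication step discussed above (an assumption-laden sketch, not Bramer et al.'s Endnote method), records retrieved from multiple databases can be matched on a normalised title plus publication year:

```python
# A simplified de-duplication sketch: match records on a normalised title
# and publication year, keeping the first copy of each record seen.

import re

def normalise(title):
    """Lower-case the title and collapse punctuation and whitespace."""
    return re.sub(r"[^a-z0-9]+", " ", title.lower()).strip()

def deduplicate(records):
    seen, unique = set(), []
    for rec in records:
        key = (normalise(rec["title"]), rec["year"])
        if key not in seen:  # first occurrence wins
            seen.add(key)
            unique.append(rec)
    return unique

records = [
    {"title": "Systematic reviews need systematic searchers.", "year": 2005, "source": "MEDLINE"},
    {"title": "Systematic Reviews Need Systematic Searchers", "year": 2005, "source": "CINAHL"},
    {"title": "Bias in meta-analytic research", "year": 1992, "source": "EMBASE"},
]

unique = deduplicate(records)  # the CINAHL copy is removed as a duplicate
```

Real de-duplication is more forgiving than this (matching on DOI, page numbers or fuzzy titles), which is why the dedicated methods in Endnote and similar tools exist.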

Key stage eight: Documenting the search

The Cochrane Handbook was the only guidance document to recommend a specific reporting guideline: Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) [ 9 ]. Six documents provided guidance on reporting the process of literature searching, with specific criteria to report [ 3 , 4 , 6 , 8 , 9 , 10 ]. There was consensus on reporting: the databases searched (and the host platform searched by), the search strategies used, and any use of limits (e.g. date, language or search filters; the CRD handbook called for these limits to be justified [ 6 ]). Three guidance documents reported that the number of studies identified should be recorded [ 3 , 6 , 10 ]. The number of duplicates identified [ 10 ], the screening decisions [ 3 ], a comprehensive list of grey literature sources searched (and full detail for other supplementary search methods) [ 8 ], and an annotation of search terms tested but not used [ 4 ] were identified as unique items in four documents.
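The consensus reporting items could, for example, be captured in a structured search record. The following Python sketch is hypothetical (all field names and values are illustrative only) and assembles one such entry ready for an appendix or supplementary file:

```python
# A hypothetical structured search record capturing the reporting items on
# which the guidance documents agree: database and host, full strategy,
# justified limits, and the number of records retrieved.

import json

search_record = {
    "database": "MEDLINE",
    "host": "Ovid",
    "date_searched": "2018-03-01",
    "strategy": [
        "1. exp Stroke/",
        "2. (stroke or cerebrovascular accident*).ti,ab.",
        "3. 1 or 2",
    ],
    "limits": [
        {"type": "language", "value": "English",
         "justification": "No resources available for translation"},
    ],
    "records_retrieved": 1432,
}

# Serialise for inclusion in an appendix or supplementary file
log_entry = json.dumps(search_record, indent=2)
```

Recording each search at this level of detail, at the time it is run, is what makes the strategy reproducible and auditable against PRISMA-style checklists.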

The Cochrane Handbook was the only guidance document to note that the full search strategies for each database should be included in the supplementary material of the review [ 9 ].

All guidance documents should ultimately deliver completed systematic reviews that fulfil the requirements of the PRISMA reporting guidelines [ 108 ]. The guidance broadly requires the reporting of data that corresponds with the requirements of the PRISMA statement although documents typically ask for diverse and additional items [ 108 ]. In 2008, Sampson et al. observed a lack of consensus on reporting search methods in systematic reviews [ 109 ] and this remains the case as of 2017, as evidenced in the guidance documents, and in spite of the publication of the PRISMA guidelines in 2009 [ 110 ]. It is unclear why the collective guidance does not more explicitly endorse adherence to the PRISMA guidance.

Reporting of literature searching is a key area in systematic reviews since it sets out clearly what was done and why the conclusions of the review can be believed [ 52 , 109 ]. Despite strong endorsement in the guidance documents, specific support in PRISMA guidance, and other related reporting standards (such as ENTREQ for qualitative evidence synthesis and STROBE for reviews of observational studies), authors still highlight the prevalence of poor standards of literature search reporting [ 31 , 110 , 111 , 112 , 113 , 114 , 115 , 116 , 117 , 118 , 119 ]. To explore the issues authors experience in reporting literature searches, and to examine uptake of PRISMA, Rader et al. [ 120 ] surveyed over 260 review authors to determine common problems, and their work summarises the practical aspects of reporting literature searching [ 120 ]. Atkinson et al. [ 121 ] have also analysed reporting standards for literature searching, summarising recommendations and gaps in the reporting of search strategies [ 121 ].

One area that is less well covered by the guidance, but which nevertheless appears in this literature, is the quality appraisal or peer review of literature search strategies. The PRESS checklist is the most prominent example; it provides evidence-based guidelines for the peer review of electronic search strategies [ 5 , 122 , 123 ]. A corresponding guideline for the documentation of supplementary search methods does not yet exist, although this idea is currently being explored.

How the reporting of the literature searching process corresponds to critical appraisal tools is an area for further research. In the survey undertaken by Rader et al. (2014), 86% of survey respondents (153/178) identified a need for further guidance on what aspects of the literature search process to report [ 120 ]. The PRISMA statement offers a brief summary of what to report but little practical guidance on how to report it [ 108 ]. Critical appraisal tools for systematic reviews, such as AMSTAR 2 (Shea et al. [ 124 ]) and ROBIS (Whiting et al. [ 125 ]), can usefully be read alongside PRISMA guidance, since they offer greater detail on how the reporting of the literature search will be appraised and, therefore, a proxy on what to report [ 124 , 125 ]. Further research in the form of a study comparing PRISMA with quality appraisal checklists for systematic reviews would begin to address the call, identified by Rader et al., for further guidance on what to report [ 120 ].

Limitations

Other handbooks exist.

A potential limitation of this literature review is the focus on guidance produced in Europe (the UK specifically) and Australia. We justify the decision for our selection of the nine guidance documents reviewed in this literature review in section “ Identifying guidance ”. In brief, these nine guidance documents were selected as the most relevant health care guidance that inform UK systematic reviewing practice, given that the UK occupies a prominent position in the science of health information retrieval. We acknowledge the existence of other guidance documents, such as those from North America (e.g. the Agency for Healthcare Research and Quality (AHRQ) [ 126 ], The Institute of Medicine [ 127 ] and the guidance and resources produced by the Canadian Agency for Drugs and Technologies in Health (CADTH) [ 128 ]). We comment further on this directly below.

The handbooks are potentially linked to one another

What is not clear is the extent to which the guidance documents inter-relate or provide guidance uniquely. The Cochrane Handbook, first published in 1994, is notably a key source of reference in guidance and systematic reviews beyond Cochrane reviews. It is not clear to what extent broadening the sample of guidance handbooks to include North American handbooks, and guidance handbooks from other relevant countries too, would alter the findings of this literature review or develop further support for the process model. Since we cannot be clear, we raise this as a potential limitation of this literature review. On our initial review of a sample of North American, and other, guidance documents (before selecting the guidance documents considered in this review), however, we do not consider that the inclusion of these further handbooks would alter significantly the findings of this literature review.

This is a literature review

A further limitation of this review was that the review of published studies is not a systematic review of the evidence for each key stage. It is possible that other relevant studies could help contribute to the exploration and development of the key stages identified in this review.

Conclusions

This literature review would appear to demonstrate the existence of a shared model of the literature searching process in systematic reviews. We call this model ‘the conventional approach’, since it appears to be common convention in nine different guidance documents.

The findings reported above reveal eight key stages in the process of literature searching for systematic reviews. These key stages are consistently reported in the nine guidance documents which suggests consensus on the key stages of literature searching, and therefore the process of literature searching as a whole, in systematic reviews.

In Table 2 , we demonstrate consensus regarding the application of literature search methods. All guidance documents distinguish between primary and supplementary search methods. Bibliographic database searching is consistently the first method of literature searching referenced in each guidance document. Whilst the guidance uniformly supports the use of supplementary search methods, the guidance across documents is diverse and there is little evidence of a consistent process. This may reflect differences in the core focus of each document, linked to differences in identifying effectiveness studies or qualitative studies, for instance.

Eight of the nine guidance documents reported on the aims of literature searching. The shared understanding was that literature searching should be thorough and comprehensive in its aim and that this process should be reported transparently so that it could be reproduced. Whilst only three documents explicitly link this understanding to minimising bias, it is clear that comprehensive literature searching is implicitly linked to ‘not missing relevant studies’, which is approximately the same point.

Defining the key stages in this review helps categorise the scholarship available, and it prioritises areas for development or further study. The supporting studies on preparing for literature searching (key stage three, ‘preparation’) were, for example, comparatively few, and yet this key stage represents a decisive moment in literature searching for systematic reviews. It is where the search strategy structure is determined, search terms are chosen or discarded, and the resources to be searched are selected. Information specialists, librarians and researchers are well placed to develop these and other areas within the key stages we identify.

This review calls for further research to determine the suitability of using the conventional approach. The publication dates of the guidance documents which underpin the conventional approach may raise questions as to whether the process which they each report remains valid for current systematic literature searching. In addition, it may be useful to test whether it is desirable to use the same process model of literature searching for qualitative evidence synthesis as that for reviews of intervention effectiveness, which this literature review demonstrates is presently recommended best practice.

Abbreviations

BeHEMoTh: Behaviour of interest; Health context; Exclusions; Models or Theories

CDSR: Cochrane Database of Systematic Reviews

CENTRAL: The Cochrane Central Register of Controlled Trials

DARE: Database of Abstracts of Reviews of Effects

ENTREQ: Enhancing transparency in reporting the synthesis of qualitative research

IQWiG: Institute for Quality and Efficiency in Healthcare

NICE: National Institute for Clinical Excellence

PICO: Population, Intervention, Comparator, Outcome

PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses

SPICE: Setting, Perspective, Intervention, Comparison, Evaluation

SPIDER: Sample, Phenomenon of Interest, Design, Evaluation, Research type

STROBE: STrengthening the Reporting of OBservational studies in Epidemiology

TSC: Trial Search Co-ordinators

Booth A. Unpacking your literature search toolbox: on search styles and tactics. Health Information & Libraries Journal. 2008;25(4):313–7.

Petticrew M, Roberts H. Systematic reviews in the social sciences: a practical guide. Oxford: Blackwell Publishing Ltd; 2006.

Institute for Quality and Efficiency in Health Care (IQWiG). IQWiG methods resources: 7 Information retrieval. 2014. Available from: https://www.ncbi.nlm.nih.gov/books/NBK385787/ .

NICE: National Institute for Health and Care Excellence. Developing NICE guidelines: the manual. 2014. Available from: https://www.nice.org.uk/media/default/about/what-we-do/our-programmes/developing-nice-guidelines-the-manual.pdf .

Sampson M, McGowan J, Lefebvre C, Moher D, Grimshaw J. Peer review of electronic search strategies: PRESS; 2008.

Centre for Reviews & Dissemination. Systematic reviews – CRD’s guidance for undertaking reviews in healthcare. York: Centre for Reviews and Dissemination, University of York; 2009.

EUnetHTA: European Network for Health Technology Assessment. Process of information retrieval for systematic reviews and health technology assessments on clinical effectiveness. 2016. Available from: http://www.eunethta.eu/sites/default/files/Guideline_Information_Retrieval_V1-1.pdf .

Kugley S, Wade A, Thomas J, Mahood Q, Jørgensen AMK, Hammerstrøm K, Sathe N. Searching for studies: a guide to information retrieval for Campbell systematic reviews. Oslo: Campbell Collaboration; 2017. Available from: https://www.campbellcollaboration.org/library/searching-for-studies-information-retrieval-guide-campbell-reviews.html

Lefebvre C, Manheimer E, Glanville J. Chapter 6: searching for studies. In: Higgins JPT, Green S, editors. Cochrane handbook for systematic reviews of interventions; 2011.

Collaboration for Environmental Evidence. Guidelines for systematic review and evidence synthesis in environmental management. Environmental Evidence; 2013. Available from: http://www.environmentalevidence.org/wp-content/uploads/2017/01/Review-guidelines-version-4.2-final-update.pdf .

The Joanna Briggs Institute. Joanna Briggs Institute reviewers’ manual: 2014 edition. The Joanna Briggs Institute; 2014. Available from: https://joannabriggs.org/assets/docs/sumari/ReviewersManual-2014.pdf

Beverley CA, Booth A, Bath PA. The role of the information specialist in the systematic review process: a health information case study. Health Inf Libr J. 2003;20(2):65–74.

Harris MR. The librarian's roles in the systematic review process: a case study. Journal of the Medical Library Association. 2005;93(1):81–7.

Egger JB. Use of recommended search strategies in systematic reviews and the impact of librarian involvement: a cross-sectional survey of recent authors. PLoS One. 2015;10(5):e0125931.

Li L, Tian J, Tian H, Moher D, Liang F, Jiang T, et al. Network meta-analyses could be improved by searching more sources and by involving a librarian. J Clin Epidemiol. 2014;67(9):1001–7.

McGowan J, Sampson M. Systematic reviews need systematic searchers. J Med Libr Assoc. 2005;93(1):74–80.

Rethlefsen ML, Farrell AM, Osterhaus Trzasko LC, Brigham TJ. Librarian co-authors correlated with higher quality reported search strategies in general internal medicine systematic reviews. J Clin Epidemiol. 2015;68(6):617–26.

Weller AC. Mounting evidence that librarians are essential for comprehensive literature searches for meta-analyses and Cochrane reports. J Med Libr Assoc. 2004;92(2):163–4.

Swinkels A, Briddon J, Hall J. Two physiotherapists, one librarian and a systematic literature review: collaboration in action. Health Info Libr J. 2006;23(4):248–56.

Foster M. An overview of the role of librarians in systematic reviews: from expert search to project manager. EAHIL. 2015;11(3):3–7.

Lawson L. Operating outside library walls. 2004.

Vassar M, Yerokhin V, Sinnett PM, Weiher M, Muckelrath H, Carr B, et al. Database selection in systematic reviews: an insight through clinical neurology. Health Inf Libr J. 2017;34(2):156–64.

Townsend WA, Anderson PF, Ginier EC, MacEachern MP, Saylor KM, Shipman BL, et al. A competency framework for librarians involved in systematic reviews. Journal of the Medical Library Association : JMLA. 2017;105(3):268–75.

Cooper ID, Crum JA. New activities and changing roles of health sciences librarians: a systematic review, 1990-2012. Journal of the Medical Library Association : JMLA. 2013;101(4):268–77.

Crum JA, Cooper ID. Emerging roles for biomedical librarians: a survey of current practice, challenges, and changes. Journal of the Medical Library Association : JMLA. 2013;101(4):278–86.

Dudden RF, Protzko SL. The systematic review team: contributions of the health sciences librarian. Med Ref Serv Q. 2011;30(3):301–15.

Golder S, Loke Y, McIntosh HM. Poor reporting and inadequate searches were apparent in systematic reviews of adverse effects. J Clin Epidemiol. 2008;61(5):440–8.

Maggio LA, Tannery NH, Kanter SL. Reproducibility of literature search reporting in medical education reviews. Academic medicine : journal of the Association of American Medical Colleges. 2011;86(8):1049–54.

Meert D, Torabi N, Costella J. Impact of librarians on reporting of the literature searching component of pediatric systematic reviews. Journal of the Medical Library Association : JMLA. 2016;104(4):267–77.

Morris M, Boruff JT, Gore GC. Scoping reviews: establishing the role of the librarian. Journal of the Medical Library Association : JMLA. 2016;104(4):346–54.

Koffel JB, Rethlefsen ML. Reproducibility of search strategies is poor in systematic reviews published in high-impact pediatrics, cardiology and surgery journals: a cross-sectional study. PLoS One. 2016;11(9):e0163309.

Fehrmann P, Thomas J. Comprehensive computer searches and reporting in systematic reviews. Research Synthesis Methods. 2011;2(1):15–32.

Booth A. Searching for qualitative research for inclusion in systematic reviews: a structured methodological review. Systematic Reviews. 2016;5(1):74.

Egger M, Juni P, Bartlett C, Holenstein F, Sterne J. How important are comprehensive literature searches and the assessment of trial quality in systematic reviews? Empirical study. Health technology assessment (Winchester, England). 2003;7(1):1–76.

Tricco AC, Tetzlaff J, Sampson M, Fergusson D, Cogo E, Horsley T, et al. Few systematic reviews exist documenting the extent of bias: a systematic review. J Clin Epidemiol. 2008;61(5):422–34.

Booth A. How much searching is enough? Comprehensive versus optimal retrieval for technology assessments. Int J Technol Assess Health Care. 2010;26(4):431–5.

Papaioannou D, Sutton A, Carroll C, Booth A, Wong R. Literature searching for social science systematic reviews: consideration of a range of search techniques. Health Inf Libr J. 2010;27(2):114–22.

Petticrew M. Time to rethink the systematic review catechism? Moving from ‘what works’ to ‘what happens’. Systematic Reviews. 2015;4(1):36.

Betrán AP, Say L, Gülmezoglu AM, Allen T, Hampson L. Effectiveness of different databases in identifying studies for systematic reviews: experience from the WHO systematic review of maternal morbidity and mortality. BMC Med Res Methodol. 2005;5

Felson DT. Bias in meta-analytic research. J Clin Epidemiol. 1992;45(8):885–92.

Franco A, Malhotra N, Simonovits G. Publication bias in the social sciences: unlocking the file drawer. Science. 2014;345(6203):1502–5.

Hartling L, Featherstone R, Nuspl M, Shave K, Dryden DM, Vandermeer B. Grey literature in systematic reviews: a cross-sectional study of the contribution of non-English reports, unpublished studies and dissertations to the results of meta-analyses in child-relevant reviews. BMC Med Res Methodol. 2017;17(1):64.

Schmucker CM, Blümle A, Schell LK, Schwarzer G, Oeller P, Cabrera L, et al. Systematic review finds that study data not published in full text articles have unclear impact on meta-analyses results in medical research. PLoS One. 2017;12(4):e0176210.

Egger M, Zellweger-Zahner T, Schneider M, Junker C, Lengeler C, Antes G. Language bias in randomised controlled trials published in English and German. Lancet (London, England). 1997;350(9074):326–9.

Moher D, Pham B, Lawson ML, Klassen TP. The inclusion of reports of randomised trials published in languages other than English in systematic reviews. Health technology assessment (Winchester, England). 2003;7(41):1–90.

Pham B, Klassen TP, Lawson ML, Moher D. Language of publication restrictions in systematic reviews gave different results depending on whether the intervention was conventional or complementary. J Clin Epidemiol. 2005;58(8):769–76.

Mills EJ, Kanters S, Thorlund K, Chaimani A, Veroniki A-A, Ioannidis JPA. The effects of excluding treatments from network meta-analyses: survey. BMJ : British Medical Journal. 2013;347

Hartling L, Featherstone R, Nuspl M, Shave K, Dryden DM, Vandermeer B. The contribution of databases to the results of systematic reviews: a cross-sectional study. BMC Med Res Methodol. 2016;16(1):127.

van Driel ML, De Sutter A, De Maeseneer J, Christiaens T. Searching for unpublished trials in Cochrane reviews may not be worth the effort. J Clin Epidemiol. 2009;62(8):838–44.e3.

Buchberger B, Krabbe L, Lux B, Mattivi JT. Evidence mapping for decision making: feasibility versus accuracy - when to abandon high sensitivity in electronic searches. German medical science : GMS e-journal. 2016;14:Doc09.

Lorenc T, Pearson M, Jamal F, Cooper C, Garside R. The role of systematic reviews of qualitative evidence in evaluating interventions: a case study. Research Synthesis Methods. 2012;3(1):1–10.

Gough D. Weight of evidence: a framework for the appraisal of the quality and relevance of evidence. Res Pap Educ. 2007;22(2):213–28.

Barroso J, Gollop CJ, Sandelowski M, Meynell J, Pearce PF, Collins LJ. The challenges of searching for and retrieving qualitative studies. West J Nurs Res. 2003;25(2):153–78.

Britten N, Garside R, Pope C, Frost J, Cooper C. Asking more of qualitative synthesis: a response to Sally Thorne. Qual Health Res. 2017;27(9):1370–6.

Booth A, Carroll C. Systematic searching for theory to inform systematic reviews: is it feasible? Is it desirable? Health Info Libr J. 2015;32(3):220–35.

Kwon Y, Powelson SE, Wong H, Ghali WA, Conly JM. An assessment of the efficacy of searching in biomedical databases beyond MEDLINE in identifying studies for a systematic review on ward closures as an infection control intervention to control outbreaks. Syst Rev. 2014;3:135.

Nussbaumer-Streit B, Klerings I, Wagner G, Titscher V, Gartlehner G. Assessing the validity of abbreviated literature searches for rapid reviews: protocol of a non-inferiority and meta-epidemiologic study. Systematic Reviews. 2016;5:197.

Wagner G, Nussbaumer-Streit B, Greimel J, Ciapponi A, Gartlehner G. Trading certainty for speed - how much uncertainty are decisionmakers and guideline developers willing to accept when using rapid reviews: an international survey. BMC Med Res Methodol. 2017;17(1):121.

Ogilvie D, Hamilton V, Egan M, Petticrew M. Systematic reviews of health effects of social interventions: 1. Finding the evidence: how far should you go? J Epidemiol Community Health. 2005;59(9):804–8.

Royle P, Milne R. Literature searching for randomized controlled trials used in Cochrane reviews: rapid versus exhaustive searches. Int J Technol Assess Health Care. 2003;19(4):591–603.

Pearson M, Moxham T, Ashton K. Effectiveness of search strategies for qualitative research about barriers and facilitators of program delivery. Eval Health Prof. 2011;34(3):297–308.

Levay P, Raynor M, Tuvey D. The contributions of MEDLINE, other bibliographic databases and various search techniques to NICE public health guidance. 2015;10(1):19.

Nussbaumer-Streit B, Klerings I, Wagner G, Heise TL, Dobrescu AI, Armijo-Olivo S, et al. Abbreviated literature searches were viable alternatives to comprehensive searches: a meta-epidemiological study. J Clin Epidemiol. 2018;102:1–11.

Briscoe S, Cooper C, Glanville J, Lefebvre C. The loss of the NHS EED and DARE databases and the effect on evidence synthesis and evaluation. Res Synth Methods. 2017;8(3):256–7.

Stansfield C, O'Mara-Eves A, Thomas J. Text mining for search term development in systematic reviewing: a discussion of some methods and challenges. Research Synthesis Methods. n/a-n/a.

Petrova M, Sutcliffe P, Fulford KW, Dale J. Search terms and a validated brief search filter to retrieve publications on health-related values in Medline: a word frequency analysis study. Journal of the American Medical Informatics Association : JAMIA. 2012;19(3):479–88.

Stansfield C, Thomas J, Kavanagh J. 'Clustering' documents automatically to support scoping reviews of research: a case study. Res Synth Methods. 2013;4(3):230–41.

Methley AM, Campbell S, Chew-Graham C, McNally R, Cheraghi-Sohi S. PICO, PICOS and SPIDER: a comparison study of specificity and sensitivity in three search tools for qualitative systematic reviews. BMC Health Serv Res. 2014;14:579.

Booth A. Clear and present questions: formulating questions for evidence based practice. Library Hi Tech. 2006;24(3):355–68.

Cooke A, Smith D, Booth A. Beyond PICO: the SPIDER tool for qualitative evidence synthesis. Qual Health Res. 2012;22(10):1435–43.


Acknowledgements

CC acknowledges the supervision offered by Professor Chris Hyde.

This publication forms a part of CC’s PhD. CC’s PhD was funded through the National Institute for Health Research (NIHR) Health Technology Assessment (HTA) Programme (Project Number 16/54/11). The open access fee for this publication was paid for by Exeter Medical School.

RG and NB were partially supported by the National Institute for Health Research (NIHR) Collaboration for Leadership in Applied Health Research and Care South West Peninsula.

The views expressed are those of the author(s) and not necessarily those of the NHS, the NIHR or the Department of Health.

Author information

Authors and Affiliations

Institute of Health Research, University of Exeter Medical School, Exeter, UK

Chris Cooper & Jo Varley-Campbell

HEDS, School of Health and Related Research (ScHARR), University of Sheffield, Sheffield, UK

Andrew Booth

Nicky Britten

European Centre for Environment and Human Health, University of Exeter Medical School, Truro, UK

Ruth Garside


Contributions

CC conceived the idea for this study and wrote the first draft of the manuscript. CC discussed this publication in PhD supervision with AB and separately with JVC. CC revised the publication with input and comments from AB, JVC, RG and NB. All authors revised the manuscript prior to submission. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Chris Cooper.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Additional file

Additional file 1:

Appendix tables and PubMed search strategy. Key studies used for pearl growing per key stage, working data extraction tables and the PubMed search strategy. (DOCX 30 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated.


About this article

Cite this article

Cooper, C., Booth, A., Varley-Campbell, J. et al. Defining the process to literature searching in systematic reviews: a literature review of guidance and supporting studies. BMC Med Res Methodol 18 , 85 (2018). https://doi.org/10.1186/s12874-018-0545-3

Download citation

Received : 20 September 2017

Accepted : 06 August 2018

Published : 14 August 2018

DOI : https://doi.org/10.1186/s12874-018-0545-3


Keywords

  • Literature Search Process
  • Citation Chasing
  • Tacit Models
  • Unique Guidance
  • Information Specialists

BMC Medical Research Methodology

ISSN: 1471-2288



A systematic approach to searching: an efficient and complete method to develop literature searches

Affiliations

  • 1 Biomedical Information Specialist, Medical Library, Erasmus MC-Erasmus University Medical Centre, Rotterdam, The Netherlands.
  • 2 Medical Library, Erasmus MC-Erasmus University Medical Centre, Rotterdam, The Netherlands.
  • 3 Spencer S. Eccles Health Sciences Library, University of Utah, Salt Lake City, UT.
  • 4 Department of Family Medicine, School for Public Health and Primary Care (CAPHRI), Maastricht University, Maastricht, The Netherlands, and Kleijnen Systematic Reviews, York, United Kingdom.
  • PMID: 30271302
  • PMCID: PMC6148622
  • DOI: 10.5195/jmla.2018.283

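The optimization technique this article describes — comparing the records retrieved by thesaurus terms with those retrieved only by free-text words, to surface candidate search terms the strategy is missing — can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the authors' actual workflow: the record structure, the field names (`mesh`, `text`), and the sample data are all hypothetical, and it assumes records have already been exported from the database together with their indexed thesaurus terms.

```python
from collections import Counter

# Hypothetical exported records: an ID, the thesaurus terms the
# database indexers assigned, and the title/abstract text.
records = [
    {"id": 1, "mesh": ["Neoplasms"], "text": "tumor growth in mice"},
    {"id": 2, "mesh": ["Neoplasms", "Mice"], "text": "cancer cells"},
    {"id": 3, "mesh": ["Carcinoma"], "text": "tumor margins"},
]

def hits(records, thesaurus_terms, free_text_terms):
    """Which records each half of the strategy retrieves."""
    by_thesaurus = {r["id"] for r in records
                    if any(t in r["mesh"] for t in thesaurus_terms)}
    by_text = {r["id"] for r in records
               if any(w in r["text"] for w in free_text_terms)}
    return by_thesaurus, by_text

def candidate_terms(records, thesaurus_terms, free_text_terms):
    """Thesaurus terms indexed on records found ONLY by free-text
    words: candidates to add to the search strategy."""
    by_thesaurus, by_text = hits(records, thesaurus_terms, free_text_terms)
    only_text = by_text - by_thesaurus
    counts = Counter(term
                     for r in records if r["id"] in only_text
                     for term in r["mesh"]
                     if term not in thesaurus_terms)
    return counts.most_common()

print(candidate_terms(records, ["Neoplasms"], ["tumor"]))
# → [('Carcinoma', 1)]
```

Record 3 is retrieved by the free-text word "tumor" but not by the thesaurus term "Neoplasms"; its indexed heading "Carcinoma" therefore surfaces as a candidate term to add before the next iteration.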


Figure: Schema for determining the optimal order of elements

Figure: Schematic representation of translation between databases used at Erasmus University Medical Center
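Translating a strategy between database interfaces is largely mechanical rewriting of field codes, which is what makes it possible to automate with the Microsoft Word macros the article mentions. A toy sketch of the same idea in Python — not the authors' macros: only two PubMed tags are mapped, and a real translation must also handle proximity operators, subheadings, and many more field codes:

```python
import re

def pubmed_to_ovid(query: str) -> str:
    """Rewrite a PubMed query into Ovid MEDLINE syntax (two tags only)."""
    # "term[tiab]" (title/abstract word) -> "term.ti,ab."
    query = re.sub(r'(\S+)\[tiab\]', r'\1.ti,ab.', query)
    # "Term[mh]" (MeSH heading) -> "exp Term/" (exploded subject heading)
    query = re.sub(r'([\w ]+)\[mh\]', r'exp \1/', query)
    return query

print(pubmed_to_ovid('neoplasms[mh] OR tumor*[tiab]'))
# → exp neoplasms/ OR tumor*.ti,ab.
```

Because every interface uses a regular, position-based syntax for field codes, a handful of such substitution rules covers most of a strategy; the hard remainder (thesaurus terms that do not exist in the target database) still needs manual review.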


