A steadily growing number of review articles are being published, but the methods used to ensure the best possible knowledge base vary widely. We argue that undertaking literature searches more systematically and with a wider scope could increase the reliability of the conclusions.
In 2010 we submitted a review article to the European Journal of Cardiovascular Prevention and Rehabilitation on the evidence for the existence of the «smoker's paradox», i.e. the lower post-infarction mortality observed among smokers in adjusted analyses of patients who had suffered a myocardial infarction. The article was rejected and severely criticised because the PRISMA recommendations had not been complied with (1). We restructured the manuscript accordingly, conducted a systematic search and appended documented search strategies with the number of hits from the databases Embase, Medline and CENTRAL. This new version was accepted almost immediately for publication by BMC Medicine (2).
The main finding of the review was that the «smoker's paradox» was evident only in the pre-thrombolytic and thrombolytic eras, not in more recent studies of patients with acute coronary syndrome. In the original, unsystematic version we had been unable to demonstrate this, most likely because of retrieval bias, i.e. the failure to retrieve all relevant research. We have since written two more review articles using the same method, and these were published in prestigious international journals without much difficulty (3, 4).
Research-based knowledge is crucial for maintaining high-quality medical diagnostics and treatment, but not all research-based knowledge is equally accessible. Scientific review articles can therefore be a valuable aid to health personnel in their daily work. The number of review articles published is growing constantly (5, 6). Such articles can be prepared with various methods, and a distinction is drawn between systematic reviews and traditional narrative reviews. The Cochrane Collaboration defines a systematic review as an article with a clearly specified set of objectives, in which systematic and explicit methods are used to identify, select and critically review relevant research and to analyse the data from the included studies (7, Section 1.2.2).
In its guidelines to authors, the Journal of the Norwegian Medical Association says the following about the literature search for a review article (8): «We recommend that you undertake a thorough search in source literature and databases, from which you report the following: search terms, sources (databases, review of reference lists etc.), restrictions on the searches (the day when the search was completed, language, type of studies etc.) and the number of hits returned by the search. Irrespective of whether the article is based on literature searches or, for example, a personal literature archive, you must state the criteria for selection and justification of these criteria.»
In practice, this means that the authors of articles in the JNMA are free to choose what kind of knowledge they base their conclusions on, with all the risks of bias that this implies (9). In this article we highlight some disadvantages of the current practice of publishing mainly unsystematic review articles in this journal, and give our reasons for recommending more widespread use of systematic searches when preparing review articles.
Guidelines for systematic searches
The systematic search is one of the key factors that distinguish systematic review articles from traditional review articles. Its objective is to identify all the relevant literature on a topic, and the quality of the search is therefore critical. Several sets of guidelines are available as an aid to preparing and reporting systematic review articles. The PRISMA recommendations (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) (1) are currently among the most widely recognised and are used, for example, by the Norwegian Knowledge Centre for the Health Services.
For a more detailed description of searches for evidence summaries, however, we recommend The Cochrane Handbook, which contains the Cochrane Collaboration's guidelines for preparing systematic review articles (7). The handbook contains a separate chapter on searching, which describes, inter alia, which sources to use, how to plan the search process and how to design the search strategies. It also describes in detail how the search process should be reported in the review (7, Chapter 6).
Precise reporting of the search process, accompanied by a copy of the search strategy, makes it possible to replicate the search and to evaluate whether the knowledge base underlying the review's conclusions is complete. Evaluations of documented searches in review articles show that most of them contain errors (10, 11). To help improve the quality of such searches, Sampson and collaborators reviewed existing guidelines for the development and evaluation of search strategies and, on this basis, produced the evidence-based guidelines known as the Peer Review of Electronic Search Strategies (PRESS) (12). The authors summarised and evaluated the criteria thus identified and prepared a validated checklist for assessing the quality and completeness of electronic search strategies. This checklist, along with The Cochrane Handbook, is in our opinion the best starting point when preparing a systematic search.
The quality of a search is also related to the qualifications of the searcher (13–15). To produce an optimal, well-targeted search, we recommend close cooperation between medical and library expertise, which must jointly ensure that the review is prepared in accordance with the recommended guidelines.
Different user interfaces
Since most review articles in the Journal of the Norwegian Medical Association are based on searches in PubMed, we will add a few words about the various user interfaces and how they may affect the search result. The main distinction is between interfaces that rank results by relevance and interfaces based on Boolean logic. Medline, the most widely recognised medical database, can be searched through several different interfaces.
The free version, PubMed, ranks results by relevance. This means that complex algorithms, such as synonym control, are embedded in the system. This simplifies searching and is intended to place the most relevant articles at the top of the list of hits. The challenge with an interface that ranks results by relevance is whether the system actually handles our search terms the way we intend. For example, if we enter the abbreviation «asd» in PubMed and expect the system to understand that we are interested in «atrial septal defect», we will be disappointed. In addition to the abbreviation itself, the search captures only «Arthrop Struct Dev», the abbreviated name of a journal.
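One way to check how PubMed has interpreted a term is to inspect the query translation it reports. The sketch below is our illustration, not part of the original search work: it uses NCBI's public E-utilities esearch endpoint, and the JSON field names follow that service's documented output and should be verified against the live service.

```python
# Hypothetical illustration: ask PubMed (via NCBI E-utilities) how it
# interprets a search term. Field names follow the documented esearch
# JSON output; verify against the live service before relying on them.
import requests

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def inspect_term(term: str) -> None:
    """Print the hit count and PubMed's own translation of a search term."""
    params = {"db": "pubmed", "term": term, "retmode": "json", "retmax": 0}
    result = requests.get(ESEARCH, params=params, timeout=30).json()["esearchresult"]
    print(f"Term: {term}")
    print(f"  hits: {result.get('count')}")
    print(f"  translated to: {result.get('querytranslation')}")

inspect_term("asd")                   # not mapped to 'atrial septal defect'
inspect_term("atrial septal defect")  # explicit phrase for comparison
```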
In Norway we have access to Medline Ovid through the Health Library. «Advanced search» in Medline Ovid is an interface based purely on Boolean logic. In such searches, we use operators from George Boole's symbolic logic to combine search terms and search results. The two most important operators are the logical product AND, which means that all the search terms must be present in the result, and the logical sum OR, which means that at least one of the terms must be present. The searcher thus retains full control of the search and must ensure that all synonyms are included and correctly combined.
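The transparency of a Boolean strategy also makes it easy to document and re-run. As a minimal sketch (our illustration, run against PubMed's freely available E-utilities rather than Medline Ovid, and with invented example terms), synonyms for each concept are combined with OR, the concepts are combined with AND, and the resulting strategy string and its hit count can be reported verbatim:

```python
# Minimal sketch of a documentable Boolean strategy, run against PubMed's
# E-utilities. The concepts and synonyms are invented examples; a real
# strategy would be built with a librarian and adapted to each database.
import requests

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def hit_count(query: str) -> int:
    """Return the number of PubMed records matching a Boolean query."""
    params = {"db": "pubmed", "term": query, "retmode": "json", "retmax": 0}
    data = requests.get(ESEARCH, params=params, timeout=30).json()
    return int(data["esearchresult"]["count"])

# Each concept: synonyms combined with the logical sum OR
condition = '("myocardial infarction" OR "acute coronary syndrome")'
exposure = '(smoking OR smoker OR tobacco)'

# Concepts combined with the logical product AND: both must be present
strategy = f"{condition} AND {exposure}"
print(strategy)
print("hits:", hit_count(strategy))
```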
Both types of interface have advantages and disadvantages, but searches based on Boolean logic are more transparent because they are easier to document.
Literature searches in review articles in the JNMA in 2012
An inspection of the JNMA for 2012 showed that a total of 30 review articles had been published. None of them complied with the most important requirements in The Cochrane Handbook for searches underlying systematic review articles, i.e. that searches are undertaken in Embase, Medline and CENTRAL, that the search strategy for one of the databases is appended, and that the search process is fully documented (7).
As we see it, the reporting of the search process was on the whole adequate, although a complete search strategy for one of the databases was appended to only one of the review articles (16). In most articles, the searches were restricted to a single database, most often PubMed. We cannot claim that the conclusions of the review articles published in the JNMA in 2012 would have been different if the searches had covered other sources and been conducted more systematically, but our own experience indicates that a systematic search can make a difference. Moreover, research shows that systematic review articles provide more reliable answers (17).
Conclusion
Systematic review articles came into being because the scientific criteria for preparing traditional review articles were insufficiently strict, and such reviews were therefore not considered reliable (1, 11, 18). We do not believe that a review article, systematic or not, can produce any medical truth (19). It only provides a snapshot of the status of a medical problem at a given time, based on the available evidence. That evidence base should nevertheless be as good as possible, and in our opinion a well-executed search in a selection of electronic databases is the best starting point for ensuring that as many good-quality studies as possible on the topic have been retrieved. Searches in electronic databases will never be the only method for retrieving information for review articles, but they are the most effective and the only replicable one.
Like other research, evidence summaries must meet certain requirements: the method must be transparent and replicable (20, 21). In this way, the conclusions can be discussed and challenged by others. We believe the Journal of the Norwegian Medical Association ought to require authors of review articles to use systematic, well-defined searches in accordance with applicable guidelines (1, 7, 22), and to encourage them to contact their library for any assistance needed.