
Failure to report results of registered trials of treatments for shoulder complaints

Robin Holtedahl

The reliability of research findings may be weakened in several ways. Trials with positive, statistically significant findings that support the hypotheses of the authors or editors have a greater probability of being published (publication bias) (1, 2). Other common sources of bias are selective reporting (incomplete reporting of predefined clinical outcome measures) and changes of outcome measure between the protocol and the published article (3–6). This type of dishonest research practice is astonishingly common and may result in exaggerated or simply incorrect assertions about treatment efficacy in systematic reviews, meta-analyses and clinical guidelines (7).

Researchers’ obligation to publish the results of human clinical trials is laid down both in legislation and in ethical guidelines, including those of the World Health Organization (8). To remedy underreporting and lack of transparency in medical research, a database for mandatory registration of human clinical trials, ClinicalTrials.gov (CTG), was established in the USA in the late 1990s. Several similar registries have been established since.

The two largest are ISRCTN (9), established in 2000, and the European Union Clinical Trials Register (EUCTR)/European Clinical Trials Database (EUdraCT), which has covered trials in the EU area since 2004 (10). Since 2007, ClinicalTrials.gov has required that results be entered in the registry no later than 12 months after primary completion, and EUdraCT introduced a similar requirement in 2012 (11, 12). All the registries offer free electronic searching.

Shoulder complaints are among the most common reasons for consultation for musculoskeletal disorders, and both surgical and conservative treatment may be indicated. Therapeutic practice is, however, only to a limited extent evidence-based, and published studies are characterised by bias of various kinds (13, 14). I wanted to determine the number of trials of interventions for shoulder complaints registered in the three study registries mentioned above, and the percentage of completed trials that had made their results publicly available by reporting directly to the registry and/or through a peer-reviewed publication.

Method

Searches were conducted in the study registries ClinicalTrials.gov, EUdraCT and ISRCTN using the search word “shoulder” in combination with “rotator cuff”, “impingement”, “arthritis”, “osteoarthr*”, “fracture”, “labral”, “dislocation” and “adhesive capsulitis” for completed randomised and non-randomised phase 4 trials in the period 1 January 2000–31 December 2018. Because of the requirement that results be reported to registries within 12 months of completion, studies completed later than 31 December 2018 were excluded. Studies concerning prevention, diagnostics or shoulder pain secondary to other disorders were also excluded. The search cut-off date was 31 May 2020.
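
A minimal sketch of how the combined search expressions could be assembled is shown below. The syntax is illustrative only; the actual query interfaces of the three registries differ, and no registry API is called here.

```python
# Illustrative only: combining the base term with each condition keyword.
# The actual query syntax of ClinicalTrials.gov, EUdraCT and ISRCTN differs.
conditions = [
    '"rotator cuff"', '"impingement"', '"arthritis"', '"osteoarthr*"',
    '"fracture"', '"labral"', '"dislocation"', '"adhesive capsulitis"',
]

queries = [f"shoulder AND {condition}" for condition in conditions]
for query in queries:
    print(query)
```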

The relevant interventions were grouped into four categories: 1) surgery, 2) physical therapy/training (possibly in combination with other conservative treatment, including pharmacological treatment), 3) other conservative treatment and 4) analgesia/anaesthesia. For each trial, the geographical location (continent), type of shoulder complaint (adhesive capsulitis (frozen shoulder), osteoarthritis/dislocation/fracture, subacromial pain or unspecified shoulder pain), number of participants (based on reported data or information in the protocol), the defined start and completion dates and type of funding (“industry” versus “other”) were recorded.

The primary outcome measure was the number of trials with results available in the form of registry data and/or publication in a peer-reviewed journal. If the registry did not link to a published article, PubMed was searched for potential articles based on the listed contact person or principal investigator, possibly in combination with key words from the trial description, the identification number, the recruitment period and/or the number of trial subjects in the registry – if necessary in full text. Pearson’s chi-squared test, the Kruskal-Wallis test and odds ratios (OR) with 95 % confidence intervals (CI) were used for subgroup analyses. Statistical calculations were performed in MedCalc version 19.2.
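
As an illustration of the kinds of subgroup analyses listed above, the sketch below uses Python with scipy rather than MedCalc, and entirely hypothetical counts; it is not a reproduction of the actual analysis.

```python
# Hypothetical counts, for illustration only - not the study's data.
import math
from scipy.stats import chi2_contingency, kruskal

# Pearson's chi-squared test on a 2x2 table:
# rows = randomised / non-randomised, columns = without / with reported results.
table = [[120, 100],
         [40, 30]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-squared = {chi2:.2f}, p = {p:.2f}")

# Kruskal-Wallis test comparing an outcome across three hypothetical groups.
kw = kruskal([30, 45, 60, 80], [25, 40, 55, 90], [35, 50, 70, 100])
print(f"Kruskal-Wallis p = {kw.pvalue:.2f}")

# Odds ratio with a 95 % confidence interval (Woolf/logit method).
def odds_ratio_ci(a, b, c, d, z=1.96):
    """a/b = with/without outcome in group 1, c/d = the same in group 2."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return or_, math.exp(math.log(or_) - z * se), math.exp(math.log(or_) + z * se)

print(odds_ratio_ci(40, 60, 30, 70))
```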

Results

A total of 581 trials were identified through searches in the three registries: 457 in ClinicalTrials.gov, 74 in ISRCTN and 50 in EUdraCT. Of these, 233 were excluded in accordance with the criteria described, including 49 trials completed after 31 December 2018. Of the remaining 348 trials, 287 were registered in ClinicalTrials.gov, 45 in ISRCTN and 16 in EUdraCT (Table 1). A total of 170 trials were performed in Europe, 131 in North America, 68 in Asia and the Middle East, 22 in South America and six in more than one region. There were 278 randomised and 70 non-randomised trials. The median number of participants per trial was 60 (interquartile range 40–109). The diagnoses were grouped into adhesive capsulitis (n = 42), osteoarthritis/dislocation/fracture (n = 116), subacromial pain (n = 232) and unspecified shoulder pain (n = 214).

Table 1

Overview of number of trials of treatments for shoulder complaints with reported results in the period 1 January 2000–31 December 2018, number of participants, and time from completion until the results were made available in a registry and/or published in an article.

|  | ClinicalTrials.gov (n = 287) | ISRCTN (n = 45) | EUdraCT (n = 16) | Total (n = 348) |
|---|---|---|---|---|
| Industry-funded trials (n) | 44 | 6 | 2 | 52 |
| Trials with reported results, n (%) | 153 (53) | 22 (49) | 2 (13) | 177 (51) |
| Data entered in registry, n (%) | 54 (19) | 1 (2) | 0 | 55 (16) |
| Time from completion until data entered in registry, median months (interquartile range) | 24 (15–39) | 7 (–) | – | 24 (14–39) |
| Published data, n (%) | 127 (44) | 22 (49) | 2 (13) | 151 (43) |
| Time from completion until data published in article, median months (interquartile range) | 24 (16–35) | 33 (20–43) | 19 (–) | 25 (16–37) |
| Total participants (n) | 25 134 | 5 057 | 1 833 | 32 024 |
| Participants in trials without reported results (n) | 11 287 | 2 032 | 1 698 | 15 017 |

For 171 (49 %) of the included trials, no results could be traced either in a registry or in published form (Table 1). Of the three registries, EUdraCT had the highest percentage of trials without reported results (86 %). A total of 32 024 subjects took part in the included trials, 15 017 (47 %) of them in trials without reported results. Figure 1 shows the number of trials with and without reported results, by type of intervention.

Figure 1 Number of trials where results were reported to a registry/published in a peer-reviewed journal or were not reported (without result), by type of intervention.

The median trial duration was 21 months (interquartile range 11–37). For the 55 trials (16 %) that had reported results to a registry, a median of 24 months (14–39) had passed since completion, and only seven trials reported within 12 months. Results from 151 trials (43 %) were published in an article, a median of 25 months (16–37) after completion. Twenty-nine trials (8 %) were both published and reported to a registry. A Kaplan-Meier plot shows the cumulative proportion of trials with traceable results in a registry and/or a publication as a function of time since completion (Figure 2).

Figure 2 Cumulative proportion (per cent) of trials which over time had published results or reported results to a registry. The x-axis shows the number of months since the study was completed. The vertical line marks 12 months, which corresponds to the requirements of EUdraCT and ClinicalTrials.gov for reporting of results to a registry.
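
A minimal sketch of how a curve like the one in Figure 2 could be produced, assuming the Python package lifelines is available. The data frame and its column names are hypothetical; trials without reported results at the search cut-off are treated as censored.

```python
import pandas as pd
from lifelines import KaplanMeierFitter

# Hypothetical data: months from completion to first reporting of results,
# and whether reporting occurred before the search cut-off (1) or not (0).
df = pd.DataFrame({
    "months_to_report": [7, 14, 18, 24, 30, 36, 48, 60],
    "reported":         [1, 1, 1, 1, 0, 1, 0, 0],
})

kmf = KaplanMeierFitter()
kmf.fit(durations=df["months_to_report"], event_observed=df["reported"])

# The cumulative proportion with reported results is 1 minus the
# Kaplan-Meier estimate of the "still unreported" proportion.
cumulative_reported = 1 - kmf.survival_function_["KM_estimate"]
print(cumulative_reported)
```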

Analysis of the proportion of trials without traceable results in the periods 2000–04, 2005–09, 2010–14 and 2015–18 found only small signs of improvement (Kruskal-Wallis test, p = 0.44) (Figure 3). For example, 50 % of registered trials from 2005–09 and 53 % from 2010–14 had been published as of 2020. The corresponding figures for reporting of data to registries were 8 % and 23 %, respectively.

Figure 3 Total number of completed trials, trials where the results were published and trials where the results were reported to a registry as of 1 May 2020, grouped into the periods 2000–04, 2005–09, 2010–14 and 2015–18.

There were no statistically significant differences between randomised and non-randomised trials in the proportion without traceable results (Pearson’s chi-squared test, p = 0.6), nor among the intervention groups or the four diagnostic groups (Kruskal-Wallis test, p = 0.7 and 0.1, respectively). The proportion without reported results was only non-significantly higher in trials with fewer participants than the median of 60 than in larger trials (OR 1.2; 95 % CI 0.76–1.78; p = 0.5). Fifty-two of the 348 trials (15 %) were industry-funded. The proportion of these without published results was 60 %, compared with 47 % for other trials (OR 1.7; 95 % CI 0.90–3.00; p = 0.1).
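
For reference, the odds ratio and confidence interval for industry funding can be approximately reproduced with the standard Woolf (logit) method; the 2×2 counts below are back-calculated from the percentages reported above and are therefore approximate.

$$\mathrm{OR} = \frac{a\,d}{b\,c}, \qquad 95\,\%\ \text{CI} = \exp\!\left(\ln \mathrm{OR} \pm 1.96\sqrt{\frac{1}{a}+\frac{1}{b}+\frac{1}{c}+\frac{1}{d}}\right)$$

With roughly a = 31 industry-funded trials without and b = 21 with published results, and c = 139 and d = 157 correspondingly for the other trials, this gives OR ≈ 1.7 with a 95 % CI of roughly 0.9 to 3.0, in line with the figures above.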

Discussion

Almost half of the 348 trials of shoulder interventions that were completed at least one year before the search had no traceable data on results in either study registries or peer-reviewed journals. This means that up to January 2019, over 15 000 subjects took part in shoulder trials whose results have not been made publicly available. The absence of results was affected to only a limited extent by type of intervention, trial design or trial size, and there was no evidence of an improvement in practice over time.

It may be objected that the results of this study cannot necessarily be generalised to other types of musculoskeletal disorders. However, poor reporting practice is described in studies of joint complaints (15), back complaints (16) and orthopaedic traumas (17, 18), as well as in human clinical trial research more generally.

According to one analysis, the proportion of published scientific studies with positive results increased from 70 % to 86 % between 1990 and 2007, and the trend has been particularly pronounced in clinical medicine and pharmacology (19). A preponderance of published studies with positive findings has been described even in journals with a high impact factor (20). The results of original studies often cannot be replicated in subsequent similar studies (21). The significance of commercial funding for reporting practice is controversial (5, 22–24). Among the included trials, those with commercial funding had a reporting rate 13 percentage points lower than the others, but because there were few such trials the difference was not statistically significant.

The majority of the registered trials were small, with a median of 60 subjects per study, and only three had more than 500 subjects. A lower rate of publication of small studies has been described previously (24). It is conceivable that the low rate of publication in the present analysis is a consequence of small studies more often being rejected than large ones, but no significant association with trial size could be demonstrated with respect to either publication or reporting to registries. Nor was study design found to influence reporting, but others have described a higher publication rate for randomised trials (22).

Measures to improve reporting

The reasons that so many trials are never published are complex. Some trials, not least in surgery, are never completed, often because of recruitment problems (25). Financial and logistical constraints may also make the road to publication more difficult. These are obstacles that well-resourced academic and commercial research institutions are better equipped to overcome (5, 23).

Eliminating failure to publish altogether is not a realistic goal. However, reporting to trial registries ensures access to results regardless of acceptance by peer-reviewed journals. Registries accordingly provide a low-threshold channel for making publicly available the results of studies that are not accepted for publication. Although registry data may be of variable quality, they are often more complete than the data reported when the study is published, with respect both to efficacy and to adverse events (26).

Registry data therefore function as both a supplement and a corrective to selective reporting. Reporting requirements have been tightened in recent years, largely as a consequence of pressure from patient organisations and research communities. With effect from 2005, the International Committee of Medical Journal Editors (ICMJE) made registration of the study protocol in a trial registry before the start of the study a requirement for publication. Although most journals, including the Journal of the Norwegian Medical Association, have adopted this requirement, compliance is still poor, even in journals with a high impact factor (5).

With the support of health authorities, ClinicalTrials.gov and EUdraCT have paved the way for sanctions against researchers and institutions that do not fulfil the transparency requirements. In January 2020 the European Court of Justice decided, despite protests from the pharmaceutical industry, to give researchers and health authorities in the EU access to the clinical study reports (CSR) held by the European Medicines Agency, which provide detailed information about the design, analysis and findings of clinical trials (27). In Denmark, at the initiative of the Danish Medicines Agency, penalties for trial sponsors who fail to report their results to EUdraCT have been given a statutory basis (28). A federal court in the USA recently required all trial sponsors to publish the results of completed trials registered in ClinicalTrials.gov up to 2017, with daily fines for failure to report (29).

The AllTrials campaign works to ensure that results from past, ongoing and future trials are reported. Through this international initiative, universities, ethics committees and medical institutions are urged to ensure that their members comply with the transparency requirements. Leading Norwegian research institutions also support the campaign. In 2018, AllTrials launched a tracking tool that flags sponsors who fail to publish results, both from trials already completed and from future trials (30).

This has proved effective, primarily at academic institutions. For example, twice as many results were recently reported to EUdraCT by German universities in the course of six months as in the preceding six years (31).

Guidelines from the Norwegian National Research Ethics Committees point out that research results shall as a general rule be made available, and that researchers have an independent responsibility to ensure that their research can benefit research subjects, relevant groups and society in general (32). At present, however, no authority verifies that results actually are made publicly available, and failure to do so has no practical consequences.

The research ethics committees should not restrict themselves to approving protocols, but should also take responsibility for ensuring that results are made available. For example, all studies approved by a regional ethics committee could be registered in a central archive that automatically alerts the responsible researchers, the study sponsors and the health authorities when reporting deadlines are exceeded.
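
As a purely hypothetical sketch of such an alert mechanism (the archive, the record format and the 12-month grace period are all assumptions, not an existing system), the check itself could be as simple as:

```python
from datetime import date, timedelta

# Hypothetical records from a central archive of approved studies.
studies = [
    {"id": "REK-2017-001", "completed": date(2018, 3, 1), "results_reported": False},
    {"id": "REK-2019-042", "completed": date(2021, 6, 15), "results_reported": True},
]

def overdue(study, today=None, grace_days=365):
    """True if roughly 12 months have passed since completion without results."""
    today = today or date.today()
    deadline = study["completed"] + timedelta(days=grace_days)
    return not study["results_reported"] and today > deadline

for study in studies:
    if overdue(study):
        print(f"Alert: results overdue for {study['id']}")
```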

Strengths and weaknesses

One strength of this study is that two registries were included in addition to ClinicalTrials.gov. Possible sources of error are that published studies may have been overlooked because the literature search was limited to PubMed, and that a single person was responsible for the searches and the analysis. No attempt was made to obtain missing study results from the person named as principal investigator, and publications could not be traced for trials lacking information about the principal investigator.

Conclusion

The high proportion of trials of treatments for shoulder complaints for which results were neither reported to trial registries nor published reflects a general systemic weakness in the availability and dissemination of findings from human clinical trials. When research subjects consent to take part in an intervention study, they do so in the expectation that the results will be made available, irrespective of outcome, so that their participation can contribute to strengthening the evidence base. Without this openness about outcomes, research subjects may be exposed to risk without any resulting benefit. Failure to report outcomes is therefore ethically unacceptable. It also carries a risk of an artificial over-representation of trials with “positive” results, and of adverse reactions and events going unrecognised. Lack of transparency about the outcomes of completed studies may also waste limited research funding through the initiation of unnecessary and redundant studies.

Ensuring that those who initiate, fund and conduct clinical trials also arrange for timely publication of results, with reporting to a trial registry as a minimum requirement, should be a priority for health authorities, research institutions and journal editors.

Thanks to Knut Arne Holtedahl for useful comments. The article has been peer-reviewed.
