The WHO statement on public disclosure of results from clinical trials, published in 2015, defines reporting timeframes and calls for publication of the results of still unpublished trials. The key outcomes of clinical trials are to be posted in the results section of the clinical trial registry within 12 months of primary study completion (the last visit of the last subject for collection of data on the primary outcome), and the main findings should be published in a peer-reviewed journal (preferably free to access) within 24 months of study completion. 15 However, compliance with this and similar requirements 16–19 has been poor. 20 21 About 50% of completed trials still remain unreported 22 or delay sharing their results. 20 In the European Union, 89% of trials completed between 2004 and 2018 and sponsored by academic institutions were not reported within a year of the trial’s end, 20 which indicates that the academic community failed to meet the requirements of the European Commission guideline 17 and the Clinical Trial Regulation EU No.536/2014. 19

The results of completed clinical trials are crucial for decision-making in evidence-based medicine. 1–5 They also inform patients, 5 6 clinicians, researchers and policy-makers, 5 influence future research 5–7 and play an important role in health technology assessment. 8 9 Non-dissemination or delayed dissemination of trial findings not only negatively affects decisions in healthcare but is also unethical, as the results of all research involving human subjects must be publicly available regardless of whether they are considered positive or negative. 6 10 11 Not reporting trial results is unfair to trial participants, who often accept risk and burden to contribute to scientific knowledge. Paediatric trials are particularly sensitive because they recruit children, so additional protections are required to avoid their exploitation in research. 12–14

We used logistic regression analysis to identify explanatory variables with a possible effect on timely publication rates ( online supplementary material ). We started with univariate models, testing all variables individually to identify the variable that led to the largest increase in the log-likelihood. The regression model was then built stepwise, at each step adding the variable with the largest log-likelihood increase, until no remaining variable could substantially increase the log-likelihood. There were, however, no strict rules for variable inclusion, as this was an exploratory analysis.
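The univariate screening step can be illustrated with a short sketch. For a single binary covariate plus an intercept, the maximised log-likelihood of a logistic model can be computed directly from the per-group event proportions, so no iterative fitting is needed for this first step. The outcome and candidate variables below are hypothetical illustrations, not data from the study:

```python
import math

def log_likelihood(y, groups):
    """Maximised log-likelihood when the fitted probability in each
    group equals the observed event proportion of that group."""
    ll = 0.0
    for g in set(groups):
        ys = [yi for yi, gi in zip(y, groups) if gi == g]
        p = sum(ys) / len(ys)
        for yi in ys:
            q = p if yi == 1 else 1.0 - p
            ll += math.log(q)  # q > 0 here because p is the group mean
    return ll

def univariate_screen(y, candidates):
    """First step of the forward selection described above: fit each
    candidate binary variable alone and rank by log-likelihood gain."""
    ll_null = log_likelihood(y, [0] * len(y))  # intercept-only model
    gains = {name: log_likelihood(y, x) - ll_null
             for name, x in candidates.items()}
    return max(gains, key=gains.get), gains
```

Later steps of the stepwise procedure require refitting a multivariable logistic model (eg, with standard statistical software) rather than this closed-form shortcut.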

We classified a study as paediatric when all or most participants (over 50%) were under 18 years of age. If the study enrolled both adult and paediatric participants but it was impossible to determine whether the majority of participants were above or under 18 years of age, we classified the trial as a mixed population study. Unclear studies lacked data on the average/median age of participants, so it was impossible to assign them to either of these groups. Information on participants’ age was sought on the ClinicalTrials.gov website, in publications, and in other sources such as other registries (eg, the European Clinical Trials Register) or sponsor websites when additional study identification numbers were provided on ClinicalTrials.gov.
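These classification rules can be expressed as a small decision function; this is a hypothetical helper written for illustration, not part of the published protocol:

```python
def classify_population(has_age_data, pct_under_18=None):
    """Classify a trial population per the rules described above.

    has_age_data  -- True if an average/median participant age was found
    pct_under_18  -- percentage of participants under 18, or None when the
                     trial enrolled both adults and children and the
                     majority could not be determined
    """
    if not has_age_data:
        return "unclear"       # no age data in any searched source
    if pct_under_18 is None:
        return "mixed"         # majority above/under 18 undeterminable
    return "paediatric" if pct_under_18 > 50 else "adult"
```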

For each of the included studies, the search for a publication was performed independently by two researchers (KS, MTW) in a 4-step process between 3 December 2018 and 7 February 2019 on ClinicalTrials.gov, PubMed, Google Scholar and Web of Science ( figure 1 ). We defined a publication as an article with at least 400 words. When there were multiple results publications, we chose the earliest one. When a publication contained the results of two or more trials but reported each trial’s results separately, it was included. Abstracts, study design publications without results, reviews and other background literature were excluded. Disagreements about whether to include or exclude a publication were resolved by a third person (an arbiter, MW). The flow chart showing the publication search with reasons for exclusion is presented in online supplementary figure 2 . When a publication was identified, we extracted the first publication date, PubMed ID and DOI (if applicable). Only when all searches yielded no results was the study characterised as ‘no publication found’.

A given trial was assigned to an AMC if the AMC was mentioned as the responsible party, lead sponsor or collaborator, or if the principal investigator, study chair or study director was affiliated with the AMC; the AMC was then considered a ‘lead’ contributor to these trials. If the AMC was mentioned only as a facility, or the study was conducted in an academic hospital, or the principal investigator, study chair or study director was affiliated only with an academic hospital without the AMC being named, the AMC was considered a ‘facility’ contributor. One trial could be counted for multiple AMCs. The flow chart presenting the trial selection process with reasons for exclusion is shown in online supplementary figure 1 .

There was no significant difference in the dissemination of findings between paediatric and adult trials: 73.6% (95% CI 59.7% to 84.7%) vs 81.0% (95% CI 75.6% to 85.7%), p=0.22. Among paediatric trials, 21 (39.6%) posted summary results on ClinicalTrials.gov within a year of study completion and/or published them within 2 years of CD (see table 2 ); among adult trials, the corresponding percentage was 43.5%. There was no significant difference between adult and paediatric trials in posting summary results within 12 months of CD, 7.3% (95% CI 4.4% to 11.2%) vs 9.4% (95% CI 3.1% to 20.7%), p=0.80, nor in disseminating results by both posting them on the registry website and publishing them, 4.4% (95% CI 2.2% to 7.8%) vs 7.5% (95% CI 0.4% to 14.7%), p=0.55.
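Comparisons of this kind can be sketched with a Wilson score interval and a pooled two-proportion z-test, one common approach to such proportion comparisons; the exact interval and test methods behind the figures above are not restated here, so this is an illustration rather than a reproduction of the analysis, and any counts passed to these functions should be treated as hypothetical:

```python
import math

def wilson_ci(k, n, z=1.96):
    """Wilson score 95% interval for a binomial proportion k/n."""
    p = k / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

def two_prop_z(k1, n1, k2, n2):
    """Pooled two-proportion z-test (normal approximation).
    Returns the z statistic and the two-sided p-value."""
    p1, p2 = k1 / n1, k2 / n2
    pooled = (k1 + k2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # two-sided p-value via the standard normal CDF (math.erf)
    pval = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, pval
```

For example, `wilson_ci(39, 53)` gives an interval of roughly 0.60 to 0.84 around a proportion of 73.6%; Wilson intervals will generally differ slightly from exact (Clopper–Pearson) intervals.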

Discussion

Our cross-sectional analysis of all Polish AMCs revealed low rates of timely dissemination of the findings of interventional clinical trials completed between 2009 and 2013. Only 36.1% (110/305) of trials were published as a journal article within 2 years of primary study completion or had results reported on ClinicalTrials.gov within a year of PCD. The results of 20.3% (62/305) of trials remain undisseminated. Delayed dissemination and non-dissemination of trial results negatively affect decisions in healthcare and leave recruited participants without information on whether the therapy actually works.26 Posting summary results on the registry website makes research findings accessible to everyone. Moreover, the results are presented in a uniform format, which reduces reporting bias and the limitations arising from journals’ varying requirements on the maximum number of tables or article length. Although sharing study results via ClinicalTrials.gov is not as complicated and time-consuming as publishing a peer-reviewed journal article, our analysis showed that rates of timely reporting of summary results in the clinical trial registry were particularly low among Polish AMCs, ranging from 0.0% across 4 Polish AMCs to 16.0% (Ludwik Rydygier Collegium Medicum in Bydgoszcz). Overall, 3.9% (12/305) of all trials and 1.3% (1/75) of lead trials met the criterion of posting trial findings in the results section of the clinical trial registry within 12 months of primary study completion.15 16 18 Other research confirms such low reporting rates.20–25 This may be due to researchers’ anxiety that posting summary results on ClinicalTrials.gov will deprive them of the chance to publish the same results in a journal. Another possible reason is that researchers in academia are required to publish in peer-reviewed journals and not necessarily in registries and databases.
These rates could be improved by various initiatives at AMCs motivating researchers to disseminate results more broadly. Another solution that could increase reporting in the clinical trial registry would be for journals to refuse publication of unregistered trials and of trials whose results have not at least been submitted to a registry. Moreover, study sponsors should enforce timely results reporting.27 When a principal investigator applies for new funding, they may be asked to provide a list of all previous trials and their reporting status.15 Motivation to disseminate study results may also come from broader actions such as public debate, training, conferences and pressure from the scientific community. AMCs may also formulate clear and enforceable rules for the persons responsible for disseminating research results.

Despite the broad consensus that all research results should be disseminated, the suggested scope and timing of dissemination vary.11 13 15–19 In our study we therefore provide dissemination rates relative to both the study’s primary completion date (PCD) and its completion date (CD). We found that rates of results reporting following CD are as low as those following PCD.

We decided to analyse the dissemination of results in paediatric trials separately because of the special status of the paediatric population. These trials are required to offer additional protection, by imposing a strict risk threshold or by allowing paediatric research only when there is a prospect of direct benefit for the participants. The ethical justification of trials with vulnerable populations also hinges on the social value they offer, that is, their ability to provide generalisable scientific and medical knowledge. If the results of a trial are not publicly available, the trial cannot provide generalisable knowledge or change clinical practice, which robs it of its social value. Our analysis showed no significant difference in the dissemination of results between paediatric and adult trials.

Our analysis has several limitations. First, we searched ClinicalTrials.gov instead of the EU Clinical Trials Register as it contained more registered clinical trials conducted in Poland in the search period. Thus, we may have missed a fraction of trials registered only in the EU Clinical Trials Register. Second, our results may be underestimated: in 2009–2013 about 450 new clinical trials were conducted in Poland annually across both academic and non-academic sites, giving a total of 2243 new clinical trials over 5 years.28 We captured 1267 completed trials and excluded almost 76% of them, mainly because the name of the research site was not provided (see also online supplementary figure 1). Third, we relied on the recruitment status found in the database, which might not have been updated, meaning that some active or recruiting trials could in fact be completed.29 We also relied on the names of AMCs and teaching hospitals provided on the website. We added the city name as an additional search criterion to capture all studies conducted by AMCs despite misspellings and abbreviated names on ClinicalTrials.gov. We classified studies as lead trials only when it was clearly reported that the AMC was a lead contributor; if only the name of a teaching hospital was given, without the AMC name, the trial was considered a facility trial. Fourth, we did not assess the accuracy of the study start and completion dates in the ClinicalTrials.gov database. In some clinical trials the results reporting and publication dates preceded the PCD and CD; we counted those results as reported within a year of study completion, which may overestimate our rates. Fifth, we defined a publication as an article with at least 400 words. Such a publication may only indicate that a study was conducted and may not present the full methodology and results.
Nevertheless, every journal has different requirements and word and table limits; therefore, it is challenging to present the entire results of large multicentre trials. There is also the well-known problem of selective results reporting.30 Sixth, despite an extensive publication search by two researchers independently, we could have missed some relevant publications. Seventh, we did not assess the quality of results reporting, did not compare protocols and outcomes, and did not assess the consistency of results reported in the ClinicalTrials.gov database with those published in journal articles. Finally, we could follow trials completed in 2009 for more than 9 years, whereas trials completed in 2013 we could follow for only 5 years.